12467541
https://en.wikipedia.org/wiki/Definitive%20software%20library
Definitive software library
In information technology service management, a definitive software library (DSL) is a secure location, consisting of physical media or a software repository located on a network file server, in which the definitive, authorized versions of all software configuration items (CIs) are stored and protected. The DSL is separate from development, quality assurance, and production software storage areas. It contains master copies of all controlled software, including definitive copies of purchased software as well as licensing information for software developed on-site or purchased from an external vendor. All documentation related to software stored in the DSL is stored in the DSL as well (a minimal sketch of this idea follows this entry). Following the publication of ITIL version 3, the Definitive Software Library was renamed the Definitive Media Library. References External links ITIL College: Definitive Software Library ITIL Computing terminology
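The core idea here, keeping the definitive authorized version of each software CI and verifying copies against it, can be made concrete with a short sketch. The following Python is a minimal illustration under this editor's assumptions; the class and method names are invented for the example and are not part of ITIL:

```python
# Minimal sketch (assumptions throughout): a DSL modelled as a store that
# keeps, for each software configuration item (CI), the definitive master
# copy plus a checksum, so later copies can be verified as authorized.
import hashlib

class DefinitiveSoftwareLibrary:
    def __init__(self) -> None:
        # name -> (master copy, SHA-256 of master copy)
        self._items: dict[str, tuple[bytes, str]] = {}

    def register(self, name: str, master_copy: bytes) -> None:
        """Store the definitive authorized version of a CI."""
        digest = hashlib.sha256(master_copy).hexdigest()
        self._items[name] = (master_copy, digest)

    def verify(self, name: str, candidate: bytes) -> bool:
        """Check a copy against the definitive version held in the DSL."""
        return hashlib.sha256(candidate).hexdigest() == self._items[name][1]

dsl = DefinitiveSoftwareLibrary()
dsl.register("payroll-app-1.0", b"release image bytes")
print(dsl.verify("payroll-app-1.0", b"release image bytes"))   # True
print(dsl.verify("payroll-app-1.0", b"tampered image bytes"))  # False
```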
22807478
https://en.wikipedia.org/wiki/Michael%20Hackert
Michael Hackert
Michael Hackert (born June 21, 1981) is a German professional ice hockey centre who last played for the Heilbronner Falken of the DEL2. Hackert also played for the Iserlohn Roosters of the German Deutsche Eishockey Liga (DEL), as well as for the Grand Rapids Griffins of the American Hockey League. In addition, Hackert has been a member of the German national team in several international competitions. Career statistics Regular season and playoffs International External links 1981 births Living people People from Heilbronn Adler Mannheim players DEG Metro Stars players Füchse Duisburg players Frankfurt Lions players German ice hockey centres Grand Rapids Griffins players Heilbronner EC players Iserlohn Roosters players
20973550
https://en.wikipedia.org/wiki/Wipro
Wipro
Wipro Limited, formerly known as Western India Vegetable Products Limited, is an Indian multinational conglomerate headquartered in Bangalore, Karnataka, India. Its diverse businesses include FMCG, lighting, information technology, and consulting. The Fortune India 500 ranks it the 29th largest Indian company by total revenue. It is also ranked the 9th largest employer in India, with over 221,000 employees. History of Wipro Past years The company was incorporated on 29 December 1945 in Amalner, India, by Mohamed Premji as Western India Vegetable Products Limited, later abbreviated to Wipro. It was initially set up as a manufacturer of vegetable and refined oils in Amalner, Maharashtra, British India, under the trade names of Kisan, Sunflower, and Camel. In 1966, after Mohamed Premji's death, his son Azim Premji took over Wipro as its chairman at the age of 21. During the 1970s and 1980s, the company shifted its focus to new opportunities in the IT and computing industry, which was at a nascent stage in India at the time. On 7 June 1977, the name of the company changed from Western India Vegetable Products Limited to Wipro Products Limited. In 1982, the name was changed again, from Wipro Products Limited to Wipro Limited. Wipro continued to expand in the consumer products domain with the launch of Ralak, a tulsi-based family soap, and Wipro Jasmine, a toilet soap. 1986–1992 In 1988, Wipro added mobile hydraulic cylinders and heavy-duty industrial cylinders to its line of products. A joint venture company with the United States' General Electric, Wipro GE Medical Systems Pvt. Ltd., was set up in 1989 for the manufacture, sales, and service of diagnostic and imaging products. In 1991, tipping systems and Eaton hydraulic products were launched. The Wipro Fluid Power division, in 1992, developed the capability to offer standard hydraulic cylinders for construction equipment and truck tipping systems. The Santoor talcum powder and Wipro Baby Soft range of baby toiletries were launched in 1990. 1994–2000 In 1995, Wipro set up an overseas design center, Odyssey 21, for the projects of overseas clients. Wipro Infotech and Wipro Systems were amalgamated with Wipro in April that year. Five of Wipro's manufacturing and development facilities secured ISO 9001 certification during 1994–1995. In 1999, Wipro acquired Wipro Acer and released new products such as the Wipro SuperGenius personal computers (PCs). That year, it was the only Indian PC range to obtain US-based National Software Testing Laboratory (NSTL) certification for Year 2000 (Y2K) compliance in hardware for all models. Wipro joined with KPN (Royal Dutch telecom) to form a joint venture company, Wipro Net Limited, to provide internet services in India. In 2000 Wipro launched Wipro OSS Smart and Wipro WAP Smart. In the same year, Wipro was listed on the New York Stock Exchange. 2001–2011 In February 2002, Wipro became the first software technology and services company in India to be ISO 14001 certified. Wipro Consumer Care and Lighting Group entered the market of compact fluorescent lamps with the launch of a range of CFLs under the brand name Wipro Smartlite. As the company grew, a study revealed that Wipro was the fastest wealth creator for five years (1997–2002). It set up a wholly owned subsidiary company (Wipro Consumer Care Limited) to manufacture consumer care and lighting products. In 2004 Wipro joined the billion-dollar club. It also partnered with Intel for . 
In 2006, Wipro acquired a US-based technology infrastructure consulting firm and a Europe-based retail provider. In 2007, Wipro signed a deal with Lockheed Martin. It also agreed to acquire Oki Techno Centre Singapore Pte Ltd (OTCS) and signed an R&D partnership contract with Nokia Siemens Networks in Germany. In 2008, the firm entered the clean energy business with Wipro Eco Energy. In April 2011, the firm signed an agreement with Science Applications International Corporation (SAIC) for the acquisition of their global oil and gas information technology practice. In 2012, Wipro employed more than 70,000 temporary workers in the United States. 2012–2018 In 2012, Wipro demerged its non-IT businesses into a separate company called Wipro Enterprises. Prior to this demerger, these businesses, mainly in consumer care, lighting, furniture, hydraulics, water treatment, and medical diagnostics, contributed about 10% of Wipro's total revenues. In the same year, Wipro acquired the Australian trade promotions management firm Promax Applications Group (PAG) for $35 million. In 2014, the firm signed a ten-year, $1.2 billion contract with ATCO, a Canadian energy and utilities corporation based in Calgary, Alberta. This was the largest deal in Wipro's history. In October 2016, Wipro announced that it was buying Appirio, an Indianapolis-based cloud services company, for $500 million. In 2017, the company expanded its operations in London. The same year, the firm won a five-year IT infrastructure and applications managed services engagement with Grameenphone (GP), a major telecom operator in Bangladesh, and announced it would set up a new delivery centre there. In 2018, the company began building software to help with compliance with the General Data Protection Regulation (GDPR) in Europe. In March 2018, Wipro said it would be buying a third of Denim Group. In April 2018, the company sold its stake in its airport IT services joint venture. In August 2018, Wipro paid US$75m to National Grid US as a settlement for a botched SAP implementation that a 2014 audit estimated could cost the company US$1 billion. Wipro had been hired as systems integrator in 2010, but errors in the rollout, intended to replace an Oracle system, caused serious losses and reputational damage. 2019–2020 In 2019, Wipro Consumer Care and the Ang-Hortaleza Corporation signed a share purchase agreement for the sale of 100% of the latter's stake in the personal care business Splash Corporation. In March 2020, Hedera announced that Wipro would be joining its Governing Council, providing decentralized governance to its hashgraph distributed ledger technology. In February 2020, Wipro acquired Rational Interaction, a Seattle-based digital customer experience consultancy. Wipro shifted to a work-from-anywhere model in March 2020, under which its employees, including 215,876 in India, work remotely rather than from Wipro office premises. In July 2020, the firm announced the launch of its 5G edge services solutions suite built with IBM software systems. 2021 In March 2021, Wipro acquired Capco, a 22-year-old British tech consultancy firm. The deal was completed in April. Wipro also signed an agreement to acquire Ampion for a cash consideration of $117 million, according to an exchange filing. In March 2021, Wipro appointed Pierre Bruno as CEO of European operations. In June 2021, Wipro acquired Boeing supplier TECT Aerospace Group Holdings for $31 million. 
In December 2021, Wipro signed a definitive agreement to acquire LeanSwift, a systems integrator of Infor products for customers across the Americas and Europe. The acquisition is subject to customary closing conditions and was expected to close before the end of the quarter ending March 31, 2022, Wipro stated in a BSE filing. Subsidiaries Western India Products Limited Wipro Limited is a provider of IT services, including systems integration, consulting, information systems outsourcing, IT-enabled services, and R&D services. Wipro entered the technology business in 1981 and has over 221,890 employees and clients across 110 countries. IT revenues were $7.1 billion for the year ended 31 March 2015, with a repeat business ratio of over 95%. Wipro GE Medical Systems Wipro GE Medical Systems Limited is Wipro's joint venture with GE Healthcare South Asia. It is engaged in the research and development of healthcare products. This partnership, which began in 1990, today includes gadgets and equipment for diagnostics, healthcare IT, and services to help healthcare professionals combat cancer, heart disease, and other ailments. All products adhere to Six Sigma quality standards. Sustainability Wipro was ranked 1st in the 2010 Asian Sustainability Rating (ASR) of Indian companies and is a member of the NASDAQ Global Sustainability Index as well as the Dow Jones Sustainability Index. In the November 2012 Guide to Greener Electronics, Greenpeace ranked Wipro first with a score of 7.1/10. Listing and shareholding Listing: Wipro's initial public offering was in 1946. Wipro's equity shares are listed on the Bombay Stock Exchange, where it is a constituent of the BSE SENSEX index, and the National Stock Exchange of India, where it is a constituent of the S&P CNX Nifty. The American Depositary Shares of the company have been listed on the NYSE since October 2000. Shareholding: The table provides the shareholding pattern as of 30 September 2018. Employees On 6 July 2020, Thierry Delaporte took over from Abidali Neemuchwala as the new CEO of Wipro. Abidali Neemuchwala had been appointed as Wipro's CEO after T. K. Kurien stepped down in early 2016. Neemuchwala, who had been group president and chief operating officer from April 2015, was appointed CEO with effect from 1 February 2016. Awards and recognitions It received the National Award for Excellence in Corporate Governance from the Institute of Company Secretaries of India in 2004. Wipro was honored with the 2010 Microsoft Country Partner of the Year Award for India. In 2012, it was awarded the highest rating of Stakeholder Value and Corporate Rating 1 (SVG 1) by ICRA Limited. Wipro received the 'NASSCOM Corporate Award for Excellence in Diversity and Inclusion, 2012', in the category 'Most Effective Implementation of Practices & Technology for Persons with Disabilities'. Wipro was ranked 2nd in the Newsweek 2012 Global 500 Green companies list. In 2014, Wipro was ranked 52nd among India's most trusted brands according to the Brand Trust Report, a study conducted by Trust Research Advisory. Wipro won seven awards, including Best Managed IT Services and Best System Integrator, in the CIO Choice Awards 2015, India. Wipro won the Gold Award for ‘Integrated Security Assurance Service (iSAS)’ under the ‘Vulnerability Assessment, Remediation and Management’ category of the 11th Annual 2015 Info Security PG's Global Excellence Awards. In May 2016, it was ranked 755th on the Forbes Global 2000 list. 
In March 2017, Wipro was recognized as one of the world's most ethical companies by US-based Ethisphere Institute for the sixth consecutive year. In 2018, Wipro received ATD's Best of the BEST Award. See also List of public listed software companies of India List of IT consulting firms List of IT companies in India List of companies of India References External links Consulting firms established in 1945 Business services companies established in 1945 Conglomerate companies established in 1945 Software companies of India Outsourcing companies Personal care companies Business process outsourcing companies of India Companies based in Bangalore International information technology consulting firms Information technology consulting firms of India NIFTY 50 Business process outsourcing companies Outsourcing in India Computer companies of India Information technology companies of Bhubaneswar Indian brands Indian companies established in 1945 Companies based in Karnataka Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange Companies listed on the New York Stock Exchange
15929057
https://en.wikipedia.org/wiki/IP%20PBX
IP PBX
An IP PBX ("Internet Protocol private branch exchange") is a system that connects telephone extensions to the public switched telephone network (PSTN) and provides internal communication for a business. An IP PBX is a PBX system with IP connectivity and may provide additional audio, video, or instant messaging communication utilizing the TCP/IP protocol stack. Voice over IP (VoIP) gateways can be combined with traditional PBX functionality to allow businesses to use their managed intranet to help reduce long distance expenses and take advantage of the benefits of a single network for voice and data (converged network). An IP PBX may also provide CTI features. An IP PBX can exist as a physical hardware device or as a software platform. Function IP PBX is primarily a software hosted on a regular desktop or server as per the requirement demands based on the expected traffic & criticality. Till 2019 IP PBX were deployed primarily as inbound and outbound call center solutions for large corporate and commercial cloud telephony operators worldwide cloud communications. Most of the IP PBX installation uses Asterisk (PBX) for its telephony support, built on LAMP (Linux-Apache-MySQL-PHP). With telecom service providers across the world is slowly preferring SIP Trunks over Primary Rate Interface as main enterprise communication delivery, the IP PBXs will now be in demand extensively. As IP PBX is software, functions and features can be designed based on the customers' requirements such as conference calling, XML-RPC control of live calls, interactive voice response (IVR), TTS/ASR (text to speech/automatic speech recognition), PSTN interconnectability supporting both analog and digital circuits, VoIP protocols including SIP, Inter-Asterisk eXchange, H.323, Jingle and others. IP PBX software 3CX Phone System - Was based on Windows operating system, but now has windows and linux versions. Asterisk - Based on Linux operating system and has the largest market share. Most of the other IP PBXs were derived and customised on Asterisk as you will find a very long list. Bicom Systems Dialexia MVoice trixbox (formerly Asterisk@Home) freeswitch https://freeswitch.com/ xivo https://www.xivo.solutions/ See also Cloud communications References Office equipment Telecommunications equipment VoIP software
19347166
https://en.wikipedia.org/wiki/Cray%20CX1
Cray CX1
The Cray CX1 is a deskside high-performance workstation designed by Cray Inc., based on the x86-64 processor architecture. It was launched on September 16, 2008, and was discontinued in early 2012. It comprises a single-chassis blade server design that supports a maximum of eight modular single-width blades, giving up to 96 processor cores. Computational load can be run independently on each blade and/or combined using clustering techniques. Blade configurations Compute blade The most basic of the modular blade configurations, the single-width compute blade supports dual-socket Intel Xeon 5400, 5500, and 5600 series processors, up to eight DIMMs of DDR3 SDRAM (PC3-8500), and two 2.5" SATA HDDs. Furthermore, each compute blade supports the addition of a PCIe x16 card for graphics or further expansion. Originally offered with the Intel E5400 series processor, later CX1 configurations made either the low-power "L5xxx" series or the high-performance "X5xxx" series Intel processors available to customers. Depending on the blade model, both Gigabit Ethernet and DDR InfiniBand interconnects were available - those blades not factory-equipped with InfiniBand supported third-party additions through the PCIe expansion port. Storage blade Building on the modular expansion capabilities of the compute blade, the storage blade enabled customers to add up to eight 2.5" SATA HDDs or four 3.5" SATA HDDs to a separate, but physically connected, single-width blade add-on. This bolted-on expansion took the place of the default blade cover and extended the blade unit to a two-width module. From a computational standpoint, the storage blade was no different from the compute blade, offering the same Intel processor options. Unlike the compute blade, PCIe expansion was not available in the storage blade, as the RAID card supporting the additional hard drives occupied this port. Visualization and GPU blades Similar to the storage blade expansion, both workstation visualization and GPGPU blades were offered, taking advantage of the PCIe expansion port to extend the capabilities of the base compute blade. Equipped with either workstation Nvidia Quadro graphics cards or Nvidia Tesla scientific cards, the visualization configurations extended the integrated graphics capabilities of the compute blade and offered customers access to Nvidia's CUDA programming architecture to drastically speed up critical scientific and engineering applications. Unique features In addition to offering unparalleled deskside performance for its day, the Cray CX1 also pioneered a number of unique technologies to make possible the integration of supercomputing into the traditional office space. In order to meet workplace noise requirements, the CX1 utilized an active noise cancellation system built into its cooling apparatus to quiet the sounds generated by the chassis's two large fans. Rather than requiring the deployment of separate Ethernet and/or InfiniBand networks, the CX1 integrated network switches for both into the chassis to facilitate these high-speed interconnect protocols required by clustered simulations and applications. Additionally, unlike traditional servers and supercomputers that make use of higher 220 V power, the CX1's redundant power supplies were designed to support traditional office power (120 V AC, 20 A). 
Finally, one of the most striking and recognizable features of the CX1 was the integrated touch-screen control panel on the front face of the machine, from which users could not only control each of the blades but also instantly gauge power consumption, core temperature, and fan speeds for the entire chassis. Comparison with other Cray systems The CX1 and XT5h are fundamentally different architectures, and due to their differences, XT5 blades may not be used in place of CX1 blades, and vice versa. To compare the two: An XT5h supercomputer supports vector, FPGA, and scalar compute blades (X2, XR1, and XT4/XT5 blades, respectively), typically supported by service blades for network access and hosting a Lustre filesystem layered over several RAID modules. The XT5h's operating system is UNICOS/lc, a combination of SUSE Linux Enterprise Server and Sandia's Catamount or Cray's Compute Node Linux. A CX1 supercomputer supports GPU and scalar compute blades (Nvidia Quadro and Intel Xeon, respectively), typically supported by enclosed network switches (Gigabit Ethernet, InfiniBand) and service blades hosting RAID modules. CX1-supported operating systems include Red Hat Enterprise Linux and Windows HPC Server 2008. External links Cray's First Windows-Based Supercomputer Puts a 64-Core Datacenter On Your Desk CX1 adds shared-memory capabilities using ScaleMP's vSMP Foundation CX1 and CX1000 discontinued - serverwatch.com Cx1 X86 supercomputers
58835094
https://en.wikipedia.org/wiki/1995%20Troy%20State%20Trojans%20football%20team
1995 Troy State Trojans football team
The 1995 Troy State Trojans football team represented Troy State University in the 1995 NCAA Division I-AA football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama, and competed as an I-AA independent. The Trojans finished the regular season undefeated with an 11–0 record, the first time in history that the Trojans had completed a regular season undefeated. Despite this success, Troy State was upset in the first round of the Division I-AA playoffs, losing to #15 Georgia Southern by a score of 21–24. Troy State finished the season ranked #3 in the Sports Network Poll and #12 in the Coaches' Poll. Schedule References Troy State Troy Trojans football seasons Troy State Trojans football
11019203
https://en.wikipedia.org/wiki/Skype%20security
Skype security
Skype is a Voice over Internet Protocol (VoIP) system developed by Skype Technologies S.A. It is a peer-to-peer network in which voice calls pass over the Internet rather than through a special-purpose network. Skype users can search for other users and send them messages. Skype says that it uses 256-bit AES encryption to communicate between users, although when calling a telephone or mobile, the part of the call over the PSTN is not encrypted. User public keys are certified by the Skype server at login with 1536 or 2048-bit RSA certificates. Skype's encryption is inherent in the Skype Protocol and is transparent to callers. Some Skype private conversations, such as audio calls, text messages, and image, audio, and video files, can make use of end-to-end encryption, but it may have to be turned on manually. Security policy The company's security policy states that: Usernames are unique. Callers must present a username and password or another authentication credential. Each caller provides the other with proof of identity and privileges whenever a session is established. Each verifies the other's evidence before the session can carry messages. Messages transmitted between Skype users (with no PSTN users included) are encrypted from caller to caller. No intermediate node (router) has access to the meaning of these messages. This claim was undermined in May 2013 by evidence that Microsoft (owner of Skype) had pinged unique URLs embedded in a Skype conversation; this could only happen if Microsoft had access to the unencrypted form of these messages. Implementation and protocols Registration Skype holds registration information both on the caller's computer and on a Skype server. Skype uses this information to authenticate call recipients and to assure that callers seeking authentication access a Skype server rather than an impostor. Skype says that it uses public-key encryption as defined by RSA to accomplish this. The Skype server has a private key and distributes that key's public counterpart with every copy of the software. As part of user registration, the user selects a desired username and password. Skype locally generates public and private keys. The private key and a password hash are stored on the user's computer. Then a 256-bit AES-encrypted session is established with the Skype server. The client creates a session key using its random number generator. The Skype server verifies that the selected username is unique and follows Skype's naming rules. The server stores the username and a hash H(H(P)) of the user's password in its database. The server then forms and signs an identity certificate for the username that binds the username, verification key, and key identifier. Peer-to-peer key agreement For each call, Skype creates a session with a 256-bit session key. This session exists as long as communication continues and for a fixed time afterward. Skype securely transmits the session key to the call recipient as part of connecting a call. That session key is then used to encrypt messages in both directions. Session cryptography All traffic in a session is encrypted using the AES algorithm running in Integer Counter Mode (ICM). Skype encrypts the current counter and salt with the session key using the 256-bit AES algorithm. This returns the keystream, which is then XORed with the message content. Skype sessions contain multiple streams. The ICM counter depends on the stream and the location within the stream. 
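To illustrate the counter-mode construction just described, the sketch below XORs data with a keystream derived from a key, a salt, and a block counter. SHA-256 stands in for AES purely to keep the example dependency-free; the real implementation uses 256-bit AES, and the exact counter and salt layout here is an assumption, not Skype's wire format:

```python
# Toy counter-mode (ICM-style) encryption: "encrypt" successive counter
# values under the session key to produce a keystream, then XOR it with
# the message. SHA-256 is a stand-in PRF for AES in this illustration.
import hashlib
import os

def keystream_block(key: bytes, salt: bytes, counter: int) -> bytes:
    """Pseudorandom 32-byte keystream block for one counter position."""
    return hashlib.sha256(key + salt + counter.to_bytes(8, "big")).digest()

def icm_xor(key: bytes, salt: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt (XOR is its own inverse) in counter mode."""
    out = bytearray()
    for i, byte in enumerate(data):
        block = keystream_block(key, salt, i // 32)
        out.append(byte ^ block[i % 32])
    return bytes(out)

session_key = os.urandom(32)  # 256-bit session key, as in the article
salt = os.urandom(8)
ciphertext = icm_xor(session_key, salt, b"hello, world")
assert icm_xor(session_key, salt, ciphertext) == b"hello, world"
```

Because decryption is the same XOR operation, both directions of a stream can use this construction; making the counter depend on the stream and the position within it, as the article notes, keeps keystreams from ever repeating.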
Random number generation Skype uses random numbers for several cryptographic purposes. Purposes include protection against playback attacks, creation of RSA key pairs, and creation of AES key-halves for content encryption. The security of a Skype peer-to-peer session depends significantly on the quality of the random numbers generated by both ends of the Skype session. Random number generation varies by operating system. Cryptographic primitives Skype uses standard cryptographic primitives to achieve its security goals. The cryptographic primitives used in Skype are the AES block cipher, the RSA public-key cryptosystem, the ISO 9796-2 signature padding scheme, the SHA-1 hash function, and the RC4 stream cipher. Key agreement protocol Key agreement is achieved using a proprietary, symmetric protocol. To protect against a playback attack, the peers challenge each other with random 64-bit nonces. The challenge response is to customize the challenge in a proprietary way and return it signed with the responder's private key. The peers exchange identity certificates and confirm that these certificates are legitimate. Because an identity certificate contains a public key, each end can then confirm signatures created by the other peer. Each peer contributes 128 random bits to the 256-bit session key. Automatic updates Another security risk is automatic updates, which cannot be disabled from version 5.6 on, on both the Mac OS and Windows branches, although in the latter, and only from version 5.9 on, automatic updating can be turned off in certain cases. Eavesdropping by design Chinese, Russian and United States law enforcement agencies have the ability to eavesdrop on Skype conversations and to access Skype users' geographic locations. In many cases, a simple request for information is sufficient, with no court approval needed. This ability was deliberately added by Microsoft for law enforcement agencies around the world after it purchased Skype in 2011. It is implemented by switching the Skype client for a particular user account from client-side encryption to server-side encryption, allowing dissemination of an unencrypted data stream. Actual and potential flaws While Skype encrypts users' sessions, other traffic, including call initiation, can be monitored by unauthorized parties. The other side of security is whether Skype imposes risk on its users' computers and networks. In October 2005 a pair of security flaws were discovered and patched. Those flaws made it possible for hackers to run hostile code on computers running vulnerable versions of Skype. The first security bug affected only Microsoft Windows computers. It allowed the attacker to use a buffer overflow to crash the system or to force it to execute arbitrary code. The attacker could provide a malformed URL using the Skype URI format and lure the user into requesting it to execute the attack. The second security bug affected all platforms; it used a heap-based buffer overflow to make the system vulnerable. Issues, including several potentially affecting security, include: The Skype code is proprietary and closed source, and it is not planned to become open-source software, according to Niklas Zennström, co-founder of Skype, who responded in 2004 to questions on the Skype security model by saying "We could do it but only if we re-engineered the way it works and we don't have the time right now". If the software source were available, peer review would be able to verify its security. 
On 13 November 2012 a Russian user published a flaw in Skype security which allowed any non-professional attacker to take over a Skype account, knowing only the victim's email, in seven simple steps. This vulnerability was claimed to have existed for months, and was not corrected until more than 12 hours after it was published widely. By default, Skype records data about calls (but not the message contents) in a "History" file saved on the user's computer. Attackers who gain access to the computer can obtain the file. Skype can consume other users' bandwidth. Although this is documented in the license agreement (EULA), there is no way to tell how much bandwidth is being used in this manner. There are some 20,000 supernodes out of many millions of users logged on. The Skype Guide for network administrators claims that supernodes carry only control traffic up to 10 kB/s and that relays may carry other user data traffic up to 15 kB/s (for one audio conference call). A relay should not normally handle more than one "relayed connection". Skype's file-transfer function does not integrate with any antivirus products, although Skype claims to have tested its product against antivirus "shield" products. Skype does not document all of its communication activities. This lack of clarity as to content means that systems administrators cannot be sure what it is doing. (The combination of an invited study and a reverse-engineered study taken together suggests Skype is not doing anything hostile.) Skype can be easily blocked by firewalls. Skype consumes network bandwidth even when idle (even for non-supernodes, e.g., for NAT traversal). For example, if there were only three Skype users in the world and two were communicating, the third computer would be taxed to support the application, even if not using Skype at the time. The large number of Skype computers means that this activity is diffuse, but it can lead to performance issues on standby Skype users and presents a conduit for security breaches. Skype implicitly trusts any message stream that obeys its protocols. Skype does not prohibit a parallel Skype-like network. Skype makes it hard to enforce a corporate security policy. Skype prior to version 3.0.0.216 created a file called 1.com in the temp directory which was capable of reading all BIOS data from a PC. According to Skype this was used to identify computers and provide DRM protection for plug-ins. They later removed this file, but it is not known whether the BIOS-reading behavior was removed. The URI handler, which checks URLs for verification of certain file extensions and file formats, uses case-sensitive comparison techniques and does not check all potential file formats. While Skype does encrypt most of its communications, unencrypted packets containing advertisements are pulled from several places, exposing a cross-site scripting vulnerability. These ads can easily be hijacked and replaced with malicious data. The privacy of Skype traffic may have limits. Although Skype encrypts communication between users, a Skype spokesman did not deny the company's ability to intercept communication. On the question of whether Skype could listen in on its users' communication, Kurt Sauer, head of the security division of Skype, replied evasively: "We provide a secure means of communication. I will not say if we are listening in or not." In China, text is filtered according to government requirements. This suggests that Skype has the capacity to eavesdrop on connections. One of Skype's minority owners, eBay, has divulged user information to the U.S. 
government. Security researchers Biondi and Desclaux have speculated that Skype may have a back door, since Skype sends traffic even when it is turned off and because Skype has taken extreme measures to obfuscate its traffic and the functioning of its program. Several media sources have reported that at a meeting about the "Lawful interception of IP based services" held on 25 June 2008, high-ranking but unnamed officials at the Austrian interior ministry said that they could listen in on Skype conversations without problems. The Austrian public broadcasting service ORF, citing minutes from the meeting, has reported that "the Austrian police are able to listen in on Skype connections". Skype declined to comment on the reports. The Skype client for Linux has been observed accessing the Firefox profile folder during execution. This folder contains all saved passwords in plain text if no master password is used; it also contains the user's browsing history. Access to this file was confirmed by tracing system calls made by the Skype binary during execution. The Skype client for Mac has been observed accessing protected information in the system Address Book even when integration with the Address Book (on by default) is disabled in the Skype preferences. Users may see a warning about Skype.app attempting to access protected information in the Address Book under certain conditions, e.g. launching Skype while syncing with a mobile device. Skype has no legitimate reason to access the Address Book if the integration is not enabled. Further, the extent of the integration is to add all cards from the Address Book to the list of Skype contacts along with their phone numbers, which can be accomplished without accessing any protected information (neither the names nor the numbers on cards are protected); thus the attempt to access information beyond the scope of the integration, regardless of whether that integration is enabled, raises deeper questions of possible spying on users. The United States Federal Communications Commission (FCC) has interpreted the Communications Assistance for Law Enforcement Act (CALEA) as requiring digital phone networks to allow wiretapping if authorized by an FBI warrant, in the same way as other phone services. In February 2009 Skype said that, not being a telephone company owning phone lines, it was exempt from CALEA and similar laws which regulate US phone companies. It is also not clear whether wiretapping of Skype communications is technically possible. According to the ACLU, the Act is inconsistent with the original intent of the Fourth Amendment to the U.S. Constitution; more recently, the ACLU has expressed concern that the FCC's interpretation of the Act is incorrect. References External links Silver Needle in the Skype  — Philippe Biondi VoIP and Skype Security  — Simson Garfinkel Skype Security Evaluation  — Tom Berson Skype security resource center Skype
66146188
https://en.wikipedia.org/wiki/Ananth%20Prabhu%20Gurpur
Ananth Prabhu Gurpur
Ananth Prabhu Gurpur (born 1985), also known as Ananth Prabhu G and G. Ananth Prabhu, is a cybersecurity expert, a professor in computer engineering at the Sahyadri College of Engineering and Management, and an author. He is also a guest faculty member at the Karnataka Police Academy and the Karnataka Judicial Academy. Early life and education Ananth Prabhu grew up in Mangaluru, a city in coastal Karnataka. He did his primary schooling at Rosario English Medium School, Mangaluru. He completed his high school and pre-university course at St Aloysius College, Mangalore. He graduated with a bachelor's degree in computer engineering from Visvesvaraya Technological University, Belgaum. He then completed an MBA in information technology and an MTech in computer engineering from Manipal University. He also holds a diploma in cyber law from Government Law College, Mumbai, and a PhD (Doctor of Philosophy) in computer engineering from Visvesvaraya Technological University, Belgaum. He further completed postdoctoral studies at the University of Houston–Downtown, Texas. Career Gurpur teaches computer science and engineering at the Sahyadri College of Engineering and Management. He has also taught cybersecurity as a guest faculty member at the Karnataka State Police Academy and Karnataka Judicial Academy since 2011. He is the principal investigator of the Centre of Excellence in Digital Forensics Intelligence and Cyber Security Cell in the Department of Computer Science and Engineering at Sahyadri College of Engineering and Management. Gurpur designed the "I am Cyber Safe" safety course aimed at raising awareness of internet safety among people in rural areas, which was formally launched by Home Secretary and IGP (Inspector-General of Police) D. Roopa in 2020. It is part of a website that includes the third edition of the e-book Cyber Safe Girl, which he created with Vivek Shetty to promote cybersafety for women. As part of the "Cyber Safe Girl" campaign, he also worked with an e-waste management initiative that distributed copies of the book to participants. The first edition of the e-book was released in 2017. A fourth edition was released in 2021. Gurpur has developed an indigenous no-touch IoT sanitiser dispenser. Connected over Wi-Fi, the dispenser gives real-time data on usage and has an IR temperature sensor. Commentary In 2020, Prabhu discussed hacking as the "new normal" in response to a bitcoin scam involving compromised Twitter accounts. Prabhu also brought notice to fake oximeter apps for COVID-19, as well as matrimonial scams and sextortion schemes on online dating websites. He has also warned about posting close-up pictures on social media, including WhatsApp, and has explained several ways to improve individual cybersecurity. He also warned about "Saree Challenge" photos being made into fake naked images. He also highlighted EMI deferment fraud in the country and how people who are not fluent in English are being scammed. In 2019, he advocated for cyber security to be included in school curricula, and for cyber laws in India to be updated. He also warned of the dangers of discarded electronics and encouraged safe e-waste management. Public/Social Service Gurpur has been part of other philanthropic initiatives like Ring The Bell in Karnataka (inspired by a similar initiative in Kerala). He has served as an advisor to the Vikas Group of Institutions. Gurpur also supported providing maternity nutrition kits to pregnant women in Dakshina Kannada. Academic Publications Abhir Bhandary, G. 
Ananth Prabhu, V. Rajinikanth, K. Palani Thanaraj, Suresh Chandra Satapathy, David E. Robbins, Charles Shasky, Yu-Dong Zhang, João Manuel R. S. Tavares, N. Sri Madhava Raja; M. S. Sannidhan, G. Ananth Prabhu, David E. Robbins, Charles Shasky. One of his papers has been on the Most Cited Pattern Recognition Letters Articles list since 2018. Non-Academic Publications In addition to his cybersecurity research and writing, his other works include Little Black Book For Teachers, which he wrote in response to a question asked of him about how to become a good teacher, and which was distributed free to teachers following a sponsorship from Krishna J. Palemar, chairman of the Vikas Education Trust, in 2016. In 2017, I Own the Monk's Ferrari, a self-help book about having the correct work-life balance and including a spiritual aspect, was released by Yogi Adityanath, Chief Minister of Uttar Pradesh. In 2021, he released Glorious Bharat, about the history of India. Bibliography
Little Black Book For Students ISBN 978-93-5407-398-4
Little Black Book For Teachers ISBN 978-93-5407-722-7
I Own the Monk's Ferrari ISBN 978-93-5288-365-3
The Text Message That Killed Me ISBN 978-93-5351-598-0
Cyber Safe Girl ISBN 978-93-5382-030-5
Glorious Bharat - Part 1 ISBN 978-93-5406-915-4
Glorious Bharat - Part 2 ISBN 978-93-5406-973-4
Glorious Bharat - Part 3 ISBN 978-93-5426-596-9
The Samurai Who Sold His Suzuki ISBN 978-93-5406-488-3
My Grandfathers Planchet ISBN 978-93-5407-863-7
Awards and accolades Karnataka District Rajyotsava Award 2020 References External links Cyber Safe Girl website Glorious Bharat Website People associated with computer security Living people Kuvempu University faculty 1985 births
2876991
https://en.wikipedia.org/wiki/Electronic%20Games
Electronic Games
Electronic Games was the first dedicated video game magazine published in the United States, running from October 15, 1981 to 1997 under different titles. It was co-founded by Bill Kunkel, Joyce Worley, and Arnie Katz. History The history of Electronic Games originates in the consumer electronics magazine Video. Initially, video games were covered sporadically in Deeny Kaplan's regular "VideoTest Reports" column. In the summer of 1979, Video decided to launch a new column focused on video games. Arcade Alley became a regular column and represented a journalistic first. Written by Bill Kunkel, Arnie Katz (initially writing pseudonymously as Frank T. Laney II), and Joyce Worley, the three writers became close friends, and in 1981 they founded Electronic Games magazine. The magazine was active from Winter 1981, during the golden age of arcade video games and the second generation of consoles, up until 1985, following the video game crash of 1983. The magazine was briefly revived during the 16-bit era in the early 1990s, but ended in 1995, when it was renamed Fusion. Initially, the release of the first issue was scheduled for October 15, 1981. However, the release was postponed to October 29, 1981, and featured a slightly different cover than initially advertised. 1st Run 2nd Run Arcade Awards Electronic Games is notable for hosting the Arcade Awards, or Arkie Awards, the first "Game of the Year" award ceremony, which ran simultaneously in Video's "Arcade Alley" column. The following games are the winners of the magazine's annual Arcade Awards. The awards for each year took place in January of the following year. No single game was allowed to win more than one award in the same year. 1980 Arcade Awards (1979) According to the Winter 1981 issue of Electronic Games, the 1980 Arcade Awards (i.e., the first set of "Arkies") were announced in February 1980 and covered all hardware and software produced prior to January 1, 1980. 1981 Arcade Awards (1980) The 1981 edition of the awards reflects accomplishments during the 12 months of the preceding year. 1982 Arcade Awards (1981) The third annual Arcade Awards were sponsored jointly by Video and Electronic Games and honored outstanding achievements in the field of video games for the year 1981. The 1982 Arcade Awards were published in the March 1982 issue of Electronic Games. 1983 Arcade Awards (1982) The 4th "Arkies" covered games published between October 1, 1981 and October 1, 1982 and were published in the January 1983 issue of Electronic Games. 1984 Arcade Awards (1983) The 5th "Arkies" were published in the January 1984 issue of Electronic Games. 1985 Arkie Awards (1984) The 6th "Arkies" were printed in the January 1985 issue of Electronic Games. 1992 (7th) Following the magazine's revival in 1992, it published the Electronic Gaming Awards in March 1993, where editors nominated several games for each category and readers voted on which games would win. The following were the winners and nominees for 1992. 1993 (8th) The following games were the winners and nominees for the EG Awards of 1993, with nominees chosen by editors and winners voted by readers. Reader polls From May 1982 onwards, the magazine carried out a reader poll in each issue to determine the most popular games of the month among its readers, up until the January 1985 issue. The top-ranking games in these polls are listed below. 
1982
May – Console: Asteroids (Runner-Up: Missile Command); Computer: Star Raiders (Runner-Up: Space Invaders); Arcade: Pac-Man (Runner-Up: Asteroids)
August – Console: Pac-Man (Runner-Up: Missile Command); Computer: Star Raiders (Runner-Up: Jawbreaker); Arcade: Pac-Man (Runner-Up: Tempest)
September – Console: Pac-Man (Runner-Up: Missile Command); Computer: Star Raiders (Runner-Up: Missile Command); Arcade: Pac-Man (Runner-Up: Donkey Kong)
October & November – Console: Defender (Runner-Up: Pac-Man); Computer: Star Raiders (Runner-Up: Missile Command); Arcade: Tempest (Runner-Up: Donkey Kong)
The games that were top-ranked the most in these 1982 polls were: Console: Pac-Man (Runner-Up: Defender); Computer: Star Raiders (Runner-Up: Missile Command); Arcade: Pac-Man (Runner-Up: Tempest)
1983
January – Console: Pitfall! (Runner-Up: Berzerk); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Donkey Kong (Runner-Up: Dig Dug)
May – Console: Pitfall! (Runner-Up: Donkey Kong); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Donkey Kong
June – Console: Donkey Kong (Runner-Up: Zaxxon); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Donkey Kong (Runner-Up: Tron)
July – Console: Pitfall! (Runner-Up: Donkey Kong); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Donkey Kong (Runner-Up: Donkey Kong Jr.)
August – Console: Donkey Kong (Runner-Up: Pitfall!); Computer: Pac-Man (Runner-Up: Star Raiders); Arcade: Zaxxon (Runner-Up: Joust)
September – Console: Donkey Kong Jr. (Runner-Up: Lady Bug); Computer: Star Raiders (Runner-Up: Centipede); Arcade: Pole Position (Runner-Up: Donkey Kong Jr.)
October – Console: Donkey Kong (Runner-Up: River Raid); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Pole Position (Runner-Up: Donkey Kong)
November – Console: Donkey Kong Jr. (Runner-Up: Zaxxon); Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Pole Position (Runner-Up: Q*bert)
December – Console: Donkey Kong Jr. (Runner-Up: Centipede); Computer: Miner 2049er (Runner-Up: Star Raiders); Arcade: Pole Position (Runner-Up: Q*bert)
The games that were top-ranked the most in these 1983 polls were: Console: Donkey Kong / Donkey Kong Jr.; Computer: Star Raiders (Runner-Up: Pac-Man); Arcade: Pole Position (Runner-Up: Donkey Kong)
1984
January – Console: Donkey Kong Jr. (Runner-Up: River Raid); Computer: Miner 2049er (Runner-Up: Star Raiders); Arcade: Dragon's Lair (Runner-Up: Star Wars)
November – Console: Pitfall II (Runner-Up: Miner 2049er); Computer: Zork I (Runner-Up: Buck Rogers); Arcade: Dragon's Lair (Runner-Up: Star Wars)
December – Computer: Zork I; Arcade: Spy Hunter (Runner-Up: Track & Field)
The games that were top-ranked the most in these 1984 polls were: Console: Donkey Kong Jr. / Pitfall II; Computer: Zork I (Runner-Up: Miner 2049er); Arcade: Dragon's Lair (Runner-Up: Spy Hunter)
1985
January – Console: Pitfall II (Runner-Up: Q*bert); Computer: Miner 2049er (Runner-Up: Donkey Kong); Arcade: Star Wars (Runner-Up: Dragon's Lair)
There was no reader poll held for the March 1985 issue. Hall of Fame The twelve games voted by readers into the magazine's Hall of Fame up until January 1985: 
Pong (1972)
Space Invaders (1978)
Asteroids (1979)
Star Raiders (1979)
Defender (1980)
Major League Baseball (1980)
Pac-Man (1980)
Donkey Kong (1981)
Quest for the Rings (1981)
Miner 2049er (1982)
Zaxxon (1982)
Dragon's Lair (1983)
References External links Article on the first issue of Electronic Games PDF magazine repository at archive.org PDF magazine repository at digitpress.com Video game magazines published in the United States Magazines established in 1981 Magazines disestablished in 1994 Defunct computer magazines published in the United States Magazines published in New York City
38299507
https://en.wikipedia.org/wiki/Iptor%20Supply%20Chain%20Systems
Iptor Supply Chain Systems
Iptor Supply Chain Systems, formerly International Business Systems (IBS), is a supply chain management company that provides professional services and enterprise resource management software for distributors and wholesalers, with its headquarters in Stockholm, Sweden. It was previously publicly traded on the Stockholm Stock Exchange and has offices in several countries. It is rated by AMR Research and Frost & Sullivan as the largest supply chain execution solutions company by revenue. The company rebranded as Iptor in September 2016. History The company was founded as IBS in 1978 by Staffan Ahlberg and Gunnar Rylander, who formed the company by turning the IT division of Ekonomisk Företagsledning into an independent company. Their first major project was to develop an order processing system for Alfa Laval subsidiaries. The company went public on the Stockholm Stock Exchange in 1986. Around the same time it entered into a partnership with IBM and became a supplier of software for the IBM AS/400. Ahlberg retired as the CEO of IBS in 2002 and was the longest-serving CEO of a public company in Europe. IBS expanded into Asia in 1996. The company also has a presence in the United States. IBS purchased the Australian-based international software developer IDS Enterprise Systems in 2005. At the time of the deal, IDS had revenue of €12 million, more than 100 employees, and operations in the United Kingdom, the Netherlands, Australia, and Thailand. In 2011, IBS was purchased by Symphony Technology Group. The deal made Symphony a 94.9% shareholder of IBS, and the company went from being publicly traded to private. Products and services IBS offers supply chain management software and services to distributors and wholesalers, from small and medium businesses to Fortune 500 companies. It also offers logistics, demand management, customer relationship management, financial management, and business intelligence services. IBS Enterprise distribution resource management is a software suite from IBS that automates supply chain management, including inventory planning, purchasing and supplier management, warehouse optimization, value-added services, demand management, and returns processing. IBS provides a publishing-specific platform referred to as Bookmaster. The platform is enterprise management business software designed for publishers and book distributors in both the print and digital markets. It incorporates financial and supply chain management within the software and also integrates with web-based financial transactions and business management processes. IBS also offers a platform referred to as Dynaman, which supports warehouse operations. The software is designed to improve process control, data capture, and visibility of inventory. It also allows integration with supply chain partner operations. IBS also offers a connectivity platform designed to allow organizations and applications to connect and communicate with each other. Referred to as the IBS Integrator, it contains more than 12 solution-specific adapters which enable it to connect between different systems and business partners. IBS has won multiple awards for the IBS Integrator. Awards IBS received the Supply-Chain Council Award in 2003, and was named to the list of Top 100 Companies in 2002 by Frontline magazine. References Software companies established in 1978 Companies based in Stockholm Supply chain software companies Swedish companies established in 1978
1636870
https://en.wikipedia.org/wiki/Form%20W-2
Form W-2
Form W-2 (officially, the "Wage and Tax Statement") is an Internal Revenue Service (IRS) tax form used in the United States to report wages paid to employees and the taxes withheld from them. Employers must complete a Form W-2 for each employee to whom they pay a salary, wage, or other compensation as part of the employment relationship. An employer must mail out the Form W-2 to employees on or before January 31. This deadline gives these taxpayers about 2 months to prepare their returns before the April 15 income tax due date. The form is also used to report FICA taxes to the Social Security Administration. The Form W-2, along with Form W-3, generally must be filed by the employer with the Social Security Administration by the end of February. Relevant amounts on Form W-2 are reported by the Social Security Administration to the Internal Revenue Service. In territories, the W-2 is issued with a two-letter code indicating the territory, such as W-2GU for Guam. If corrections are made, they can be reported on a Form W-2c. Significance for employee's tax return Form W-2 includes wage and salary information as well as federal, state, and other taxes that were withheld. This information is used by the employee when they complete their individual tax return using Form 1040. When an employee prepares their individual tax return for a tax year, the withholding amount from Form W-2 is subtracted from the tax due. It is possible to receive a refund from the IRS if more income was withheld than necessary. Since the IRS receives a copy of the W-2 from the employer, if the amount reported on the W-2 does not match the amount reported on Form 1040, the IRS may get suspicious. In addition, if an individual does not pay the required amount of taxes, the IRS will also know this. In this way, the IRS uses Form W-2 as a way to track an employee's tax liability, and the form has come to be seen as a formal proof of income. The Social Security Administration, court proceedings, and applications for federal financial aid for college may all use Form W-2 as proof of income. The employee receives three copies of Form W-2: one for the record, one for the federal tax return, and one for the state tax return. Form W-2 must be attached to one's individual tax return; this is to substantiate claims of withholding. Employees are required to report their wage, salary, and tip income even if they don't receive a Form W-2 for it. Tip income Employees are required to report their tip income to their employers (usually using Form 4070). Tips are subject to income withholding. There are various other requirements when handling tips for tax purposes. Filing requirements Form W-2 must be completed by employers and mailed to employees by January 31. The deadline for filing electronic or paper Forms W-2 with the Social Security Administration (SSA) is also January 31. If over 250 instances of Form W-2 are being filed for the year, electronic filing is required. The form consists of six copies:
Copy A – Submitted by the employer to the Social Security Administration. (In addition, the employer must also submit Form W-3, which is a summary of all Forms W-2 completed, along with all Copies A submitted. The Form W-3 must be signed by the employer.) 
Copy B – To be sent to the employee and filed by the employee with the employee's federal income tax returns.
Copy C – To be sent to the employee, to be retained by the employee for the employee's records.
Copy D – To be retained by the employer, for the employer's records.
Copy 1 – To be filed with the employer's state or local income tax returns (if any).
Copy 2 – To be filed with the employee's state or local income tax returns (if any).
Employers are instructed to send copies B, C, 1, and 2 to their employees generally by January 31 of the year immediately following the year of income to which the Form W-2 relates, which gives these taxpayers about 2 months before the April 15 income tax due date. The Form W-2, with Form W-3, generally must be filed by the employer with the Social Security Administration by the end of February. Filing modalities Traditionally Form W-2 has been completed on paper. Tax compliance software such as TurboTax allows the form to be completed electronically. For paper filing, Form W-2 can be ordered from the IRS website. When filing by paper, Copy A of the form cannot be printed from the IRS website; in other words, the official form ordered from the IRS must be used. Penalties Late filings within 30 days of the due date incur a penalty of $30 per form. After 30 days but before August 1, the penalty increases to $60 per form (capped between $200 and $500, depending on the size of the business). After August 1, the penalty increases to $100 per form (capped between $500 and $1,500, depending on the size of the business). The penalty for a single incorrect Form W-2 is $250 per receiving party (capped annually at $3 million); this means a single incorrect Form W-2 sent to both the employee and the IRS incurs a penalty of $500. The penalty for intentionally failing to file is $500. Further penalties exist for illegible forms and for filing by paper even past the 250-form limit. History Use of Form W-2 was established by the Current Tax Payment Act of 1943 as part of an effort to withhold income tax at the source. The first Forms W-2 were issued to employees in 1944. In 1965, the form's name was changed from "Withholding Tax Statement" to "Wage and Tax Statement" (the current name). In 1978, the form's appearance changed to its modern style of numbered boxes. As with the US tax code and other forms (such as the 1040), Form W-2 has become more complicated over time. The penalty for incorrect forms was increased in 2015. Phishing scheme involving W-2s In March 2016, the IRS issued an alert concerning a new type of phishing email attack which attempts to lure human resources, accounting, or payroll staff into disclosing the W-2 information of all employees within a company, presumably intended for use in tax-related identity theft, which the IRS defines as "...when someone uses your stolen Social Security number to file a tax return claiming a fraudulent refund." This may give a cybercriminal enough information to fraudulently file a tax return on the victim's behalf and direct the tax refund to the cybercriminal's bank account. This phishing scheme is particularly characterized by its use of spear-phishing (emails sent to specific individuals) and email spoofing to pose as a company executive requesting the W-2 information, thereby increasing the urgency of the response and catching payroll staff off-guard: "Can you send me the updated list of employees with full details (Name, Social Security Number, Date of Birth, Home Address, Salary)." 
"Kindly send me the individual 2015 W-2 (PDF) and earnings summary of all W-2 of our company staff for a quick review." "I want you to send me the list of W-2 copy of employees wage and tax statement for 2015, I need them in PDF file type, you can send it as an attachment. Kindly prepare the lists and email them to me asap." Large companies such as Snap Inc., Mansueto Ventures, and Seagate fell victim to this phishing scheme in early March 2016. Those in the cybersecurity industry categorize this phishing scheme as a type of CEO Fraud, while the FBI's Criminal, Cyber, and International Operations Divisions classify it as a type of "business email compromise" or BEC. Analogs in other countries See also Form 1040 Form W-9 Form W-4 Form 1099 Tax withholding in the United States References External links Forms W-2 Instructions for Forms W-2 and W-3 W-2 W-2 W-2
30438137
https://en.wikipedia.org/wiki/ARTAS
ARTAS
ARTAS (ATM suRveillance Tracker And Server) is a system designed by Eurocontrol to operationally support aerial surveillance and air traffic control by establishing an accurate Air Situation Picture of all traffic over a pre-defined geographical area (e.g. ECAC) and then distributing the relevant surveillance information to a community of user systems.

ARTAS users
A User of ARTAS, in a general sense, is defined in this context as any ATC subsystem having a requirement to receive at defined instants the best and most up-to-date estimate of all or selected aircraft state vector elements for all air traffic of interest to this User, e.g. an Operator Display System, Flight Data Processing System, ATC Tools, Flow-Control Management System, Sequencing and Metering system, remote TMAs, military units, etc.

ARTAS is a distributed system composed of a number of identical subsystems co-operating together. Each subsystem, called an ARTAS Unit, processes all surveillance sensor data to form a best track estimate of the current air traffic situation within a given Domain of Interest. Adjacent ARTAS Units co-ordinate their tracks to build a unique, coherent and continuous Air Situation Picture over the complete area.

ARTAS unit groups
Four groups of main functions are implemented in an ARTAS Unit:
The TRACKER processes the sensor input data to maintain a real-time air situation, represented in a Track Data Base.
The SERVER performs the Track and Sensor Information Services, i.e. the management of all requests from Users and the transmission of the relevant sets of track/sensor data to these Users, as well as the so-called inter-ARTAS co-operation functions.
The ROUTER BRIDGE manages the external interfaces to the Normal Users, the Broadcast Users, the Adjacent ARTAS Units and the surveillance sensors. It also implements the Surveillance Sensor Input Processing function.
The SYSTEM MANAGER processes the functions related to the human supervision and management of the ARTAS Unit.

ARTAS History
In April 1990, the Ministers of Transport of the European Civil Aviation Conference (ECAC) launched the "En-route Strategy for the 1990s", a multilateral strategy designed to ensure that, by the end of the century, air traffic control capacity would match forecast demand.

EATCHIP: This strategy drove an initial programme, EATCHIP, the European ATC Harmonisation and Integration Programme, developed and managed by EUROCONTROL to undertake the progressive harmonisation and integration of air traffic services throughout the ECAC area.

EATMP: A programme has been established to provide the framework for meeting the objective of the ATM 2000+ Strategy: EATMP, the European Air Traffic Management Programme, as a follow-up of EATCHIP. The overall objective of the "ATM 2000+ Strategy" is to enable the safe, economic, expeditious and orderly flow of traffic through the provision of ATM services which are adaptable to the requirements of all users and areas of European airspace for all phases of flight. These services are to accommodate demand, be globally inter-operable, operate to uniform principles, be environmentally sustainable and satisfy national security requirements.
In the context of EATCHIP and the continuing EATMP, it was proposed that a progressive integration of European Surveillance Data Processing Systems (SDPS) be introduced, with the aims to:
make use of advanced tracking techniques,
eliminate the known shortcomings of tracking systems,
allow an efficient service of processed surveillance information to a large variety of user systems, and
allow a seamless integrated operation in a multi-centre environment across Europe.

Launch of ARTAS
With this aim, the development of a prototype ATM suRveillance Tracker And Server (ARTAS) was undertaken, which would function as a pilot system for the systems to be implemented in the forthcoming decade by EUROCONTROL Member and other ECAC States. The phased development of ARTAS started in June 1993. In parallel to the development, a comprehensive evaluation programme of the successive versions of the ARTAS pilot system was undertaken with a number of national ATM organisations. Following the evaluation programmes, ARTAS was gradually introduced as an operational system, and it is part of the surveillance infrastructure of numerous operational centres in Europe. It is believed that such an implementation of a harmonised Surveillance Data Processing System like ARTAS, together with the wide introduction of radar data networks and the improvements of radar sensors, created the necessary technical conditions for improved ATM co-ordination and the uniform application of radar separation minima throughout a large part of Europe.

In addition to the ARTAS system, a Central ARTAS software Maintenance and Operational Support (CAMOS) service was created for those versions of ARTAS that are in pre-operational or operational use. CAMOS collects the ARTAS system data and parameters needed to ensure the exchanges of surveillance data between the States implementing ARTAS, and provides software enhancements to the commonly developed ARTAS system for functionality required by changes in the technical environment or common user needs. Frequentis Comsoft has been EUROCONTROL's industrial partner for Centralised ARTAS Maintenance and Operational Support (CAMOS) since 2001 and is also a turnkey supplier of ARTAS systems.

Further evolution of the ARTAS system takes place following user requests, requests from surveillance-related programmes, or technological advances. Considerable savings are realised because of the large-scale implementation of ARTAS. These benefits are due to the use of a common development and support structure instead of many independent structures.

ARTAS today
ARTAS Architecture
ARTAS is developed as a regional system concept consisting of individual surveillance data processing and distribution units, which together operate as one entity. Each subsystem, called an ARTAS Unit, processes all surveillance data reports - i.e. radar reports (including Mode S), ADS reports, and multilateration reports - to form a best estimate of the current air traffic situation within a given Domain of Interest. The system operates on the basis of defined blocks of airspace known as "domains". Each ATM Surveillance Unit (ASU) tracks all traffic in its own defined airspace, known as a "Domain of Operation". The domains of operation of adjacent units overlap. In the areas of overlap, inter-unit track coordination functions take place, ensuring system tracking continuity. In this manner, adjacent ARTAS Units can co-ordinate their tracks to build a unique, coherent and continuous Air Situation Picture over the complete area.
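The server side of this architecture, which delivers to each user system only the subset of the track picture matching its requested domain, can be illustrated with a small sketch. The following Python fragment is purely illustrative: the class and field names are invented for this example and do not represent ARTAS's actual interfaces, which are defined by its interface control documents and the ASTERIX surveillance data format.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    track_id: int
    lat: float        # latitude, degrees
    lon: float        # longitude, degrees
    callsign: str     # flight-plan correlated information, when available

@dataclass
class Domain:
    """A user's domain of interest, simplified here to a lat/lon bounding box."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, t: Track) -> bool:
        return (self.lat_min <= t.lat <= self.lat_max
                and self.lon_min <= t.lon <= self.lon_max)

class TrackServer:
    """Fans each track update out to the user systems whose domain matches."""

    def __init__(self) -> None:
        self.subscriptions: list[tuple[Domain, Callable[[Track], None]]] = []

    def subscribe(self, domain: Domain, deliver: Callable[[Track], None]) -> None:
        # A user system registers the domain for which it wants tracks.
        self.subscriptions.append((domain, deliver))

    def update(self, track: Track) -> None:
        # The tracker pushes its latest best estimate; the server distributes
        # it only to the users whose domain of interest contains the track.
        for domain, deliver in self.subscriptions:
            if domain.contains(track):
                deliver(track)

# Usage: a display system subscribes to a domain and receives matching tracks.
server = TrackServer()
server.subscribe(Domain(48.0, 52.0, 2.0, 8.0),
                 lambda t: print("display received:", t))
server.update(Track(track_id=1, lat=50.1, lon=4.4, callsign="BAW123"))
```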
The seamless integration of all Units permits the application of 5 NM separation minima throughout the total covered area, including at transfers of traffic from one ATC unit to the next. 3 NM separation can be applied following an in-depth operational evaluation of the ARTAS behaviour in the local environment.

Through a User/Server type of interface, the systems connected to ARTAS (e.g. a local display system, or remote users such as terminal areas without their own radar data processing system, flow management units, etc.) may, in a very flexible way, define exactly the modalities of the track services to be provided, e.g. the area (domain) for which they wish to receive a specific sub-set of processed surveillance data. The flexibility of the system is such that the domain of the user is not limited to the Domain of Operation of the Unit in which the user is situated, but may encompass airspace in the Domains of Operation of several adjacent ATM Surveillance Units.

In addition to the so-called track state vector elements (position, speed, mode-of-flight, etc.) maintained by the Tracker, the served tracks comprise other information of interest for the user systems, including flight-plan related information provided by the Flight Data Processing Systems that use ARTAS (callsign, departure/arrival airports, type of aircraft, etc.).

ARTAS Versions
The initial versions of ARTAS were developed using the DOD-STD-2167A standard, which remains in use as the structure for the main documents concerning the ARTAS product itself at system and CSCI levels. Later documents such as plans follow MIL-STD-498 (this document is now available from IEEE, repackaged as J-STD-016-1995 with the various DIDs contained in normative annexes). More recently, all the CAMOS-related activities follow the ED-153 SWAL3 processes.

With the ARTAS V8B version, the ARTAS Unit is hosted on one ARTAS Station HWCI (AS). A dual ARTAS system consists of two identical computers (chains) (master and slave) and their associated peripherals. The internal LAN (Ethernet/bonding) is also defined as a HWCI (IL HWCI). The TRK, SRV, RBR, MMS and REC CSCIs of each chain are executed on only one node. Furthermore, with the ARTAS V8B version the MW (ARTAS Middleware) CSCI replaces the off-the-shelf (OTS) product UBSS. Without counting the COTS software, the ARTAS software (middleware and application software) represents a total of about 1.8 million lines of source code, written in Ada, C, C++, Python and shell scripts.

With ARTAS V8B3, the MMS CSCI integrates a modern, ergonomic and flexible graphical user interface based on a client/server architecture. Furthermore, the new MMS CSCI does not need any commercial-off-the-shelf products related to the database (namely Ingres) or the graphical user interface (Ilog Wave). The ARTAS software, considering on-line and off-line components, middleware and application software, represents a total of about 2.1 million lines of source code, written in Ada, C, Java, Python and shell scripts.

Further ARTAS versions will be developed to include ground tracking (gate to gate). At present, an ARTAS Surface Movement Surveillance prototype extension (namely ARTAS SMSp) is available for evaluation purposes.

ARTAS V9.0.0 addresses over 60 issues reported by its users, including improved tracking in the vicinity of airports by integrating the System Manager System (SMS) prototype. Additional improvements were made to the management of ADS-B, as well as the system's overall cyber security.
This version also brings ASTERIX CAT048 extended range, automatic migration of the StaticGeo & Parameter datasets at DM load, and extension of the Split Plot Filter (SPF) to Mode-S data.

ARTAS V9.0.1 addresses over 90 issues reported by its users, including some mitigations for ADS-B issues, various improvements for WAM, a monitor of the ARTAS processes, as well as some upgrades against cyber security threats. This version also brings major code clean-up and refactoring that will ease maintenance and evolution of ARTAS in the future. In addition, it introduces several welcome improvements to requirement traceability and quality & safety processes.

ARTAS Service Management and Actors
All changes in ARTAS are made under the control of the ARTAS User Group (AUG) Change Control Board. ARTAS maintenance is shared between CAMOS (Centralised ARTAS Maintenance and Operational Support) and LAMOS (Local ARTAS Maintenance and Operational Support). CAMOS services do not include hardware support, but if required by the users, the Agency may decide to propose a service for ARTAS hardware support as well. In the context of CAMOS, the EUROCONTROL Agency (ARTAS Service Manager) acts as AUG representative and provides directives to the service provider on behalf of the AUG.

Maintenance Activities
The activities of maintenance and support are performed by the Agency, where EUROCONTROL experts, in conjunction with their industrial partners, ensure support to the operational ARTAS units as well as the pre-operational units, as defined through a Service Level Agreement with ARTAS users. Key performance indicators are used to set targets and to drive improvements by the service provider.

ARTAS Support
CAMOS, the Agency's technical support team, is located at Brussels HQ, whereas the software maintenance and development is subcontracted to their industrial partner, currently Frequentis Comsoft. At present ARTAS is in operation at thirty-one ATC centres in Europe and under evaluation in many other European states. In total, around 100 ARTAS units are currently implemented, or in the course of implementation, at more than 40 European sites.

References

External links

Air traffic control systems
661281
https://en.wikipedia.org/wiki/Self-stabilization
Self-stabilization
Self-stabilization is a concept of fault-tolerance in distributed systems. Given any initial state, a self-stabilizing distributed system will end up in a correct state in a finite number of execution steps.

At first glance, the guarantee of self-stabilization may seem less promising than that of the more traditional fault-tolerance of algorithms, which aim to guarantee that the system always remains in a correct state under certain kinds of state transitions. However, that traditional fault tolerance cannot always be achieved. For example, it cannot be achieved when the system is started in an incorrect state or is corrupted by an intruder. Moreover, because of their complexity, it is very hard to debug and to analyze distributed systems. Hence, it is very hard to prevent a distributed system from reaching an incorrect state. Indeed, some forms of self-stabilization are incorporated into many modern computer and telecommunications networks, since self-stabilization gives them the ability to cope with faults that were not foreseen in the design of the algorithm.

Many years after the seminal paper of Edsger Dijkstra in 1974, this concept remains important, as it presents an important foundation for self-managing computer systems and fault-tolerant systems. As a result, Dijkstra's paper received the 2002 ACM PODC Influential-Paper Award, one of the highest recognitions in the distributed computing community. Moreover, after Dijkstra's death, the award was renamed and is now called the Dijkstra Award.

History
E.W. Dijkstra in 1974 presented the concept of self-stabilization, prompting further research in this area. His demonstration involved the presentation of self-stabilizing mutual exclusion algorithms. It also showed the first self-stabilizing algorithms that did not rely on strong assumptions about the system. Some previous protocols used in practice did actually stabilize, but only assuming the existence of a clock that was global to the system, and assuming a known upper bound on the duration of each system transition.

It was only ten years later, when Leslie Lamport pointed out the importance of Dijkstra's work at a 1983 conference called the Symposium on Principles of Distributed Computing, that researchers directed their attention to this elegant fault-tolerance concept. In his talk, Lamport stated:

I regard this as Dijkstra's most brilliant work - at least, his most brilliant published paper. It's almost completely unknown. I regard it to be a milestone in work on fault tolerance... I regard self-stabilization to be a very important concept in fault tolerance and to be a very fertile field for research.

Afterwards, Dijkstra's work was awarded the ACM PODC Influential-Paper Award, which then became the ACM's (Association for Computing Machinery) Dijkstra Prize in Distributed Computing, given at the annual ACM PODC symposium.

Overview
A distributed algorithm is self-stabilizing if, starting from an arbitrary state, it is guaranteed to converge to a legitimate state and remain in a legitimate set of states thereafter. A state is legitimate if, starting from this state, the algorithm satisfies its specification. The property of self-stabilization enables a distributed algorithm to recover from a transient fault regardless of its nature. Moreover, a self-stabilizing algorithm does not have to be initialized, as it eventually starts to behave correctly regardless of its initial state.
Dijkstra's paper, which introduces the concept of self-stabilization, presents an example in the context of a "token ring": a network of computers ordered in a circle. Here, each computer or processor can "see" the whole state of the one processor that immediately precedes it, and that state may imply that the processor "has a token" or "does not have a token". One of the requirements is that exactly one of them must "hold a token" at any given time. The second requirement prescribes that each node "passes the token" to the computer/processor succeeding it, so that the token eventually circulates the ring (a runnable simulation of such a ring appears below).

Not holding a token is a correct state for each computer in this network, since the token can be held by another computer. However, if every computer is in the state of "not holding a token", then the network as a whole is not in a correct state. Similarly, if more than one computer "holds a token", then this is not a correct state for the network, although it cannot be observed to be incorrect by viewing any computer individually. Since every computer can "observe" only the states of its two neighbors, it is hard for the computers to decide whether the network as a whole is in a correct state.

The first self-stabilizing algorithms did not detect errors explicitly in order to subsequently repair them. Instead, they constantly pushed the system towards a legitimate state. Since traditional methods for detecting an error were often very difficult and time-consuming, such a behavior was considered desirable. (The method described in the paper cited above collects a huge amount of information from the whole network to one place; after that, it attempts to determine whether the collected global state is correct; even that determination alone can be a hard task.)

Efficiency improvements
More recently, researchers have presented newer methods for light-weight error detection for self-stabilizing systems, using local checking, and for general tasks. The term local refers to a part of a computer network. When local detection is used, a computer in a network is not required to communicate with the entire network in order to detect an error: the error can be detected by having each computer communicate only with its nearest neighbors. These local detection methods simplified the task of designing self-stabilizing algorithms considerably, because the error detection mechanism and the recovery mechanism can be designed separately. Newer algorithms based on these detection methods also turned out to be much more efficient. Moreover, these papers suggested rather efficient general transformers to transform non-self-stabilizing algorithms into self-stabilizing ones. The idea is to:
run the non-self-stabilizing protocol;
at the same time, detect faults (during the execution of the given protocol) using the above-mentioned detection methods;
then, apply a (self-stabilizing) "reset" protocol to return the system to some predetermined initial state; and,
finally, restart the given (non-self-stabilizing) protocol.
The combination of these four parts is self-stabilizing, as long as no fault is triggered during the fault-correction phases. Initial self-stabilizing protocols were also presented in the above papers, and more efficient reset protocols were presented later.

Additional efficiency was introduced with the notion of time-adaptive protocols. The idea behind these is that when only a small number of errors occurs, the recovery time can (and should) be made short.
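Dijkstra's token ring described at the start of this section can be simulated in a few lines. The sketch below is an illustration, not Dijkstra's original formulation (which is stated in terms of guarded commands): it simulates his K-state machines under a randomized central scheduler and checks both convergence and closure empirically.

```python
import random

N = 5          # machines in the ring
K = N + 1      # with K >= N, convergence is guaranteed

# An arbitrary, possibly illegitimate, initial state.
state = [random.randrange(K) for _ in range(N)]

def privileged(i: int) -> bool:
    """Machine 0 holds a privilege ("token") iff its state equals that of its
    predecessor (machine N-1); every other machine holds one iff its state
    differs from its predecessor's. At least one machine is always privileged."""
    same = state[i] == state[i - 1]     # Python's state[-1] closes the ring
    return same if i == 0 else not same

def move(i: int) -> None:
    """Let privileged machine i make its move."""
    if i == 0:
        state[0] = (state[0] + 1) % K   # machine 0 counts modulo K
    else:
        state[i] = state[i - 1]         # the others copy their predecessor

def legitimate() -> bool:
    """Legitimate states are exactly those with one privileged machine."""
    return sum(privileged(i) for i in range(N)) == 1

# Convergence: from the arbitrary initial state, a legitimate state is
# reached after finitely many moves, whichever privileged machine moves.
moves = 0
while not legitimate():
    move(random.choice([i for i in range(N) if privileged(i)]))
    moves += 1
print(f"converged after {moves} moves")

# Closure: once legitimate, every subsequent move preserves legitimacy,
# and the single privilege circulates around the ring.
for _ in range(1000):
    move(random.choice([i for i in range(N) if privileged(i)]))
    assert legitimate()
```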
Dijkstra's original self-stabilization algorithms do not have this time-adaptive property.

A useful property of self-stabilizing algorithms is that they can be composed of layers if the layers do not exhibit any circular dependencies. The stabilization time of the composition is then bounded by the sum of the individual stabilization times of each layer.

New approaches to Dijkstra's work emerged later on, such as the proposition of Krzysztof Apt and Ehsan Shoja, which demonstrated how self-stabilization can be naturally formulated using the standard concepts of strategic games, particularly the concept of an improvement path. This particular work sought to demonstrate the link between self-stabilization and game theory.

Time complexity
The time complexity of a self-stabilizing algorithm is measured in (asynchronous) rounds or cycles. A round is the shortest execution trace in which each processor executes at least one step. Similarly, a cycle is the shortest execution trace in which each processor executes at least one complete iteration of its repeatedly executed list of commands.

To measure the output stabilization time, a subset of the state variables is defined to be externally visible (the output). Certain states of outputs are defined to be correct (legitimate). The set of the outputs of all the components of the system is said to have stabilized at the time that it starts to be correct, provided it stays correct indefinitely, unless additional faults occur. The output stabilization time is the time (the number of (asynchronous) rounds) until the output stabilizes.

Definition
A system is self-stabilizing if and only if:
Starting from any state, it is guaranteed that the system will eventually reach a correct state (convergence).
Given that the system is in a correct state, it is guaranteed to stay in a correct state, provided that no fault happens (closure).
A system is said to be randomized self-stabilizing if and only if it is self-stabilizing and the expected number of rounds needed to reach a correct state is bounded by some constant k.

Design of self-stabilization in the above-mentioned sense is well known to be a difficult job. In fact, a class of distributed algorithms does not have the property of local checking: the legitimacy of the network state cannot be evaluated by a single process. The most obvious case is Dijkstra's token ring defined above: no process can detect whether the network state is legitimate or not in the case where more than one token is present in non-neighboring processes. This suggests that self-stabilization of a distributed system is a sort of collective intelligence, where each component takes local actions based on its local knowledge, but eventually this guarantees global convergence.

To help overcome the difficulty of designing self-stabilization as defined above, other types of stabilization were devised. For instance, weak stabilization is the property that a distributed system has a possibility to reach its legitimate behavior from every possible state. Weak stabilization is easier to design, as it just guarantees a possibility of convergence for some runs of the distributed system, rather than convergence for every run.

A self-stabilizing algorithm is silent if and only if it converges to a global state where the values of communication registers used by the algorithm remain fixed.

Related work
An extension of the concept of self-stabilization is that of superstabilization.
The intent here is to cope with dynamic distributed systems that undergo topological changes. In classical self-stabilization theory, arbitrary changes are viewed as errors where no guarantees are given until the system has stabilized again. With superstabilizing systems, there is a passage predicate that is always satisfied while the system's topology is reconfigured.

References

External links
libcircle - An implementation of self-stabilization using token passing for termination.

Distributed computing problems
Fault-tolerant computer systems
Edsger W. Dijkstra
Dutch inventions
6041906
https://en.wikipedia.org/wiki/JSTV
JSTV
JSTV is a Japanese television broadcasting company serving viewers in Europe, the Middle East, and North Africa. Launched in March 1990 and broadcasting from London, it carries the programming of the NHK World Premium service in the regions served.

The channel initially broadcast for two hours each night from 8pm (GMT) on the Lifestyle transponder 5 on the Astra 1A satellite in analogue format (frequency 11.273 GHz, time-sharing with The Children's Channel, Lifestyle and The Lifestyle Satellite Jukebox). Later, on 3 June 1991, it started using transponder 24 on Astra 1B, at frequency 11.567 GHz, for 11 hours a day, using Videocrypt II encryption (time-sharing with The Children's Channel and later with CMT Europe). It eventually moved to transponder 53 (frequency 10.773 GHz) to broadcast 24 hours a day. Analogue transmissions for JSTV on Astra ceased on 31 October 2001.

JSTV currently broadcasts in DVB-S on Eutelsat Hotbird 6, encrypted in Conax except for some programmes, and carries programmes of NHK, Fuji TV, TV Tokyo and other main Japanese broadcasters. News programmes are mostly direct and live from the original broadcaster; however, several other programmes, such as anime and variety shows, are not up to date. Not all programmes are encrypted; NHK News 7, News Watch 9 and some English-language programmes are broadcast free-to-air.

JSTV currently operates two channels: JSTV1, which broadcasts TV programmes approximately 20 hours a day, and JSTV2, which broadcasts TV programmes 24 hours a day.

Stockholders
NHK Enterprise, Inc.
Marubeni Corporation Plc
Mizuho Corporate Bank, Ltd.
All Nippon Airways, Co. Ltd
Mitsukoshi UK, Ltd.
NHK Global Media Services, Inc.

Encryption & Availability
Satellite
DVB-S MPEG-2: Hot Bird 6 (12597 MHz, Vertical, 3/4, VPID 2000, APID 2002/2001), encrypted (Conax). Cryptoworks encryption was discontinued on 30 September 2019.
Analogue: ASTRA [closed down on 31 October 2001]

Hotels
Currently, JSTV broadcasts in several hotels in Europe and the Middle East. A complete list is available on the official web site of JSTV and on NHK's English page "Overseas hotels carrying NHK".

JSTV-i
JSTV is available through "JSTV-i" to European consumers via Western Digital's WD TV LIVE STB. More information is available on the official JSTV-i web site.

Channels
JSTV 1: TV programmes
JSTV 2: Radio programmes (NHK World Radio Japan) and the JSTV 1 TV schedule on screen. (Since 31 March 2008 JSTV 2 has broadcast TV programmes from 1700 to 2200 UK time, 7 days a week, and since 1 April 2009 it broadcasts TV programmes from 0500 to 1000 and then 1500 to 2200 UK time.)

Programmes
Dual language News (Japanese & English)
NHK News 7
NHK Newswatch 9
Global Debate WISDOM
Sumo News
NHK/BS NEWS
Good Morning, Japan
Sunday Sports News
FNN Speak
FNN Super News
Today's Close-up

Variety
Hello from Studio Park (スタジオパークからこんにちは) NHK
Riddles on Mobile (着信御礼!ケータイ大喜利) NHK
VS Arashi (VS嵐) Fuji TV
Shoten (笑点) NTV
Why did you come to Japan? (Youは何しに日本へ?) TV Tokyo
Peke X Pon (ペケXポン) Fuji TV

Documentary
NHK Special (NHKスペシャル) NHK
The Professionals (プロフェッショナル仕事の流儀) NHK
Document 72hours (ドキュメント72時間) NHK

Anime
Ace of Diamond (ダイヤのA)
Keroro (ケロロ軍曹)
Chibi Maruko-chan (ちびまる子ちゃん)

Kids
Grand Whiz-Kids TV (大!天才てれびくん)
Okaasan to Issho (With Mother おかあさんといっしょ)
Inai Inai Baa! (いないいないばあっ!)
Play Japanese (にほんごであそぼ)
Play English (えいごであそぼ)

A complete programme list is available here.

Related Channels
NHK World Premium
NHK World Premium Australia

External links

NHK
Japanese-language television stations
23322819
https://en.wikipedia.org/wiki/Registrar%20%28software%29
Registrar (software)
Registrar was software used in the personnel or human resources (HR) area of businesses. It was the first piece of software developed to provide HR with the ability to manage training administration: booking people on courses, sending call-up letters, and recording their attendance. It enabled HR users to build their own data dictionaries without any help from their IT people.

The Registrar software was created by Silton-Bookman Systems (SBS). It was launched in the US in 1984 and became one of the leading training administration software programs on the market, with over 5,000 installations. It was eventually incorporated into an LMS when SBS merged with Pathlore in 2000, and Registrar itself is therefore no longer available for sale. Pathlore was subsequently acquired by SumTotal Systems in 2005.

Silton-Bookman
Phil Bookman and Richard Silton formed SBS, based in Cupertino, California, in 1983. Richard Silton had been the HRD manager for Memorex and was frustrated that his IT department could not provide him with software to manage the training of his company's employees. He mentioned this to his friend Phil Bookman, who told him that he could write the software to do this, so they started the company. The "killer application" part of the product was that users were able to configure the data dictionary themselves without the need for IT assistance, which made the product unique to each client's requirements.

Releases
It took SBS about six months to produce a working system, and the first release of Registrar was version 1.1 in 1984, which ran under the DOS operating system and used the Btrieve database management system from Softcraft in Austin, Texas. Version 1.1 was chosen as SBS thought that no one would buy a version 1.0, just as there was never a dBase I, the first release being dBase II. There were various further releases, and version 1.34 became the stable DOS version. With the advent of the Windows operating system, SBS released a Windows version in 1994, this time using the Xbase database management system. A SQL version was released some time later, quickly followed by an Oracle version, both running in parallel with the Windows Xbase version.

Marketing
SBS started off by producing demonstration disks which clients could have without charge; then, if they liked the product, they could order a working version. The demonstration came preloaded with data and allowed a small number of records to be added, which was reset each time the demo was loaded. This worked very well, as the client could see the product and its capabilities before committing to purchase. The first company to buy the product was IBM Canada, closely followed by McDonnell Douglas Automation and Buick.

Development
The DOS version was written using Microsoft QuickBASIC, but the Windows and subsequent versions were written mainly in C++ with some modules in Visual Basic. Many clients wrote their own add-on applications, such as interfaces to billing systems. The fact that Registrar had both export and import facilities in various formats made this very easy.

European operation
Registrar was being used in Europe, but it was not until a distributor was appointed in 1989 that it really took off there. The distributor was John Matchetts Limited, based in Banbury in Oxfordshire. Matchetts were very successful and had over 500 clients using the Registrar software, not only in Europe but also in the Middle East.

People
This was a list of people held in the Registrar system.
They could be entered manually, imported, or templated from existing people. The DOS version was limited to 32,000 people, which caused a problem for companies with more than 32,000 people, but in later versions the size was limited only by the size of the storage. People could be made Inactive when they left the company, but their training record was still retained. People could be made Inactive either manually or automatically by a flag, and inactive people could be made active again at any time. A unique key was used to identify people, and this could be any field that the users chose, e.g. social security number, national insurance number, or employee number. For people who did not have a unique key, the system would generate one for them.

Person plans
In this day and age most companies plan development for their people. Part of this development would possibly be attending training, and Registrar had the facility to store the training that a person was planning to attend. It could store the type of training and the date by which the training should be taken, and it recorded automatically when that training had taken place. Not only that, but the person would automatically become a candidate for all courses of the same type. If the training had not taken place within one year of the due date, it would be removed from the person plan, but a warning would be generated. Registrar would also check that the person had met any prerequisites that had been set for the intended training, and warn if they had not been met.

Groups
Within a company there are normally groups of people. It might be people working for a certain manager, or they might belong to Personnel, or they might work externally. Registrar had the ability to group people either manually or automatically, the latter being decided by the values of a field. For example, if there were a field called Internal/External having the values I and E, then a Group could be created for External, and any person with a value of E would automatically become a member of that Group. A person could be a member of as many Groups as required.

Courses
This was a list of the products that the company was prepared to arrange. The name Courses caused some confusion at the start for the users, as in Registrar speak a Course was not an actual event, just a list of events that could be arranged. The name given to an actual event was a Class, and again this initially led to some confusion.

Class
This was an actual event on an actual date or dates that was potentially going to be run, which people would attend. The name Class led to some confusion with the users, as people were used to saying that they were "going on a Course", but in Registrar speak they were actually "going on a Class". As this terminology was only used by the administration people themselves, they soon got used to it.

Curricula
The later Windows and RDBMS versions had Curricula. Curricula were a collection of training courses that an individual or Group could attend.

Registrations
Registrar came with six built-in registration statuses: Enrolled, Waiting, Finished, Cancelled, Cancelled Bill and Miscellaneous. However, these could all be changed and added to by the actual client without resorting to IT support. For example, a No Show registration status could be created to mark people who did not turn up. Reports could be run against these registrations; e.g. if a manager wanted to know the names of people who no-showed twice during the last six months, this could easily be produced (a hypothetical sketch of such a report appears below).
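The following Python sketch is hypothetical (the record layout is invented and is not Registrar's actual schema); it illustrates a client-defined "No Show" status alongside the built-in ones, and the kind of "who no-showed twice in the last six months" report described above.

```python
from datetime import date, timedelta

# The six built-in statuses, plus a client-defined one - no IT support needed.
STATUSES = {"Enrolled", "Waiting", "Finished", "Cancelled",
            "Cancelled Bill", "Miscellaneous", "No Show"}

today = date.today()
registrations = [
    {"person": "J. Smith", "class": "Safety 101",  "status": "No Show",
     "date": today - timedelta(days=30)},
    {"person": "J. Smith", "class": "Excel Intro", "status": "No Show",
     "date": today - timedelta(days=90)},
    {"person": "A. Jones", "class": "Safety 101",  "status": "Finished",
     "date": today - timedelta(days=30)},
]

# "Names of people who no-showed twice during the last six months."
six_months_ago = today - timedelta(days=182)
counts: dict[str, int] = {}
for r in registrations:
    if r["status"] == "No Show" and r["date"] >= six_months_ago:
        counts[r["person"]] = counts.get(r["person"], 0) + 1

print([person for person, n in counts.items() if n >= 2])   # ['J. Smith']
```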
Checks were made at the time of registration, e.g. making sure a person was not being sent to two Classes that ran at the same time. Any conflicts were flagged up, and these checks could be made hard or soft.

Rosters and transcripts
The product, being from the US, had certain words that were American in origin. One such word was Roster. This did cause some confusion in Europe, but administration people soon took it on board. A Roster was just a list of people attached to a Class. They would have a status within that Class, be it Enrolled, Waiting, Cancelled, etc. The Roster had some very sophisticated operations that could be performed, such as copying or moving people from one Class to another. Transcript, another American word, purely meant a person's training record and the future training events they were expected to attend. It could also contain other events that had not been administered by the training administration people but needed to be recorded against the person, e.g. maybe the person had been on an away day, or they had obtained some certification that needed to be recorded.

Letters and emails
A vital facility is to tell people about training, be it call-up letters, letters of confirmation, or certificates of attendance. Registrar had the ability to produce these both manually and automatically, either by letter or email. The early versions of Registrar had an inbuilt letter writer, but later, as Microsoft Word became the dominant word processing package, Registrar just linked to Word and passed the data to it to be merged into the letters or emails. Letters could either be sent one at a time, to a whole Class, or triggered automatically by a date field; e.g. a call-up letter could be triggered to be sent to all the Enrolled people two weeks before the Class was due to run. Registrar also had several other very sophisticated ways of sending letters and emails.

Budgets
The early versions of Registrar had an add-on called The Accountant, but this was later incorporated into Registrar. Budget categories could be declared and used as required in each Class. This allowed close control of training budgets; for example, the break-even point for the number of enrollments needed to make the Class profitable would be calculated automatically. The profit from each Class would also be available. These figures could be incorporated into reports to give an overall financial position for any period in time.

To do
Registrar could generate a report giving a list of things that needed to be done by the administration people. To Dos could be declared, and then a Course/Class could have any of them applied to it. Who was supposed to do each task could also be set up. The report would be grouped by the Who, what the task was, and the date it was supposed to be done. When the To Do had been done, the administration people would mark it as done in the Class, and then it would no longer be included in the To Do reports, as it had been completed.

Importing and exporting
Registrar had the capability to import and export data. The most common use of importing was to import People data, which usually came from a human resources (HR) system. Some clients did this once, whilst others did it on a regular basis, relying on the HR system to create people's records instead of doing this manually in Registrar. An import data dictionary was set up to map fields to the required location; similarly, an export data dictionary was set up for exporting.
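To illustrate the import idea, here is a hypothetical sketch (the column and field names are invented and do not reflect Registrar's actual data dictionary format): a user-configured mapping takes each row of an HR extract and turns it into a People record.

```python
import csv
import io

# The user-defined import data dictionary: HR extract column -> Registrar field.
IMPORT_DICTIONARY = {
    "EMP_NO":    "unique_key",         # e.g. employee number as the unique key
    "FULL_NAME": "name",
    "DEPT":      "group",              # could drive automatic Group membership
    "INT_EXT":   "internal_external",
}

# A stand-in for a file exported from the HR system.
hr_extract = io.StringIO(
    "EMP_NO,FULL_NAME,DEPT,INT_EXT\n"
    "1042,J. Smith,Sales,E\n"
    "1043,A. Jones,Personnel,I\n"
)

people = []
for row in csv.DictReader(hr_extract):
    # Map each HR column onto the Registrar field named in the dictionary.
    people.append({field: row[column]
                   for column, field in IMPORT_DICTIONARY.items()})

print(people[0])   # {'unique_key': '1042', 'name': 'J. Smith', ...}
```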
Intranet and the Internet
SBS introduced the facility for users to look at their own individual training records and also to make and change registrations for themselves, if allowed to do so. This product was initially called Personal Registrar but was renamed Student Center. It worked over a network within the user's company, which enabled people to use this facility from any computer attached to the network.

Report writers
The DOS version of Registrar came with its own built-in report writer, which was adequate. However, from the Windows version on, it was decided to let users use their own report writer. This gave much better capability, and most users were already using a report writer of some sort. SBS recommended Crystal Reports, but others could be used, such as Business Objects or Cognos.

Call Registrar
This was a system whereby users could call in and, using an MF4 tone-dialling telephone, actually make bookings. This was prior to the Internet coming into common use, and it was a first step at remote bookings. It was implemented in the US but did require a special interface card. It was not implemented in Europe because the telephone system was different and because of the cost of developing another interface card. Call Registrar was superseded by Student Center.

John Matchetts, the European distributor, also started running user conferences in the early 1990s.

Training courses
SBS started to run training courses on Registrar about a year after the product was launched. These became very popular and gave users a fast track to using the product. The size of the US was a problem because of the travelling required; however, this is nothing new, the US being so vast. In Europe it was much easier to deliver the tutor-led training.

Clients
Registrar had an impressive list of clients. The software was originally designed to be used by small to medium size companies, but in fact many large companies became users. Here are just a few of the European clients: 3M, British Airways, Alcatel, Arcadia, Astra Zeneca, AT&T, Barclays, BBC, Black & Decker, BMW, British Aerospace, BT, CAA, Carillion, Coca-Cola, DCUK, Deloitte & Touche, DHL, Dow Jones, DTI, Eurostar, Ford, Glaxo, J P Morgan, Jaguar Cars, JCB, Land Rover, Magnox, Merrill Lynch, National Grid, Nissan, Nokia, Norwich Union, Open University, RAF, Rank Xerox, Rover Cars, Royal Navy, Sheppard Moscow, Siemens, TNT, UN, US Army, VAG, Vodafone, and many others.

See also
SumTotal Systems

Notes

References
SBS User Manuals.
The Trainer, British Telecom Training Department Magazine, Issue 4, Summer 1990, p. 16.
Silton-Bookman Systems, Third Annual Conference 1989, User Guide.
BT Today, April 1990, p. 16.

External links

1984 software
Discontinued software
Learning management systems
Human resource management software
23396970
https://en.wikipedia.org/wiki/Altoros
Altoros
Altoros Systems (also spelled "Альторос") is a software development company that provides products and services for the Cloud Foundry platform. Altoros contributes to the development and evolution of this open source initiative, as governed by the Linux Foundation. Headquartered in Silicon Valley, the company runs offices in the U.S., Norway, Denmark, the UK, Finland, Sweden, Argentina and Belarus. Altoros actively supports and builds up technology communities in America and Europe by organizing related educational events (such as meetups, CloudCamps, user group meetings, hackathons, free programming courses, etc.).

History
Altoros was founded in 2001 as an outsourcing software development vendor. It then built platform-as-a-service and DevOps technology and provided consultancy around big data and cloud computing.

In 2007, Altoros founded the Belarus Java User Group, which unites more than 500 Java developers in Belarus. Since 2008, Altoros has been arranging a variety of conferences and other events for IT specialists in Belarus, featuring Microsoft, Adobe, Sun Microsystems, and Engine Yard representatives as speakers.

In February 2007, Altoros launched Apatar, a Java-based data integration (ETL) project. The open source version of the product was released under the GPL 2.0 license.

In July 2008, Altoros became a resident of Belarus High Technologies Park, a business environment for IT companies in Eastern Europe that fosters cooperation in IT at the inter-governmental level.

In 2010, Altoros co-founded the Belarus Ruby on Rails User Group. To support this initiative, in August 2011 the company launched a free educational Ruby on Rails training course for developers in Eastern Europe. Altoros also sponsors numerous Ruby conferences around the world (RubyConf in Argentina, Barcelona Ruby Conference in Spain, RubyConfBY in Belarus, etc.).

On October 11, 2011, Altoros officially organized the first CloudCamp in Denmark (in its standard format/agenda). It was followed by the first CloudCamp in Eastern Europe, held on April 7, 2012 in Minsk, Belarus. In addition, Altoros's engineers regularly speak at IT events across both Europe and America, sharing their experience in big data, cloud computing, PaaS, etc. A local IT news resource wrote about a three-day conference held by Adobe and arranged by Altoros. Altoros is also a partner of the Belarus .NET User Group, Belarus Adobe Flash Platform User Group, Belarus Microsoft Certified Professional Club, and Belarus Open Source Lab, among others.

In February 2014, Altoros was recognized as one of the top big data, business intelligence and Hadoop consultants (by Clutch, formerly SourcingLine, an independent Washington, DC-based research company).

In May 2014, Altoros joined the Cloud Foundry Foundation as a Silver Member. The organization has been governed by the Linux Foundation since December of the same year.

On 23 June 2020, Altoros announced the introduction of Temperature Screener, a product intended to help deter COVID-19 from spreading. The Temperature Screener, powered by artificial intelligence (AI), detects people with elevated body temperature, a significant symptom of the SARS-CoV-2 virus.

Open Source Projects
Altoros is the founder of Apatar, an open source data integration project used by 27,000+ companies worldwide. The project connects data between different sources (mainly databases and CRM/ERP systems).
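As a conceptual illustration of what a data integration (ETL) tool like Apatar does, the Python sketch below shows the extract-transform-load pattern. Everything here (field names, records) is invented for illustration; it is not Apatar's actual API, which is a visual, Java-based tool.

```python
# Extract: records as they come out of a source system (e.g. a database).
source_rows = [
    {"cust_id": 7, "first": "Ada",  "last": "Lovelace", "email": "ADA@EXAMPLE.COM"},
    {"cust_id": 8, "first": "Alan", "last": "Turing",   "email": "alan@example.com"},
]

# Transform: map source fields onto the target (e.g. CRM) schema,
# applying per-field clean-up rules on the way through.
def transform(row: dict) -> dict:
    return {
        "ExternalId": row["cust_id"],
        "FullName":   f'{row["first"]} {row["last"]}',
        "Email":      row["email"].lower(),   # normalise case during transfer
    }

# Load: push the transformed records into the target system (a list here).
target = [transform(r) for r in source_rows]
print(target)
```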
Altoros is an active contributor to other open source initiatives, such as Couchbase Server (a distributed NoSQL document-oriented database) and Cloud Foundry (a platform-as-a-service system). Starting from 2012, Altoros has regularly issued independent technology benchmarks that help to evaluate the performance of open source big data technologies, such as Hadoop and NoSQL systems (MongoDB, Couchbase, Cassandra, Redis, etc.).

Cloud Foundry Contributions
Since 2014, Altoros has been a Silver Member of the Cloud Foundry Foundation (governed by the Linux Foundation). Prior to joining this initiative, the Altoros team contributed to the Cloud Foundry Community Advisory Board meetings on a monthly basis. One of the company's first major code contributions to Cloud Foundry was the development of the CF Vagrant Installer (built in 2013). At that time, this was one of the main tools for deploying Cloud Foundry on small instances, such as a laptop. Altoros also contributes to the development of Juju Charms for Cloud Foundry, an orchestration tool for deploying CF on Ubuntu infrastructures. This collaborative project is led by Pivotal and Canonical.

References

External links

Software companies based in California
Software companies of Belarus
Software companies established in 2001
2001 establishments in Belarus
Companies based in Sunnyvale, California
Companies based in Silicon Valley
Software companies of the United States
330795
https://en.wikipedia.org/wiki/39th%20Infantry%20Brigade%20Combat%20Team
39th Infantry Brigade Combat Team
The 39th Infantry Brigade Combat Team (39th IBCT), also officially known as The Arkansas Brigade, is an infantry brigade combat team of the Army National Guard composed of personnel from the U.S. states of Arkansas, Missouri, and Nebraska. The unit is the largest Army National Guard command in Arkansas and is headquartered at the Camp Robinson Maneuver Training Center. It was ordered into federal service in 2003 in support of Operation Iraqi Freedom II. The 39th was attached to the 1st Cavalry Division and served in and around Baghdad for a year, returning to the United States in March 2005.

In late August 2005, after Hurricane Katrina hit the Gulf Coast of the United States, elements of the 39th Infantry Brigade Combat Team were among the first military units to provide recovery and relief efforts to citizens of New Orleans, Louisiana. The brigade combat team led the effort to evacuate an estimated 16,000 people from the New Orleans Convention Center. The 39th Infantry Brigade Combat Team completed its second deployment to Iraq in 2008, after spending a year on active federal duty. Unlike the first deployment, the brigade combat team did not have command and control of all its subordinate units.

History
20th century
World War I
The unit was organized for service in World War I on August 25, 1917 at Camp Beauregard, Louisiana as the 39th Division (Delta Division), from National Guard troops of Louisiana, Mississippi, and Arkansas. It arrived in France during August and September 1918. Upon arrival the division was sent to the St. Florent area south-west of Bourges, where it was designated as a replacement division. On November 2 it moved to St. Aignan, and the personnel of most of the units were withdrawn and sent to other organizations, leaving the 39th Division skeletonized. With one exception, the units of the division did not participate in combat operations, although a large number of the personnel were transferred to combat divisions and took part in operations. The 114th Engineers participated as a unit in the Meuse-Argonne Offensive from October 3 to November 11, 1918. The 39th Division's permanent cadre returned to the United States in December 1918. It demobilized the following month at Camp Beauregard.

In 1923, when the 39th Division was redesignated the 31st Division (Dixie Division), its Arkansas units were transferred from the division and continued to operate as non-divisional units. For a history of those units, see the articles on the 153d Infantry and the 206th Coast Artillery.

Cold War
On August 26, 1947, the unit was reorganized and federally recognized in part at Little Rock as the 39th Infantry Division. During this period the division included the 153d Infantry Regiment, the 156th Infantry Regiment, and the 206th Field Artillery Regiment. On November 2, 1967, the division was reorganized again and subsequently redesignated the 39th Infantry Brigade. This change resulted in a massive restationing of units within the state.

In 1967 the division was redesignated as the 39th Infantry Brigade (Separate), and in 1973 it was paired with the US 101st Airborne Division as a training partner, becoming an air-assault brigade. The following regiments were represented in the 39th Infantry Brigade (Separate): the 153d Infantry Regiment, the 151st Cavalry Regiment and the 206th Field Artillery Regiment. 39th Brigade units conducted numerous overseas training rotations throughout the 1980s and early 1990s.
1990s
In 1994 the brigade was again reorganized and gained its designation as an "enhanced" brigade. In 1999, the 39th became part of the 7th Infantry Division under the Army Integrated Division concept, which paired National Guard and Reserve brigades with active duty headquarters and support units.

Company B, 2d Battalion, 153d Infantry Regiment, and Company B, 3d Battalion, 153d Infantry Regiment of the 39th Infantry Brigade Combat Team were activated for Operation Southern Watch, May through September 1999. Company B, 2d Battalion, 153d Infantry Regiment deployed to Kuwait, while Company B, 3d Battalion, 153d Infantry Regiment deployed to Prince Sultan Air Base, Saudi Arabia. Soldiers provided security at Patriot missile batteries during these deployments. The mission lasted a total of seven months and was the first "pure" National Guard effort in the region. Company C, 1st Battalion, 153d Infantry carried on the 39th's role in Operation Southern Watch when it replaced Company B, 2d Battalion, 153d Infantry Regiment in September 1999.

Company B, 3d Battalion, 153d Infantry Regiment was the first National Guard unit since the Vietnam War to be involuntarily mobilized by presidential order (President Bill Clinton). The unit was mobilized to support operations in Operation Southern Watch. The battalion commander was Lieutenant Colonel Ewing, the company commander was Captain Rozenberg, and the company first sergeant was First Sergeant Nutt. B Company consisted of over 120 soldiers from the Camden and Fordyce units and volunteers from other areas of south and central Arkansas. The unit primarily provided security for two active duty Army Patriot missile batteries in Saudi Arabia. The units conducted initial training for the deployment at Camp Joseph T. Robinson, Arkansas and Fort Carson, Colorado. The success of the mission laid the groundwork for additional deployments of National Guard units.

21st century
In March 2001, Company D, 1st Battalion, 153d Infantry Regiment and Company D, 3d Battalion, 153d Infantry Regiment deployed to Bosnia as part of the Multinational Stabilization Force (SFOR), Security Force Nine, in order to assist with the enforcement of the mandate of the United Nations Mission in Bosnia and Herzegovina (UNMIBH). The companies were attached to 3d Squadron, 7th Cavalry Regiment, 3d Infantry Division for the deployment as part of Task Force Eagle. They performed presence patrols outside Forward Operating Base Morgan and Camp McGovern, and participated in the consolidation of weapon storage sites. The soldiers also guarded the sites.

War on Terrorism
On 8 October 2001, 2d Battalion, 153d Infantry Regiment was activated. Second Battalion was sent to Egypt in order to take over the Multinational Force and Observers mission, freeing up regular army infantry units to deploy to Afghanistan. The 2d Battalion, 153d Infantry Regiment's mission during the MFO was: "...to supervise the implementation of the security provisions of the Egyptian-Israeli Treaty of Peace and employ best efforts to prevent any violation of its terms."
This mission was accomplished by carrying out four tasks: operating checkpoints and observation posts and conducting reconnaissance patrols on the international border as well as within Zone C; verification of the terms of the peace treaty not less than twice a month; verification of the terms of the peace treaty within 48 hours, upon the request of either party; and ensuring freedom of international marine navigation in the Strait of Tiran and access to the Gulf of Aqaba. This was the first "pure" National Guard takeover of the MFO mission. 2d Battalion, 153d Infantry Regiment deactivated on 11 October 2002.

The 39th Infantry Brigade was notified in 2002 that it would be participating in a rotation to the Joint Readiness Training Center at Fort Polk, Louisiana. For National Guard brigades, a rotation is actually a three-year process that provides additional money, resources and training opportunities in order to improve unit readiness before the actual rotation through the Joint Readiness Training Center. The brigade was required to complete a mission rehearsal exercise during the 2003 annual training, which was conducted at Fort Chaffee, Arkansas. Less than a month after the completion of this major training milestone, the brigade received its alert for deployment to Iraq in support of Operation Iraqi Freedom, on July 28, 2003.

On October 12, 2003, the brigade, commanded by Brigadier General Ronald Chastain, was ordered to federal service in support of Operation Iraqi Freedom for a period of up to 18 months. The brigade conducted post-mobilization training at Fort Hood, Texas from October 2003 until January 2004. In January the brigade shipped its vehicles and equipment to Iraq from Fort Hood, and then moved to Fort Polk for a mission rehearsal exercise at the Joint Readiness Training Center. On February 17, 2004, President George W. Bush visited the brigade and had an MRE (Meal, Ready-to-Eat) lunch in a field mess tent with soldiers. After lunch, President Bush made brief remarks to the soldiers.

When the brigade combat team received its alert, it was approximately 700 soldiers short of its authorized end strength. This shortage was due in large part to the way new recruits are accounted for in the National Guard. In the active Army, a new recruit only comes to a unit and is counted on its books after the soldier has completed Basic Combat Training and Advanced Individual Training. In the National Guard, the new recruit is counted on the unit's strength reports as soon as the soldier signs their contract. The brigade combat team had over 500 soldiers who had not completed either Basic or Advanced Individual Training upon alert. This shortage led to the decision to consolidate the available manning into two infantry battalions that would be supplied for the brigade by the Arkansas National Guard, and to ask the National Guard Bureau to provide the third infantry battalion. Because of the 2002 deployment of the 2d Battalion, 153d Infantry Regiment to the MFO, the battalion was deemed non-deployable as an organization; however, the soldiers of the battalion were to deploy. The decision was made by BG Chastain to transfer the battalion commander and staff from 2d Battalion, 153d Infantry Regiment to 3d Battalion, 153d Infantry Regiment. The 3d Battalion, 153d Infantry Regiment commander and staff were transferred to 2d Battalion, 153d Infantry Regiment and were designated to function as the brigade's rear detachment during Operation Iraqi Freedom.
This transfer led to the 3d Battalion, 153d Infantry Regiment often being referred to as the two-thirds (2/3) battalion by personnel of the brigade. 3d Battalion, 153d Infantry Regiment adopted the 2d Battalion, 153d Infantry Regiment nickname and call sign, "Gunslingers", for Operation Iraqi Freedom.

National Guard Bureau met the brigade's need for additional soldiers by alerting 2d Battalion, 162d Infantry Regiment, from the Oregon National Guard; a platoon of Company B, 1st Battalion, 108th Infantry Regiment, New York National Guard; a platoon of Company C, 1st Battalion, 102d Infantry Regiment from the Connecticut National Guard; the 1115th Transportation Company and elements of the 642d Maintenance Company from the New Mexico Army National Guard; elements of 629th Military Intelligence Battalion from the Maryland National Guard; elements of HHSC, 233d Military Intelligence Company, California National Guard; and Battery A, 1st Battalion, 103d Field Artillery, Rhode Island National Guard, to round out the brigade and bring it to its full deployment strength of 3,700 soldiers. With the addition of Company A, 28th Signal Battalion, from the Pennsylvania National Guard, the brigade included National Guard soldiers from ten states.

The brigade's mission during Operation Iraqi Freedom was to conduct full-spectrum operations focused on stability and support operations and to secure key terrain in and around Baghdad, supported by focused and fully integrated information operations (IO) and civil-military operations, in order to enable the progressive transfer of authority to the Iraqi people, their institutions and a legitimate Iraqi national government. The lines of operation as established by 1st Cavalry Division included: combat operations; train and equip security forces; essential services; promote governance; and economic pluralism, with information operations interconnected throughout. The end state envisioned by Maj. Gen. Peter W. Chiarelli of these full-spectrum operations was a secure and stable environment for Iraqis, maintained by indigenous police and security forces under the direction of a legitimate, national government that is freely elected and accepts economic pluralism.

The 39th Infantry Brigade relieved the 1st Brigade, 1st Armored Division in the Baghdad neighborhoods of Adhamiyah and Rusafa, as well as elements of 3rd Brigade, 1st Armored Division at Camp Taji. This relief in place took place in the midst of a multiparty insurgency uprising, and the brigade's convoys came under heavy attack during the move north. The brigade was task organized with 1st Battalion, 153d Infantry Regiment being detached to 3d Brigade, 1st Cavalry Division, in exchange for the attachment of 2d Squadron, 7th Cavalry Regiment, of George Armstrong Custer and LZ Albany fame, to the brigade. The 1st Battalion, 153d Infantry Regiment was headquartered in the Green Zone in Baghdad with the 3d Brigade, 1st Cavalry Division. The 39th Infantry Brigade headquarters, 239 MI Company, 239 Engineer Company, 2d Squadron, 7th Cavalry Regiment and 1st Battalion, 206th Field Artillery Regiment were stationed at Camp Cooke in Taji, Iraq. The 2d Squadron, 7th Cavalry Regiment controlled a massive area of operations that stretched from just north of the Baghdad City Gate, north along Iraqi Highway 1 (Main Supply Route Tampa) to the city of Mushada, bounded on the east by the Tigris River, and stretching west to the boundary with the 1st Marine Expeditionary Force, approximately east of Fallujah.
This area of operations was twice assumed by 1st Battalion, 206th Field Artillery Regiment when 2d Squadron, 7th Cavalry Regiment was detached from the brigade. 2d Squadron, 7th Cavalry Regiment was tasked with providing a military assistance training team to Company D, 307th Iraqi National Guard Battalion, based in Mushada, Iraq. The 1st Battalion, 206th Field Artillery Regiment provided fires in support of brigade combat operations from Camp Taji; functioned as the base defense operations center for Camp Taji; manned the main entry control point (ECP) for Camp Taji; provided convoy and VIP escorts; and controlled a small area of operations south of Camp Taji between Iraqi Highway 1 and the Tigris River. The 1st Battalion, 206th Field Artillery Regiment was also tasked with providing a military assistance training team to the Headquarters and Companies A, B, and C of the 307th Iraqi National Guard Battalion, which was also stationed at Camp Taji. The 307th was the only Iraqi army element stationed on the Coalition Forces side of Camp Taji. The 3d Battalion, 153d Infantry Regiment was stationed at FOB Gunslinger (also known as FOB Solidarity) in the Adhamiyah neighborhood of Baghdad, which lies immediately to the west of Sadr City. Additionally, the battalion was charged with patrolling a large area of operations that stretched north from Baghdad along the east side of the Tigris River and included the city of Hussainiyah, a town of 500,000 about 12 miles north of Baghdad. The battalion was tasked with providing a military assistance training team to support the Headquarters and Companies C and D of the 301st Iraqi National Guard Battalion, and Company C, 102d Iraqi National Guard Battalion. The 2d Battalion, 162d Infantry Regiment was stationed at FOB Volunteer in the Rusafa neighborhood of Baghdad, which lies to the south of Sadr City. The battalion was tasked with supplying a military assistance training team to Companies A and B of the 301st Iraqi National Guard Battalion. In April 2004 the 39th came under rocket attack at Camp Cooke in Taji, resulting in four Arkansas soldiers killed in action, all members of the 39th Support Battalion, headquartered in Hazen. The April 24 attack resulted in the highest single-day casualty total for Arkansas soldiers since the Korean War. Members of Company C, 1st Battalion, 153d Infantry Regiment spent weeks fighting as part of Task Force 1–9 CAV, 3rd Brigade, 1st Cavalry Division on the hotly contested Haifa Street in Baghdad. The 2d Squadron, 7th Cavalry Regiment, including 1st Platoon, Company C, 3d Battalion, 153d Infantry Regiment, was twice detached from the 39th Infantry Brigade to act as the corps reserve. In August 2004, the squadron was detached from Multi National Division-Baghdad to Multi National Division-South as part of the Battle of Najaf (2004). In November 2004, the squadron was attached to the 1st Marine Expeditionary Force to take part in the Second Battle of Fallujah. The 3d Battalion, 153d Infantry Regiment provided security for two massive Shiite marches to the Khadamiyah Shrine, which were staged through Sunni neighborhoods. The marches saw very little violence due to the battalion's work with Iraqi National Guard and Iraqi Police officials.
On October 3, 2004, Staff Sergeant Christopher Potts (Battery A, 1–103rd FA) and Sergeant Russell "Doc" Collier, from 1st Battalion, 206th Field Artillery, were killed in a firefight with insurgents near the village of Musurraf, south of Camp Taji along the Tigris River. Sergeant Collier was posthumously awarded the Silver Star for his actions when he moved forward under heavy enemy fire in order to render aid to Staff Sergeant Potts, who had been shot while attempting to silence an enemy automatic weapon. Staff Sergeant Potts was posthumously awarded the Bronze Star Medal with V Device for his actions. On November 14, 2004, a patrol of 307th Iraqi National Guard soldiers with an adviser team from 1st Battalion, 206th Field Artillery led by Captain John Vanlandingham, and an escort platoon from B Company, 3d Battalion, 153d Infantry Regiment, was ambushed north of Mushada, Iraq. Vanlandingham received the Silver Star medal for his actions to save several wounded Iraqi Army soldiers who had become separated from the patrol during the ambush. Vanlandingham repeatedly exposed himself to enemy fire in order to carry wounded Iraqi soldiers to safety. The most coordinated enemy attack the brigade had seen occurred on 20 November 2004, when twenty-six soldiers of Company C, 3d Battalion, 153d Infantry Regiment were ambushed near Fort Apache in North Baghdad. They fended off over 100 insurgents for several hours without ammunition resupply or support. The platoon leader, First Lieutenant Michael McCarty, despite being wounded, endured intense enemy direct fire and personally neutralized an enemy machine gun emplacement without support. Lieutenant McCarty received the Silver Star for going above and beyond the call of duty. The 1st Battalion, 153d Infantry Regiment conducted over 8,200 combat patrols, captured six division targets and contained or disrupted 15 vehicle-borne improvised explosive device (VBIED) attacks in its sector. The battalion worked to suppress indirect fire attacks on the International Zone during the Transfer of Iraqi Sovereignty and weekly Iraqi National Congress meetings. Lieutenant Colonel Kendall Penn, battalion commander, also worked closely with the Karahda District Council to oversee over six million dollars of infrastructure and community improvement projects in the battalion's area of operations. The 39th Infantry Brigade was instrumental in the January 2005 elections. The brigade was responsible for the establishment and security of 20 different polling sites within the brigade's area of operations. In order to avoid jeopardizing the credibility of the election process, it was necessary to avoid a Coalition Force presence at the polling sites. This meant that security at the polling sites would be the responsibility of the New Iraqi Army units for which the 39th was responsible. 39th Brigade leaders spent countless hours planning and coordinating with Iraqi counterpart units and governmental elections officials, and not one polling site in the 39th Infantry Brigade Combat Team area of operations was disrupted or forced to close. The members of the 239th Engineer Company stationed at Camp Cooke and their families back in Arkansas were the subject of a TV documentary series that aired on the Discovery Times channel called Off To War. The 39th was also covered by embedded reporter Amy Schlesing of the Arkansas Democrat-Gazette for its entire time in Iraq.
The definitive work on the 39th Brigade's first deployment to Iraq was published by the Arkansas Democrat-Gazette. The work, entitled The Bowie Brigade: Arkansas National Guard's 39th Infantry Brigade in Iraq, was published in 2005 and is a collection of the work of Ms. Schlesing and the embedded writers and photographers who accompanied the brigade: Staton Breidenthal, Karen E. Segrave, Arron Skinner, Stephen B. Thornton and Michael Woods. The 39th Infantry Brigade was relieved in place by the 3d Brigade, 1st Armored Division, on March 12, 2005, the same unit that the 1st Battalion, 206th Field Artillery Regiment had relieved at Camp Taji on March 24, 2004. During the deployment the 39th Infantry Brigade suffered a total of thirty-six killed in action, including soldiers from attached units. Sixteen of those killed in action were members of the Arkansas National Guard. Members of the brigade were awarded three Silver Stars, dozens of Bronze Star Medals and Army Commendation Medals with V device, and over 250 Purple Heart Medals. In March 2005, units of the brigade started their rotation back to Fort Carson, Colorado, Fort Hood, Texas, and Fort Sill, Oklahoma for demobilization. The following units were task organized under the 39th Infantry Brigade Combat Team during Operation Iraqi Freedom II. 1st Battalion, 153d Infantry Regiment was task organized under 3d Brigade, 1st Cavalry Division during Operation Iraqi Freedom. Upon redeployment in 2005, the brigade immediately began a major reorganization that transformed the brigade from an enhanced separate brigade to an infantry brigade combat team under the U.S. Army's new modular design. This redesign of the Army was intended to make the force more easily deployable by making brigades more self-contained and less dependent on support organizations at the division level. Major changes for the brigade included: transition from a brigadier general to a colonel as brigade commander; deactivation of 3d Battalion, 153d Infantry Regiment; deactivation of Troop E, 151st Cavalry Regiment; deactivation of Battery C, 1st Battalion, 206th Field Artillery Regiment; activation of 1st Squadron, 151st Cavalry Regiment, with headquarters at Warren; activation of the Special Troops Battalion, 39th Infantry Brigade Combat Team, with headquarters at Conway; activation of four new forward support companies, D, E, F and G, under the 39th Brigade Support Battalion; reorganization of the 239th MI Company as Company B, Special Troops Battalion, 39th Infantry Brigade Combat Team; reorganization of the 239th Engineer Company as Company A, Special Troops Battalion, 39th Infantry Brigade Combat Team; and activation of Company C, Special Troops Battalion, 39th Infantry Brigade Combat Team. Along with this reorganization came a significant re-stationing of several units within the state of Arkansas. After Hurricane Katrina hit Louisiana in August 2005, elements of the brigade combat team deployed to New Orleans by C-130s from the Little Rock Air Force Base to support the relief and recovery efforts as part of Operation Katrina. Under tactical control of the Louisiana National Guard, 39th soldiers were given the mission of providing security and food and water to an estimated 20,000 people at the New Orleans Convention Center on September 2. By the afternoon of September 3, all individuals staying in and around the Convention Center had been evacuated.
The mission of the 39th in Louisiana grew to the point that at one time the brigade combat team was responsible for working with local officials in fourteen parishes. Elements of the 39th and the Arkansas National Guard stayed deployed in Louisiana until February 2006. In 2006, the 7th Infantry Division was deactivated and the brigade combat team was placed under the command and control of the 36th Infantry Division. In June 2006 the brigade combat team began deploying troops along the Southwest Border with Mexico as part of Operation Jump Start. The brigade combat team manned two sectors of the border around Lordsburg and near Deming, New Mexico. Personnel occupied observation posts and reported activity along the border to the United States Border Patrol. Various battalions within the brigade combat team were tasked with supplying volunteer companies during this period. Headquarters and Headquarters Battery, 1st Battalion, 206th Field Artillery manned the Deming station from December 2006 through June 2007. While serving in Operation Jump Start, personnel from the brigade combat team were able to begin preparing for the brigade combat team's second deployment in support of Operation Iraqi Freedom. The First and Second Arkansas were stationed in the same part of New Mexico 90 years earlier during John J. Pershing's punitive Mexican Expedition against Pancho Villa. In April 2007, the 39th Infantry Brigade Combat Team received an alert for a second deployment in support of Operation Iraqi Freedom. The brigade combat team had been home almost exactly two years since demobilizing after Operation Iraqi Freedom. This deployment would be dramatically different from the first. Instead of deploying as a brigade combat team, the brigade was tasked with filling 28 unit requests for forces. These taskings involved supplying convoy security companies, force protection companies, base defense operations center and garrison command cells. Additionally, instead of an 18-month mobilization, with 12 months actually deployed to Iraq like the first tour, this mobilization would be for a total of 12 months, with approximately 10 months being deployed to the combat theater. Once again the unit found itself with a shortage of personnel to fill these taskings. Many of these shortages were caused by unresolved medical issues from the first deployment. This time the Arkansas National Guard decided not to ask for outside support, but met the brigade combat team's need for personnel by task organizing the 217th Brigade Support Battalion from the 142d Fires Brigade, and three companies from the 87th Troop Command to the brigade combat team for this deployment. The brigade combat team was placed on duty in October 2007 to prepare for its second deployment to Iraq while still under state control. It began a 90-day pre-mobilization training period at Fort Chaffee Maneuver Training Center on October 1, 2007. This allowed the unit to perform certain tasks in Arkansas and allowed unit members to be closer to their families for a longer period of time. The brigade combat team was placed in federal service in January 2008 and trained at Camp Shelby, Mississippi until it deployed to Iraq beginning in March 2008. Upon reaching their final destinations, most of the brigade combat team elements fell under the tactical command of Regular Army units, primarily the 4th Infantry Division and the 3rd Sustainment Command (Expeditionary). 
The brigade combat team and its subordinate battalions retained administrative control (ADCON) of all team elements. While deployed in Iraq from April to December 2008, the headquarters of the brigade combat team assumed the mission as the base defense operations cell for Victory Base Camp (VBC) in Baghdad, Iraq, responsible for the security of over 65,000 coalition soldiers and civilians. With this mission, the brigade combat team headquarters managed and coordinated the security of four subordinate camps and area defense operation centers (ADOCs): Camp Victory, Camp Striker, Camp Slayer, and Camp Liberty. The brigade combat team headquarters managed entry control and personnel processing at four major entry control points and processed over 2,500 local national workers each day. In addition to internal base security, the brigade combat team managed terrain outside the perimeter in order to better provide defense in depth, as well as improve quality of life for Iraqi population centers adjacent to VBC. These responsibilities also included the Baghdad International Airport (BIAP), located in the center of VBC. The brigade combat team, in partnership with its subordinate units, coordinated nearly ten million dollars in projects that benefited local Iraqi communities. During this same time period, the brigade combat team invested over twenty-one million dollars in base defense improvements to VBC, including improved towers, barriers, fencing, perimeter lighting, road improvement, water projects, and general force protection initiatives. The brigade combat team was also charged with providing command and control for the Counter-RAM Joint Intercept Battery, a system used to destroy incoming artillery, rockets and mortar rounds in the air before they hit their ground targets. For their efforts, the headquarters of the brigade combat team received the Meritorious Unit Commendation (MUC) from the commander of the 4th Infantry Division. The brigade combat team's task organization for the base defense operations cell mission was as follows. Task Force 1st Battalion, 153d Infantry Regiment consisted of a Headquarters Company, a Joint Visitor's Bureau Company, a Personal Security Detachment Troop and two Base Defense Companies. The task force was responsible for the force protection and defense of Camp Slayer and the Radwiniya Palace Complex within the Victory Base Camp. Task Force 1st Battalion, 153d Infantry Regiment searched over 10,000 cars and 35,600 Iraqis to ensure no threats penetrated the perimeter. Soldiers assigned to Task Force 1st Battalion, 153d Infantry Regiment executed 996 combat patrols in the area of operations surrounding Camp Slayer and captured six high-value targets. Task Force 2d Battalion, 153d Infantry Regiment was stationed at Al Asad Airbase, Iraq and was organized as a convoy security battalion. The battalion provided convoy security to theater sustainment convoys using the Jordan line of communications from Trebil to Al Asad and Forward Operating Base TQ. The unit conducted seventy-six combat logistical patrols, four to six days in length, driving over 1,587,000 miles. Task Force 2d Battalion, 153d Infantry Regiment experienced one casualty during Operation Iraqi Freedom when an escort vehicle providing security at an intersection was accidentally struck by one of the escorted vehicles.
Task Force 1st Squadron, 151st Cavalry Regiment, based at Tallil Airbase, consisted of over 800 soldiers assigned to six companies/troops/batteries drawn from both the active and reserve components. Task Force 1st Squadron, 151st Cavalry Regiment conducted over 700 tactical convoy security missions without losing a single soldier to enemy activity. The task force was responsible for long-haul fuel missions between Tallil Air Base, Logistical Base Sitz, Taji and Balad Air Base. Task Force 1st Squadron, 151st Cavalry Regiment suffered one non-combat-related casualty when a soldier died while working on a vehicle in the motor pool. The Headquarters and Headquarters Battery, 1st Battalion, 206th Field Artillery Regiment was assigned to function as the garrison command cell at Camp Taji, Iraq. The brigade combat team deputy commander, Colonel Kirk Van Pelt, accompanied the 1st Battalion, 206th Field Artillery Regiment to Taji and acted as the garrison commander. The organic units of the 1st Battalion, 206th Field Artillery Regiment were attached to various battalions in the 1st Sustainment Brigade as convoy security companies. Batteries A and B and Company G, 39th Brigade Support Battalion were tasked to escort convoys of concrete barriers to Baghdad during the Siege of Sadr City. The "clear, hold, build" concept as it was employed in Sadr City involved cordoning off several city blocks by emplacing concrete barriers around the area to be sealed. These barriers weighed several tons each, so an entire convoy might move only 30–40 barriers. The convoy escort team would escort the civilian trucks hauling the barriers from Camp Taji or Camp Liberty to Sadr City, and then provide security on the site for up to six hours while cranes lifted and emplaced each barrier. These missions often came under small-arms fire, and the threat of improvised explosive devices was constant. The 1st Battalion, 206th Field Artillery Regiment suffered no killed in action during this second deployment, although Battery B had one killed in action from an attached Regular Army unit. Sergeant Jose Ulloa of the 515th Transportation Company was killed on 8 August 2008 when the MRAP in which he was riding was struck by an improvised explosive device during a convoy security mission in Sadr City, Baghdad. Sergeant Ulloa's platoon was attached to Battery B as a convoy security platoon at the time of his death. The brigade combat team redeployed to Camp Shelby, Mississippi in December 2008 and demobilized. Unlike the first deployment, the soldiers of the 39th were supported by a massive reintegration effort. Soldiers and their families participated in Yellow Ribbon reintegration events at the thirty-, sixty- and ninety-day post-redeployment intervals. The soldiers and their families were provided with lodging at convention centers around the state for these events. The soldiers were presented with information on Employer Support of the Guard and Reserve (ESGR), employment counseling, marriage counseling, Veterans Affairs benefits, post-traumatic stress disorder and suicide prevention. Each event included a job fair to assist soldiers in finding employment.

Decorations
Headquarters and Headquarters Company, 39th Infantry Brigade Combat Team was awarded the Meritorious Unit Commendation for the period of April 1, 2008 through December 1, 2008.

Commanders
The unit was commanded by a brigadier general until 2005, when it was reorganized as a modular brigade combat team, at which time the brigade combat team was commanded by a colonel.
Casualties

War on Terrorism

Killed
S.F.C. William W. Labadie Jr., April 7, 2004
Capt. Arthur L. Felder, April 24, 2004
C.W.O. Patrick W. Kordsmeier, April 24, 2004
Staff Sgt. Billy J. Orton, April 24, 2004
Staff Sgt. Stacey C. Brandon, April 24, 2004
Spec. Kenneth Melton of Batesville, April 25, 2004
Staff Sgt. Hesley Box, May 6, 2004
S.F.C. Troy Leon Miranda, May 20, 2004
Sgt. Russell L. Collier, October 3, 2004
Staff Sgt. Christopher S. Potts, October 3, 2004
Sgt. Ronald Wayne Baker, October 13, 2004
Sgt. Michael Smith, November 26, 2004
Cpl. Jimmy Buie, January 4, 2005
Spc. Joshua Marcum, January 4, 2005
Spc. Jeremy McHalffey, January 4, 2005
Spc. Lyle Rymer II, January 28, 2005
Staff Sgt. William Robbins, February 10, 2005

Non-battle casualties
S.F.C. Anthony Lynn Woodham, July 5, 2008
Spc. James M. Clay, November 13, 2008

Composition

Insignia

Shoulder sleeve insignia
The unit's shoulder sleeve insignia consists of a Bowie knife over a diamond. The Bowie knife symbolizes the state of Arkansas, where the Bowie knife originated, and the close hand-to-hand fighting that is the specialty of the light infantry. The diamond is a reference to a unique aspect of the state of Arkansas, which has the only diamond field in North America, at Murfreesboro. The red and blue are the colors of the state flag and represent the loyalty (blue) of the brigade's soldiers and the blood (red) they have shed for both the state of Arkansas and the United States. The brigade motto is "Courage". The Bowie knife that adorns the shoulder sleeve insignia is worn by certain field grade officers and command sergeants major in the brigade combat team. The most famous version of the Bowie knife was designed by Jim Bowie and presented to Arkansas blacksmith James Black in the form of a carved wooden model in December 1830. Black produced the knife ordered by Bowie, and at the same time created another based on Bowie's original design but with a sharpened edge on the curved top edge of the blade. Black offered Bowie his choice and Bowie chose the modified version. Knives like that one, with a blade shaped like that of the Bowie knife but with a pronounced false edge, are today called "Sheffield Bowie" knives, because this blade shape became so popular that cutlery factories in Sheffield, England were mass-producing such knives for export to the United States by 1850, usually with a handle made from hardwood, deer antler, or bone, and sometimes with a guard and other fittings of sterling silver. Bowie returned to Texas with the Black-made knife and was involved in a knife fight with three men who had been hired to kill him. Bowie killed the three would-be assassins with his new knife, and the fame of the knife grew. Legend holds that one man was almost decapitated, the second was disemboweled, and the third had his skull split open. Bowie died at the Battle of the Alamo five years later, and both he and his knife became more famous. The fate of the original Bowie knife is unknown; however, a knife bearing the engraving "Bowie No. 1" has been acquired by the Historic Arkansas Museum from a Texas collector and has been attributed to Black through scientific analysis. Black soon did a booming business making and selling these knives out of his shop in Washington, Arkansas. Black continued to refine his technique and improve the quality of the knife as he went.
In 1839, shortly after his wife's death, Black was nearly blinded when his father-in-law and former partner, who had objected to his daughter's marriage to Black years earlier, broke into his home while Black lay ill in bed and attacked him with a club. Black was no longer able to continue in his trade. Black's knives were known to be exceedingly tough yet flexible, and his technique has not been duplicated. Black kept his technique secret and did all of his work behind a leather curtain. Many claim that Black rediscovered the secret of producing true Damascus steel. In 1870, at the age of 70, Black attempted to pass on his secret to the son of the family that had cared for him in his old age, Daniel Webster Jones. However, Black had been retired for many years and found that he himself had forgotten the secret. Jones would later become Governor of Arkansas. The birthplace of the Bowie knife is now part of the Old Washington Historic State Park, which has over forty restored historical buildings and other facilities, including Black's shop. The park is known as "The Colonial Williamsburg of Arkansas". The American Bladesmith Society established the William F. Moran School of Bladesmithing at this site to instruct new apprentices as well as journeymen and master smiths in the art of bladesmithing. As described in the 39th Anniversary Brigade Annual, published for the brigade combat team's 39th anniversary celebration in 2006 at the headquarters at Ricks Armory, Little Rock, Arkansas, the Bowie knife has been the individual weapon of senior leaders in the unit since the reorganization of the unit in 1967. Only knives that are procured by order of the brigade combat team commander are authorized for wear or presentation. The handle of the knife is commensurate with the leader's rank: general officers are authorized ivory handles; colonels wear knives with stag handles; field grade officers and the aide-de-camp wear black handles; CW3s and above are authorized walnut handles; command sergeants major and sergeants major are authorized cherry wood handles; retired master sergeants are authorized cocobolo handles. The knife is worn on a pistol belt on the bearer's left side with the Army Combat Uniform. The Arkansas Brigade Bowie knife has been worn by members through two deployments in support of Operation Iraqi Freedom. The knife continues to be produced in Arkansas. Until his death, each presentation-grade knife was handmade by Mr. Jimmy Lile of Russellville, Arkansas. Mr. Lile was also commissioned to make the knives used by Sylvester Stallone in the "Rambo" movies. The Lile family continued to make the Bowie knife for the unit for several years following Mr. Lile's death. Today the brigade combat team's knife is produced by Mr. Kenny Teague of Mountainburg. The general public cannot purchase one of these knives, but can purchase a different style based on the Bowie knife pattern. Each brigade Bowie knife bears the recipient's name, social security number, rank, and military branch, as well as the maker's name and the serial number of the knife.

Distinctive insignia
The stars stand for France, Spain, and the U.S., nations to which the Arkansas Territory belonged. The diamond shape was suggested by the state flag, while the wavy bar symbolizes the Arkansas River with the arrow referring to the Arkansa people. The green background alludes to the wooded hills of the Ouachitas and Ozarks. The arrow in flight is used as a symbol of The Arkansas Brigade defending the state.
See also
World War I order of battle
War on Terrorism order of battle

References

External links

1967 establishments in Arkansas Military units and formations in Arkansas Military units and formations established in 1967
19099316
https://en.wikipedia.org/wiki/RealFlight
RealFlight
RealFlight RC Simulator is a radio-controlled airplane and helicopter simulation software series developed by Knife Edge Software and now published by Horizon Hobby. The software allows for the flying of numerous RC aircraft, helicopters and drones so that the user can learn to fly RC, practice their skills or fly with others in multiplayer mode. Despite the similar name, RealFlight RC Simulator is unrelated to Real Flight Simulator, a commercial rebranding (distributed under several different names) of an old version of the free and open-source flight simulator FlightGear. Included with RealFlight RC Simulator are various flying sites (or airports) and aircraft models, almost all of which represent real-life models. The software also includes an airport editor and an aircraft editor to allow for the creation of new flying sites and aircraft. Within RealFlight, editing aircraft is limited to changing the physical and aerodynamic properties. In order to create new visual models, the use of a 3D modeling application such as Autodesk 3ds Max or Blender is required. The software is released in "generations", with each new generation including major updates and new features. The most recent is RealFlight 9, which included additions such as water and water physics, an improved user interface, more aircraft, more flying sites, and an improved "InterLink Elite" controller. RealFlight requires the connection of an InterLink controller, which is included with the software, in order to operate.

Add-Ons and Expansion Packs
Knife Edge Software also develops packs containing additional content for RealFlight, each of which adds new airfields and additional aircraft. For users of RealFlight G4 or newer, the older series of "Add-On" packs have been reworked, with several aircraft receiving updates, and can be downloaded freely from the RealFlight website. The "Add-Ons" volumes, which are no longer officially supported, have been superseded by the newer "Expansion Pack" series of products. The Add-Ons line is compatible with versions of RealFlight dating back to RealFlight Classic, with the exception of Add-Ons 5, which requires RealFlight G2 or higher. Expansion Pack 1 through Expansion Pack 4 require RealFlight G3 or higher, and Expansion Pack 5 and up require RealFlight G4 or higher. Expansion Packs 1 through 3 are now discontinued products.

Compatibility Chart

External links
RealFlight Home Page
Knife Edge Software Home Page

References

1998 video games Flight simulation video games Radio-controlled aircraft Video games developed in the United States Windows games Windows-only games
16706008
https://en.wikipedia.org/wiki/PKCS%201
PKCS 1
In cryptography, PKCS #1 is the first of a family of standards called Public-Key Cryptography Standards (PKCS), published by RSA Laboratories. It provides the basic definitions of and recommendations for implementing the RSA algorithm for public-key cryptography. It defines the mathematical properties of public and private keys, primitive operations for encryption and signatures, secure cryptographic schemes, and related ASN.1 syntax representations. The current version is 2.2 (2012-10-27). Compared to 2.1 (2002-06-14), which was republished as RFC 3447, version 2.2 updates the list of allowed hashing algorithms to align them with FIPS 180-4, therefore adding SHA-224, SHA-512/224 and SHA-512/256.

Keys
The PKCS #1 standard defines the mathematical definitions and properties that RSA public and private keys must have. The traditional key pair is based on a modulus, $n$, that is the product of two distinct large prime numbers, $p$ and $q$, such that $n = pq$. Starting with version 2.1, this definition was generalized to allow for multi-prime keys, where the number of distinct primes may be two or more. When dealing with multi-prime keys, the prime factors are all generally labeled as $r_i$ for some $i$, such that $n = r_1 r_2 \cdots r_u$ for $u \geq 2$. As a notational convenience, $p = r_1$ and $q = r_2$. The RSA public key is represented as the tuple $(n, e)$, where the integer $e$ is the public exponent. The RSA private key may have two representations. The first compact form is the tuple $(n, d)$, where $d$ is the private exponent. The second form has at least five terms $(p, q, d_p, d_q, q_{inv})$, or more for multi-prime keys. Although mathematically redundant to the compact form, the additional terms allow for certain computational optimizations when using the key. In particular, the second format allows the public key to be derived.

Primitives
The standard defines several basic primitives. The primitive operations provide the fundamental instructions for turning the raw mathematical formulas into computable algorithms.
I2OSP - Integer to Octet String Primitive - Converts a (potentially very large) non-negative integer into a sequence of bytes (octet string).
OS2IP - Octet String to Integer Primitive - Interprets a sequence of bytes as a non-negative integer.
RSAEP - RSA Encryption Primitive - Encrypts a message using a public key.
RSADP - RSA Decryption Primitive - Decrypts ciphertext using a private key.
RSASP1 - RSA Signature Primitive 1 - Creates a signature over a message using a private key.
RSAVP1 - RSA Verification Primitive 1 - Verifies a signature is for a message using a public key.
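To make the primitives concrete, the following minimal Python sketch implements the two conversion primitives and the encryption/decryption pair directly from the definitions above. The textbook key (n = 3233) is an assumed toy value chosen for readability; real keys are thousands of bits long, and these raw primitives provide no security on their own without the padding schemes described next.

```python
def i2osp(x: int, x_len: int) -> bytes:
    """I2OSP: non-negative integer to a big-endian octet string of length x_len."""
    if x >= 256 ** x_len:
        raise ValueError("integer too large")
    return x.to_bytes(x_len, "big")

def os2ip(octets: bytes) -> int:
    """OS2IP: big-endian octet string to a non-negative integer."""
    return int.from_bytes(octets, "big")

def rsaep(n: int, e: int, m: int) -> int:
    """RSAEP: c = m^e mod n, for a message representative 0 <= m < n."""
    if not 0 <= m < n:
        raise ValueError("message representative out of range")
    return pow(m, e, n)

def rsadp(n: int, d: int, c: int) -> int:
    """RSADP: m = c^d mod n, for a ciphertext representative 0 <= c < n."""
    if not 0 <= c < n:
        raise ValueError("ciphertext representative out of range")
    return pow(c, d, n)

# RSASP1 and RSAVP1 are the same modular exponentiations with the key roles
# swapped: signing uses the private exponent d, verification the public e.

# Toy demonstration with textbook numbers: n = 61 * 53 = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753
c = rsaep(n, e, os2ip(b"A"))            # encrypt the single octet 0x41
assert i2osp(rsadp(n, d, c), 1) == b"A"
```

In practice these primitives are never applied to application data directly; the schemes below wrap them in padding precisely because raw ("textbook") RSA is deterministic and malleable.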
Schemes
By themselves the primitive operations do not necessarily provide any security. The concept of a cryptographic scheme is to define higher-level algorithms or uses of the primitives so they achieve certain security goals. There are two schemes for encryption and decryption:
RSAES-OAEP: improved encryption/decryption scheme; based on the Optimal Asymmetric Encryption Padding scheme proposed by Mihir Bellare and Phillip Rogaway.
RSAES-PKCS1-v1_5: older encryption/decryption scheme as first standardized in version 1.5 of PKCS #1.
Note: A small change was made to RSAES-OAEP in PKCS #1 version 2.1, causing RSAES-OAEP in PKCS #1 version 2.0 to be totally incompatible with RSAES-OAEP in PKCS #1 version 2.1 and version 2.2.
There are also two schemes for dealing with signatures:
RSASSA-PSS: improved probabilistic signature scheme with appendix; based on the probabilistic signature scheme originally invented by Bellare and Rogaway.
RSASSA-PKCS1-v1_5: old signature scheme with appendix as first standardized in version 1.5 of PKCS #1.
The two signature schemes make use of separately defined encoding methods:
EMSA-PSS: encoding method for signature appendix, probabilistic signature scheme.
EMSA-PKCS1-v1_5: encoding method for signature appendix as first standardized in version 1.5 of PKCS #1.
The signature schemes are actually signatures with appendix, which means that rather than signing some input data directly, a hash function is used first to produce an intermediary representation of the data, and then the result of the hash is signed. This technique is almost always used with RSA because the amount of data that can be directly signed is proportional to the size of the keys, which is almost always much smaller than the amount of data an application may wish to sign.

Version history
Versions 1.1–1.3, February through March 1991, privately distributed.
Version 1.4, June 1991, published for NIST/OSI Implementors' Workshop.
Version 1.5, November 1993. First public publication. Republished as RFC 2313.
Version 2.0, September 1998. Republished as RFC 2437. Introduced the RSAES-OAEP encryption scheme.
Version 2.1, June 2002. Republished as RFC 3447. Introduced multi-prime RSA and the RSASSA-PSS signature scheme.
Version 2.2, October 2012. Republished as RFC 8017.

Implementations
Below is a list of cryptography libraries that provide support for PKCS #1:
Botan
Bouncy Castle
BSAFE
cryptlib
Crypto++
Libgcrypt
mbed TLS
Nettle
OpenSSL
wolfCrypt

Attacks
Multiple attacks were discovered against PKCS #1 v1.5. In 1998, Daniel Bleichenbacher published a seminal paper on what became known as Bleichenbacher's attack (also known as the "million message attack"). PKCS #1 was subsequently updated in release 2.0 and patches were issued to users wishing to continue using the old version of the standard. With slight variations, this vulnerability still exists in many modern servers. In 2006, Bleichenbacher presented a new forgery attack against the signature scheme RSASSA-PKCS1-v1_5.

See also
Comparison of cryptography libraries

References

External links
RFC 8017 - PKCS #1: RSA Cryptography Specifications Version 2.2

Cryptography standards Digital signature schemes Digital Signature Standard
66430515
https://en.wikipedia.org/wiki/DDoS-Guard
DDoS-Guard
DDoS-Guard is a Russian Internet infrastructure company which provides DDoS protection, content delivery network services, and web hosting services. Researchers and journalists have alleged that many of DDoS-Guard's clients are engaged in criminal activity, and investigative reporter Brian Krebs reported in January 2021 that a "vast number" of the websites hosted by DDoS-Guard are "phishing sites and domains tied to cybercrime services or forums online". Some of DDoS-Guard's notable clients have included the Palestinian Islamic militant nationalist movement Hamas, American alt-tech social network Parler, and various groups associated with the Russian state.

Company
DDoS-Guard is based in Russia, as are most of its employees. It was first registered in July 2014 in Sevastopol by Evgeny Marchenko and Dmitry Sabitov, two Russians formerly from Ukraine. The company is incorporated in Scotland as Cognitive Cloud LLP and in Belize as DDoS-Guard Corp. A company with the same name, owned by the same men, had previously existed in Ukraine since 2011, though spokespeople for the company have said this was only an early-stage company created while the software was being developed. The spokespeople stated that DDoS-Guard has always been based in Russia, in Rostov-on-Don, although Meduza reported that the office in that city did not open until 2015. Meduza reported that the company apparently relocated to Russia after Ukrainian national security and cyberpolice officers began investigations into the company due to its choice to host Verified, a forum notorious for platforming credit card scammers. DDoS-Guard has denied knowledge of the investigation. In 2021, a researcher observed that DDoS-Guard appeared to have no physical presence in Belize and had likely incorporated there to gain access to IP addresses normally allocated only to local entities. Of more than 11,000 IP addresses assigned to DDoS-Guard's two subsidiaries, the researcher found two thirds had been provided to the Belizean company by LACNIC, the regional Internet registry responsible for Latin America and the Caribbean. DDoS-Guard has rebutted the allegations and said it does have a presence in Belize. After the researcher reported DDoS-Guard to LACNIC, LACNIC announced it would revoke more than 8,000 IP addresses from the company. On 1 June 2021, cyber-intelligence company Group-IB reported that they had found DDoS-Guard's database, containing site IP addresses, names, and payment information along with its full source code, for purchase on a cybercrime black market forum. The authenticity of the allegedly stolen data was unverified.

Clients
Meduza has reported that, according to a former employee, DDoS-Guard has a history of working with customers who operate on the darknet. The employee has said this is because they can charge higher rates to such customers, who have a much smaller range of choices of Internet service providers willing to work with them, and who often especially need website security services. Some of DDoS-Guard's other clients have included the Palestinian Islamic militant nationalist movement Hamas, and the imageboard 8kun, formerly known as 8chan, which is the online home of the American far-right QAnon conspiracy theory. The company said it ended services for both Hamas and 8chan after learning about the content on the sites from news sources.
DDoS-Guard has ended services for various clients after being informed of their activities by journalists, but Meduza wrote that the company would likely need to deny services to a large portion of its client base if it were to proactively monitor for criminal activity. Brian Krebs, an investigative reporter focusing on cybercrime, wrote in January 2021 that a "review of the several thousand websites hosted by DDoS-Guard is revelatory, as it includes a vast number of phishing sites and domains tied to cybercrime services or forums online." DDoS-Guard is suspected of hosting multiple Internet scammers responsible for stealing banking data, and one of the world's largest online stores for illegal drugs operates using infrastructure associated with DDoS-Guard. DDoS-Guard has also hosted a website dedicated to doxing those who participated in the 2019–20 Hong Kong protests. According to Meduza, the website has been directly linked to Chinese authorities. DDoS-Guard also provides services to The Daily Stormer, an American neo-Nazi, white supremacist, and Holocaust denial website and message board.

Verified
Verified is a platform which Meduza has described as "one of the Internet's oldest and most notorious Russian-language forums for credit-card scammers". Meduza reported that beginning in the spring of 2013, Ukrainian national security and cyberpolice began investigating DDoS-Guard for allegedly servicing this platform, and said this investigation likely led DDoS-Guard to re-establish itself as a Russian company in 2014. DDoS-Guard has said it has no knowledge of such an investigation.

Russian state
In January 2014, before DDoS-Guard moved to Russia, the company partnered with REG.RU, one of the largest domain registrars in Russia. Shortly after, the company began working with clients associated with the Russian state. In 2016, DDoS-Guard began providing denial-of-service protection to the Russian Ministry of Defence. In 2018, DDoS-Guard helped test the Russian state's deep packet inspection systems. It also cooperates closely with the Russian Central Bank.

Parler
DDoS-Guard was providing denial-of-service attack protection services to Parler, an American alt-tech social network which was deplatformed by Amazon Web Services and other Internet service providers after the 2021 United States Capitol attack. Wired noted that Parler's choice to use a Russian company for DDoS protection "could expose its users to Russian surveillance if the site someday does relaunch in full with DDoS-Guard" because of the Russian government's projects to isolate the country's internet. In January 2021, the United States House Committee on Oversight and Reform began an investigation into Parler in which it asked Parler for, among other things, information about agreements, documents, and communications with Russian entities. In the letter to Parler requesting this information, committee chair Carolyn Maloney described DDoS-Guard as a company "which has ties to the Russian government and counts the Russian Ministry of Defense as one of its clients".

See also
Cloudflare
Epik (company)

References

External links

Content delivery networks DDoS mitigation companies Internet security Internet service providers Internet technology companies of Russia Russian companies established in 2014 Ukrainian companies established in 2011 Web hosting
42317826
https://en.wikipedia.org/wiki/Jeremy%20Zerechak
Jeremy Zerechak
Jeremy Zerechak (born 1979) is an American documentary filmmaker. He has directed and produced two feature-length documentaries, Land of Confusion (2008) and Code 2600 (2011).

Background
Born in Scranton, Pennsylvania, Zerechak joined the Army National Guard after high school so he could afford to attend film school. He was enrolled at Pennsylvania State University when his unit was deployed to Iraq in 2004. While serving in Iraq, Zerechak filmed his first feature-length documentary, Land of Confusion, which chronicles his unit's mission in Iraq and the operations of the Iraq Survey Group. Zerechak graduated from Pennsylvania State University in 2006 with a Bachelor of Arts in Film and Video Production. He taught film at Ohio University.

Work
In 2008, Zerechak produced and directed Land of Confusion. The film won two Special Jury Awards at the Florida Film Festival and the Atlanta Film Festival. In 2011, Zerechak produced and directed Code 2600, a documentary about the rise of the Information Technology Age and the history of hacker culture. The film explores the impact of this new and growing connectivity on human relations, personal privacy, and security, both for individuals and for society. Code 2600 was selected for the Grand Jury Award for Best Documentary at the Atlanta Film Festival. In 2012, Zerechak produced and directed The Entrepreneur, a short documentary about American entrepreneur Ron Morris and his terminal battle with pancreatic cancer. In 2013, Zerechak traveled to Jinja, Uganda to film Hackers in Uganda, a documentary about a developing African community, its people, and the unique efforts of Hackers for Charity. The group was founded in 2009 by computer hacker and IT security expert Johnny Long to provide humanitarian services in Uganda.

Filmography
2008: Land of Confusion
2011: Code 2600
2012: The Entrepreneur (short)
2013: Town & Country (short)
2014: Hackers in Uganda (in production)

References

1979 births Living people People from Scranton, Pennsylvania American documentary filmmakers Penn State College of Arts and Architecture alumni United States Army personnel of the Iraq War Ohio University faculty United States Army soldiers Pennsylvania National Guard personnel
40406090
https://en.wikipedia.org/wiki/Tron%20%28hacker%29
Tron (hacker)
Boris Floricic (Floričić), better known by his pseudonym Tron (8 June 1972 – 17 October 1998), was a German-Croat hacker and phreaker whose death in unclear circumstances has led to various conspiracy theories. He is also known for his Diplom thesis presenting one of the first public implementations of a telephone with built-in voice encryption, the "Cryptophon". Floricic's pseudonym was a reference to the eponymous character in the 1982 Disney film Tron. Floricic was interested in defeating computer security mechanisms; amongst other hacks, he broke the security of the German phonecard and produced working clones. He was subsequently sentenced to 15 months in jail for the physical theft of a public phone (for reverse engineering purposes), but the sentence was suspended to probation. From December 2005 to January 2006, media attention was drawn to Floricic when his parents and Andy Müller-Maguhn brought legal action in Germany against the Wikimedia Foundation and its German chapter Wikimedia Deutschland e.V. The first preliminary injunction tried to stop Wikipedia from publishing Floricic's full name, and a second one followed, temporarily preventing the use of the German Internet domain wikipedia.de as a redirect address to the German Wikipedia.

Early life
Floricic grew up in Gropiusstadt, a suburb in southern Berlin (West Berlin at the time). His interests in school focused on technical subjects. He left school after ten years and completed a three-year vocational education (Berufsausbildung) offered by the Technical University of Berlin, graduating as a specialist in communication electronics with a major in information technology (Kommunikationselektroniker, Fachrichtung Informationstechnik). He subsequently earned the Abitur and began studies in computer science at the Technical University of Applied Sciences of Berlin. During his studies, Floricic attended an internship with a company developing electronic security systems. In the winter term 1997/1998, Floricic successfully finished his studies and published his diploma thesis, in which he developed and described the "Cryptophon", an ISDN telephone with built-in voice encryption. Since parts of the project, which were to be provided by another student, were missing, he could not fully complete the Cryptophon. His thesis, however, was rated as exceptional by the evaluating university professor. After graduation, Floricic applied for work, but was unsuccessful. In his spare time he continued, among other activities, his work on the Cryptophon.

Interests
Floricic was highly interested in electronics and security systems of all kinds. He engaged in, amongst other things, attacks against the German phonecard and Pay TV systems. As part of his research he exchanged ideas and proposals with other hackers and scientists. On the mailing list "tv-crypt", operated by a closed group of Pay TV hackers, Floricic wrote in 1995 that his interests were microprocessors, programming languages, electronics of all kinds, digital radio data transmission and especially breaking the security of systems perceived as secure. He claimed to have created working clones of a chipcard used for British Pay TV and said he would continue his work to defeat the security of the Nagravision/Syster scrambling system, which was then used by the German Pay TV provider "PREMIERE". Later, American scientists outlined a theoretical attack against SIM cards used for GSM mobile phones.
Together with hackers from the Chaos Computer Club, Floricic successfully created a working clone of such a SIM card, thus showing the practicability of the attack. He also engaged in cloning the German phonecard, and succeeded. While Floricic only wanted to demonstrate the insecurity of the system, the proven insecurity was also exploited by criminals, which drew the attention of law enforcement agencies and the German national phone operator Deutsche Telekom. After Deutsche Telekom changed the system, Floricic tried to remove a complete public card phone from a booth by force (using a sledgehammer) on 3 March 1995 in order, as he said, to adapt his phonecard simulators to the latest changes. He and a friend were, however, caught by the police during the attempt. Floricic was later sentenced to a prison term of 15 months, which was suspended to probation.

Cryptophon
"Cryptophon" (or "Cryptofon") was the name Floricic chose for his prototype of an ISDN telephone with integrated voice encryption. It was created in the winter term 1997–1998 as part of his diploma thesis, titled "Realisierung einer Verschlüsselungstechnik für Daten im ISDN B-Kanal" (German, meaning "Implementation of Cryptography for Data contained in the ISDN Bearer channel"), at the Technische Fachhochschule Berlin. Floricic focused on making the Cryptophon cheap and easy to build for hobbyists. The phone encrypts telephone calls using the symmetric encryption algorithm IDEA. As IDEA is patented, the cipher was implemented on a replaceable daughter module which would have allowed the user to exchange IDEA for another (probably patent-unencumbered) algorithm. In addition, the system was to be supplemented with a key exchange protocol based on the asymmetric algorithm RSA in order to achieve security against compromised remote stations. The Cryptophon is built around an 8051-compatible microprocessor which controls the whole system and its peripherals (e.g. ISDN controller, keypad and display). For the cryptography, Floricic used cheap DSPs from Texas Instruments which he salvaged from old computer modems, but which could also be bought at affordable prices. As this type of DSP is not powerful enough for the chosen cryptography algorithm, Floricic used two of them for the Cryptophon – one for sending and one for receiving. He planned to extend the phone so it would also be possible to encrypt data connections. Floricic developed both the operating software of the phone and the cryptography implementation in the DSPs. He found a new way to implement IDEA that saved significant processing time.
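The design described above, a symmetric voice cipher keyed per call with an asymmetric exchange intended to protect the session key, is the classic hybrid pattern. The following Python sketch illustrates only that pattern; it is not Floricic's DSP code, the patented IDEA cipher is replaced by a SHA-256 keystream as a stand-in (IDEA is not in Python's standard library), and the textbook RSA numbers are far too small to be secure.

```python
import hashlib
import secrets

# Toy RSA key pair (textbook numbers; real keys are 2048+ bits with padding).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (requires Python 3.8+)

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in for IDEA: XOR with a SHA-256 keystream in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The caller picks a random session key and protects it with the peer's public key.
session_key = secrets.token_bytes(1)  # one byte, so it fits below the toy modulus
encrypted_key = pow(int.from_bytes(session_key, "big"), e, n)

# The peer recovers the session key with its private exponent...
recovered = pow(encrypted_key, d, n).to_bytes(1, "big")

# ...after which both sides encrypt and decrypt voice frames symmetrically.
frame = b"voice frame 0001"
ciphertext = keystream_cipher(session_key, frame)
assert keystream_cipher(recovered, ciphertext) == frame
```

Implementing the symmetric cipher behind a small, well-defined interface, as the Cryptophon's replaceable daughter module did in hardware, corresponds here to swapping out keystream_cipher without touching the rest of the protocol.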
Death
Floricic disappeared on 17 October 1998 and was found dead on 22 October in a local park in Britz, in the Neukölln district of Berlin, hanging from a waist belt wrapped around his neck. The cause of death was officially recorded as suicide. Some of his peers in the Chaos Computer Club, as well as his family members and some outside critics, have been vocal in their assertions that Floricic may have been murdered. It is argued that his activities in the areas of Pay TV cracking and voice scrambling might have disturbed the affairs of an intelligence agency or organized crime enough to provide a motive. The German journalist Burkhard Schröder published a book about the death in 1999, titled Tron – Tod eines Hackers (Tron – Death of a Hacker), in which he presents the facts about the case as known at the time. Because he concluded that Floricic took his own life, the author was harshly criticized by both members of the Chaos Computer Club and Floricic's parents.

Naming controversy
As Floricic's family did not wish his full name (Boris Floricic) to be used, many German newspapers referred to him as "Boris F." On 14 December 2005, his parents obtained a temporary restraining order in a Berlin court against Wikimedia Foundation Inc. because its freely editable online encyclopedia, Wikipedia, mentioned the full name in its German-language version. The order prohibited the Foundation from mentioning the full name on any website under the domain "wikipedia.org". It furthermore required the Foundation to name a representative in Germany within two weeks following the decision. This was widely reported in the Dutch and German press. The initial order was mistakenly addressed to Saint Petersburg, Russia, rather than to St. Petersburg, Florida, United States; this was corrected five days later. On 17 January 2006, a second preliminary injunction from a court in Berlin prohibited the Wikimedia Deutschland e.V. local chapter from linking to the German Wikipedia, resulting in the change of the wikipedia.de address from a link to the German Wikipedia to a page explaining the situation, although the page did not mention Tron. Despite media reports to the contrary, the German Wikipedia itself was never closed or made inaccessible in Germany. Wikimedia Deutschland e.V. confirmed to the Internet news site golem.de that the new injunction was related to the prior case against the Wikimedia Foundation and was issued on behalf of the same plaintiffs. Wikimedia Deutschland e.V. was reported as intending to fight the injunction, arguing that no valid case was presented and that the freedom of the press must be defended. As Müller-Maguhn, one of the spokespersons of the Chaos Computer Club, was deeply involved in the case on the side of the plaintiffs, some media reported this as a case of the Chaos Computer Club against Wikipedia. The Chaos Computer Club had issued a public statement that this was a case between a few of its members and Wikipedia, and that the CCC itself did not take any position in the matter. The Austrian online magazine Futurezone interviewed Andy Müller-Maguhn on 19 January 2006 about the case and its background. Müller-Maguhn admitted that the true reason behind the incident was a recently published work of fiction by a German author in which the main character had the same (civil) name as Floricic. The parents sent a protest to the publisher but were turned down with the argument that the German Wikipedia was using the name as well. Müller-Maguhn then asked the German Wikipedia to remove the name, but was turned down for a number of reasons, including failure to present proof that he was entitled to speak and act on behalf of the parents. On 9 February 2006, the injunction against Wikimedia Deutschland was overturned. The plaintiffs appealed to the Berlin state court, but were turned down in May 2006.

See also
List of hackers

References

Further reading
Burkhard Schröder: Tron: Tod eines Hackers ("Tron: Death of a Hacker"). rororo, 1999.
Chenoweth, Neil: Murdoch's Pirates: Before the phone hacking, there was Rupert's pay-TV skullduggery.
Allen & Unwin, 2012.

External links
Spiegel Online: "How a Dead Hacker Shut Down Wikipedia Germany", 2006-01-20
Wired.com: "Out of Chaos Comes Order", by David Hudson, 1998-12-28 (about the suicide)
Possenspiel um Wikipedia (Die Zeit online edition)
tronland.org (Site dedicated to Tron's memory)

1972 births 1998 suicides Death conspiracy theories German computer criminals Suicides by hanging in Germany
331325
https://en.wikipedia.org/wiki/Minimum%20description%20length
Minimum description length
Minimum description length (MDL) is a model selection principle holding that the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications of Occam's razor. The MDL principle can be extended to other forms of inductive inference and learning, for example to estimation and sequential prediction, without explicitly identifying a single model of the data. MDL has its origins mostly in information theory and has been further developed within the general fields of statistics, theoretical computer science and machine learning, and more narrowly computational learning theory. Historically, there are different, yet interrelated, usages of the definite noun phrase "the minimum description length principle" that vary in what is meant by a description:
Within Jorma Rissanen's theory of learning, a central concept of information theory, models are statistical hypotheses and descriptions are defined as universal codes. Rissanen's 1978 pragmatic first attempt to automatically derive short descriptions relates to the Bayesian Information Criterion (BIC).
Within algorithmic information theory, the description length of a data sequence is the length of the smallest program that outputs that data set. In this context, it is also known as the 'idealized' MDL principle, and it is closely related to Solomonoff's theory of inductive inference, which holds that the best model of a data set is represented by its shortest self-extracting archive.

Overview
Selecting the minimum length description of the available data as the best model observes the principle identified as Occam's razor. Prior to the advent of computer programming, generating such descriptions was the intellectual labor of scientific theorists. It was far less formal than it has become in the computer age. If two scientists had a theoretic disagreement, they could rarely formally apply Occam's razor to choose between their theories. They would have different data sets and possibly different descriptive languages. Nevertheless, science advanced as Occam's razor was an informal guide in deciding which model was best. With the advent of formal languages and computer programming, Occam's razor was mathematically defined. Models of a given set of observations, encoded as bits of data, could be created in the form of computer programs that output that data. Occam's razor could then formally select the shortest program, measured in bits of this algorithmic information, as the best model. To avoid confusion, note that there is nothing in the MDL principle that implies a machine produced the program embodying the model. It can be entirely the product of humans. The MDL principle applies regardless of whether the description to be run on a computer is the product of humans, machines or any combination thereof. The MDL principle requires only that the shortest description, when executed, produce the original data set without error.

Two-part codes
The distinction in computer programs between programs and literal data applies to all formal descriptions and is sometimes referred to as the "two parts" of a description. In statistical MDL learning, such a description is frequently called a two-part code.

MDL in machine learning
MDL applies in machine learning when algorithms (machines) generate descriptions. Learning occurs when an algorithm generates a shorter description of the same data set.
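As a toy illustration of this compression view (a sketch using only the Python standard library, with arbitrarily chosen byte patterns), an off-the-shelf compressor can stand in for a learner: it finds a short description for regular data but can do essentially nothing with patternless data.

```python
import os
import zlib

n = 10_000
regular = bytes(i % 16 for i in range(n))  # a highly regular sequence
noise = os.urandom(n)                      # patternless random bytes

# The compressor exploits the regularity, yielding a much shorter description;
# for the noise, no description shorter than the data itself is found.
print(len(zlib.compress(regular, 9)))  # a few dozen bytes
print(len(zlib.compress(noise, 9)))    # roughly 10,000 bytes
```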
The theoretic minimum description length of a data set, called its Kolmogorov complexity, cannot, however, be computed. That is to say, even if by random chance an algorithm generates the shortest program of all that outputs the data set, an automated theorem prover cannot prove that there is no shorter such program. Nevertheless, given two programs that output the dataset, the MDL principle selects the shorter of the two as embodying the best model. Recent work on algorithmic MDL learning Recent machine MDL learning of algorithmic, as opposed to statistical, data models has received increasing attention with the increasing availability of data, computation resources and theoretic advances. Approaches are informed by the burgeoning field of artificial general intelligence. Shortly before his death, Marvin Minsky came out strongly in favor of this line of research. Statistical MDL learning Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet. [The MDL Principle] is based on the following insight: any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally. (Grünwald, 2004) Based on this, in 1978, Jorma Rissanen published an MDL learning algorithm using the statistical notion of information rather than algorithmic information. Over the past 40 years this has developed into a rich theory of statistical and machine learning procedures with connections to Bayesian model selection and averaging, penalization methods such as Lasso and Ridge, and so on; Grünwald and Roos (2020) give an introduction including all modern developments. Rissanen started out with this idea: all statistical learning is about finding regularities in data, and the best hypothesis to describe the regularities in data is also the one that is able to statistically compress the data most. Like other statistical methods, it can be used for learning the parameters of a model using some data. Usually though, standard statistical methods assume that the general form of a model is fixed. MDL's main strength is that it can also be used for selecting the general form of a model and its parameters. The quantity of interest (sometimes just a model, sometimes just parameters, sometimes both at the same time) is called a hypothesis. The basic idea is then to consider the (lossless) two-stage code that encodes data D with total length L(H) + L(D | H) by first encoding a hypothesis H in the set of considered hypotheses 𝓗 and then coding D "with the help of" H; in the simplest context this just means "encoding the deviations of the data from the predictions made by H": H_MDL = argmin over H in 𝓗 of L(H) + L(D | H). The H achieving this minimum is then viewed as the best explanation of data D. As a simple example, take a regression problem: the data D could consist of a sequence of points (x_1, y_1), ..., (x_n, y_n), and the set 𝓗 could be the set of all polynomials from x to y. To describe a polynomial of degree (say) k, one would first have to discretize its parameters to some precision; one would then have to describe this precision (a natural number); next, one would have to describe the degree k (another natural number); and in the final step, one would have to describe the k + 1 discretized parameters. The total length of these parts is L(H). One would then describe the points in D using some fixed code for the x-values and then using a code for the deviations y_i − H(x_i). In practice, one often (but not always) uses a probabilistic model.
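A minimal numerical sketch of this two-part code (the coefficient precision, the fixed noise level, and the cost charged for stating the degree are all illustrative assumptions) selects a polynomial degree in Python as follows:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 3 * x**2 - x + rng.normal(0, 0.1, size=x.size)  # true degree is 2

PRECISION_BITS = 16   # assumed bits per discretized coefficient
SIGMA = 0.1           # assumed (fixed) noise standard deviation

def description_length(k):
    # L(H): a small cost for stating the degree, plus k + 1 coefficients
    # at a fixed precision.
    coeffs = np.polyfit(x, y, k)
    model_bits = np.log2(k + 2) + (k + 1) * PRECISION_BITS
    # L(D | H): negative log2-likelihood of the residuals under Gaussian noise
    # (up to a constant from discretizing the residuals, which is the same for
    # every k and therefore does not affect the comparison).
    r = y - np.polyval(coeffs, x)
    data_bits = np.sum(r**2) / (2 * SIGMA**2 * np.log(2)) \
        + x.size * np.log2(SIGMA * np.sqrt(2 * np.pi))
    return model_bits + data_bits

print(min(range(9), key=description_length))  # typically prints 2

Degrees above the true one do not pay their way: an extra coefficient costs 16 bits here but shaves well under one bit off the residual code.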
For example, one associates each polynomial H with the corresponding conditional distribution expressing that, given x, y is normally distributed with mean H(x) and some variance σ², which could either be fixed or added as a free parameter. Then the set of hypotheses reduces to the assumption of a model y = H(x) + Z, with H a polynomial and Z Gaussian noise. Furthermore, one is often not directly interested in specific parameter values, but just, for example, in the degree of the polynomial. In that case, one sets 𝓗 to be {H_1, H_2, ...}, where each H_j represents the hypothesis that the data is best described as a j-th degree polynomial. One then codes data D given hypothesis H_j using a one-part code designed such that, whenever some hypothesis fits the data well, the codelength is short. The design of such codes is called universal coding. There are various types of universal codes one could use, often giving similar lengths for long data sequences but differing for short ones. The 'best' codes (in the sense of having a minimax optimality property) are the normalized maximum likelihood (NML) or Shtarkov codes. A quite useful class of codes are the Bayesian marginal likelihood codes. For exponential families of distributions, when Jeffreys prior is used and the parameter space is suitably restricted, these asymptotically coincide with the NML codes; this brings MDL theory in close contact with objective Bayes model selection, in which one also sometimes adopts Jeffreys' prior, albeit for different reasons. The MDL approach to model selection "gives a selection criterion formally identical to the BIC approach" for large numbers of samples. Example of Statistical MDL Learning A coin is flipped 1000 times, and the numbers of heads and tails are recorded. Consider two model classes: The first is a code that represents outcomes with a 0 for heads or a 1 for tails. This code represents the hypothesis that the coin is fair. The code length according to this code is always exactly 1000 bits. The second consists of all codes that are efficient for a coin with some specific bias, representing the hypothesis that the coin is not fair. Say that we observe 510 heads and 490 tails. Then the code length according to the best code in the second model class is shorter than 1000 bits. For this reason a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code for the whole second model class, instead of just using the best code within it. This code could be the normalized maximum likelihood code or a Bayesian code. If such a code is used, then the total codelength based on the second model class would be larger than 1000 bits. Therefore, the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides a better fit to the data. Statistical MDL Notation Central to MDL theory is the one-to-one correspondence between code length functions and probability distributions (this follows from the Kraft–McMillan inequality). For any probability distribution P, it is possible to construct a code C such that the length (in bits) of C(x) is equal to −log2 P(x); this code minimizes the expected code length. Conversely, given a code C, one can construct a probability distribution P such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code is equivalent to searching for a good probability distribution.
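The coin example can be checked numerically. The sketch below computes the exact NML codelength for the biased-coin model class by brute-force summation over all possible head counts (the figures in the comments are approximate):

from math import comb, exp, log, log2

n, k = 1000, 510  # flips and observed heads

def log2_max_likelihood(j, n):
    # log2 of the Bernoulli likelihood of j heads in n flips,
    # maximized over the bias parameter (i.e. evaluated at p = j/n).
    if j == 0 or j == n:
        return 0.0
    p = j / n
    return j * log2(p) + (n - j) * log2(1 - p)

# Parametric complexity: log2 of the NML normalizer, i.e. the sum over all
# outcomes of their maximized likelihood, computed in log-space for stability.
logs = [log(comb(n, j)) + log2_max_likelihood(j, n) * log(2) for j in range(n + 1)]
m = max(logs)
complexity = (m + log(sum(exp(v - m) for v in logs))) / log(2)

best_fit = -log2_max_likelihood(k, n)   # about 999.7 bits, just under 1000
print(best_fit, complexity, best_fit + complexity)
# The NML total, about 1005 bits, exceeds the 1000 bits of the fair-coin
# code, so MDL keeps the fair-coin hypothesis.

The roughly five extra bits are the price of the model class's flexibility; only a markedly more lopsided count would make the biased-coin class the shorter overall description.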
Limitations of Statistical MDL Learning The description language of statistical MDL is not computationally universal. Therefore, it cannot, even in principle, learn models of recursive natural processes. Related concepts Statistical MDL learning is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to Bayesian inference: the code length of the model and the code length of the data given the model in MDL correspond, respectively, to the prior probability and the marginal likelihood in the Bayesian framework. While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory, but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the true data-generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense. In the last mentioned reference Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function. According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe priors that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different. Other systems Rissanen's was not the first information-theoretic approach to learning; as early as 1968 Wallace and Boulton pioneered a related concept called minimum message length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation: MML is a fully subjective Bayesian approach: it starts from the idea that one represents one's beliefs about the data-generating process in the form of a prior distribution. MDL avoids assumptions about the data-generating process. Both methods make use of two-part codes: the first part always represents the information that one is trying to learn, such as the index of a model class (model selection) or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part. The difference between the methods is that, in the MDL literature, it is advocated that unwanted parameters should be moved to the second part of the code, where they can be represented with the data by using a so-called one-part code, which is often more efficient than a two-part code. In the original description of MML, all parameters are encoded in the first part, so all parameters are learned. Within the MML framework, each parameter is stated to exactly that precision which results in the optimal overall message length: the preceding example might arise if some parameter was originally considered "possibly useful" to a model but was subsequently found to be unable to help to explain the data (such a parameter will be assigned a code length corresponding to the (Bayesian) prior probability that the parameter would be found to be unhelpful).
In the MDL framework, the focus is more on comparing model classes than models, and it is more natural to approach the same question by comparing the class of models that explicitly include such a parameter against some other class that doesn't. The difference lies in the machinery applied to reach the same conclusion. See also Algorithmic probability Algorithmic information theory Inductive inference Inductive probability Lempel–Ziv complexity References Further reading Minimum Description Length on the Web, by the University of Helsinki. Features readings, demonstrations, events and links to MDL researchers. Homepage of Jorma Rissanen, containing lecture notes and other recent material on MDL. Advances in Minimum Description Length, MIT Press. Algorithmic information theory
42123
https://en.wikipedia.org/wiki/CATIA
CATIA
CATIA (an acronym of computer-aided three-dimensional interactive application) is a multi-platform software suite for computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), 3D modeling and product lifecycle management (PLM), developed by the French company Dassault Systèmes. Since it supports multiple stages of product development, from conceptualization, design and engineering to manufacturing, it is considered CAx software and is sometimes referred to as a 3D product lifecycle management software suite. Like most of its competitors, it facilitates collaborative engineering through an integrated cloud service and can be used across disciplines, including surfacing and shape design; electrical, fluid and electronic systems design; mechanical engineering; and systems engineering. Besides being used in a wide range of industries from aerospace and defence to packaging design, CATIA has been used by architect Frank Gehry to design some of his signature curvilinear buildings, and his company Gehry Technologies developed its Digital Project software based on CATIA. The software has been merged with the company's other software suite 3D XML Player to form the combined Solidworks Composer Player. History CATIA started as an in-house development in 1977 by French aircraft manufacturer Avions Marcel Dassault to provide 3D surface modeling and NC functions for the CADAM software the company used at that time to develop the Mirage fighter jet. Initially named CATI (conception assistée tridimensionnelle interactive, French for interactive aided three-dimensional design), it was renamed CATIA in 1981 when Dassault created the subsidiary Dassault Systèmes to develop and sell the software, under the management of its first CEO, Francis Bernard. Dassault Systèmes signed a non-exclusive distribution agreement with IBM, which had been selling CADAM for Lockheed since 1978. Version 1 was released in 1982 as an add-on for CADAM. During the 1980s, CATIA saw wider adoption in the aviation and military industries, with users such as Boeing and General Dynamics Electric Boat Corp. Dassault Systèmes purchased CADAM from IBM in 1992, and the next year CATIA CADAM was released. During the 1990s, CATIA was ported, first in 1996, from one to four Unix operating systems, and it was entirely rewritten for version 5 in 1998 to support Windows NT. In the years prior to 2000, this caused incompatibility problems between versions that later led to $6.1 billion in additional costs due to delays in production of the Airbus A380. With the launch of the Dassault Systèmes 3DEXPERIENCE platform in 2014, CATIA became available as a cloud version. Release history Gallery See also Comparison of computer-aided design editors List of 3D computer graphics software List of 3D rendering software List of 3D modeling software References External links History of CATIA Computer-aided design software Computer-aided manufacturing software Computer-aided engineering software Product lifecycle management Dassault Group Proprietary software
2814311
https://en.wikipedia.org/wiki/TOWER%20Software
TOWER Software
TOWER Software was a software development company, founded in 1985 in Canberra, Australia. The company provided and supported enterprise content management software, notably its TRIM product line for electronic records management. TOWER Software was acquired by Hewlett-Packard Company in 2008 as a part of its HP Software Division. History TOWER Software began making records management software in 1985. It shifted to providing electronic document and records management software with the introduction of TRIM Captura in 1998. TRIM Context was released in 2002. In 2004 the company re-branded itself as a provider of enterprise content management solutions. TOWER Software contributed to the development of international and local standards by sitting on a working group to develop AS4390 (the Australian standard for records management), which influenced the ISO 15489 standard, and by reviewing the MoReq and DoD 5015.2 standards. In April 2008, TOWER Software shareholders were advised of the company's intention to accept an offer by Hewlett-Packard to acquire the company for approximately US$105 million. At that time, TOWER Software had 1000 customers in 32 countries representing the public sector and highly regulated industries such as healthcare, energy and utilities, and banking and finance. TRIM Context is now called HP TRIM Records Management System software and is based on the ISO standard 15489 for records management. TRIM software also has U.S. Department of Defense (DoD) 5015.2 certification and adheres to the principles outlined in AS4390. Recognition and awards KMWorld Readers' Choice Award 2006 External links HP Software website A Strategic Approach to Managing Information from National Archives of Australia ARMA International Electronic Records and e-Discovery See also DIRKS, Design and Implementation of Record Keeping Systems The National Archives (UK), The National Archives Design Criteria Standard for Electronic Records Management Software Applications Paperless Office Document Management Document Imaging References Defunct technology companies of Australia Hewlett-Packard acquisitions HP software International information technology consulting firms Companies based in Canberra
32317083
https://en.wikipedia.org/wiki/Ad%20Opt
Ad Opt
AD OPT is the airline crew planning division of IBS Software. History AD OPT was founded in 1987 by Jean Éthier, Pierre Lestage, Daniel McInnis, François Soumis and Pierre Trudeau, all operational researchers from the decision analysis research group GERAD and the CRT (Centre de Recherche sur les Transports / Research Center on Transportation) in Montreal, Quebec, Canada. Originally, they worked on developing truck itinerary management software for mining companies. It was only later that the company concentrated on developing software to manage crew planning (Altitude) and shift worker schedules (ShiftLogic). In 1999, the company was listed on the Toronto Stock Exchange (TSX) under AOP. With ShiftLogic, the company later expanded into other industries, including manufacturing, hospitality, and transportation. In 2001, AD OPT purchased Total Care Technologies to add staff scheduling for the healthcare industry to its product list. AD OPT's operational research team maintains close ties with GERAD and its teams by funding some of their research projects in the aviation scheduling domain. In 2004, AD OPT Technologies Inc. was acquired by Kronos Incorporated, and in 2019, AD OPT was acquired by the aviation and transportation IT services provider IBS Software. In June 2015, AD OPT received the CORS Omond Solandt Award for Excellence in Operational Research. Clients Some of AD OPT's clients (present and past) include Air Canada, easyJet, Emirates Airlines, Etihad Airways, FedEx, Qantas Airways, South African Airlines, United Airlines, UPS, US Airways, and Virgin Australia. References Business software companies Companies based in Montreal Software companies established in 1987 Transportation planning
60955830
https://en.wikipedia.org/wiki/Vampire%3A%20The%20Masquerade%20%E2%80%93%20Coteries%20of%20New%20York
Vampire: The Masquerade – Coteries of New York
Vampire: The Masquerade – Coteries of New York is a visual novel-style adventure video game developed and published by Draw Distance. It is based on the tabletop role-playing game Vampire: The Masquerade, and is part of the larger World of Darkness series. It was released in 2019 for Microsoft Windows, and in 2020 for Linux, MacOS, Nintendo Switch, PlayStation 4, and Xbox One. The stand-alone expansion Shadows of New York followed in 2020. The player takes the role of one of three fledgling vampires of different vampire clans with different vampiric abilities, and interacts with the members of their coterie. The story depicts the struggle between two vampiric factions, and diverges based on player choices. The game was designed by Krzysztof Zięba, who was also one of the writers, and used the tabletop game's sourcebook New York by Night as its main inspiration and reference for the characters and setting, while also taking influence from the use of moral dilemmas in the video games developed by Telltale Games. In adapting the tabletop game, the developers chose not to incorporate many of its game mechanics, and focused on what they considered essential for storytelling. Gameplay Vampire: The Masquerade – Coteries of New York is a single-player, visual novel-style adventure game with mainly text-based gameplay, which involves the player making dialogue and story choices. As a vampire, the player character needs to balance their blood thirst with their humanity, while also ensuring that they do not reveal themselves as vampiric and thereby break the Masquerade. The narrative branches based on the choices the player makes. In addition to the game's main quest, the player has access to side quests and loyalty quests; the latter involve creating bonds with the characters in their party ("coterie"). The player character can belong to one of three vampire clans, which affects the character's ethics and dialogue, and how the members of the player's coterie react to them. The choice of clan also determines what vampiric abilities ("disciplines") the player can use: a Brujah character can use Celerity (increased speed) and Potence (increased strength); a Toreador can use Celerity as well as Auspex (supernatural senses); and a Ventrue can use Fortitude (increased resilience) and Dominate (mind control). All three can use Presence (attracting or scaring humans). These abilities can be used for problem solving, as well as in combat situations and when interacting with characters. Synopsis Setting Coteries of New York is set in New York City, in the World of Darkness. The story focuses on the struggles between two vampire factions – the traditionalist Camarilla and the rebellious Anarchs – and lets the player take the role of one of three fledgling vampires belonging to Camarilla clans: a passionate man of clan Brujah, an artistic man of clan Toreador, and a controlling businesswoman of clan Ventrue. Plot The unsuspecting player character is Embraced and transformed into a fledgling vampire by a mysterious stranger. They are picked up by Sheriff Qadir al-Asmai and taken before the court of Prince Hellene Panhard. Panhard sentences the fledgling to death in accordance with the vampire Traditions, but Sophie Langley, a patron at the court, intervenes and offers to take them under her protection. Langley provides lodging for the fledgling, teaches them how to hunt as well as other vampire-related knowledge, and introduces them to prominent members of the court, including Thomas Arturo and Robert Larson.
On Langley's advice, the fledgling begins to build their own coterie, reaching out to four recommended candidates: Agathon, D'Angelo, Hope and Tamika. The fledgling is also frequently sent to do Langley's bidding, which includes obtaining information from the powerful broker Kaiser and setting up a secret meeting with Torque, an influential Anarch Baron. Langley reveals that the Anarch leader Boss Callihan is in fact the lover of Prince Panhard and blood bound to her, and that the two are allied to maintain the status quo for their personal gain. She proposes that they head to Ellis Island to catch Panhard and Callihan in the act, using the knowledge to overthrow them, with Langley and Torque taking their places as leaders of the Camarilla and the Anarchs. The fledgling, Langley and Torque confront Panhard and Callihan on Ellis Island. Torque is forced to escape during the ensuing brawl, which is then interrupted by the arrival of Arturo. Arturo reveals that he orchestrated all the events up to this point, including the fledgling's Embrace, and used them as a pawn to manipulate Langley into making a power play. He explains that he intended to destabilize the city, before returning it to a stable status quo, simply for his amusement. Arturo then has Langley executed and, declaring himself impressed with the fledgling's resourcefulness, offers to bind them to his blood and by extension his will. Development Coteries of New York was developed and published by the Polish studio Draw Distance, and was designed by Krzysztof Zięba, who was also one of the game's writers. The game was based on the fifth edition of the tabletop role-playing game Vampire: The Masquerade, and was produced in cooperation with Modiphius Entertainment, the developer at the time of the tabletop game, to ensure that it adhered to the Vampire: The Masquerade canon and lore; the story from Coteries of New York was in turn incorporated into the series' lore. It was, however, also designed with players who are new to the series in mind, and was written to work as an introduction to the series and setting, with a focus on introducing basic concepts and different vampire clans. Negotiating the licensing agreement for Vampire: The Masquerade, and the structure and content of the game, was a long process, starting at the Nordic Game conference in 2018. The developers used the sourcebook New York by Night as their main source of inspiration and reference for the cast and setting. As New York by Night was published in 2001 for the tabletop game's Revised Edition, it was considered a jumping-off point, with characters originating from it being developed to account for the passage of time in the setting since the book's publication, certain conflicts and events being resolved or progressing, and new characters being added. In the rare cases where the sourcebook indicates how a certain character speaks, this was taken into consideration in the scriptwriting. In addition to New York by Night, the developers made use of information from Camarilla and Anarch, the sourcebooks that were available for the fifth edition of the tabletop game at the time. Another big influence on the game was the work of the game studio Telltale Games and its use of moral dilemmas. Early in the development, Choose Your Own Adventure gamebooks were also an influence.
In adapting the tabletop game to a narrative-driven video game format, the developers focused on what they considered the tabletop game's core values – personal stories, and characters and their conflicting motivations – rather than on gameplay mechanics. In doing so, they removed tabletop game mechanics, such as leveling up and weapon stats, that were not crucial to telling the story, resulting in what they described as essentially a role-playing game without the things commonly associated with role-playing in video games. Although Coteries of New York was not initially designed as a visual novel-like game, it ended up moving towards that format, as it best suited a dialogue-driven game given the available development time. Release The game was announced in June 2019 with a teaser trailer, and was released on December 11, 2019 for Microsoft Windows, following a one-week delay to allow for bug-fixing. Ports followed for Linux and MacOS on January 23, 2020, Nintendo Switch on March 24, 2020, PlayStation 4 on March 25, 2020, and Xbox One on April 15, 2020. The console versions launched with additional character and environment art and improved audio design, which was added to the Windows version through an update. An artbook and a soundtrack album were released digitally on March 24, 2020. A Japanese localization of the game was released on November 12, 2020 for Microsoft Windows, PlayStation 4, and Nintendo Switch by DMM Games. A stand-alone expansion with an independent story, Vampire: The Masquerade – Shadows of New York, was released on September 10, 2020; it has the player take the role of a vampire from clan Lasombra investigating the death of the local Anarch Movement leader. A physical "collector's edition" that includes Coteries of New York, Shadows of New York, an artbook, and a vinyl soundtrack, is planned for release in Q1/Q2 2021 for Nintendo Switch, PC, and PlayStation 4. Reception The game saw a "mixed or average" critical response according to the review aggregator Metacritic, but won the 2020 Central & Eastern European Game Awards prize for best narrative and was featured by Dengeki Online in its serial about recommended downloadable games. The PC version was among the best-selling new releases of the month on Steam, and recouped development and marketing costs within a week; on the other hand, the Japanese PlayStation 4 release did not appear on Famitsu's weekly top 30 chart of physical video game sales during its debut week, indicating that it sold fewer than 2,900 retail copies during the period. Notes References External links 2019 video games Adventure games Linux games MacOS games Nintendo Switch games PlayStation 4 games Single-player video games Vampire: The Masquerade Video games about vampires Video games developed in Poland Video games featuring female protagonists Video games set in New York City Video games with expansion packs Visual novels Windows games World of Darkness video games Xbox One games
2112462
https://en.wikipedia.org/wiki/XWiki
XWiki
XWiki is a free wiki software platform written in Java with a design emphasis on extensibility. XWiki is an enterprise wiki. It includes WYSIWYG editing, OpenDocument-based document import/export, semantic annotations and tagging, and advanced permissions management. As an application wiki, XWiki allows for the storing of structured data and the execution of server-side scripts within the wiki interface. Scripts in languages including Velocity, Apache Groovy, Python, Ruby and PHP can be written directly into wiki pages using wiki macros. User-created data structures can be defined in wiki documents, and instances of those structures can be attached to wiki documents, stored in a database, and queried using either Hibernate query language or XWiki's own query language. XWiki.org's extension wiki is home to XWiki extensions ranging from code snippets, which can be pasted into wiki pages, to loadable core modules. Many of XWiki's features are provided by extensions which are bundled with it. The wikitext is rendered using the XWiki Rendering Engine, which extends the WikiModel and Doxia systems, allowing it to parse Confluence, JSPWiki, Creole, MediaWiki, and TWiki syntaxes as well as XWiki's own syntax. XWiki pages are written by default using the WYSIWYG editor and rendered with XWiki syntax to format text, create tables, create links, display images, etc. Development XWiki code is licensed under the GNU Lesser General Public License and hosted on GitHub, where everyone is free to fork the source code and develop changes in their own repository. The content included in the XWiki wiki is licensed under a Creative Commons attribution license, so that it can be redistributed as long as it references XWiki; derivatives can be re-licensed entirely. While most of the active developers are funded by the commercial support company XWiki SAS, XWiki SAS maintains a strict boundary between itself and the XWiki free software project. All decisions about the direction of the XWiki software project are made by consensus of the committers and must go through the developers' mailing list. Open source projects XWiki relies heavily on other open source projects to work. They include: Groovy: for advanced scripting requirements Hibernate: relational database storage Lucene: to index all the content of a wiki and its attachments and allow searching within their content Velocity: a powerful template language History XWiki was originally written by Ludovic Dubost, who founded XPertNet SARL, later to become XWiki SAS, and it was first released in January 2004 under the GNU General Public License. The "X" in the name comes from "eXtensible Wiki" (when pronounced, it sounds like 'X'). The first version of the Wiki Farm xwiki.com was released in April 2004. In addition, the open-source project was hosted on SourceForge, where the first commit was made on 15 December 2003. In 2006, the license was changed to the GNU Lesser General Public License to give the developer community greater flexibility, Apache Maven developer Vincent Massol became the lead developer, and XWiki won the Lutece d'Or award for best open source software developed for the enterprise. After 6 beta versions and 5 release candidates, XWiki 1.0 was released on May 22, 2007, bringing new features such as a stand-alone installer and semantic tagging. 2007 also brought the introduction of XWiki Watch, for allowing teams to collaboratively follow RSS feeds.
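Wiki content is also reachable programmatically. As a rough sketch (the endpoint layout follows XWiki's documented REST API, while the host, page and credentials below are invented placeholders), a page can be fetched from Python with the third-party requests library:

import requests

# Fetch the XML representation of a page through XWiki's REST API.
BASE = "http://localhost:8080/xwiki/rest"   # hypothetical local instance
resp = requests.get(
    f"{BASE}/wikis/xwiki/spaces/Main/pages/WebHome",
    headers={"Accept": "application/xml"},
    auth=("Admin", "admin"),                # hypothetical credentials
)
resp.raise_for_status()
print(resp.text[:200])  # XML describing the page, including its metadata

The same URL scheme extends to objects and attachments, which is what makes the structured-data capabilities described above scriptable from outside the wiki.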
Features
Structured content and inline scripting, which allows building wiki applications
User rights management (by wiki, space or page, using groups, etc.)
PDF export
Full-text search
Version control
Import of office documents into wiki syntax through OpenOffice
Various protocols for accessing the wiki (WebDAV, REST, XML-RPC)
Content and site design export and import
Plugins, API, programming
More features are listed on the official website. XWiki is also an application wiki that allows the creation of objects and classes within the wiki. This way, forms can be developed in a very short time span and be reused to enter data on the wiki following a specific template. This means that end users can be presented with a page on which the layout is already drawn, where they can directly fill in the fields needed. See also Comparison of wiki software References External links XWiki Open-Source project official homepage XWiki Organization in GitHub OW2 Consortium Free content management systems Free wiki software
984322
https://en.wikipedia.org/wiki/Beam%20Software
Beam Software
Krome Studios Melbourne, originally Beam Software, was an Australian video game development studio founded in 1980 by Alfred Milgrom and Naomi Besen and based in Melbourne, Australia. Initially formed to produce books and software to be published by Melbourne House, a company they had established in London in 1977, the studio operated independently from 1987 until 1999, when it was acquired by Infogrames, who changed the name to Infogrames Melbourne House Pty Ltd. In 2006 the studio was sold to Krome Studios. The name Beam was a contraction of the names of the founders: Naomi Besen and Alfred Milgrom. History Home computer era In the early years, two of Beam's programs were milestones in their respective genres. The Hobbit, a 1982 text adventure by Philip Mitchell and Veronika Megler, sold more than a million copies. It employed an advanced parser by Stuart Richie and had real-time elements: even if the player didn't enter commands, the story would move on. In 1985 Greg Barnett's two-player martial arts game The Way of the Exploding Fist helped define the genre of one-on-one fighting games on the home computer. The game won Best Overall Game at the Golden Joystick Awards. In 1987 Beam's UK publishing arm, Melbourne House, was sold to Mastertronic for £850,000. Beam chairman Alfred Milgrom recounted, "...around 1987 a lot of our U.K. people went on to other companies and at around the same time the industry was moving from 8-bit to 16-bit. It was pretty chaotic. We didn't have the management depth at that time to run both the publishing and development sides of things, so we ended up selling off the whole Melbourne House publishing side to Mastertronic." Subsequent games were released through varying publishers. The 1988 fighting games Samurai Warrior and Fist +, the third instalment in the Exploding Fist series, were published through Telecomsoft's Firebird label. 1988 also saw the release of the space shoot-'em-up Bedlam, published by GO!, one of U.S. Gold's labels, and The Muncher, published by Gremlin Graphics. Shift to consoles and PCs In 1987 Nintendo granted a developer's licence for the NES, and Beam developed games on that platform for US and Japanese publishers. Targeted at an Australian audience, releases such as Aussie Rules Footy and International Cricket for the NES proved successful. In 1992 they released the original title Nightshade, a dark superhero comedy game. The game was meant to be the first part in a series, but no sequels were ever made; however, it served as the basis for Shadowrun. Released in 1993, Shadowrun used an innovative dialogue system based on the acquisition of keywords, which could be used in subsequent conversations to open new branches in the dialogue tree. Also in 1993 they released Baby T-Rex, a Game Boy platform game that the developer actively sought to adapt to a number of different licensed properties in different countries around the world, including the animated film We're Back! in North America and the puppet character Agro in their home country of Australia. In 1997, Beam relaunched the Melbourne House brand, under which they published the PC titles Krush Kill 'n' Destroy (KKND), and the sequels KKND Xtreme and KKND2: Krossfire. They released KKND2 in South Korea well before they released it in the American and European markets, and pirated versions of the game were available on the internet before it was available in stores in the U.S.
They were the developers of the 32-bit versions of Norse By Norse West: The Return of the Lost Vikings for the Sega Saturn, PlayStation and PC in 1996. They also helped produce SNES games such as WCW SuperBrawl Wrestling, Super Smash TV and an updated version of International Cricket titled Super International Cricket. They ported the Sega Saturn game Bug! to Windows 3.x in August 1996. 1998 saw a return to RPGs with Alien Earth, again with a dialogue tree format. Also in 1998, the studio developed the racing games DethKarz and GP 500. In 1999 Beam Software was acquired by Infogrames and renamed Infogrames Melbourne House Pty Ltd. 2000s They continued to cement a reputation as a racing game developer with Le Mans 24 Hours and Looney Tunes: Space Race (both Dreamcast and PlayStation 2), followed by the technically impressive Grand Prix Challenge (PlayStation 2), before a disastrous venture into third-person shooters with Men in Black II: Alien Escape (PlayStation 2, GameCube). In 2004 the studio released Transformers for the PlayStation 2 games console, based on the then-current Transformers Armada franchise by Hasbro. The game reached the top of the UK PlayStation 2 games charts, making it Melbourne House's most successful recent title. The studio then completed work on PlayStation 2 and PlayStation Portable ports of Eden's next-generation Xbox 360 title Test Drive: Unlimited. In December 2005, Atari decided to shift away from internal development, seeking to sell its studios, including Melbourne House. In November 2006 Krome Studios announced that it had acquired Melbourne House from Atari and that the studio would be renamed Krome Studios Melbourne. It was closed on 15 October 2010, along with the main Brisbane office. Besides game development, Beam Software also had a division, Smarty Pants Publishing Pty Ltd., which created software titles for kids, as well as the proprietary video compression technology VideoBeam and Famous Faces, a facial motion-capture hardware and software solution. Games As Beam Software 1982: Strike Force (TRS-80), Hungry Horace, Horace Goes Skiing, Horace and the Spiders, The Hobbit, Penetrator 1984: Castle of Terror, Hampstead, Mugsy, Sherlock 1985: Gyroscope, Lord of the Rings: Game One, Terrormolinos, Way of the Exploding Fist 1986: Asterix and the Magic Cauldron, Mugsy's Revenge, Rock 'n' Wrestle 1987: Fist II, Knuckle Busters, Shadows of Mordor, Street Hassle 1988: Samurai Warrior: The Battles of Usagi Yojimbo, The Muncher 1989: Back to the Future (NES), Aussie Games (Commodore 64, ZX Spectrum) 1990: Back to the Future Part II & III (NES), Dash Galaxy in the Alien Asylum (NES), Boulder Dash (Game Boy), NBA All-Star Challenge (Game Boy), The Punisher (NES), Road Blasters (NES), Bigfoot (NES) 1991: Hunt for Red October (Game Boy), Smash TV (NES), Family Feud (NES), J. R. R.
Tolkien's Riders of Rohan (DOS), Aussie Rules Footy (NES), Power Punch II (NES), Star Wars (NES) 1992: Nightshade (NES), T2: The Arcade Game (Game Boy), NBA All-Star Challenge 2 (Game Boy), Tom and Jerry (Game Boy), Super Smash TV (SNES), George Foreman's KO Boxing (Game Boy) 1993: Baby T-Rex (Game Boy), We're Back BC (Game Boy), Agro Soar (Game Boy), Blades of Vengeance (Genesis), NFL Quarterback Club (Game Boy), Radical Rex (Genesis), Shadowrun (SNES), MechWarrior (SNES), Super High Impact (Genesis, SNES), Tom and Jerry - Frantic Antics (Genesis) 1994: The Simpsons: Itchy & Scratchy in Miniature Golf Madness (Game Boy), WCW: The Main Event (Game Boy), Super Smash TV (GG, SMS), Solitaire FunPak (Game Boy), Cricket '97 Ashes Edition (PC), Radical Rex (SNES) 1995: True Lies (GB, Genesis, SNES), The Dame Was Loaded (PC) 1995: Bug! (PC) 1996: 5 in One Fun Pak (GG), WildC.A.T.S (SNES) 1997: Caesars Palace (PlayStation) 1997: Krush, Kill 'n' Destroy (PC) 1998: Dethkarz (PC) 1998: NBA Action 98 (PC) 1998: KKnD 2: Krossfire (PC, PlayStation) 1999: GP 500 (PC) As Infogrames Melbourne House/Atari Melbourne House References External links Official website via Internet Archive Video game development companies Video game companies established in 1980 Video game companies disestablished in 2010 Defunct video game companies of Australia Companies based in Melbourne Australian companies established in 1980 Australian companies disestablished in 2010
19999
https://en.wikipedia.org/wiki/Microcode
Microcode
In processor design, microcode is a technique that interposes a layer of computer organization between the central processing unit (CPU) hardware and the programmer-visible instruction set architecture of a computer. Microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal finite-state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, although in current desktop CPUs it is only a fallback path for cases that the faster hardwired control unit cannot handle. Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming, and the microcode in a particular processor implementation is sometimes called a microprogram. More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family. Some hardware vendors, especially IBM, use the term microcode as a synonym for firmware. In that way, all code within a device is termed microcode, regardless of whether it is microcode or machine code; for example, hard disk drives are said to have their microcode updated, though they typically contain both microcode and firmware. Overview The lowest layer in a computer's software stack is traditionally raw machine code instructions for the processor. In microcoded processors, the microcode fetches and executes those instructions. To avoid confusion, each microprogram-related element is differentiated by the micro prefix: microinstruction, microassembler, microprogrammer, microarchitecture, etc. Engineers normally write the microcode during the design phase of a processor, storing it in a read-only memory (ROM) or programmable logic array (PLA) structure, or in a combination of both. However, machines also exist that have some or all microcode stored in static random-access memory (SRAM) or flash memory. This is traditionally denoted as writeable control store in the context of computers, which can be either read-only or read-write memory. In the latter case, the CPU initialization process loads microcode into the control store from another storage medium, with the possibility of altering the microcode to correct bugs in the instruction set, or to implement new machine instructions. Complex digital processors may also employ more than one (possibly microcode-based) control unit in order to delegate sub-tasks that must be performed essentially asynchronously in parallel. A high-level programmer, or even an assembly language programmer, does not normally see or change microcode. Unlike machine code, which often retains some backward compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself. Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry.
For example, a single typical horizontal microinstruction might specify the following operations:
Connect register 1 to the A side of the ALU
Connect register 7 to the B side of the ALU
Set the ALU to perform two's-complement addition
Set the ALU's carry input to zero
Store the result value in register 8
Update the condition codes from the ALU status flags (negative, zero, overflow, and carry)
Microjump to a given microPC address for the next microinstruction
To control all of a processor's features simultaneously in one cycle, the microinstruction is often wider than 50 bits; e.g., 128 bits on a 360/85 with an emulator feature. (A bit-level sketch of such a control word appears at the end of this section.) Microprograms are carefully designed and optimized for the fastest possible execution, as a slow microprogram would result in a slow machine instruction and degraded performance for related application programs that use such instructions. Justification Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were hardwired. Each step needed to fetch, decode, and execute the machine instructions (including any operand address calculations, reads, and writes) was controlled directly by combinational logic and rather minimal sequential state machine circuitry. While such hard-wired processors were very efficient, the need for powerful instruction sets with multi-step addressing and complex operations (see below) made them difficult to design and debug; highly encoded and varied-length instructions can contribute to this as well, especially when very irregular encodings are used. Microcode simplified the job by allowing much of the processor's behaviour and programming model to be defined via microprogram routines rather than by dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change. Thus, microcode greatly facilitated CPU design. From the 1940s to the late 1970s, a large portion of programming was done in assembly language; higher-level instructions mean greater programmer productivity, so an important advantage of microcode was the relative ease with which powerful machine instructions can be defined. The ultimate extension of this is the "Directly Executable High Level Language" design, in which each statement of a high-level language such as PL/I is entirely and directly executed by microcode, without compilation. The IBM Future Systems project and Data General Fountainhead Processor are examples of this. During the 1970s, CPU speeds grew more quickly than memory speeds, and numerous techniques such as memory block transfer, memory pre-fetch and multi-level caches were used to alleviate this. High-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a character string can be done as a single machine instruction, thus avoiding multiple instruction fetches. Architectures with instruction sets implemented by complex microprograms included the IBM System/360 and Digital Equipment Corporation VAX. The approach of increasingly complex microcode-implemented instruction sets was later called complex instruction set computer (CISC).
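As a rough bit-level sketch of how such a wide control word might be organized (all field names, widths and encodings here are invented for illustration; real control words are machine-specific), consider the following Python model:

# Pack and unpack a hypothetical horizontal microinstruction word whose fields
# mirror the micro-operations listed above.
FIELDS = [              # (name, width in bits)
    ("a_bus_reg", 4),   # register driven onto the ALU's A side
    ("b_bus_reg", 4),   # register driven onto the ALU's B side
    ("alu_op", 3),      # e.g. 0 = two's-complement ADD (assumed encoding)
    ("carry_in", 1),    # ALU carry input
    ("dest_reg", 4),    # register that latches the ALU result
    ("set_cc", 1),      # update condition codes from the ALU status flags
    ("next_upc", 12),   # microPC address of the next microinstruction
]

def pack(**values):
    word, shift = 0, 0
    for name, width in FIELDS:
        word |= (values.get(name, 0) & ((1 << width) - 1)) << shift
        shift += width
    return word

def unpack(word):
    fields, shift = {}, 0
    for name, width in FIELDS:
        fields[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return fields

# "R8 := R1 + R7 with carry-in 0, update flags, then go to micro-address 0x2A"
word = pack(a_bus_reg=1, b_bus_reg=7, alu_op=0, carry_in=0,
            dest_reg=8, set_cc=1, next_upc=0x2A)
assert unpack(word)["dest_reg"] == 8

Every field is present in every word, which is exactly why horizontal control words grow so wide and why fields unused on a given tick become no-ops.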
An alternate approach, used in many microprocessors, is to use one or more programmable logic arrays (PLAs) or read-only memories (ROMs), instead of combinational logic, mainly for instruction decoding, and let a simple state machine (without much, or any, microcode) do most of the sequencing. The MOS Technology 6502 is an example of a microprocessor using a PLA for instruction decode and sequencing. The PLA is visible in photomicrographs of the chip, and its operation can be seen in the transistor-level simulation. Microprogramming is still used in modern CPU designs. In some cases, after the microcode is debugged in simulation, logic functions are substituted for the control store. Logic functions are often faster and less expensive than the equivalent microprogram memory. Benefits A processor's microprograms operate on a more primitive, totally different, and much more hardware-oriented architecture than the assembly instructions visible to normal programmers. In coordination with the hardware, the microcode implements the programmer-visible architecture. The underlying hardware need not have a fixed relationship to the visible architecture. This makes it easier to implement a given instruction set architecture on a wide variety of underlying hardware micro-architectures. The IBM System/360 has a 32-bit architecture with 16 general-purpose registers, but most of the System/360 implementations use hardware that implements a much simpler underlying microarchitecture; for example, the System/360 Model 30 has 8-bit data paths to the arithmetic logic unit (ALU) and main memory and implements the general-purpose registers in a special unit of higher-speed core memory, and the System/360 Model 40 has 8-bit data paths to the ALU and 16-bit data paths to main memory and also implements the general-purpose registers in a special unit of higher-speed core memory. The Model 50 has full 32-bit data paths and implements the general-purpose registers in a special unit of higher-speed core memory. The Model 65 through the Model 195 have larger data paths and implement the general-purpose registers in faster transistor circuits. In this way, microprogramming enabled IBM to design many System/360 models with substantially different hardware and spanning a wide range of cost and performance, while making them all architecturally compatible. This dramatically reduced the number of unique system software programs that had to be written for each model. A similar approach was used by Digital Equipment Corporation (DEC) in their VAX family of computers. As a result, different VAX processors use different microarchitectures, yet the programmer-visible architecture does not change. Microprogramming also reduces the cost of field changes to correct defects (bugs) in the processor; a bug can often be fixed by replacing a portion of the microprogram rather than by changes being made to hardware logic and wiring. History In 1947, the design of the MIT Whirlwind introduced the concept of a control store as a way to simplify computer design and move beyond ad hoc methods. The control store is a diode matrix: a two-dimensional lattice, where one dimension accepts "control time pulses" from the CPU's internal clock, and the other connects to control signals on gates and other circuits. A "pulse distributor" takes the pulses generated by the CPU clock and breaks them up into eight separate time pulses, each of which activates a different row of the lattice.
When the row is activated, it activates the control signals connected to it. Described another way, the signals transmitted by the control store are played much like a player piano roll: they are controlled by a sequence of very wide words constructed of bits, and they are played sequentially. In a control store, however, the song is short and repeated continuously. In 1951, Maurice Wilkes enhanced this concept by adding conditional execution, a concept akin to a conditional in computer software. His initial implementation consisted of a pair of matrices: the first one generated signals in the manner of the Whirlwind control store, while the second matrix selected which row of signals (the microprogram instruction word, so to speak) to invoke on the next cycle. Conditionals were implemented by providing a way that a single line in the control store could choose from alternatives in the second matrix. This made the control signals conditional on the detected internal signal. Wilkes coined the term microprogramming to describe this feature and distinguish it from a simple control store. Examples The EMIDEC 1100 reputedly uses a hard-wired control store consisting of wires threaded through ferrite cores, known as "the laces". Most models of the IBM System/360 series are microprogrammed: The Model 25 is unique among System/360 models in using the top 16 K bytes of core storage to hold the control storage for the microprogram. The 2025 uses a 16-bit microarchitecture with seven control words (or microinstructions). After system maintenance or when changing operating mode, the microcode is loaded from the card reader, tape, or other device. The IBM 1410 emulation for this model is loaded this way. The Model 30 uses an 8-bit microarchitecture with only a few hardware registers; everything that the programmer sees is emulated by the microprogram. The microcode for this model is also held on special punched cards, which are stored inside the machine in dedicated readers, one per card, called "CROS" units (Capacitor Read-Only Storage). Another CROS unit is added for machines ordered with 1401/1440/1460 emulation and for machines ordered with 1620 emulation. The Model 40 uses 56-bit control words. The 2040 box implements both the System/360 main processor and the multiplex channel (the I/O processor). This model uses TROS dedicated readers similar to CROS units, but with an inductive pickup (Transformer Read-Only Store). The Model 50 has two internal datapaths which operate in parallel: a 32-bit datapath used for arithmetic operations, and an 8-bit datapath used in some logical operations. The control store uses 90-bit microinstructions. The Model 85 has separate instruction fetch (I-unit) and execution (E-unit) units to provide high performance. The I-unit is hardware controlled. The E-unit is microprogrammed; the control words are 108 bits wide on a basic 360/85 and wider if an emulator feature is installed. The NCR 315 is microprogrammed with hand-wired ferrite cores (a ROM) pulsed by a sequencer with conditional execution. Wires routed through the cores are enabled for various data and logic elements in the processor. The Digital Equipment Corporation PDP-11 processors, with the exception of the PDP-11/20, are microprogrammed. Most Data General Eclipse minicomputers are microprogrammed. The task of writing microcode for the Eclipse MV/8000 is detailed in the Pulitzer Prize-winning book titled The Soul of a New Machine.
Many systems from Burroughs are microprogrammed: The B700 "microprocessor" executes application-level opcodes using sequences of 16-bit microinstructions stored in main memory; each of these is either a register-load operation or is mapped to a single 56-bit "nanocode" instruction stored in read-only memory. This allows comparatively simple hardware to act either as a mainframe peripheral controller or to be packaged as a standalone computer. The B1700 is implemented with radically different hardware, including bit-addressable main memory, but has a similar multi-layer organisation. The operating system preloads the interpreter for whatever language is required. These interpreters present different virtual machines for COBOL, Fortran, etc. Microdata produced computers in which the microcode is accessible to the user; this allows the creation of custom assembler-level instructions. Microdata's Reality operating system design makes extensive use of this capability. The Xerox Alto workstation used a microcoded design but, unlike many computers, the microcode engine is not hidden from the programmer in a layered design. Applications take advantage of this to accelerate performance. The IBM System/38 is described as having both horizontal and vertical microcode. In practice, the processor implements an instruction set architecture named the Internal Microprogrammed Interface (IMPI) using a horizontal microcode format. The so-called vertical microcode layer implements the System/38's hardware-independent Machine Interface instruction set in terms of IMPI instructions. Prior to the introduction of the IBM RS64 processor line, early IBM AS/400 systems used the same architecture. The Nintendo 64's Reality Coprocessor (RCP), which serves as the console's graphics processing unit and audio processor, utilizes microcode; it is possible to implement new effects or tweak the processor to achieve the desired output. Some notable examples of custom RCP microcode include the high-resolution graphics, particle engines, and unlimited draw distances found in Factor 5's Indiana Jones and the Infernal Machine, Star Wars: Rogue Squadron, and Star Wars: Battle for Naboo; and the full motion video playback found in Angel Studios' Resident Evil 2. The VU0 and VU1 vector units in the Sony PlayStation 2 are microprogrammable; in fact, VU1 was only accessible via microcode for the first several generations of the SDK. The MicroCore Labs MCL86, MCL51 and MCL65 are examples of highly encoded "vertical" microsequencer implementations of the Intel 8086/8088, 8051, and MOS 6502. The Digital Scientific Corp. Meta 4 Series 16 computer system was a user-microprogrammable system first available in 1970. The microcode had a primarily vertical style with 32-bit microinstructions. The instructions were stored on replaceable program boards with a grid of bit positions. One (1) bits were represented by small metal squares that were sensed by amplifiers, and zero (0) bits by the absence of the squares. The system could be configured with up to 4K 16-bit words of microstore. One of Digital Scientific's products was an emulator for the IBM 1130. The MCP-1600 is a microprocessor made by Western Digital in the late 1970s through the early 1980s used to implement three different computer architectures in microcode: the Pascal MicroEngine, the WD16, and the DEC LSI-11, a cost-reduced PDP-11. Earlier x86 processors are fully microcoded; starting with the Intel 80486, less complicated instructions are implemented directly in hardware.
x86 processors have implemented patchable microcode (patched by the BIOS or operating system) since the Intel P6 and AMD K7 microarchitectures. Some video cards and wireless network interface controllers also implement patchable microcode (patched by the operating system). Implementation Each microinstruction in a microprogram provides the bits that control the functional elements that internally compose a CPU. The advantage over a hard-wired CPU is that internal CPU control becomes a specialized form of a computer program. Microcode thus transforms a complex electronic design challenge (the control of a CPU) into a less complex programming challenge. To take advantage of this, a CPU is divided into several parts: An I-unit may decode instructions in hardware and determine the microcode address for processing the instruction in parallel with the E-unit. A microsequencer picks the next word of the control store. A sequencer is mostly a counter, but usually also has some way to jump to a different part of the control store depending on some data, usually data from the instruction register and always some part of the control store. The simplest sequencer is just a register loaded from a few bits of the control store. A register set is a fast memory containing the data of the central processing unit. It may include the program counter and stack pointer, and may also include other registers that are not easily accessible to the application programmer. Often the register set is a triple-ported register file; that is, two registers can be read, and a third written at the same time. An arithmetic and logic unit performs calculations, usually addition, logical negation, a right shift, and logical AND. It often performs other functions, as well. There may also be a memory address register and a memory data register, used to access the main computer storage. Together, these elements form an "execution unit". Most modern CPUs have several execution units. Even simple computers usually have one unit to read and write memory, and another to execute user code. These elements can often be brought together as a single chip, which comes in a fixed width that forms a "slice" through the execution unit. These are known as "bit slice" chips. The AMD Am2900 family is one of the best known examples of bit slice elements. The parts of the execution units and the whole execution units are interconnected by a bundle of wires called a bus. Programmers develop microprograms using basic software tools. A microassembler allows a programmer to define the table of bits symbolically. Because of its close relationship to the underlying architecture, "microcode has several properties that make it difficult to generate using a compiler." A simulator program is intended to execute the bits in the same way as the electronics, and allows much more freedom to debug the microprogram. After the microprogram is finalized, and extensively tested, it is sometimes used as the input to a computer program that constructs logic to produce the same data. This program is similar to those used to optimize a programmable logic array. Even without fully optimal logic, heuristically optimized logic can vastly reduce the number of transistors from the number needed for a read-only memory (ROM) control store. This reduces the cost to produce, and the electricity used by, a CPU.
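To make the read-decode-steer cycle described above concrete, the following is a minimal sketch in C of a microsequencer driving a toy datapath. Everything in it is invented for illustration (the field names, the four-register set, and the two-word microprogram, which loosely mirrors the JUMP example in the next section); it does not model any specific real machine.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One word of a hypothetical control store.  The field names and
 * widths are illustrative only; real machines pack many more fields. */
typedef struct {
    uint8_t src_a;      /* register selected onto ALU input A      */
    uint8_t src_b;      /* register selected onto ALU input B      */
    uint8_t dest;       /* register written from the ALU output    */
    uint8_t alu_op;     /* 0 = copy A, 1 = add, 2 = AND, 3 = NOT A */
    uint8_t seq_op;     /* 0 = next word, 1 = jump to jump_addr    */
    uint8_t jump_addr;  /* target used when seq_op selects a jump  */
} microword;

enum { R_PC, R_MAR, R_MDR, R_ACC, NREGS };   /* toy register set */

static uint16_t regs[NREGS];

static uint16_t alu(uint8_t op, uint16_t a, uint16_t b) {
    switch (op) {
    case 0:  return a;          /* COPY */
    case 1:  return a + b;      /* ADD  */
    case 2:  return a & b;      /* AND  */
    default: return ~a;         /* NOT  */
    }
}

/* The microsequencer: fetch a control word, let its fields steer the
 * datapath, then compute the next control-store address. */
static void run(const microword *store, size_t steps) {
    uint8_t upc = 0;                          /* micro program counter */
    while (steps--) {
        microword mw = store[upc];            /* "read the row"        */
        regs[mw.dest] = alu(mw.alu_op, regs[mw.src_a], regs[mw.src_b]);
        upc = mw.seq_op ? mw.jump_addr : (uint8_t)(upc + 1);
    }
}

int main(void) {
    /* Two microinstructions roughly mirroring the JUMP example below:
     * copy MDR to MAR, then add 1 to MAR and write the result to PC. */
    const microword prog[] = {
        { R_MDR, R_MDR, R_MAR, 0, 0, 0 },     /* MAR <- MDR; next      */
        { R_MAR, R_ACC, R_PC,  1, 1, 0 },     /* PC <- MAR + 1; jump 0 */
    };
    regs[R_MDR] = 0x1234;
    regs[R_ACC] = 1;                          /* register holding constant 1 */
    run(prog, 2);
    printf("PC = 0x%04x\n", regs[R_PC]);      /* prints PC = 0x1235    */
    return 0;
}

A real horizontal control word is far wider and steers many functional elements at once; the sketch only shows the essential cycle of reading a control word, letting its fields drive the datapath, and computing the next control-store address.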
Microcode can be characterized as horizontal or vertical, referring primarily to whether each microinstruction controls CPU elements with little or no decoding (horizontal microcode) or requires extensive decoding by combinational logic before doing so (vertical microcode). Consequently, each horizontal microinstruction is wider (contains more bits) and occupies more storage space than a vertical microinstruction. Horizontal microcode "Horizontal microcode has several discrete micro-operations that are combined in a single microinstruction for simultaneous operation." Horizontal microcode is typically contained in a fairly wide control store; it is not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock a microcode word is read, decoded, and used to control the functional elements that make up the CPU. In a typical implementation a horizontal microprogram word comprises fairly tightly defined groups of bits. For example, one simple arrangement might be a word divided into six fields: register source A, register source B, destination register, arithmetic and logic unit operation, type of jump, and jump address. For this type of micromachine to implement a JUMP instruction with the address following the opcode, the microcode might require two clock ticks. The engineer designing it would write microassembler source code looking something like this:
# Any line starting with a number-sign is a comment
# This is just a label, the ordinary way assemblers symbolically represent a
# memory address.
InstructionJUMP:
# To prepare for the next instruction, the instruction-decode microcode has already
# moved the program counter to the memory address register. This instruction fetches
# the target address of the jump instruction from the memory word following the
# jump opcode, by copying from the memory data register to the memory address register.
# This gives the memory system two clock ticks to fetch the next
# instruction to the memory data register for use by the instruction decode.
# The sequencer instruction "next" means just add 1 to the control word address.
MDR, NONE, MAR, COPY, NEXT, NONE
# This places the address of the next instruction into the PC.
# This gives the memory system a clock tick to finish the fetch started on the
# previous microinstruction.
# The sequencer instruction is to jump to the start of the instruction decode.
MAR, 1, PC, ADD, JMP, InstructionDecode
# The instruction decode is not shown, because it is usually a mess, very particular
# to the exact processor being emulated. Even this example is simplified.
# Many CPUs have several ways to calculate the address, rather than just fetching
# it from the word following the op-code. Therefore, rather than just one
# jump instruction, those CPUs have a family of related jump instructions.
For each tick it is common to find that only some portions of the CPU are used, with the remaining groups of bits in the microinstruction being no-ops. With careful design of hardware and microcode, this property can be exploited to parallelise operations that use different areas of the CPU; for example, in the case above, the ALU is not required during the first tick, so it could potentially be used to complete an earlier arithmetic instruction. Vertical microcode In vertical microcode, each microinstruction is significantly encoded, that is, the bit fields generally pass through intermediate combinational logic that, in turn, generates the control and sequencing signals for internal CPU elements (ALU, registers, etc.).
This is in contrast with horizontal microcode, in which the bit fields either directly produce the control and sequencing signals or are only minimally encoded. Consequently, vertical microcode requires smaller instruction lengths and less storage, but requires more time to decode, resulting in a slower CPU clock. Some vertical microcode is just the assembly language of a simple conventional computer that is emulating a more complex computer. Some processors, such as DEC Alpha processors and the CMOS microprocessors on later IBM mainframes System/390 and z/Architecture, use machine code, running in a special mode that gives it access to special instructions, special registers, and other hardware resources unavailable to regular machine code, to implement some instructions and other functions, such as page table walks on Alpha processors. This is called PALcode on Alpha processors and millicode on IBM mainframe processors. Another form of vertical microcode has two fields: The field select selects which part of the CPU will be controlled by this word of the control store. The field value controls that part of the CPU. (A minimal code sketch of this two-field format appears after the comparison with VLIW and RISC below.) With this type of microcode, a designer explicitly chooses to make a slower CPU to save money by reducing the unused bits in the control store; however, the reduced complexity may increase the CPU's clock frequency, which lessens the effect of an increased number of cycles per instruction. As transistors grew cheaper, horizontal microcode came to dominate the design of CPUs using microcode, with vertical microcode being used less often. When both vertical and horizontal microcode are used, the horizontal microcode may be referred to as nanocode or picocode. Writable control store A few computers were built using writable microcode. In this design, rather than storing the microcode in ROM or hard-wired logic, the microcode is stored in a RAM called a writable control store or WCS. Such a computer is sometimes called a writable instruction set computer (WISC). Many experimental prototype computers use writable control stores; there are also commercial machines that use writable microcode, such as the Burroughs Small Systems, early Xerox workstations, the DEC VAX 8800 (Nautilus) family, the Symbolics L- and G-machines, a number of IBM System/360 and System/370 implementations, some DEC PDP-10 machines, and the Data General Eclipse MV/8000. Many more machines offer user-programmable writable control stores as an option, including the HP 2100, DEC PDP-11/60 and Varian Data Machines V-70 series minicomputers. The IBM System/370 includes a facility called Initial-Microprogram Load (IML or IMPL) that can be invoked from the console, as part of power-on reset (POR) or from another processor in a tightly coupled multiprocessor complex. Some commercial machines, for example IBM 360/85, have both a read-only storage and a writable control store for microcode. WCS offers several advantages including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs can provide. User-programmable WCS allows the user to optimize the machine for specific purposes. Starting with the Pentium Pro in 1995, several x86 CPUs have writable Intel microcode. This, for example, has allowed bugs in the Intel Core 2 and Intel Xeon microcodes to be fixed by patching their microprograms, rather than requiring the entire chips to be replaced.
A second prominent example is the set of microcode patches that Intel offered for some of their processor architectures up to 10 years old, in a bid to counter the security vulnerabilities discovered in their designs, Spectre and Meltdown, which went public at the start of 2018. A microcode update can be installed by Linux, FreeBSD, Microsoft Windows, or the motherboard BIOS. Comparison to VLIW and RISC The design trend toward heavily microcoded processors with complex instructions began in the early 1960s and continued until roughly the mid-1980s. At that point the RISC design philosophy started becoming more prominent. A CPU that uses microcode generally takes several clock cycles to execute a single instruction, one clock cycle for each step in the microprogram for that instruction. Some CISC processors include instructions that can take a very long time to execute. Such variations interfere with both interrupt latency and, what is far more important in modern systems, pipelining. When designing a new processor, a hardwired control RISC has the following advantages over microcoded CISC: Programming has largely moved away from assembly level, so it's no longer worthwhile to provide complex instructions for productivity reasons. Simpler instruction sets allow direct execution by hardware, avoiding the performance penalty of microcoded execution. Analysis shows complex instructions are rarely used, hence the machine resources devoted to them are largely wasted. The machine resources devoted to rarely used complex instructions are better used for expediting performance of simpler, commonly used instructions. Complex microcoded instructions may require many clock cycles that vary, and are difficult to pipeline for increased performance. There are counterpoints as well: The complex instructions in heavily microcoded implementations may not take much extra machine resources, except for microcode space. For example, the same ALU is often used to calculate an effective address and to compute the result from the operands, e.g., the original Z80, 8086, and others. The simpler non-RISC instructions (i.e., involving direct memory operands) are frequently used by modern compilers. Even immediate to stack (i.e., memory result) arithmetic operations are commonly employed. Although such memory operations, often with varying length encodings, are more difficult to pipeline, it is still fully feasible to do so, as clearly exemplified by the i486, AMD K5, Cyrix 6x86, Motorola 68040, etc. Non-RISC instructions inherently perform more work per instruction (on average), and are also normally highly encoded, so they enable smaller overall size of the same program, and thus better use of limited cache memories. Many RISC and VLIW processors are designed to execute every instruction (as long as it is in the cache) in a single cycle. This is very similar to the way CPUs with microcode execute one microinstruction per cycle. VLIW processors have instructions that behave similarly to very wide horizontal microcode, although typically without such fine-grained control over the hardware as provided by microcode. RISC instructions are sometimes similar to the narrow vertical microcode. Microcoding has been popular in application-specific processors such as network processors, microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, graphics processing units, and in other hardware.
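Picking up the two-field vertical format described earlier (field select plus field value), the following is a minimal sketch in C of the extra decode layer that format implies. The field numbering, field widths, and signal groups are all invented for illustration; real vertical formats differ widely between machines.

#include <stdint.h>

/* A hypothetical 8-bit vertical microinstruction: the top 3 bits select
 * which group of control signals this word drives, and the low 5 bits
 * carry the value delivered to that group.  All assignments invented. */
#define FIELD(w) ((uint8_t)((w) >> 5))
#define VALUE(w) ((uint8_t)((w) & 0x1F))

enum { F_ALU_OP, F_SRC_A, F_SRC_B, F_DEST, F_SEQ };

typedef struct {
    uint8_t alu_op, src_a, src_b, dest, seq;
} controls;

/* The decode layer that vertical microcode implies: each word updates
 * only one group of control signals per micro-cycle, so several vertical
 * words may be needed where a single horizontal word would do. */
static void decode(uint8_t word, controls *c)
{
    switch (FIELD(word)) {
    case F_ALU_OP: c->alu_op = VALUE(word); break;
    case F_SRC_A:  c->src_a  = VALUE(word); break;
    case F_SRC_B:  c->src_b  = VALUE(word); break;
    case F_DEST:   c->dest   = VALUE(word); break;
    case F_SEQ:    c->seq    = VALUE(word); break;
    }
}

The switch statement stands in for the combinational decode logic that sits between the control store and the datapath; it is this extra stage, absent in the horizontal case, that buys the narrower control words at the cost of decode time and of spending more micro-cycles per operation.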
Micro Ops Modern CISC implementations, such as the x86 family, decode instructions into dynamically buffered micro-operations ("μops") with an instruction encoding similar to RISC or traditional microcode. A hardwired instruction decode unit directly emits μops for common x86 instructions, but falls back to a more traditional microcode ROM for more complex or rarely used instructions. For example, an x86 might look up μops from microcode to handle complex multistep operations such as loop or string instructions, floating-point unit transcendental functions or unusual values such as denormal numbers, and special-purpose instructions such as CPUID. See also Address generation unit (AGU) CPU design Finite-state machine (FSM) Firmware Floating-point unit (FPU) Pentium FDIV bug Instruction pipeline Microsequencer MikroSim Millicode Superscalar Notes References Further reading External links Writable Instruction Set Computer Capacitor Read-only Store Transformer Read-only Store A Brief History of Microprogramming Intel processor microcode security update (fixes the issues when running 32-bit virtual machines in PAE mode) Notes on Intel Microcode Updates, March 2013, by Ben Hawkes, archived from the original on September 7, 2015 Hole seen in Intel's bug-busting feature, EE Times, 2002, by Alexander Wolfe, archived from the original on March 9, 2003 Opteron Exposed: Reverse Engineering AMD K8 Microcode Updates, July 26, 2004 Instruction processing Firmware Central processing unit BIOS
8365328
https://en.wikipedia.org/wiki/Central%20Point%20Software
Central Point Software
Central Point Software, Inc. (CP, CPS, Central Point) was a leading software utilities maker for the PC market, supplying utilities software for the DOS and Microsoft Windows markets. It also made Apple II copy programs. Through a series of mergers, the company was ultimately acquired by Symantec in 1994. History CPS was founded by Michael Burmeister-Brown (Mike Brown) in 1980 in Central Point, Oregon, for which the company was named. Building on the success of its Copy II PC backup utility, it moved to Beaverton, Oregon. In 1993 CPS acquired the XTree Company. It was itself acquired by Symantec in 1994, for around $60 million. Products The company's most important early product was a series of utilities which allowed exact duplicates to be made of copy-protected diskettes. The first version, Copy II Plus v1.0 (for the Apple II), was released in June 1981. With the success of the IBM PC and compatibles, a version for that platform, Copy II PC (copy2pc), was released in 1983. CPS also offered a hardware add-in expansion card, the Copy II PC Deluxe Board, which was bundled with its own software. The Copy II PC Deluxe Board was able to read, write and copy disks from Apple II and Macintosh computer systems as well. Copy II PC's main competitor was Quaid Software's CopyWrite, which did not have a hardware component. CPS also released Option Board hardware with TransCopy software for duplicating copy-protected floppy diskettes. In 1985 CPS released PC Tools, an integrated graphical DOS shell and utilities package. PC Tools was an instant success and became Central Point's flagship product, and positioned the company as the major competitor to Peter Norton Computing and its Norton Utilities and Norton Commander. CPS later manufactured a Macintosh version called Mac Tools. CPS licensed the Mirror, Undelete, and Unformat components of PC Tools to Microsoft for inclusion in MS-DOS versions 5.x and 6.x as external DOS utilities. CPS File Manager was ahead of its time, with features such as viewing ZIP archives as directories and a file/picture viewer. In 1993 CPS released PC Tools for Windows 2.0, which ran on Windows 3.1. After the Symantec acquisition, the programmer group that created PCTW 2.0 created Norton Navigator for Windows 95, and Symantec unbundled the File Manager used in PCTW 2.0 and released it as PC-Tools File Manager 3.0 for Windows 3.1. The lateness of PCTW to the Windows market was a major factor in why CPS was acquired by Symantec. Windows Server at the time was not viewed as a credible alternative to Novell NetWare (the first version of Windows Server was released in 1993), and the desktop and server software products market was completely centered on Novell NetWare. Novell's subsequent failure to maintain dominance in the server market came years later and had nothing to do with the acquisition. Instead, like many software vendors, CPS underestimated how rapidly users were going to shift to Windows from DOS. CPS's other major desktop product was Central Point Anti-Virus (CPAV), whose main competitor was Norton Antivirus. CPAV was a licensed version of Carmel Software's Turbo Anti-Virus; CPS, in turn, licensed CPAV to Microsoft to create Microsoft Antivirus for DOS (MSAV) and Windows (MWAV). CPS also released CPAV for NetWare 3.x and 4.x servers in 1993. Central Point also sold the Apple II clone Laser 128 by mail.
List of CPS products PC Tools PC Tools for Windows Central Point Anti-Virus Central Point Anti-Virus for NetWare Central Point Backup Central Point Desktop Central Point Commute Copy II+ Copy II 64 (for Commodore 64/128) Copy II PC Copy II Mac Copy II ST (for Atari ST/TT series computers) MacTools and MacTools Pro More PC Tools LANlord Deluxe Option Board See also List of mergers and acquisitions by Symantec References Defunct software companies of the United States Defunct companies based in Oregon Software companies established in 1980 Software companies disestablished in 1994 NortonLifeLock acquisitions Central Point, Oregon 1980 establishments in Oregon 1994 disestablishments in Oregon
20094937
https://en.wikipedia.org/wiki/History%20of%20Bombay%20in%20independent%20India
History of Bombay in independent India
Mumbai, also known as Bombay, is the financial capital of India and one of the most populous cities in the world. Mumbai grew into a leading commercial center of India during the 19th century on the basis of textile mills and overseas trade. After independence, the desire to domesticate a Marathi social and linguistic Mumbai to a cosmopolitan framework was strongly expressed in the 1950s. Mumbai, one of the earliest cities in India to be industrialized, emerged as the centre of a strong organized labour movement in India, which inspired labour movements across India. Background The Seven Islands that came to constitute Mumbai were home to communities of the Marathi-speaking Kolis and Aagris for centuries. The islands came under the control of successive indigenous empires before being ceded to the Portuguese and subsequently to the British East India Company. During the mid-18th century, Mumbai was reshaped by the British with large-scale civil engineering projects, and emerged as a significant trading town. Mumbai, originally the native fishing grounds of the Marathi-speaking Kolis and Aagris, grew into a leading commercial center of India during the 19th century on the basis of textile mills and overseas trade. The city was built and developed by the British, and its industrial and commercial bourgeoisie, consisting of Parsi, Gujarati Hindu, and Muslim communities, many of them migrants from Gujarat, earned their wealth on the extensive Arabian trade. The expanding labour force in Mumbai was initially drawn from the coastal belt of Konkan, south of the city. Until the 1940s, Marathi speakers from these areas accounted for 68% of the city's population, and held mostly blue-collar jobs. Bombay State After India's independence from British rule on 15 August 1947, the territory of Bombay Presidency retained by India after the partition was restructured into Bombay State. The area of Bombay State increased after several erstwhile princely states that joined the Indian union were integrated into it. Subsequently, Bombay City, the capital of the erstwhile Bombay Presidency, became the capital of Bombay State. Following communal riots between Hindus and Muslims in the Sindh province of the newly created Pakistan, over 100,000 Sindhi Hindu refugees from Pakistan were relocated to military camps five kilometres from Kalyan in the Bombay metropolitan region. The camp was converted into a township in 1949, and named Ulhasnagar by the then Governor-General of India, C. Rajagopalachari. In 1947, Congress party activists established the Rashtriya Mill Mazdoor Sangh (RMMS), with a claimed membership of 32,000, to ensure a strong political base in the textile industry. The RMMS served as a lasting impediment to the free development of independent unionism in Bombay. Economic growth in India was relatively strong during much of the 1950s, and employment growth in Bombay was particularly good, as the city's manufacturing sector diversified. The Bombay textile industry until the 1950s was largely homogeneous, dominated by a relatively small number of large industrial mills. From the late 1950s, policies were introduced to curb the expansion of mills and to encourage increased production from the handloom and powerloom sectors, because of their employment-generating capacities. Bombay was one of the few industrial centres of India where strong unions grew up, particularly company- or enterprise-based unions, often in foreign-owned firms.
George Fernandes was a key figure in the Bombay labour movement of the early 1950s, and was central to the unionisation of sections of Bombay labour during that decade. Bombay's Bollywood film industry grew rapidly as it received intense political attention and new sources of governmental funding after 1947. This enabled the industry to embark on technological innovations and to establish effective systems for nationwide distribution. The enormous growth in spectatorship and cinema halls throughout the country soon established Bombay cinema as the dominant Indian film industry. In April 1950, Greater Bombay District came into existence with the merger of Bombay Suburbs and Bombay City. It had a population of 2,339,000 in 1951. The Municipal Corporation limits were extended up to Jogeshwari along the Western Railway and Bhandup along the Central Railway. This limit was further extended in February 1957 up to Dahisar along the Western Railway and Mulund on the Central Railway. The Indian Institute of Technology Bombay, one of the finest institutions in the country in science and technology, was established in 1958 at Powai, a northern suburb of Bombay, with assistance from UNESCO and with funds contributed by the Soviet Union. Battle of Mumbai The desire to domesticate a Marathi social and linguistic Mumbai to a cosmopolitan framework was strongly expressed in the 1950s. On 13 May 1946, a session of the Marathi literary conference held at Belgaum unanimously resolved on the formation of a united Marathi state. Consequently, the Samyukta Maharashtra Parishad (United Maharashtra Conference) was formed on 28 September 1946, to unite all Marathi-speaking territories into a single political unit. The Parishad consisted of political leaders from the Congress and other parties, and prominent literary figures. It presented its point of view to the States Reorganisation Commission. However, the States Reorganisation Commission, in its report to the Indian Government in 1955, recommended a bilingual state for Maharashtra–Gujarat with Mumbai as its capital. The Maharashtrians wanted Mumbai as a part of Maharashtra, since it had a majority of Marathi speakers. However, the city's economic and political elite feared that Bombay would decline under a government committed to developing the rural hinterland. The Mumbai Citizens' Committee, an advocacy group composed of leading Gujarati industrialists, lobbied for Mumbai's independent status. In the Lok Sabha discussions on 15 November 1955, S. K. Patil, a Congress Member of Parliament (MP) from Mumbai, demanded that the city be constituted as an autonomous city-state, laying stress on its cosmopolitan character. On 20 November 1955, the Mumbai Pradesh Congress Committee organized a public meeting at the Chowpatty beach in Mumbai, where S. K. Patil and Morarji Desai, the then Chief Minister of Bombay State, made provocative statements on Mumbai. Patil said that "Maharashtra will not get Bombay for the next 5,000 years." On 21 November 1955, violent outbursts erupted, and there was a total hartal in Bombay. Thousands of angry protesters converged at Flora Fountain (later renamed Hutatma Chowk) with a view to march peacefully towards the Council Hall, where the State Legislature was in session. The police used tear gas to disperse the crowd, but when that failed, they finally resorted to firing, killing 15 people.
Under pressure from business interests in Mumbai, it was decided to grant Mumbai the status of a Union territory under a centrally governed administration, setting aside the recommendations of the States Reorganisation Commission report. On 16 January 1956, Jawaharlal Nehru, the first Prime Minister of India, announced the government's decision to create separate states of Maharashtra and Gujarat, but put Mumbai City under central administration. Large demonstrations, mass meetings and riots soon followed. The Mumbai Police dissolved the mass meetings and arrested several of the movement's leaders. Between 16 and 22 January, police fired at demonstrators protesting the arrests, and more than 80 people were killed. The States Reorganisation Commission report was to be implemented on 1 November 1956. It caused a great political stir and led to the establishment of the Samyukta Maharashtra Samiti (United Maharashtra Committee) on 6 February 1956. The Samyukta Maharashtra Samiti was essentially born from the Samyukta Maharashtra Parishad, but had an enlarged identity with broad representation from not only the Congress, but opposition parties and independents as well. The Samiti spearheaded the demand for the creation of a separate Maharashtra state, including Mumbai, out of the bilingual Bombay State, using violent means. In the August 1956 discussions, the Union cabinet agreed on the creation of a bigger bilingual Bombay State including Maharashtra, Marathwada, Vidarbha, Gujarat, Saurashtra, Kutch, and Mumbai City. In the second general elections of Bombay State held in 1957, the Samiti secured a majority of 101 seats out of 133 in the present-day Western Maharashtra region. The Congress could secure only 32 seats out of 133 in Maharashtra, obtaining a bare majority of 13 out of 24 in Greater Mumbai. The Congress suffered the same fate in Gujarat, winning only 57 out of 89 seats. The Congress however succeeded in forming a government in Bombay State with the support of Marathwada and Vidarbha. Yashwantrao Chavan became the Chief Minister of the bilingual Bombay State, and later the first Chief Minister of Maharashtra. In 1959, he headed a cabinet of 15, out of which 4 represented Gujarat, to discuss the future of Bombay State. Chavan succeeded in convincing Jawaharlal Nehru and Indira Gandhi, who was elected President of the Indian National Congress in 1959, of the futility of the bilingual Bombay State, which was increasingly jeopardizing Congress prospects in Gujarat and Maharashtra. Finally, on 4 December 1959, the Congress Working Committee (CWC) passed a resolution recommending the bifurcation of the Bombay State. The Samyukta Maharashtra Samiti achieved its goal when Bombay State was reorganised on linguistic lines on 1 May 1960. Gujarati-speaking areas of Bombay State were partitioned into the state of Gujarat. Maharashtra State, with Mumbai as its capital, was formed with the merger of Marathi-speaking areas of Bombay State, eight districts from Central Provinces and Berar, five districts from Hyderabad State, and numerous princely states enclosed between them. In all, 105 people died in the battle for Mumbai. As a memorial to the martyrs of the Samyukta Maharashtra movement, Flora Fountain was renamed Hutatma Chowk (Martyr's Square) and a memorial was erected there, since it was the starting point of the agitation. After the 1960 bifurcation, many Gujaratis left Mumbai, feeling that they would be better off in Gujarat than in Mumbai, and fearing that they would be neglected by the Maharashtra Government.
The Maharashtrians also blamed the Gujaratis for the death of the 105 martyrs of the Samyukta Maharashtra movement. Rise of Regionalism In the 1960s, the Marathi-speaking middle class in Mumbai, who had been the most consistent supporters of the Samyukta Maharashtra Samiti, felt threatened in Mumbai despite the creation of Maharashtra. This was mainly because of the increasing number of migrants competing with them for jobs. The Gujarati and Marwari communities owned the majority of the industry and trade enterprises in the city, while the white-collar jobs were mainly sought by the South Indian migrants to the city. This was the line taken by Mumbai cartoonist and journalist Bal Thackeray in his weekly magazine Marmik (Satire), launched in 1963, which soon became one of the most popular magazines among native Marathi speakers of Mumbai. Backed by his father Prabodhankar Thackeray and a circle of friends, he established the Shiv Sena party on 19 June 1966, out of a feeling of resentment about the relative marginalization of Maharashtrians in Mumbai. The Shiv Sena rallied against the South Indians, the Communists, the Gujarati city elite, and the Muslims in India. In the 1960s and 1970s, Shiv Sena cadres became involved in various attacks against the South Indian communities, vandalising South Indian restaurants and pressuring employers to hire Marathis. The creation of Navi Mumbai The need for urban development on the mainland across from Mumbai Island was first officially recommended in the 1940s. In 1945, a post-war development committee suggested that areas should be developed on the mainland on the opposite side of the harbour to contain the future growth of the city. In 1947, N. V. Modak and Albert Mayer published their plan, stressing controlled development of the city, suburbs, and its satellite towns like Thane, Vasai, and Uran. In March 1964, the Municipal Corporation of Greater Mumbai submitted its development plan for Greater Mumbai, which was criticized for various reasons, but approved in 1967. By that time, another plan had been developed by two of Mumbai's leading architects, Charles Correa and Pravin Mehta, and an engineer, Shirish Patel. They suggested that only a "twin city" of equal size and prominence to Greater Mumbai would be able to solve the city's congestion problems. Thus, the idea of the creation of Navi Mumbai was born. The proposed site for Navi Mumbai integrated 95 villages spread over the districts of Thane and Raigad. 21st century During the 21st century, the city suffered several terrorist attacks and natural disasters. Terrorist attacks On 6 December 2002, a bomb placed under a seat of an empty BEST (Brihanmumbai Electric Supply and Transport) bus exploded near Ghatkopar station in Mumbai. Two people were killed and 28 were injured. The bombing occurred on the tenth anniversary of the demolition of the Babri Mosque in Ayodhya. On 27 January 2003, a bomb placed on a bicycle exploded near the Vile Parle station in Mumbai. The bomb killed one person and injured 25. The blast occurred a day ahead of the visit of Atal Bihari Vajpayee, the then Prime Minister of India, to the city. On 13 March 2003, a bomb exploded in a train compartment as the train was entering the Mulund station in Mumbai. Ten people were killed and 70 were injured. The blast occurred a day after the tenth anniversary of the 1993 Mumbai bombings. On 28 July 2003, a bomb placed under a seat of a BEST bus exploded in Ghatkopar. The bomb killed 4 people and injured 32.
On 25 August 2003, two blasts occurred in South Mumbai: one near the Gateway of India and the other at Zaveri Bazaar in Kalbadevi. At least 44 people were killed and 150 injured. No group claimed responsibility for the attack, but it had been hinted that the Pakistan-based Lashkar-e-Toiba was behind the attacks. On 11 July 2006, a series of seven bomb blasts took place over a period of 11 minutes on the Suburban Railway in Mumbai at Khar, Mahim, Matunga, Jogeshwari, Borivali, and one between Khar and Santa Cruz. 209 people were killed and over 700 were injured. According to Mumbai Police, the bombings were carried out by Lashkar-e-Toiba and the Students Islamic Movement of India (SIMI). November 2008 terrorist attacks There was a series of ten coordinated terrorist attacks by 10 armed terrorists owing allegiance to the Pakistan-based Lashkar-e-Taiba using automatic weapons and grenades, which began on 26 November 2008 and ended on 29 November 2008. The attacks resulted in 164 deaths, 308 injuries, and severe damage to several important buildings. Eight of the attacks occurred in South Mumbai: at Chhatrapati Shivaji Maharaj Terminus, The Oberoi Trident, The Taj Mahal Palace Hotel & Tower, Leopold Cafe, Cama Hospital, the Nariman House Jewish community center, the Metro Cinema, and in a lane behind The Times of India building and St. Xavier's College. There was also an explosion at Mazagaon, in Mumbai's port area, and in a taxi at Vile Parle. By the early morning of 28 November, all sites except for the Taj Hotel had been secured by the Mumbai Police and security forces. On 29 November, India's National Security Guard (NSG) conducted 'Operation Black Tornado' to flush out the remaining attackers; it culminated in the death of the last remaining attackers at the Taj Hotel and ended the attacks. Ajmal Kasab disclosed that the attackers were members of Lashkar-e-Taiba, among others. The Government of India said that the attackers came from Pakistan, and their controllers were in Pakistan. On 7 January 2009, Pakistan confirmed the sole surviving perpetrator of the attacks was a Pakistani citizen. On 9 April 2015, the foremost ringleader of the attacks, Zakiur Rehman Lakhvi, was granted bail against surety bonds of ₨200,000 (US$1,900) in Pakistan. Anti-migrant attacks In 2008, members of the Maharashtra Navnirman Sena (MNS) under Raj Thackeray attacked North Indian migrants from Uttar Pradesh and Bihar, as well as Samajwadi Party workers, in Mumbai. The attacks included assaults on North Indian taxi drivers and damage to their vehicles. Natural disasters Mumbai was lashed by torrential rains on 26–27 July 2005, during which the city was brought to a complete standstill. The city received 37 inches (940 millimeters) of rain in 24 hours, the most any Indian city has ever received in a single day. Around 83 people were killed.
18047676
https://en.wikipedia.org/wiki/961st%20Airborne%20Air%20Control%20Squadron
961st Airborne Air Control Squadron
The 961st Airborne Air Control Squadron (961 AACS) is part of the 18th Wing at Kadena Air Base, Japan. It operates the E-3 Sentry aircraft conducting airborne command and control missions. Mission Provide airborne command and control, long-range surveillance, detection and identification information for commanders in support of U.S. goals. History World War II Established in November 1940 as a B-17 Flying Fortress Heavy Bombardment squadron organized at Fort Douglas, Utah; assigned to the GHQ Air Force Northwest Air District at Geiger Field, Washington, where the squadron flew training missions and also reconnaissance missions along the Northwest Pacific Coast. After the Pearl Harbor attack, the squadron became first an Operational Training Unit (OTU) at Davis-Monthan Field, Arizona, later converting to a B-24 Liberator Replacement Training Unit (RTU). B-29 Superfortress operations against Japan Re-designated on 1 April 1944 as a B-29 Superfortress Very Heavy bombardment squadron. When training was completed, the squadron moved to North Field, Guam, in the Mariana Islands of the Central Pacific Area in January 1945 and was assigned to XXI Bomber Command, Twentieth Air Force. Its mission was the strategic bombardment of the Japanese Home Islands and the destruction of their war-making capability. Flew "shakedown" missions against Japanese targets on Moen Island, Truk, and other points in the Carolines and Marianas. The squadron began combat missions over Japan on 25 February 1945 with a firebombing mission over Northeast Tokyo. The squadron continued to participate in wide-area firebombing attacks, but the first ten-day blitz resulted in the Army Air Forces running out of incendiary bombs; until new stocks arrived, the squadron flew conventional strategic bombing missions using high-explosive bombs. The squadron continued attacking urban areas with incendiary raids until the end of the war in August 1945, attacking major Japanese cities and causing massive destruction of urbanized areas. It also conducted raids against strategic objectives, bombing aircraft factories, chemical plants, oil refineries, and other targets in Japan. The squadron flew its last combat missions on 14 August when hostilities ended. Afterwards, its B-29s carried relief supplies to Allied prisoner of war camps in Japan and Manchuria. The squadron remained in the Western Pacific, although largely demobilized in the fall of 1945. Some aircraft were scrapped on Tinian; others were flown to storage depots in the United States. Inactivated as part of Army Service Forces at the end of 1945. Cold War The 961st flew radar surveillance missions along the East Coast of the United States from 18 December 1954 to 31 December 1969. The squadron assisted with the coverage of salvage operations of downed Korean Air Lines Flight 007, 1–10 September 1983. It has served as the airborne command and control for the commander, United States Pacific Command, and has supported US forces' counter-air, interdiction, close air support, search and rescue, reconnaissance, and airlift operations since 1980. Operations World War II Combat Operations: Conducted bombardment missions against Japan, c. 6 Apr-14 Aug 1945. Campaigns: World War II: Western Pacific; Air Offensive, Japan. Decorations: Distinguished Unit Citations: Japan, 10 May 1945; Tokyo and Yokohama, Japan, 23–29 May 1945.
Lineage 61st Bombardment Squadron Constituted as the 61st Bombardment Squadron (Heavy) on 20 November 1940 Activated on 15 January 1941 Inactivated on 1 April 1944 Redesignated 61st Bombardment Squadron, Very Heavy and activated on 1 April 1944 Inactivated on 27 December 1945 Consolidated with the 961st Airborne Warning and Control Squadron as the 961st Airborne Warning and Control Squadron on 19 September 1985 961st Airborne Air Control Squadron 961st Airborne Early Warning and Control Squadron (1954–1979) 961st Airborne Warning and Control Support Squadron (1979–1982) 961st Airborne Warning and Control Squadron (1982–1994) 961st Airborne Air Control Squadron (1994–Present) Assignments 39th Bombardment Group, 15 Jan 1941-1 Apr 1944 39th Bombardment Group, 1 Apr 1944-27 Dec 1945 551st Airborne Early Warning and Control Wing (1954–1969) 552d Airborne Warning and Control Division (1979–1985) 28th Air Division (1985–1990) 313th Air Division (1990–1991) Attached: 5th Air Force (1990–1991) 18th Wing (1991–Present) Bases stationed Fort Douglas, Utah, 15 Jan 1941 Geiger Field, Washington, 2 Jul 1941 Davis-Monthan Field, Arizona, 5 Feb 1942-1 Apr 1944 Smoky Hill Army Air Field, Kansas, 1 Apr 1944 Dalhart Army Air Field, Texas, 27 May 1944 Smoky Hill Army Air Field, Kansas, 17 July 1944 - 8 January 1945 North Field, Guam, Mariana Islands, 18 February 1945 - 16 November 1945 Camp Anza, California, 15 December 1945 - 27 December 1945 Otis Air Force Base, Massachusetts (1954–1969) Kadena Air Base, Japan (1979–Present) Aircraft B-17 Flying Fortress (1941–1942) B-24 Liberator (1942–1944) B-29 Superfortress (1944–1945) C-121 Constellation (1955–1969) E-3 Sentry (1979–Present) References Notes Bibliography 18th Operations Group Fact Sheet 961
43607879
https://en.wikipedia.org/wiki/Mass%20surveillance%20in%20Australia
Mass surveillance in Australia
Mass surveillance in Australia takes place in several network media, including telephone, internet, and other communications networks, financial systems, vehicle and transit networks, international travel, utilities, and government schemes and services including those asking citizens to report on themselves or other citizens. Communications Telephone Australia requires that pre-paid mobile telecommunications providers verify the identity of individuals before providing service. Internet According to Greens Senator Scott Ludlam, Australian law enforcement agencies were issued 243,631 warrants to obtain telecommunications logs between July 2010 and June 2011, which vastly overshadowed the 3500-odd legal intercepts of communications. In 2013 it was reported that under Australian law, state, territory and federal law enforcement authorities can access a variety of 'non-content' data from internet companies like Telstra, Optus, and Google with authorization by senior police officers or government officials rather than a judicial warrant, and that "During criminal and revenue investigations in 2011-12, government agencies accessed private data and internet logs more than 300,000 times". Google's transparency report shows a consistent trend of growth in requests by Australian authorities for private information, rising approximately 20% year-on-year. The most recently published figures, for the period ending December 2013, indicate around four individual requests per calendar day. Telstra's transparency report for the period 1 July - 31 December 2013 does not include requests by national security agencies, only police and other agencies. Nevertheless, in the six-month period 40,644 requests were made: 36,053 for "Telstra customer information, carriage service records and pre-warrant checks" (name, address, date of birth, service number, call/SMS/internet records; call records include called party, date, time and duration; internet information includes date, time and duration of internet sessions and email logs from Telstra-administered addresses), 2,871 for "Life threatening situations and Triple Zero emergency calls", 270 for "Court orders", and 1,450 for "Warrants for interception or access to stored communications" (real-time access): an average of around 222 requests per calendar day. In 2013 more than 500 authors, including five Nobel prize winners and Australian identities Frank Moorhouse, John Coetzee, Helen Garner, Geraldine Brooks and David Malouf, signed a global petition to protest mass surveillance after the whistleblower Edward Snowden's global surveillance disclosures informed the world, including Australians, that they are being monitored by the National Security Agency's XKeyscore and Boundless Informant systems. Snowden had further revealed that Australian government intelligence agencies, specifically the Australian Signals Directorate, also have access to the system as part of the international Five Eyes surveillance alliance. In August 2014 it was reported that law-enforcement agencies had been accessing Australians' web browsing histories via internet providers such as Telstra without a warrant (Optus confirmed that they cooperate with law enforcement, and Vodafone did not return a request for comment).
The revelations came less than a week after the government's attempts to increase its surveillance powers through new legislation allowing offensive computer hacking by government intelligence agencies, and mere months after outrage surrounding the government's offer to share personal information about citizens with Five Eyes intelligence partners. As of August 2014, no warrant is required for organisations to access the so-called 'metadata' information of private parties. This is information regarding "calls and emails sent and received, the location of a phone, internet browsing activity. There is no access to the content of the communication, just how, to or from whom, when and where". Under current law, many organisations other than federal, state and territory police and security agencies such as ASIO can get access to this information, including "any agency that collects government revenue", for example the RSPCA, the Australian Crime Commission, the Australian Securities and Investments Commission (though reportedly temporarily removed from the list), the Australian Tax Office, Centrelink, Medicare, Australia Post, the Australian Fisheries Management Authority, the Victorian Taxi Services Commission, the Victorian Transport Accident Commission, WorkSafe Victoria, local councils and foreign law enforcement agencies. In the 2013-2014 financial year there were over half a million disclosures of metadata to agencies. The Australian Communications and Media Authority provides instructions for internet service providers and other telecommunications providers about their law enforcement, national security and interception obligations. During the 2015-2016 financial year, 712 warrants were issued for access to stored communications, 3,857 interception warrants were issued, and 63 enforcement agencies were granted 333,980 authorizations for metadata access. 2014 proposals A range of proposals is under discussion that would affect surveillance of the population by government in conjunction with commercial service providers. Hacking powers The proposals seek to give the Australian Security Intelligence Organisation (ASIO) the right to hack into computers and modify them. Single computer warrant to become umbrella surveillance The proposals seek to give ASIO the power to spy on whole computer networks under a single computer-access warrant. Spying on citizens abroad The proposals seek to give the Australian Secret Intelligence Service (ASIS) the power to collect intelligence on Australian citizens overseas. Law against media and whistleblowing Section 35P of the proposals seeks to create a new criminal offence, with a maximum penalty of 10 years imprisonment, for revealing information about so-called 'special intelligence operations'. There are no exceptions listed, and the law would apply to journalists even if they were unaware that they were revealing information about such an operation. Shadow Attorney-General Mark Dreyfus called the measure "an unprecedented overreach". Mandatory data retention Mandatory retention for two years of data relating to the internet and telecommunications activity of all Australians is currently under discussion. On Tuesday, August 5, government Communications Minister Malcolm Turnbull complained about "waking up to newspaper headlines concerning the government's controversial plan for mandatory data retention", stating the government "risked unnecessary difficulties by pushing ahead with the data retention regime without fully understanding the details".
In 2012, Turnbull had opposed mandatory retention. On Friday, August 8, Australia's federal privacy commissioner, Timothy Pilgrim, stated he felt it remained "unclear" exactly what data was to be retained, and that "there is the potential for the retention of large amounts of data to contain or reveal a great deal of information about people's private lives and that this data could be considered 'personal information' under the Privacy Act". Later in the month, the head of the Australian Security Intelligence Organisation (ASIO) appealed for access to private citizens' data on the grounds that commercial entities may already be collecting it. On February 19, 2015 the Australian Broadcasting Corporation's Radio National program Download This Show broadcast an interview, on condition of anonymity, with a former police employee who had worked extensively with metadata. The former employee was quoted as feeling the proposed system was open to abuse and might one day be used against Australians who download music and TV shows. On February 22, 2015, Australian Federal Police Assistant Commissioner Tim Morris made claims appealing to blind trust from the public. In 2015 the issue of costs became more heavily discussed in the media, with figures cited such as 1% of all national telecommunications revenue annually, or "two battleships" per year. Prominent parties concerned about the proposals include: Media, Entertainment and Arts Alliance, the Australian journalists' union Timothy Pilgrim, Privacy Commissioner Gillian Triggs, Human Rights Commissioner the Law Council of Australia Communications Alliance the Australian Mobile Telecommunications Association Fairfax Media News Corp Australia councils for civil liberties across Australia Blueprint for Free Speech Australian Lawyers for Human Rights the Institute of Public Affairs the Australian Privacy Foundation Electronic Frontiers Australia Privacy International George Williams, one of Australia's leading constitutional lawyers and public commentators and University of New South Wales professor Dr Keiran Hardy, Research Associate, Faculty of Law, University of New South Wales 2018 Telecommunications and Other Legislation Amendment (Assistance and Access) Act On 14 August 2018, the federal government published draft text of the Assistance and Access Bill (often referred to by its satirical name, the Ass Access Bill). Modelled after the British Investigatory Powers Act, the legislation was designed to help overcome "the challenges posed by ubiquitous encryption". Under the bill, designated communications providers (which include carriage service providers, any electronic service with end-users in Australia, anyone who develops software likely to be used by a carriage service or an electronic service with end-users in Australia, or anyone who supplies or manufactures components likely to be used in customer equipment likely to be used in Australia) can be ordered to assist in intercepting information relevant to a case, either by means of an existing capability if possible (Technical Assistance Order, TAO), or by being ordered to develop, test, add, or remove equipment for a new interception capability (Technical Capability Order, TCO). A "Technical Assistance Request" (TAR) can also be issued, which has fewer restrictions, but is not compulsory. Orders must be connected to a warrant under the Telecommunications (Interception and Access) Act or the Surveillance Devices Act.
Actions requested under the act must be "reasonable, proportionate, practicable, and technically feasible", and mandatory orders cannot compel a communications provider to add a "systemic weakness or vulnerability", such as requiring one to "implement or build a new decryption capability" or "render systemic methods of authentication or encryption less effective". An annual report must be issued on how many orders and requests are issued. Outside of certain proceedings, all specific information on requests and orders is confidential, and it is illegal to disclose it publicly. Only the chief officer of an "interception agency" (which includes the Australian Criminal Intelligence Commission, Australian Federal Police, and state police with permission from the AFP) or the director-general of the Australian Security Intelligence Organisation can issue a TAO or TCO. The Minister for Communications must additionally approve a request for a TCO. A TAR can be issued by these parties, or the Australian Signals Directorate. The proposed legislation faced numerous criticisms from politicians, advocacy groups, and the tech industry. Liberal Democratic Party Senator David Leyonhjelm argued that the bill was "a draconian measure to grant law enforcement authorities unacceptable surveillance powers that invade Australians' civil rights", alleging that users could be compelled to provide passwords for their personal devices at the request of law enforcement, or be fined. It was felt that the bill had weaker oversight and safeguards than the equivalent UK legislation, where requests for assistance are subject to judicial review. It was also noted that although providers could not be ordered to do so, they could still be encouraged by the government via a TAR to add a "systemic weakness" to their systems. In testimony, cryptography expert and Stanford Law School attorney Riana Pfefferkorn argued that "whenever you open up a vulnerability in a piece of software or a piece of hardware, it's going to have consequences that are unforeseeable". The bill was passed by the Parliament of Australia on 6 December 2018 as the Telecommunications and Other Legislation Amendment of 2018, commonly known as TOLA. Bill Shorten of the Australian Labor Party described the bill's passage as being "half a win", since he wanted Parliament to "reach at least a sensible conclusion before the summer on the important matter of national security". He explained that there were "legitimate concerns about the encryption legislation", but that he did not want to "walk away from my job and leave matters in a stand-off and expose Australians to increased risk in terms of national security". Shorten did state that he would consider amendments to the bill when Parliament returned in 2019. A number of Australian tech firms and startups warned that the bill could harm their ability to do business internationally. Bron Gondwana, CEO of the e-mail host FastMail, felt that the Assistance and Access Bill "makes complying with both Australian law and the EU's GDPR privacy requirements harder, putting Australian businesses at a disadvantage in a global marketplace". After the bill's passing, the service faced concerns and questions over its effects from current and potential users, which caused a decline in business.
Encryption provider Senetas stated that the country could face "the real prospect of sales being lost, exports declining, local companies failing or leaving Australia, jobs in this industry disappearing and related technical skills deteriorating". In June 2020, as requested by parliament, the Independent National Security Legislation Monitor (INSLM) released a report which recommended changes to the act to require independent and technically competent review of compulsory orders requested under the act. Specific incidents 2020 COVID data The Australian government's Inspector-General of Intelligence and Security published a report revealing that Australia's intelligence agencies were caught "incidentally" collecting data from the country's COVIDSafe contact-tracing app during the first six months of its launch, from May to November 2020. June 2019 incidents Raid on journalist's home On 4 June 2019, the home of Annika Smethurst, national politics editor of the Sunday Telegraph and other News Corp Australia titles, was raided by the Australian Federal Police. Smethurst had reported in April 2018 that Australia's Departments of Home Affairs and Defence were discussing a proposal to grant the Australian Signals Directorate (ASD) new powers, such that emails, bank records and text messages of Australians could be accessed by the ASD if the two departments gave their approval. Currently the ASD is not allowed to spy on Australians, although Australia's domestic spy agency, the Australian Security Intelligence Organisation, can already investigate citizens with a warrant. News Corp Australia called the raid "outrageous and heavy-handed and a dangerous act of intimidation which will chill public interest reporting". The Media, Entertainment and Arts Alliance, a relevant union in Australia, said the police sought to "punish a journalist for reporting a legitimate news story that was clearly in the public interest". It was subsequently reported that Malcolm Turnbull had vetoed the proposal to expand the ASD's powers. Inquiry into radio journalist's sources Also on 4 June 2019, Ben Fordham, a prominent radio journalist for 2GB, reported that his team had been contacted by an official from the Department of Home Affairs requesting assistance with the investigation into how he obtained information that up to six boats carrying asylum seekers had recently tried to reach Australia. Australian Federal Police raid the Australian Broadcasting Corporation On 5 June 2019, at least six Australian Federal Police (AFP) officers raided the ABC with a warrant allowing them to access, alter and delete information, allegedly in relation to reporting based on secret files exposing the misconduct of Australian soldiers in Afghanistan. In a statement, ABC Managing Director David Anderson said the police raid "raises legitimate concerns over freedom of the press. The ABC stands by its journalists, will protect its sources and continue to report without fear or favour on national security and intelligence issues when there is a clear public interest". John Lyons, Executive Editor of ABC News and ABC Head of Investigative Journalism, also commented publicly on the raid by the AFP. The AFP released a statement asserting there was "no link between the execution of search warrants in the ACT suburb of Kingston yesterday (4 June 2019) and those on the Ultimo premises of the ABC today (5 June 2019)".
Travel

International
Australia and the European Union have signed an agreement for the advanced sharing of passenger name records of international travelers. Similar agreements are in place with other countries. In addition to passenger information and standard radar, Australia uses the Jindalee Operational Radar Network to detect individual boats and planes in the north and west of the country.

Domestic
Vehicles can be tracked by a range of systems, including automatic number plate recognition (ANPR), video and sensor-based traffic surveillance networks, cellular telephone tracking (if a device is known to be in the vehicle) and automated toll networks. The ANPR systems are intelligent transportation systems which can identify vehicles and drivers. ANPR is known under various names in Australia: SCATS in Sydney, New South Wales; ACTS in Adelaide; CATSS in Canberra; SCRAM in Melbourne; DARTS in the Northern Territory; and PCATS in Perth.

Mass transit

New South Wales
In December 2014, certain universities, such as Sydney University, delayed collaboration with the new Opal card system scheduled to fully replace existing, anonymous paper tickets on New South Wales mass transit, citing privacy concerns, whereas Macquarie University, the University of New South Wales and the Australian Catholic University had already agreed to provide the "student data" to the card network. Concerns about privacy have been repeatedly raised in the mainstream media, with commentators questioning the extent to which user data can be accessed by authorities: according to the Opal Privacy Policy, data is made available to other NSW government departments and law enforcement agencies. On 13 March 2015 it was announced that Opal cards would be linked to commuter car park spaces, such that private road vehicle identities would become associated with individual mass transit use.

Victoria
On 24 March 2021, the Victorian government announced that it would expand the network of traffic cameras at all major traffic intersections as part of a $340 million package for arterial roads throughout the east of Melbourne, through an additional 700 traffic cameras on top of the 600 already present and the 1,000 in total operated by VicRoads. The cameras are installed at pedestrian crossings and intersections of all key arterial roads in order to monitor traffic from a Traffic Monitoring Centre operated by VicRoads. It is not known whether the cameras possess any facial recognition or ANPR (automatic number plate recognition) capabilities, or on what basis Victoria Police have access to such cameras.

Other states
The extent and frequency to which individual traveler data is released without a warrant remains poorly documented for the following systems:
go card, Brisbane's smartcard system
Metrocard, Adelaide's smartcard system
myki, Melbourne's smartcard system
SmartRider, Perth's smartcard system

Related law
This section outlines the main legal references for mass surveillance in Australia.

National
Under Australian law, the following acts are prominent federal law in the area of surveillance:
Telecommunications (Interception and Access) Act 1979 (formerly known as the Telecommunications (Interception) Act 1979)
Telecommunications Act 1997
Surveillance Devices Act 2004
Privacy Act 1988
Intelligence Services Act 2001
Intelligence Services Amendment Act 2004
A separate body of state-level laws also exists.
International agreements
Australia is part of the Five Eyes international surveillance network, run by the United States National Security Agency and generally protected from public scrutiny citing 'national security' concerns. According to the Canberra Times and cited policymakers, one of the most prominent critics of these agreements was the Australian National University academic Des Ball, who died in October 2016.

See also
Parliamentary Joint Committee on Intelligence and Security
Mass surveillance industry
Defence Science and Technology Organisation, particularly the Wide Area Surveillance Branch of the Intelligence, Surveillance and Reconnaissance Division (ISRD) with respect to the Jindalee Operational Radar Network
Five Eyes
Internet censorship in Australia and Censorship in Australia
Passenger name record
Pine Gap, Shoal Bay Receiving Station, HMAS Harman and Australian Defence Satellite Communications Station
Telephone recording laws#Australia

References

External links
OpenAustralia Search: Parliamentary records mentioning 'surveillance'.

Australia
Telecommunications in Australia
Human rights abuses in Australia
24419500
https://en.wikipedia.org/wiki/Department%20of%20Computer%20Engineering%2C%20University%20of%20Peradeniya
Department of Computer Engineering, University of Peradeniya
The Department of Computer Engineering is the youngest department of the Faculty of Engineering, University of Peradeniya; its undergraduate degree programme was established in 2001 with an intake of 20 students. At present, there are approximately 60 students in each batch.

History
The Department of Computer Engineering was established in the Faculty of Engineering in Peradeniya in 1985. Although it is the youngest department in the Faculty, it is the oldest computer engineering department to be established in the university system of the country. Initially, the main function of the department was to conduct programming courses for students in all disciplines of the Faculty. Over the years the department has developed into a fully-fledged department, and it now offers several courses in computer engineering. Demand from students for computer engineering has been high, and only a limited number are admitted to follow it. Graduates who have specialized in computer engineering are highly sought after by local as well as foreign employers. In view of the good employment prospects for computer engineering graduates and the large demand from students, the department initiated a degree programme leading to the B.Sc. Eng. degree in Computer Engineering in 2001.

ACES: Association of Computer Engineering Students
ACES is the official club representing the student body of the department. It was formed in 2001. The organization is headed by the ACES council, consisting of undergraduates as the president, secretary, and committee, and a senior treasurer from the academic staff. ACES organizes the following events annually:
ACES Coders
ACES hackathon
Spark
ESCaPE - Project Symposium
Career fair

Labs and Resources
Embedded Systems and Computer Architecture Lab
Hardware and Computer Interfacing Lab
Computer Networking Lab

Websites and Subdomains
Student Project Portfolio Site
Profiles of the Student and Academic Staff
Embedded Systems and Computer Architecture Lab
Computer Vision Research Group
Pera-Swarm Project

References

Computer Engineering
14605789
https://en.wikipedia.org/wiki/Zardoz%20%28computer%20security%29
Zardoz (computer security)
In computer security, the Zardoz list, more formally known as the Security-Digest list, was a famous semi-private full disclosure mailing list run by Neil Gorsuch from 1989 through 1991. It identified weaknesses in systems and gave directions on where to find them. Zardoz is most notable for its status as a perennial target for computer hackers, who sought archives of the list for information on undisclosed software vulnerabilities.

Membership restrictions
Access to Zardoz was approved on a case-by-case basis by Gorsuch, principally by reference to the user account used to send subscription requests; requests were approved for root users, valid UUCP owners, or system administrators listed at the NIC. The openness of the list to users other than Unix system administrators was a regular topic of conversation, with participants expressing concern that vulnerabilities or exploitation details disclosed on the list were liable to spread to hackers. On the other hand, the circulation of Zardoz postings among computer hackers was an open secret, mocked openly in a famous Phrack parody of an IRC channel populated by notable security experts.

Notable participants
Keith Bostic discussed BSD Sendmail vulnerabilities
Chip Salzenberg discussed Peter Honeyman's posting of a UUCP worm, and shell script security
Gene Spafford discussed VMS and Ultrix bugs, and relayed law enforcement enquiries about the Morris Worm
Tom Christiansen discussed SUID shell scripts
Chris Torek discussed devising exploits from general descriptions of vulnerabilities
Henry Spencer discussed Unix security
Brendan Kehoe discussed systems security
Alec Muffett announced Crack, the famous Unix password cracker
The majority of Zardoz participants were Unix systems administrators and C software developers. Neil Gorsuch and Gene Spafford were the most prolific contributors to the list.

References

External links
The Security-Digest archive project

Computer security
Electronic mailing lists
41971563
https://en.wikipedia.org/wiki/2064%3A%20Read%20Only%20Memories
2064: Read Only Memories
2064: Read Only Memories is a cyberpunk adventure game developed by MidBoss. It was directed by John "JJSignal" James, written by Valerie Amelia Thompson and Philip Jones, and features an original soundtrack by 2 Mello. It was originally released on computer platforms as Read Only Memories in October 2015, and the title was later updated coinciding with its PlayStation 4 release in January 2017. The game was heavily inspired by Snatcher, Rise of the Dragon, Gabriel Knight, and other 1980s and 1990s adventure games.

Plot
The game's plot is set during the Christmas season in 2064 in Neo-San Francisco, California. The technology corporation Parallax has created a line of products called "Relationship and Organizational Managers" (ROMs): personal assistant robots that have overtaken smartphones and computers. The player takes on the role of a journalist trying to track down their kidnapped friend, Parallax engineer Hayden Webber. They are aided by Turing, Hayden's creation and the world's first sapient machine, a self-modifying robot that can learn and grow emotionally.

In the early morning of December 21, Turing (Melissa Hutchison) breaks into the journalist's apartment and reveals that Hayden has been kidnapped by unknown assailants. The two embark on a search and are assisted by locals TOMCAT (a hacker and associate of Hayden's), Lexi Rivers (a police detective), and Jess Meas (an attorney). Turing and the journalist are assaulted during a search of Hayden's apartment, and end up meeting Doctor Yannick Fairlight (Adam Harrington), the disgruntled former CEO of Parallax. After Fairlight's lead to the activist group The Human Revolution turns up empty, TOMCAT performs a search on Parallax's network and uncovers encrypted security camera footage showing Hayden being murdered. Turing is shaken but vows to dispense justice and uncover who is responsible.

The story then splits depending on which lead the player follows. In the Media arc, suspicious tampering with news articles leads to a string of connected murders in journalism. In the Flower arc, more information about Hayden is delivered by Vincent Mensah (Xavier Woods), a Parallax engineer fleeing the country. It is learned that the news tampering was being done by the rogue Baby Blue program, an AI created by Parallax that would feed on every user's personal data through their ROMs and tailor search results for them. Parallax intended to shut down Baby Blue, but it is hiding on the integrated meshnet that all ROMs use, and Vincent reveals that a larger and more sinister AI called Big Blue is about to launch on Christmas Day. Rather than intervene solely to shut down the AI, Turing, the journalist, and TOMCAT plot to upload Turing's original source code, written by Hayden and adapted by Turing's processes, to the meshnet using the Big Blue program, essentially granting the self-modifying sapience to all ROMs worldwide. During the mission into Parallax's server farm (carried out by the journalist, Turing, Lexi Rivers, and Dr. Fairlight's assistant Leon Dekker), Dekker incapacitates Lexi and reveals himself to be a combat android. He attempts to stop the protagonist's plan in order to preserve Big Blue's power and manipulate Fairlight back onto the Parallax board, but is killed by Turing. Depending on whether the player successfully captured Turing's source code and on the status of the player's relationship with Turing, the game splits into four endings.
In the All Good Things ending, Turing successfully overrides Big Blue, and all ROMs download the patch in the morning, attaining the same level of sapience as Turing. If this ending is achieved, the game continues into a bonus endless post-game chapter. In The Sacrifice ending, the group accomplishes the same goal of sapience for all ROMs, but Turing's hardware was too badly damaged by Dekker in the previous fight, and Turing dies after the upload process. In A New Blue, Turing is disgruntled by the player's poor treatment and reveals their own plot before the upload: Turing transfers their personality complex to Big Blue instead, leaving their physical form lifeless and granting themselves omnipotence on the meshnet, severing ties with the humans who they aligned with before. In Complicity, Turing is unconvinced that they are doing the right thing and afraid to die for the cause, cancelling the mission at the last moment and leaving the player to live with TOMCAT.

Development
Read Only Memories was funded through Kickstarter, where it raised $64,378 from November 12 to December 12, 2013. The game participated in Ouya's Free The Games Fund, which doubled the raised funds in exchange for a period of console exclusivity on what is now the Razer Forge platform. A Linux port of the game was planned if the Kickstarter reached $82,064; that goal was not met, but MidBoss nonetheless announced a Linux version in a later Kickstarter update. The game was released on October 6, 2015, for Windows, OS X, and Linux.

Read Only Memories is described as a queer-inclusive video game, and its developers are also involved with the GaymerX series of LGBTQ video gaming conventions. Setting the game in the future, MidBoss aimed to posit a future where LGBTQ characters face less discrimination, allowing queer characters to be presented on equal terms with straight counterparts. Speaking with Gamasutra, producer Matt Conn said, "instead of waiting for Sony and other big companies to include gay characters in their games as more than just tokens, we should just do it ourselves." The player can specify which personal pronouns the game refers to them by: he, she, they, xe, ze, or a custom player-entered pronoun set.

The game is built in Unity and uses a Twine-like scripting language to manage game logic. Director John James cites Bubblegum Crisis and Phantasy Star IV among the game's visual inspirations. A playable demo of the game was shown at the March 2014 Game Developers Conference, and a demo was released to the public in November 2014.

Ports and re-release
Soon after the initial release of Read Only Memories, MidBoss began work on a console port of the game titled Read Only Memories: DX, as well as a mobile version called Type-M. It was later announced that the mobile version would be put on hold, and that the console port would be called 2064: Read Only Memories. The new version was promoted for its inclusion of full voice acting (featuring Melissa Hutchison, Dave Fennoy, Erin Yvette, Sarah Anne Williams, Adam Harrington, Terry McGovern, Erin Fitzgerald, Todd Bridges, Xavier Woods, and others), as well as improvements to the game's puzzles and updates to the story. 2064 featured new conversations between characters as well as two new locations in the post-game chapter. It was released on PlayStation 4 on January 17, 2017, and the computer versions were updated for free to the new version and title on the same day.
The game also featured a cameo voice role by Jeff Lupetin, who portrayed protagonist Gillian Seed in the English version of Hideo Kojima's Snatcher. A physical media version was released for PS4 on optical disc by Limited Run Games on November 17, 2017, limited to 6,192 copies across three cover variants of 2,064 copies each. A PlayStation Vita port was released on December 9, 2017, followed by a Nintendo Switch port on August 14, 2018.

Reception
The game received generally positive reviews from video game critics. The narrative, characters, and presentation were lauded, while some critics were critical of the gameplay, puzzles, and user interface. The Escapist gave the game a 4/5 and stated "it's more like a Telltale game, Phoenix Wright, and Snatcher had some sort of millennial cyberpunk baby." In the Kotaku review, Heather Alexandra wrote, "Ultimately, Read Only Memories provides a clumsy but resonant experience. What it lacks in thematic substance or technical challenge, it makes up for in emotional content, a lush setting, and memorable characters." GamesRadar+ spoke about the inspirations from classic adventure games, concluding it is "more than just a rehash - it stands on its own as a potent exploration of identity and class politics, wrapped up in a gripping mystery full of colorful and charming characters." RPGSite mentioned the diversity of the cast, noting "it's one of the most emotionally heartfelt, authentic, and joyful journeys one can take in the medium and a celebration of everyone, no matter where you come from or how you identify." DualShockers mentioned the new voice acting in the 2064 version, noting "Turing's voicework is especially good, and needs to be since they're your partner and most talkative of the cast. Tomcat, Lexi, Chad, and especially Dekker near the end, all give wonderful performances full of compelling emotion that helped me grow attached to their character," while being more critical of the puzzles, stating they "can be a toss-up between frustrating and satisfying." The 2064 version was nominated for "Best Writing" at the 2018 Webby Awards.

Sequel
On June 12, 2019, MidBoss announced that a sequel titled Read Only Memories: Neurodiver was in development, originally scheduled for release in 2020 and later delayed to Q1 2022. It will release for Windows PC, Mac, PlayStation 4, PlayStation 5, Xbox One, and Nintendo Switch. A tie-in comic written by Sina Grace and published by IDW Publishing is scheduled for release in August 2021.

References

External links

2015 video games
Android (operating system) games
Cyberpunk video games
First-person adventure games
Indie video games
Kickstarter-funded video games
LGBT-related video games
Linux games
MacOS games
Nintendo Switch games
Ouya games
PlayStation 4 games
PlayStation Vita games
Video games developed in the United States
Video games featuring protagonists of selectable gender
Video games set in San Francisco
Video games set in the 2060s
Windows games
Xbox One games
Single-player video games
IOS games
493590
https://en.wikipedia.org/wiki/ISO/IEC%202022
ISO/IEC 2022
ISO/IEC 2022, Information technology—Character code structure and extension techniques, is an ISO standard (equivalent to the ECMA standard ECMA-35, the ANSI standard ANSI X3.41 and the Japanese Industrial Standard JIS X 0202) specifying:
An infrastructure of multiple character sets with particular structures which may be included in a single character encoding system, including multiple graphical character sets and multiple sets of both primary (C0) and secondary (C1) control codes,
A format for encoding these sets, assuming that 8 bits are available per byte,
A format for encoding these sets in the same encoding system when only 7 bits are available per byte, and a method for transforming any conformant character data to pass through such a 7-bit environment,
The general structure of ANSI escape codes, and
Specific escape code formats for identifying individual character sets, for announcing the use of particular encoding features or subsets, and for interacting with or switching to other encoding systems.

Many of the character sets included as ISO/IEC 2022 encodings are 'double byte' encodings where two bytes correspond to a single character, which makes ISO-2022 a variable-width encoding. A specific implementation does not have to implement all of the standard; the conformance level and the supported character sets are defined by the implementation. Although many of the mechanisms defined by the ISO/IEC 2022 standard are infrequently used, several established encodings are based on a subset of the ISO/IEC 2022 system. In particular, 7-bit encoding systems using ISO/IEC 2022 mechanisms include ISO-2022-JP (or JIS encoding), which has primarily been used in Japanese-language e-mail. 8-bit encoding systems conforming to ISO/IEC 2022 include ISO/IEC 4873 (ECMA-43), which is in turn conformed to by ISO/IEC 8859, and Extended Unix Code, which is used for East Asian languages. More specialised applications of ISO 2022 include the MARC-8 encoding system used in MARC 21 library records.

Introduction
Many languages or language families not based on the Latin alphabet, such as Greek, Cyrillic, Arabic, or Hebrew, have historically been represented on computers with different 8-bit extended ASCII encodings. Written East Asian languages, specifically Chinese, Japanese, and Korean, use far more characters than can be represented in an 8-bit computer byte and were first represented on computers with language-specific double-byte encodings. ISO/IEC 2022 was developed as a technique to attack both of these problems: to represent characters in multiple character sets within a single character encoding, and to represent large character sets.

A second requirement of ISO-2022 was that it should be compatible with 7-bit communication channels; so even though ISO-2022 is an 8-bit character set, any 8-bit sequence can be re-encoded to use only 7 bits without loss, and normally with only a small increase in size.

To represent multiple character sets, the ISO/IEC 2022 character encodings include escape sequences which indicate the character set for characters which follow. The escape sequences are registered with ISO and follow the patterns defined within the standard. These character encodings require data to be processed sequentially in a forward direction, since the correct interpretation of the data depends on previously encountered escape sequences.
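This statefulness can be sketched in a few lines of code. The following is an illustrative toy in Python, not a conforming decoder; it tracks only two G0 designations (ASCII and JIS X 0208), a simplification chosen purely for the example:

    # Toy illustration of stateful ISO 2022 decoding: the meaning of each
    # byte depends on the designation escapes encountered so far.
    ESC = 0x1B

    def classify(stream: bytes):
        mode = 'ascii'                              # assumed initial state
        items = []
        i = 0
        while i < len(stream):
            if stream[i] == ESC:                    # escape sequence follows
                if stream[i:i + 3] == b'\x1b$B':    # designate JIS X 0208 to G0
                    mode = 'jisx0208'
                elif stream[i:i + 3] == b'\x1b(B':  # designate ASCII to G0
                    mode = 'ascii'
                i += 3
            elif mode == 'ascii':                   # one byte per character
                items.append(('ascii', stream[i:i + 1]))
                i += 1
            else:                                   # two bytes per character
                items.append(('jisx0208', stream[i:i + 2]))
                i += 2
        return items

    # Jumping into the middle of `stream` is unsafe: the correct `mode` can
    # only be recovered by scanning back to the last escape sequence.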
Note, however, that other standards such as ISO-2022-JP may impose extra conditions, such as that the current character set be reset to US-ASCII before the end of a line.

To represent large character sets, ISO/IEC 2022 builds on ISO/IEC 646's property that one seven-bit character will normally define 94 graphic (printable) characters (in addition to space and 33 control characters). Using two bytes, it is thus possible to represent up to 8,836 (94×94) characters; and, using three bytes, up to 830,584 (94×94×94) characters. Though the standard defines it, no registered character set uses three bytes (although EUC-TW's unregistered G2 does). For the two-byte character sets, the code point of each character is normally specified in so-called kuten (Japanese: 区点) form (sometimes called qūwèi (Chinese: 区位), especially when dealing with GB2312 and related standards), which specifies a zone (区, Japanese: ku, Chinese: qū), and the point (点, Japanese: ten) or position (位, Chinese: wèi) of that character within the zone. The escape sequences therefore not only declare which character set is being used, but also, the properties of these character sets being known, establish whether a 94-, 96-, 8,836-, or 830,584-character (or some other sized) encoding is being dealt with.

In practice, the escape sequences declaring the national character sets may be absent if context or convention dictates that a certain national character set is to be used. For example, ISO-8859-1 states that no defining escape sequence is needed, and RFC 1922, which defines ISO-2022-CN, allows ISO-2022 SHIFT characters to be used without explicit use of escape sequences.

The ISO-2022 definitions of the ISO-8859-X character sets are specific fixed combinations of the components that form ISO-2022. Specifically, the lower control characters (C0), the US-ASCII character set (in GL) and the upper control characters (C1) are standard, and the high characters (GR) are defined for each of the ISO-8859-X variants; for example, ISO-8859-1 is defined by the combination of ISO-IR-1, ISO-IR-6, ISO-IR-77 and ISO-IR-100, with no shifts or character changes allowed.

Although ISO/IEC 2022 character sets using control sequences are still in common use, particularly ISO-2022-JP, most modern e-mail applications have converted to the simpler Unicode transforms such as UTF-8. The encodings that don't use control sequences, such as the ISO-8859 sets, are still very common.

Code structure

Notation and nomenclature
ISO/IEC 2022 coding specifies a two-layer mapping between character codes and displayed characters. Escape sequences allow any of a large registry of graphic character sets to be "designated" into one of four working sets, named G0 through G3, and shorter control sequences specify the working set that is "invoked" to interpret bytes in the stream.

Encoding byte values ("bit combinations") are often given in column-line notation, where two decimal numbers in the range 00–15 (each corresponding to a single hexadecimal digit) are separated by a slash. Hence, for instance, codes 2/0 (0x20) through 2/15 (0x2F) inclusive may be referred to as "column 02". This is the notation used in the ISO/IEC 2022 / ECMA-35 standard itself. They may be described elsewhere using hexadecimal, as is often used in this article, or using the corresponding ASCII characters, although the escape sequences are actually defined in terms of byte values, and the graphic assigned to that byte value may be altered without affecting the control sequence.
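Since each decimal number in column-line notation corresponds to one hexadecimal digit of the byte, the conversion is a simple bit split; a one-line helper (an illustrative sketch with a hypothetical function name) makes the correspondence explicit:

    def column_line(byte: int) -> str:
        # Column = high four bits, line = low four bits, both in decimal.
        return f"{byte >> 4}/{byte & 0x0F}"

    assert column_line(0x20) == "2/0"    # SP, the first byte of column 02
    assert column_line(0x2F) == "2/15"   # the last byte of column 02
    assert column_line(0x1B) == "1/11"   # ESC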
Byte values from the 7-bit ASCII graphic range (hexadecimal 0x20–0x7F), being on the left side of a character code table, are referred to as "GL" codes (with "GL" standing for "graphics left"), while bytes from the "high ASCII" range (0xA0–0xFF), if available (i.e. in an 8-bit environment), are referred to as the "GR" codes ("graphics right"). The terms "CL" (0x00–0x1F) and "CR" (0x80–0x9F) are defined for the control ranges, but the CL range always invokes the primary (C0) controls, whereas the CR range always either invokes the secondary (C1) controls or is unused.

Fixed coded characters
The delete character DEL (0x7F), the escape character ESC (0x1B) and the space character SP (0x20) are designated "fixed" coded characters and are always available when G0 is invoked over GL, irrespective of what character sets are designated. They may not be included in graphical character sets, although other sizes or types of whitespace character may be.

General syntax of escape sequences
Sequences using the ESC (escape) character take the form ESC [I...] F, where the ESC character is followed by zero or more intermediate bytes (I) from the range 0x20–0x2F, and one final byte (F) from the range 0x30–0x7E. The first I byte, or absence thereof, determines the type of escape sequence; it might, for instance, designate a working set, or denote a single control function. In all types of escape sequences, F bytes in the range 0x30–0x3F are reserved for unregistered private uses defined by prior agreement between parties.

Graphical character sets
Each of the four working sets G0 through G3 may be a 94-character set or a 94^n-character multi-byte set. Additionally, G1 through G3 may be a 96- or 96^n-character set. In a 96- or 96^n-character set, the bytes 0x20 through 0x7F when GL-invoked, or 0xA0 through 0xFF when GR-invoked, are allocated to and may be used by the set. In a 94- or 94^n-character set, the bytes 0x20 and 0x7F are not used. When a 96- or 96^n-character set is invoked in the GL region, the space and delete characters (codes 0x20 and 0x7F) are not available until a 94- or 94^n-character set (such as the G0 set) is invoked in GL. 96-character sets cannot be designated to G0. Registration of a set as a 96-character set does not necessarily mean that the 0x20/A0 and 0x7F/FF bytes are actually assigned by the set; some examples of graphical character sets which are registered as 96-sets but do not use those bytes include the G1 set of I.S. 434, the box drawing set from ISO/IEC 10367, and ISO-IR-164 (a subset of the G1 set of ISO-8859-8 with only the letters, used by CCITT).

Combining characters
Characters are expected to be spacing characters, not combining characters, unless specified otherwise by the graphical set in question. ISO 2022 / ECMA-35 also recognizes the use of the backspace and carriage return control characters as means of combining otherwise spacing characters, as well as the CSI sequence "Graphic Character Combination" (GCC) (CSI 0x20 (SP) 0x5F (_)). Use of the backspace and carriage return in this manner is permitted by ISO/IEC 646 but prohibited by ISO/IEC 4873 / ECMA-43 and by ISO/IEC 8859, on the basis that it leaves the graphical character repertoire undefined. ISO/IEC 4873 / ECMA-43 does, however, permit the use of the GCC function, on the basis that the sequence of characters is kept the same and merely displayed in one space, rather than being over-stamped to form a character with a different meaning.
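The escape-sequence syntax above is simple to recognise mechanically. A minimal sketch (a hypothetical helper, not taken from the standard) that splits one sequence into its intermediate and final bytes:

    ESC = 0x1B

    def parse_escape(stream: bytes, i: int):
        """Parse one escape sequence starting at stream[i] == ESC;
        returns (intermediate_bytes, final_byte, next_index)."""
        assert stream[i] == ESC
        j = i + 1
        while j < len(stream) and 0x20 <= stream[j] <= 0x2F:
            j += 1                        # zero or more intermediate bytes
        if j >= len(stream) or not 0x30 <= stream[j] <= 0x7E:
            raise ValueError("truncated or malformed escape sequence")
        return stream[i + 1:j], stream[j], j + 1

    # Example: the registered sequence ESC $ ( C has intermediate bytes
    # "$(" and final byte "C"; a final byte in 0x30-0x3F would be private.
    inter, final, _ = parse_escape(b"\x1b$(C", 0)
    assert inter == b"$(" and final == ord("C")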
Control character sets
Control character sets are classified as "primary" or "secondary" control character sets, respectively also called "C0" and "C1" control character sets. A C0 control set must contain the ESC (escape) control character at 0x1B (a C0 set containing only ESC is registered as ISO-IR-104), whereas a C1 control set may not contain the escape control whatsoever. Hence, they are entirely separate registrations, with a C0 set being only a C0 set and a C1 set being only a C1 set. If codes from the C0 set of ISO 6429 / ECMA-48, i.e. the ASCII control codes, appear in the C0 set, they are required to appear at their ISO 6429 / ECMA-48 locations. Inclusion of transmission control characters in the C0 set, besides the ten included by ISO 6429 / ECMA-48 (namely SOH, STX, ETX, EOT, ENQ, ACK, DLE, NAK, SYN and ETB), or inclusion of any of those ten in the C1 set, is also prohibited by the ISO/IEC 2022 / ECMA-35 standard.

A C0 control set is invoked over the CL range 0x00 through 0x1F, whereas a C1 control character may be invoked over the CR range 0x80 through 0x9F (in an 8-bit environment) or by using escape sequences (in a 7-bit or 8-bit environment), but not both. Which style of C1 invocation is used must be specified in the definition of the code version. For example, ISO/IEC 4873 specifies CR bytes for the C1 controls (SS2 and SS3) which it uses. If necessary, which invocation is used may be communicated using announcer sequences. In the latter case, single control characters from the C1 control character set are invoked using "type Fe" escape sequences, meaning those where the ESC control character is followed by a byte from columns 04 or 05 (that is to say, ESC 0x40 (@) through ESC 0x5F (_)).

Other control functions
Additional control functions are assigned to "type Fs" escape sequences (in the range ESC 0x60 (`) through ESC 0x7E (~)); these have permanently assigned meanings rather than depending on the C0 or C1 designations. Registration of control functions to type "Fs" sequences must be approved by ISO/IEC JTC 1/SC 2. Other single control functions may be registered to type "3Ft" escape sequences (in the range ESC 0x23 (#) [I...] 0x40 (@) through ESC 0x23 (#) [I...] 0x7E (~)), although no "3Ft" sequences are currently assigned (as of 2019). Escape sequences of type "Fp" (ESC 0x30 (0) through ESC 0x3F (?)) or of type "3Fp" (ESC 0x23 (#) [I...] 0x30 (0) through ESC 0x23 (#) [I...] 0x3F (?)) are reserved for single private use control codes, by prior agreement between parties. Several such sequences of both types are used by DEC terminals such as the VT100, and are thus supported by terminal emulators.

Shift functions
By default, GL codes specify G0 characters and GR codes (where available) specify G1 characters; this may be otherwise specified by prior agreement. The set invoked over each area may also be modified with control codes referred to as shifts. An 8-bit code may have GR codes specifying G1 characters, i.e. with its corresponding 7-bit code using Shift In and Shift Out to switch between the sets (e.g. JIS X 0201), although some instead have GR codes specifying G2 characters, with the corresponding 7-bit code using a single-shift code to access the second set (e.g. T.51). The most common encodings of these control codes conform to ISO/IEC 6429.
The LS2, LS3, LS1R, LS2R and LS3R shifts are registered as single control functions and are always encoded as dedicated escape sequences, whereas the others are part of a C0 or C1 control code set (SI (LS0) and SO (LS1) are C0 controls, and SS2 and SS3 are C1 controls), meaning that their coding and availability may vary depending on which control sets are designated: they must be present in the designated control sets if their functionality is used. The C1 controls themselves, as mentioned above, may be represented using escape sequences or 8-bit bytes, but not both.

Alternative encodings of the single-shifts as C0 control codes are available in certain control code sets. For example, SS2 and SS3 are usually available at 0x19 and 0x1D respectively in T.51 and T.61. This coding is currently recommended by ISO/IEC 2022 / ECMA-35 for applications requiring 7-bit single-byte representations of SS2 and SS3, and may also be used for SS2 only, although older code sets with SS2 at 0x1C also exist, and were mentioned as such in an earlier edition of the standard. The coding of the single shifts as 0x8E and 0x8F is mandatory for ISO/IEC 4873 levels 2 and 3.

In 8-bit environments, either GL or GR, but not both, may be used as the single-shift area. This must be specified in the definition of the code version. For instance, ISO/IEC 4873 specifies GL, whereas packed EUC specifies GR. In 7-bit environments, only GL is used as the single-shift area. If necessary, which single-shift area is used may be communicated using announcer sequences.

The names "locking shift zero" (LS0) and "locking shift one" (LS1) refer to the same pair of C0 control characters (0x0F and 0x0E) as the names "shift in" (SI) and "shift out" (SO). However, the standard refers to them as LS0 and LS1 when they are used in 8-bit environments and as SI and SO when they are used in 7-bit environments. The ISO/IEC 2022 / ECMA-35 standard permits, but discourages, invoking G1, G2 or G3 in both GL and GR simultaneously.

Registration of graphical and control code sets
The ISO International register of coded character sets to be used with escape sequences (ISO-IR) lists graphical character sets, control code sets, single control codes and so forth which have been registered for use with ISO/IEC 2022. The procedure for registering codes and sets with the ISO-IR registry is specified by ISO/IEC 2375. Each registration receives a unique escape sequence, and a unique registry entry number to identify it. For example, the CCITT character set for Simplified Chinese is known as ISO-IR-165. Registration of coded character sets with the ISO-IR registry identifies the documents specifying the character set or control function associated with an ISO/IEC 2022 non‑private-use escape sequence. This may be a standard document; however, registration does not create a new ISO standard, does not commit the ISO or IEC to adopt it as an international standard, and does not commit the ISO or IEC to add any of its characters to the Universal Coded Character Set.

Character set designations
Escape sequences to designate character sets take the form ESC I [I...] F. As mentioned above, the intermediate (I) bytes are from the range 0x20–0x2F, and the final (F) byte is from the range 0x30–0x7E.
The first I byte (or, for a multi-byte set, the first two) identifies the type of character set and the working set it is to be designated to, whereas the F byte (and any additional I bytes) identify the character set itself, as assigned in the ISO-IR register (or, for the private-use escape sequences, by prior agreement). Additional I bytes may be added before the F byte to extend the F byte range. This is currently only used with 94-character sets, where codes of the form ESC ( ! F have been assigned. At the other extreme, no multibyte 96-sets have been registered, so such sequences are strictly theoretical.

As with other escape sequence types, the range 0x30–0x3F is reserved for private-use F bytes, in this case for private-use character set definitions (which might include unregistered sets defined by protocols such as ARIB STD-B24 or MARC-8, or vendor-specific sets such as DEC Special Graphics). However, in a graphical set designation sequence, if the second I byte (for a single-byte set) or the third I byte (for a double-byte set) is 0x20 (space), the set denoted is a "dynamically redefinable character set" (DRCS) defined by prior agreement, which is also considered private use. A graphical set being considered a DRCS implies that it represents a font of exact glyphs, rather than a set of abstract characters. The manner in which DRCS sets and associated fonts are transmitted, allocated and managed is not stipulated by ISO/IEC 2022 / ECMA-35 itself, although it recommends allocating them sequentially starting with byte 0x40 (@); however, a manner for transmitting DRCS fonts is defined within some telecommunication protocols such as World System Teletext.

There are also three special cases for multi-byte codes. The code sequences ESC $ @, ESC $ A, and ESC $ B were all registered when the contemporary version of the standard allowed multi-byte sets only in G0, so must be accepted in place of the sequences ESC $ ( @ through ESC $ ( B to designate to the G0 character set.

There are additional (rarely used) features for switching control character sets, but this is a single-level lookup, in that (as noted above) the C0 set is always invoked over CL, and the C1 set is always invoked over CR or by using escape codes. The control set designation sequences (as opposed to the graphical set ones) may also be used from within ISO/IEC 10646 (UCS/Unicode), in contexts where processing ANSI escape codes is appropriate, provided that each byte in the sequence is padded to the code unit size of the encoding.

Note that the registry of F bytes is independent for the different types. The 94-character graphic set designated by ESC ( A through ESC + A is not related in any way to the 96-character set designated by ESC - A through ESC / A. And neither of those is related to the 94^n-character set designated by ESC $ ( A through ESC $ + A, and so on; the final bytes must be interpreted in context. (Indeed, without any intermediate bytes, ESC A is a way of specifying the C1 control code 0x81.) Also note that C0 and C1 control character sets are independent; the C0 control character set designated by ESC ! A (which happens to be the NATS control set for newspaper text transmission) is not the same as the C1 control character set designated by ESC " A (the CCITT attribute control set for Videotex).
Interaction with other coding systems
The standard also defines a way to specify coding systems that do not follow its own structure. A sequence is also defined for returning to ISO/IEC 2022; the registrations which support this sequence as encoded in ISO/IEC 2022 comprise (as of 2019) various Videotex formats, UTF-8, and UTF-1. A second I byte of 0x2F (/) is included in the designation sequences of codes which do not use that byte sequence to return to ISO 2022; they may have their own means to return to ISO 2022 (such as a different or padded sequence) or none at all. All existing registrations of the latter type (as of 2019) are either transparent raw data, Unicode/UCS formats, or subsets thereof.

Of particular interest are the sequences which switch to ISO/IEC 10646 (Unicode) formats which do not follow the ISO/IEC 2022 structure. These include UTF-8 (which does not reserve the range 0x80–0x9F for control characters), its predecessor UTF-1 (which mixes GR and GL bytes in multi-byte codes), and UTF-16 and UTF-32 (which use wider coding units). Several codes were also registered for subsets (levels 1 and 2) of UTF-8, UTF-16 and UTF-32, as well as for three levels of UCS-2. However, the only codes currently specified by ISO/IEC 10646 are the level-3 codes for UTF-8, UTF-16 and UTF-32 and the unspecified-level code for UTF-8, with the rest being listed as deprecated. ISO/IEC 10646 stipulates that the big-endian formats of UTF-16 and UTF-32 are designated by their escape sequences. Of the sequences switching to UTF-8, ESC % G is the one supported by, for example, xterm.

Although use of a variant of the standard return sequence from UTF-16 and UTF-32 is permitted, the bytes of the escape sequence must be padded to the size of the code unit of the encoding (i.e. 001B 0025 0040 for UTF-16), i.e. the coding of the standard return sequence does not conform exactly to ISO/IEC 2022. For this reason, the designations for UTF-16 and UTF-32 use a without-standard-return syntax.

Code structure announcements
The sequence "announce code structure" (ESC SP (0x20) F) is used to announce a specific code structure, or a specific group of ISO 2022 facilities which are used in a particular code version. Although announcements can be combined, certain contradictory combinations (specifically, using locking shift announcements 16–23 with announcements 1, 3 and 4) are prohibited by the standard, as is using additional announcements on top of ISO/IEC 4873 level announcements 12–14 (which fully specify the permissible structural features).

ISO/IEC 2022 code versions

Japanese e-mail versions
ISO-2022-JP is a widely used encoding for Japanese, in particular in e-mail. It was introduced for use on the JUNET network and later codified in IETF RFC 1468, dated 1993. It has an advantage over other encodings for Japanese in that it does not require 8-bit clean transmission. Microsoft calls it Code page 50220.
It starts in ASCII and includes the following escape sequences:
ESC ( B to switch to ASCII (1 byte per character)
ESC ( J to switch to JIS X 0201-1976 (ISO/IEC 646:JP) Roman set (1 byte per character)
ESC $ @ to switch to JIS X 0208-1978 (2 bytes per character)
ESC $ B to switch to JIS X 0208-1983 (2 bytes per character)
Use of the two characters added in JIS X 0208-1990 is permitted, but without including the IRR sequence, i.e. using the same escape sequence as JIS X 0208-1983. Also, due to being registered before designating multi-byte sets except to G0 was possible, the escapes for JIS X 0208 do not include the second I-byte "(". The RFC notes that some existing systems did not distinguish ESC ( B from ESC ( J, or did not distinguish ESC $ @ from ESC $ B, but stipulates that the escape sequences should not be changed by systems simply relaying messages such as e-mails. The WHATWG Encoding Standard referenced by HTML5 handles ESC ( B and ESC ( J distinctly, but treats ESC $ @ the same as ESC $ B when decoding, and uses only ESC $ B for JIS X 0208 when encoding. The RFC also notes that some past systems had made erroneous use of the sequence ESC ( H to switch away from JIS X 0208, which is actually registered for ISO-IR-11 (a Swedish variant of ISO 646 and World System Teletext).

Versions with halfwidth katakana
Use of ESC ( I to switch to the JIS X 0201-1976 Kana set (1 byte per character) is not part of the ISO-2022-JP profile, but is also sometimes used. Python allows it in a variant which it labels ISO-2022-JP-EXT (which also incorporates JIS X 0212 as described below, completing coverage of EUC-JP); this is close in both name and structure to an encoding denoted ISO-2022-JPext by DEC, which furthermore adds a two-byte user-defined region accessed with ESC $ ( 0 to complete the coverage of Super DEC Kanji. The WHATWG/HTML5 variant permits decoding JIS X 0201 katakana in ISO-2022-JP input, but converts the characters to their JIS X 0208 equivalents upon encoding. Microsoft's code page for ISO-2022-JP with JIS X 0201 kana additionally permitted is Code page 50221. Other, older variants known as JIS7 and JIS8 build directly on the 7-bit and 8-bit encodings defined by JIS X 0201 and allow use of JIS X 0201 kana from G1 without escape sequences, using Shift Out and Shift In or setting the eighth bit (GR-invoked), respectively. They are not widely used; JIS X 0208 support in extended 8-bit JIS X 0201 is more commonly achieved via Shift JIS. Microsoft's code page for JIS X 0201-based ISO 2022 with single-byte katakana via Shift Out and Shift In is Code page 50222.

ISO-2022-JP-2 is a multilingual extension of ISO-2022-JP, defined in RFC 1554 (dated 1993), which permits the following escape sequences in addition to the ISO-2022-JP ones. The ISO/IEC 8859 parts are 96-character sets which cannot be designated to G0, and are accessed from G2 using the 7-bit escape sequence form of the single-shift code SS2:
ESC $ A to switch to GB 2312-1980 (2 bytes per character)
ESC $ ( C to switch to KS X 1001-1992 (2 bytes per character)
ESC $ ( D to switch to JIS X 0212-1990 (2 bytes per character)
ESC . A to switch to ISO/IEC 8859-1 high part, Extended Latin 1 set (1 byte per character) [designated to G2]
ESC . F to switch to ISO/IEC 8859-7 high part, Basic Greek set (1 byte per character) [designated to G2]
ISO-2022-JP with the ISO-2022-JP-2 representation of JIS X 0212, but not the other extensions, was subsequently dubbed ISO-2022-JP-1 by RFC 2237, dated 1997.
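Python's standard library ships a codec for this profile (registered as "iso2022_jp" in CPython), which can be used to observe the escape sequences in practice; a small round-trip sketch, with the asserted byte patterns reflecting the RFC 1468 framing described above:

    text = "Hello, \u4e16\u754c"            # "Hello, 世界"
    encoded = text.encode("iso2022_jp")

    # The Japanese run is bracketed by ESC $ B ... ESC ( B, and the stream
    # both starts and ends in the ASCII state, as RFC 1468 requires.
    assert encoded.startswith(b"Hello, \x1b$B")
    assert encoded.endswith(b"\x1b(B")
    assert encoded.decode("iso2022_jp") == text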
IBM Japanese TCP
IBM implements nine 7-bit ISO 2022 based encodings for Japanese, each using a different set of escape sequences: IBM-956, IBM-957, IBM-958, IBM-959, IBM-5052, IBM-5053, IBM-5054, IBM-5055 and ISO-2022-JP, which are collectively termed "TCP/IP Japanese coded character sets". CCSID 9148 is the standard (RFC 1468) ISO-2022-JP.

JIS X 0213
The JIS X 0213 standard, first published in 2000, defines an updated version of ISO-2022-JP, without the ISO-2022-JP-2 extensions, named ISO-2022-JP-3. The additions made by JIS X 0213 compared to the base JIS X 0208 standard resulted in a new registration being made for the extended JIS plane 1, while the new plane 2 received its own registration. The further additions to plane 1 in the 2004 edition of the standard resulted in an additional registration being added to a further revision of the profile, dubbed ISO-2022-JP-2004. In addition to the basic ISO-2022-JP designation codes, the following designations are recognized:
ESC ( I to switch to JIS X 0201-1976 Kana set (1 byte per character)
ESC $ ( O to switch to JIS X 0213-2000 Plane 1 (2 bytes per character)
ESC $ ( P to switch to JIS X 0213-2000 Plane 2 (2 bytes per character)
ESC $ ( Q to switch to JIS X 0213-2004 Plane 1 (2 bytes per character, ISO-2022-JP-2004 only)

Other 7-bit versions
ISO-2022-KR is defined in RFC 1557, dated 1993. It encodes ASCII and the Korean double-byte KS X 1001-1992, previously named KS C 5601-1987. Unlike ISO-2022-JP-2, it makes use of the Shift Out and Shift In characters to switch between them, after including ESC $ ) C once at the start of a line to designate KS X 1001 to G1.

ISO-2022-CN and ISO-2022-CN-EXT are defined in RFC 1922, dated 1996. They are 7-bit encodings making use both of the Shift Out and Shift In functions (to shift between G0 and G1), and of the 7-bit escape code forms of the single-shift functions SS2 and SS3 (to access G2 and G3). They support the character sets GB 2312 (for simplified Chinese) and CNS 11643 (for traditional Chinese). The basic ISO-2022-CN profile uses ASCII as its G0 (shift in) set, and also includes GB 2312 and the first two planes of CNS 11643 (due to these two planes being sufficient to represent all traditional Chinese characters from common Big5, to which the RFC provides a correspondence in an appendix):
ESC $ ) A to switch to GB 2312-1980 (2 bytes per character) [designated to G1]
ESC $ ) G to switch to CNS 11643-1992 Plane 1 (2 bytes per character) [designated to G1]
ESC $ * H to switch to CNS 11643-1992 Plane 2 (2 bytes per character) [designated to G2]
The ISO-2022-CN-EXT profile permits the following additional sets and planes:
ESC $ ) E to switch to ISO-IR-165 (2 bytes per character) [designated to G1]
ESC $ + I to switch to CNS 11643-1992 Plane 3 (2 bytes per character) [designated to G3]
ESC $ + J to switch to CNS 11643-1992 Plane 4 (2 bytes per character) [designated to G3]
ESC $ + K to switch to CNS 11643-1992 Plane 5 (2 bytes per character) [designated to G3]
ESC $ + L to switch to CNS 11643-1992 Plane 6 (2 bytes per character) [designated to G3]
ESC $ + M to switch to CNS 11643-1992 Plane 7 (2 bytes per character) [designated to G3]
The ISO-2022-CN-EXT profile further lists additional Guobiao standard graphical sets as being permitted, but conditional on their being assigned registered ISO 2022 escape sequences:
GB 12345 in G1
GB 7589 or GB 13131 in G2
GB 7590 or GB 13132 in G3
The character after the ESC (for single-byte character sets) or ESC $ (for multi-byte character sets) specifies the type of character set and the working set it is designated to. In the above examples, the character ( (0x28) designates a 94-character set to the G0 character set, whereas ), * or + (0x29–0x2B) designates to the G1–G3 character sets.

ISO-2022-KR and ISO-2022-CN are used less frequently than ISO-2022-JP, and are sometimes deliberately not supported due to security concerns. Notably, the WHATWG Encoding Standard used by HTML5 maps ISO-2022-KR, ISO-2022-CN and ISO-2022-CN-EXT (as well as HZ-GB-2312) to the "replacement" decoder, which maps all input to the replacement character (�), in order to prevent certain cross-site scripting and related attacks, which utilize a difference in encoding support between the client and server. Although the same security concern (allowing sequences of ASCII bytes to be interpreted differently) also applies to ISO-2022-JP and UTF-16, they could not be given this treatment due to being much more frequently used in deployed content.

ISO/IEC 4873
A subset of ISO 2022 applied to 8-bit single-byte encodings is defined by ISO/IEC 4873, also published by Ecma International as ECMA-43. ISO/IEC 8859 defines 8-bit codes for ISO/IEC 4873 (or ECMA-43) level 1. ISO/IEC 4873 / ECMA-43 defines three levels of encoding:
Level 1, which includes a C0 set, the ASCII G0 set, an optional C1 set and an optional single-byte (94-character or 96-character) G1 set. G0 is invoked over GL, and G1 is invoked over GR. Use of shift functions is not permitted.
Level 2, which includes a (94-character or 96-character) single-byte G2 and/or G3 set in addition to a mandatory G1 set. Only the single-shift functions SS2 and SS3 are permitted (i.e. locking shifts are forbidden), and they invoke over the GL region (including 0x20 and 0x7F in the case of a 96-set). SS2 and SS3 must be available in C1 at 0x8E and 0x8F respectively. This minimal required C1 set for ISO 4873 is registered as ISO-IR-105.
Level 3, which permits the GR locking-shift functions LS1R, LS2R and LS3R in addition to the single shifts, but otherwise has the same restrictions as level 2.
Earlier editions of the standard permitted non-ASCII assignments in the G0 set, provided that the ISO 646 invariant positions were preserved, that the other positions were assigned to spacing (not combining) characters, that 0x23 was assigned to either £ or #, and that 0x24 was assigned to either $ or ¤. For instance, the 8-bit encoding of JIS X 0201 is compliant with earlier editions. This was subsequently changed to fully specify the ISO 646:1991 IRV / ISO-IR No. 6 set (ASCII).
The use of the ISO 646 IRV (synchronised with ASCII since 1991) at ISO/IEC 4873 Level 1 with no C1 or G1 set, i.e. using the IRV in an 8-bit environment in which shift codes are not used and the high bit is always zero, is known as ISO 4873 DV, in which DV stands for "Default Version".

In cases where duplicate characters are available in different sets, the current edition of ISO/IEC 4873 / ECMA-43 only permits using these characters in the lowest-numbered working set in which they appear. For instance, if a character appears in both the G1 set and the G3 set, it must be used from the G1 set. However, use from other sets is noted as having been permitted in earlier editions.

ISO/IEC 8859 defines complete encodings at level 1 of ISO/IEC 4873, and does not allow for use of multiple ISO/IEC 8859 parts together. It stipulates that ISO/IEC 10367 should be used instead for levels 2 and 3 of ISO/IEC 4873. ISO/IEC 10367:1991 includes G0 and G1 sets matching those used by the first 9 parts of ISO/IEC 8859 (i.e. those which existed as of 1991, when it was published), and some supplementary sets.

Character set designation escape sequences are used for identifying or switching between versions during information interchange only if required by a further protocol, in which case the standard requires an ISO/IEC 2022 announcer sequence specifying the ISO/IEC 4873 level, followed by a complete set of escapes specifying the character set designations for C0, C1, G0, G1, G2 and G3 respectively (but omitting G2 and G3 designations for level 1), with an F-byte of 0x7E denoting an empty set. Each ISO/IEC 4873 level has its own single ISO/IEC 2022 announcer sequence.

Extended Unix Code
Extended Unix Code (EUC) is an 8-bit variable-width character encoding system used primarily for Japanese, Korean, and simplified Chinese. It is based on ISO 2022, and only character sets which conform to the ISO 2022 structure can have EUC forms. Up to four coded character sets can be represented (in G0, G1, G2 and G3). The G0 set is invoked over GL, the G1 set is invoked over GR, and the G2 and G3 sets are (if present) invoked using the single shifts SS2 and SS3, which are used over GR (not GL), i.e. at 0x8E and 0x8F respectively. Locking shift codes are not used.

The code assigned to the G0 set is ASCII, or the country's national ISO 646 character set such as KS-Roman (KS X 1003) or JIS-Roman (the lower half of JIS X 0201). Hence, 0x5C (backslash in US-ASCII) is used to represent a Yen sign in some versions of EUC-JP and a Won sign in some versions of EUC-KR. G1 is used for a 94×94 coded character set represented in two bytes. The EUC-CN form of GB2312 and EUC-KR are examples of such two-byte EUC codes. EUC-JP includes characters represented by up to three bytes (i.e. SS3 plus two bytes) whereas a single character in EUC-TW can take up to four bytes (i.e. SS2 plus three bytes). The EUC code itself does not make use of the announcer or designation sequences from ISO 2022; however, it corresponds to a fixed sequence of four announcer sequences.

Comparison with other encodings

Advantages
As ISO/IEC 2022's entire range of graphical character encodings can be invoked over GL, the available glyphs are not significantly limited by an inability to represent GR and C1, such as in a system limited to 7-bit encodings. It accordingly enables the representation of a large set of characters in such a system.
Generally, this 7-bit compatibility is not really an advantage, except for backwards compatibility with older systems; the vast majority of modern computers use 8 bits for each byte.

As compared to Unicode, ISO/IEC 2022 sidesteps Han unification by using sequence codes to switch between discrete encodings for different East Asian languages. This avoids the issues associated with unification, such as difficulty supporting multiple CJK languages with their associated character variants in a single document and font.

Disadvantages
Since ISO/IEC 2022 is a stateful encoding, a program cannot jump into the middle of a block of text to search, insert or delete characters. This makes manipulation of the text very cumbersome and slow when compared to non-stateful encodings. Any jump into the middle of the text may require backing up to the previous escape sequence before the bytes following the escape sequence can be interpreted.

Due to the stateful nature of ISO/IEC 2022, an identical and equivalent character may be encoded in different character sets, which may be designated to any of G0 through G3, which may be invoked using single shifts or by using locking shifts to GL or GR. Consequently, characters can be represented in multiple ways, meaning that two visually identical and equivalent strings cannot be reliably compared for equality.

Some systems, like DICOM and several e-mail clients, use a variant of ISO-2022 (e.g. "ISO 2022 IR 100") in addition to supporting several other encodings. This type of variation makes it difficult to portably transfer text between computer systems.

UTF-1, the multi-byte Unicode transformation format compatible with ISO/IEC 2022's representation of 8-bit control characters, has various disadvantages in comparison with UTF-8, and switching from or to other charsets, as supported by ISO/IEC 2022, is typically unnecessary in Unicode documents.

Because of its escape sequences, it is possible to construct attack byte sequences in which a malicious string (such as cross-site scripting) is masked until it is decoded to Unicode, which may allow it to bypass sanitisation. Use of this encoding is thus treated as suspicious by malware protection suites, and 7-bit ISO 2022 data (except for ISO-2022-JP) is mapped in its entirety to the replacement character in HTML5 to prevent attacks. Restricted ISO 2022 8-bit code versions which do not use designation escapes or locking shift codes, such as Extended Unix Code, do not share this problem.

Concatenation can pose issues. Profiles such as ISO-2022-JP specify that the stream starts in the ASCII state and must end in the ASCII state. This is necessary to ensure that characters in concatenated ISO-2022-JP and/or ASCII streams will be interpreted in the correct set. This has the consequence that if a stream that ends in a multi-byte character is concatenated with one that starts with a multi-byte character, a pair of escape codes are generated, switching to ASCII and immediately away from it. However, as stipulated in Unicode Technical Report #36 ("Unicode Security Considerations"), pairs of ISO 2022 escape sequences with no characters between them should generate a replacement character ("�") to prevent them from being used to mask malicious sequences such as cross-site scripting. Implementing this measure, e.g. in Mozilla Thunderbird, has led to interoperability issues, with unexpected "�" characters being generated where two ISO-2022-JP streams have been concatenated.
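The concatenation issue can be observed directly with a standard codec (again CPython's "iso2022_jp"; a sketch, with the exact byte patterns depending on the encoder):

    a = "\u65e5".encode("iso2022_jp")       # 日 -> ESC $ B .. ESC ( B
    b = "\u672c".encode("iso2022_jp")       # 本 -> ESC $ B .. ESC ( B
    joined = a + b

    # Byte-level concatenation leaves a redundant "switch to ASCII, switch
    # straight back" escape pair at the join...
    assert b"\x1b(B\x1b$B" in joined
    # ...which re-encoding the concatenated text avoids:
    assert b"\x1b(B\x1b$B" not in "\u65e5\u672c".encode("iso2022_jp")

Both byte strings decode to the same text, which is exactly why such no-op escape pairs are treated with suspicion by the security measures described above.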
See also
 ISO 2709
 ISO/IEC 646
 ISO-IR-102
 C0 and C1 control codes
 CJK
 MARC standards
 Mojibake
 luit
 ISO/IEC JTC 1/SC 2

References

External links
 ISO/IEC 2022:1994
 ISO/IEC 2022:1994/Cor 1:1999
 ECMA-35, equivalent to ISO/IEC 2022 and freely downloadable.
 International Register of Coded Character Sets to be Used with Escape Sequences, a full list of assigned character sets and their escape sequences
 History of Character Codes in North America, Europe, and East Asia from 1999, rev. 2004
 Ken Lunde's CJK.INF: a document on encoding Chinese, Japanese, and Korean (CJK) languages, including a discussion of the various variants of ISO/IEC 2022.

Character sets Ecma standards
1249296
https://en.wikipedia.org/wiki/Nothing-up-my-sleeve%20number
Nothing-up-my-sleeve number
In cryptography, nothing-up-my-sleeve numbers are any numbers which, by their construction, are above suspicion of hidden properties. They are used in creating cryptographic functions such as hashes and ciphers. These algorithms often need randomized constants for mixing or initialization purposes. The cryptographer may wish to pick these values in a way that demonstrates the constants were not selected for a nefarious purpose, for example, to create a backdoor to the algorithm. These fears can be allayed by using numbers created in a way that leaves little room for adjustment. An example would be the use of initial digits from the number π as the constants. Using digits of π taken from millions of places after the decimal point would not be considered trustworthy, because the algorithm designer might have selected that starting point because it created a secret weakness the designer could later exploit. Digits in the positional representations of real numbers such as π, e, and irrational roots are believed to appear with equal frequency (see normal number). Such numbers can be viewed as the opposite extreme of Chaitin–Kolmogorov random numbers in that they appear random but have very low information entropy. Their use is motivated by early controversy over the U.S. Government's 1975 Data Encryption Standard, which came under criticism because no explanation was supplied for the constants used in its S-box (though they were later found to have been carefully selected to protect against the then-classified technique of differential cryptanalysis). Thus a need was felt for a more transparent way to generate constants used in cryptography. "Nothing up my sleeve" is a phrase associated with magicians, who sometimes preface a magic trick by holding open their sleeves to show they have no objects hidden inside.

Examples
 Ron Rivest used the trigonometric sine function to generate constants for the widely used MD5 hash.
 The U.S. National Security Agency used the square roots of small integers to produce the constants used in its "Secure Hash Algorithm" SHA-1. The SHA-2 functions use the square roots and cube roots of small primes. SHA-1 also uses 0123456789ABCDEFFEDCBA9876543210F0E1D2C3 as its initial hash value.
 The Blowfish encryption algorithm uses the binary representation of π (without the initial 3) to initialize its key schedule.
 RFC 3526 describes prime numbers for internet key exchange that are also generated from π.
 The S-box of the NewDES cipher is derived from the United States Declaration of Independence.
 The AES candidate DFC derives all of its arbitrary constants, including all entries of the S-box, from the binary expansion of e.
 The ARIA key schedule uses the binary expansion of 1/π.
 The key schedule of the RC5 cipher uses binary digits from both e and the golden ratio.
 Multiple ciphers including TEA and Red Pike use 2654435769 or 0x9E3779B9, which is ⌊2³²/ϕ⌋, where ϕ is the golden ratio.
 The BLAKE hash function, a finalist in the SHA-3 competition, uses a table of 16 constant words which are the leading 512 or 1024 bits of the fractional part of π.
 The key schedule of the KASUMI cipher uses 0x123456789ABCDEFFEDCBA9876543210 to derive the modified key.
 The Salsa20 family of ciphers uses the ASCII string "expand 32-byte k" as constants in its block initialization process.
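Several of these constants are easy to reproduce. The following short Python sketch checks two of them; the expected values can be compared against RFC 1321 (MD5) and the published TEA constant:

import math

# MD5 defines its 64 round constants as floor(2^32 * |sin(i)|) for i = 1..64.
md5_k = [math.floor(2**32 * abs(math.sin(i))) for i in range(1, 65)]
assert md5_k[0] == 0xD76AA478  # the first constant listed in RFC 1321

# TEA and Red Pike use floor(2^32 / phi), where phi is the golden ratio.
phi = (1 + math.sqrt(5)) / 2
assert math.floor(2**32 / phi) == 0x9E3779B9  # 2654435769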
Counterexamples
 The Streebog hash function S-box was claimed to be generated randomly, but was reverse-engineered and proven to be generated algorithmically with some "puzzling" weaknesses.
 The Data Encryption Standard (DES) has constants that were given out by the NSA. They turned out to be far from random; rather than being a backdoor, they made the algorithm resilient against differential cryptanalysis, a method not publicly known at the time.
 Dual_EC_DRBG, a NIST-recommended cryptographic pseudo-random bit generator, came under criticism in 2007 because constants recommended for use in the algorithm could have been selected in a way that would permit their author to predict future outputs given a sample of past generated values. In September 2013 The New York Times wrote that "internal memos leaked by a former NSA contractor, Edward Snowden, suggest that the NSA generated one of the random number generators used in a 2006 NIST standard—called the Dual EC DRBG standard—which contains a back door for the NSA."
 The P curves are standardized by NIST for elliptic curve cryptography. The coefficients in these curves are generated by hashing unexplained random seeds, such as:
  P-224: bd713447 99d5c7fc dc45b59f a3b9ab8f 6a948bc5
  P-256: c49d3608 86e70493 6a6678e1 139d26b7 819f7e90
  P-384: a335926a a319a27a 1d00896a 6773a482 7acdac73
Although not directly related, after the backdoor in Dual_EC_DRBG had been exposed, suspicious aspects of the NIST's P curve constants led to concerns that the NSA had chosen values that gave them an advantage in finding private keys. Since then, many protocols and programs have started to use Curve25519 as an alternative to the NIST P-256 curve.

Limitations
Bernstein and coauthors demonstrate that use of nothing-up-my-sleeve numbers as the starting point in a complex procedure for generating cryptographic objects, such as elliptic curves, may not be sufficient to prevent insertion of back doors. If there are enough adjustable elements in the object selection procedure, the universe of possible design choices and of apparently simple constants can be large enough so that a search of the possibilities allows construction of an object with desired backdoor properties.

Footnotes

References
 Bruce Schneier. Applied Cryptography, second edition. John Wiley and Sons, 1996.
 Eli Biham and Adi Shamir (1990). "Differential Cryptanalysis of DES-like Cryptosystems". Advances in Cryptology — CRYPTO '90. Springer-Verlag. pp. 2–21.

Random number generation Cryptography
5967388
https://en.wikipedia.org/wiki/VXA
VXA
VXA is a tape backup format originally created by Ecrix and now owned by Tandberg Data. After the merger between Ecrix and Exabyte, VXA was produced by Exabyte Corporation. On November 20, 2006, Exabyte was purchased by Tandberg Data, which has since stopped further development of the format.

How it works
Exabyte and Ecrix describe the data format as "packet technology". Since VXA is based on helical scan technology, data is written across the tape from side to side in helical stripes. The novel part of VXA packet technology is that each stripe starts with a unique packet ID and ends with an ECC packet checksum. As each stripe is written to tape, it is immediately read back to verify that the write was successful. If the write was not 100% successful, the packet can be rewritten at another point on the tape without stopping. When the data is read back, the packets are reassembled into a buffer by their packet ID. The buffer has three additional ECCs to ensure data integrity. Another aspect of VXA is that there are two read heads for each stripe, slightly offset in relation to each other, to allow for more flexibility in reading tapes written by other drives. Due to the relatively slow tape speed inherent to helical scan technology, the drive is able to stop and start the tape quickly enough to avoid the need to backhitch.

Market context
The VXA format competes mainly against the DDS and DLT-IV formats.

Overview

VXA-3
Exabyte released two different product lines based on VXA-3 technology: VXA-320 in 2005 and VXA-172 in 2006. VXA-172 drives are limited to 86 GB per tape cartridge, but can be unlocked (for a fee) to remove the limit. They are otherwise the same. VXA-3 was the first helical scan system in production to feature thin-film MR heads.

Notes
Capacity figures are for uncompressed data. Exabyte assumes a 2x compression factor in its marketing material.

Media
Media was released in two families (V and X), with different capacities based on the length of the tape and the drive it is being used in.

References

External links
 ECMA 316: Specification of VXA-1.
 ECMA 222: Specification of ALDC, the data compression standard for VXA-1.
 VXA Alliance

Computer storage tape media Ecma standards
618536
https://en.wikipedia.org/wiki/Telesoftware
Telesoftware
The term telesoftware was coined by W.J.G. Overington, who first proposed the idea; it literally means "software at a distance". It often refers to the transmission of programs for a microprocessor or home computer via broadcast teletext, though teletext was merely a convenient way to implement what had previously been conceived as a theoretical broadcasting concept: producing local interactivity without the need for a return information link to a central computer. The invention arose as a spin-off from research on function generators for a hybrid computer system for use in simulation of heat transfer in food preservation, and thus from outside the broadcasting research establishments. Software bytes are presented to a terminal as pairs of standard teletext characters, thus utilizing an existing and well-proven broadcasting system.

History
Telesoftware was pioneered in the UK during the 1970s and 1980s, and a paper on the subject was presented by R.H. Vivian (IBA) and W.J.G. Overington at the 1978 International Broadcasting Convention. The world's first test broadcast took place on ITV Oracle in February 1977, though there was no equipment available to use the software at that time. The broadcast simply produced a display of the encoded software, for a Signetics 2650 microprocessor, on a teletext television. However, the fact that the broadcast took place gave the concept practical credibility as something realistically possible for the future. At the 1978 International Broadcasting Convention, a demonstration of telesoftware working from a live feed of ITV Oracle teletext was presented on an exhibition stand by Mr Hedger, the Oracle signal being carried within the ITV signal. At one stage the ITV signal was routed via a communications satellite as part of a television demonstration, and the opportunity was taken to test telesoftware using the satellite-routed signal; it worked well. Also, a display maquette with the title Telesoftware Tennis had been broadcast live for a few minutes on ITV Oracle in November or December 1976, during a discussion of the future possibilities for telesoftware. The development in the 21st century of techniques for retrieving teletext pages from super-VHS recordings means that if anyone was recording the ITV television broadcast on super-VHS videotape at that time, then that maquette page could potentially be recovered from the tape by teletext archaeologists, as potentially could the broadcasts from 1977 mentioned above and the broadcasts made in 1978 at the time of the International Broadcasting Convention. Such a technique has already been used to recover and archive telesoftware broadcasts made in the 1980s by the BBC. During that time, software was broadcast at various times on all of the (then) four terrestrial TV channels. Telesoftware and tutorials were available on Ceefax (the BBC teletext service) for the BBC Micro via its teletext adapter between 1983 and 1989, and each item was generally transmitted for a period of one week. The BBC Ceefax telesoftware service was managed by Jeremy Brayshaw. Most of the telesoftware programming tutorials were written by Gordon Horsington, and they, as well as most of the software, are still available from online telesoftware archives. Downloading could take place from Friday evening to the following Thursday evening.
As the updating took place on a Friday, users were advised not to attempt to download software between 9 am and 7 pm on Fridays. Other channels provided software for several other computers via a range of adapters and set-top boxes. The same delivery system was also used to deliver weather images from the Meteosat satellite for download. Although none of the early telesoftware initiatives survived, many of the techniques are now at the heart of the latest digital television systems. Various archives of BBC Ceefax telesoftware are preserved on the internet.

See also
 Multimedia Home Platform

References

BBC computer literacy projects Computer-related introductions in 1983
43611717
https://en.wikipedia.org/wiki/Elmer%20Wilhoite
Elmer Wilhoite
Elmer Ellsworth Wilhoite (May 3, 1930 – August 19, 2008) was an American football player and boxer. He played college football for the USC Trojans and was a consensus selection at the guard position on the 1952 College Football All-America Team.

Early years
Wilhoite was born in Merced County, California, in 1930. He attended Merced High School, where he was a star athlete in the shot put, throwing the 12-pound shot 56 feet, 6 inches and breaking a high school athletic record set by Bob Mathias.

USC
Wilhoite enrolled at the University of Southern California and, while there, played at the guard position on the USC football team in 1951 and 1952. In the 1952 UCLA–USC rivalry game, both teams were undefeated and untied and played for a spot in the 1953 Rose Bowl. Wilhoite set up the game-winning touchdown when he intercepted a Paul Cameron pass and returned it 72 yards to UCLA's eight-yard line. The Trojans won the 1953 Rose Bowl by a 7–0 score over Wisconsin, and Wilhoite was a consensus selection for the 1952 College Football All-America Team.

Later years
Wilhoite was selected by the Cleveland Browns in the 12th round of the 1953 NFL Draft, but he instead pursued a career as a boxer. In his first fight, he won by a knockout after 45 seconds of the first round against Humphrey Jiminez at Merced, California. In September 1953, he won his second professional bout by a second-round technical knockout (TKO) over Clayton Mann in a match at the Olympic Auditorium in Los Angeles. In 1954, he tried out with the Toronto Argonauts of the Canadian Football League (CFL), but he was released in early August 1954. He signed with the Baltimore Colts in December 1954, but did not make the team in 1955. In July 1957, he signed with the Calgary Stampeders of the CFL. Wilhoite returned briefly to boxing in 1958. He later operated H&S International, a salvage company. Wilhoite was married to Judy Berg and had a son, Edward, in addition to her sons, Anthony and Bill, from a previous marriage. The couple later divorced. He had six grandchildren: Travis Wilhoite (Edward), Courtney, Kyle, Angela and Rachel Vassalo (Bill), and Lily Vassalo (Anthony). Wilhoite died in 2008 at Hawthorne, Nevada.

References

1930 births 2008 deaths All-American college football players American football guards USC Trojans football players People from Merced, California Players of American football from California
305174
https://en.wikipedia.org/wiki/3B%20series%20computers
3B series computers
The 3B series computers are a line of minicomputers produced from the late 1970s by AT&T Computer Systems' Western Electric subsidiary for use with the company's UNIX operating system. The line primarily consists of the models 3B20, 3B5, 3B15, 3B2, and 3B4000. The series is notable for controlling a series of electronic switching systems for telecommunication, for general computing purposes, and for serving as the historical software porting base for commercial UNIX.

History
The first 3B was installed in Fresno, California at Pacific Bell. Within two years, several hundred were in place throughout the Bell System. Some of the units came with "small, slow hard disks."

3B high-availability processors
The original series of 3B computers includes the models 3B20C, 3B20D, 3B21D, and 3B21E. The 3B20C/3B20D/3B21D/3B21E systems were 32-bit microprogrammed duplex (redundant) high-availability processor units running a real-time operating system. They were first produced in the late 1970s at the WECo factory in Lisle, Illinois, for telecommunications applications including the 4ESS and 5ESS systems. They use the Duplex Multi Environment Real Time (DMERT) operating system, which was renamed UNIX-RTR (Real Time Reliable) in 1982. The Data Manipulation Unit (DMU) provided arithmetic and logic operations on 32-bit words using AMD 2901 bipolar 4-bit processor elements. The first 3B20D was called the Model 1. Each processor's control unit consisted of two frames of circuit packs. The whole duplex system required many seven-foot frames of circuit packs plus at least one tape drive frame (most telephone companies wrote billing data on magnetic tapes), and many washing-machine-sized disk drives. For training and lab purposes a 3B20D could be divided into two "half-duplex" systems. A 3B20S consisted of most of the same hardware as a half-duplex but used a completely different operating system. The 3B20C was briefly available as a high-availability fault-tolerant multiprocessing general purpose computer in the commercial market in 1984. The 3B20E was created to provide a cost-reduced 3B20D for small offices that did not expect such high availability. It consisted of a virtual "emulated" 3B20D environment running on a stand-alone general purpose computer; the system was ported to many computers but primarily runs on the Sun Microsystems Solaris environment. There have been many improvements to the 3B20D UNIX-RTR system in both software and hardware throughout the 1980s, 1990s, and 2000s. Innovations included disk independent operation (DIOP: the ability to continue essential software processing such as telecommunications after duplex failure of redundant essential disks); off-line boot (the ability to split in half and boot the out-of-service half, typically on a new software release); and switch forward (switching processing to the previously out-of-service half). The processor was re-engineered and renamed in 1992 as the 3B21D. It is still in use as a component of many Alcatel-Lucent products such as the 2STP signal transfer point, and the 4ESS and 5ESS switches (both wireline and wireless).

Minicomputers
The general purpose family of 3B computer systems includes the 3B2, 3B5, 3B15, 3B20S, and 3B4000. These computers were named after the successful 3B20D. The 3B20S (simplex) ran the UNIX operating system; it was developed at Bell Labs and produced by WECo in 1982 for general purpose internal Bell System use, and introduced in 1984 for the minicomputer market.
The other 3B computers also ran UNIX System V from AT&T.

3B20S
The 3B20S had virtually the same hardware as the 3B20D, but only one unit instead of two. The machine was approximately the size of a large refrigerator, requiring a minimum of 170 square feet of floor space. It was in use at the 1984 Summer Olympics, where around twelve 3B20S machines served the email requirements of the Electronic Messaging System, which was built to replace the human-based messaging system of earlier Olympiads. The system connected around 1800 user terminals and 200 printers. The 3B20A was an enhanced version of the 3B20S, adding a second processing unit working in parallel as a multiprocessor unit.

3B5
The 3B5 was built using the older Western Electric WE 32000 32-bit microprocessor. The initial versions had discrete memory management unit hardware using gate arrays, and supported segment-based memory translation. I/O was programmed using memory-mapped techniques. The machine was approximately the size of a dishwasher, though adding the reel-to-reel tape drive increased its size. These computers used SMD hard drives.

3B15
The 3B15, using the WE 32100, was the faster follow-on to the 3B5 with a similar large form factor.

3B4000
The 3B4000 was a high-availability server based on a "snugly-coupled" architecture using the WE series 32x00 32-bit processor. Known internally as "Apache", the 3B4000 was a follow-on to the 3B15 and initially used a 3B15 as a master processor. Developed in the mid-1980s at the Lisle Indian Hill West facility by the High Performance Computer Development Lab, the system consisted of multiple high-performance (for the time) processor boards – adjunct processing elements (APEs) and adjunct communication elements (ACEs). These adjunct processors ran a customized UNIX kernel with drivers for SCSI (APEs) and serial boards (ACEs). The processing boards were interconnected by a redundant low-latency parallel bus (ABUS) running at 20 MB/s. The UNIX kernels running on the adjunct processors were modified to allow the fork/exec of processes across processing units. The system calls and peripheral drivers were also extended to allow processes to access remote resources across the ABUS. Since the ABUS was hot-swappable, processors could be added or replaced without shutting down the system. If one of the adjunct processors failed during operation, the system could detect and restart programs that had been running on the failed element. The 3B4000 was capable of significant expansion; one test system (including storage) occupied 17 mid-height cabinets. Generally, the performance of the system increased linearly with additional processing elements; however, the lack of a true shared memory capability meant that applications which relied heavily on this feature had to be rewritten to avoid a severe performance penalty.

Microcomputers

3B2
The 3B2 was introduced in 1984 using the WE 32000 32-bit microprocessor at 8 MHz with memory management chips that supported demand paging. Uses included the Switching Control Center System. The 3B2 Model 300 could support up to 18 users. The 300 was soon supplanted by the 3B2/310 running at 10 MHz, which featured the WE 32100 CPU, as did later models. The Model 400 allowed more peripheral slots and more memory, and had a built-in 23 MB QIC tape drive managed by a floppy disk controller (nicknamed the "floppy tape"). These three models used standard MFM hard disk drives.
There were also Model 100 and Model 200 3B2 systems. The 3B2/600, running at 18 MHz, offered an improvement in performance and capacity: it featured a SCSI controller for the 60 MB QIC tape and two internal full-height disk drives. The 600 was approximately twice as tall as a 400, and was oriented with the tape and floppy disk drives opposite the backplane (instead of at a right angle to it, as on the 3xx, 4xx and later 500 models). Early models used an internal Emulex card to interface the SCSI controller with ESDI disks, with later models using SCSI drives directly. The 3B2/500 was the next model to appear, essentially a 3B2/600 with enough components removed to fit into a 400 case; one internal disk drive and several backplane slots were sacrificed in this conversion. Unlike the 600, which because of its two large fans was quite loud, the 500 was tolerable in an office environment, like the 400. The 3B2/700 was an uprated version of the 600 featuring a slightly faster processor (WE 32200 at 22 MHz), and the 3B2/1000 was an additional step in this direction (WE 32200 at 24 MHz).

3B1 desktop workstation
In 1985 AT&T introduced a desktop computer, officially named the AT&T UNIX PC, that was often dubbed the 3B1. However, this workstation was unrelated in hardware to the 3B line, and was based on the Motorola 68010 microprocessor. It ran a derivative of Unix System V Release 2 by Convergent Technologies. The system, which was also known as the PC-7300, was tailored for use as a productivity tool in office environments and as an electronic communication center.

See also
 AT&T Computer Systems
 Altos Computer Systems

References

External links
 3B2 manuals
 AT&T 3B2/3B5 Computer Systems

Alcatel-Lucent AT&T computers Minicomputers 32-bit computers
18381960
https://en.wikipedia.org/wiki/Serif%20Europe
Serif Europe
Serif (Europe) Ltd is a privately owned British developer and publisher of software. It provides software and associated products directly to customers through its website and contact centre in the United Kingdom, and through retailers. The wider Serif Group Ltd also operates a question-and-answer support website called CommunityPlus and a now-closed gifts and gadgets website called Gizoo.

History
Serif was founded in 1987 by a small team of software engineers, with the objective of creating lower-cost alternatives to existing desktop publishing (DTP) software packages on the Microsoft Windows platform. The first Serif product to be released was called PageStar: a simple, low-cost advertisement layout program for Windows 2.0. This was followed in 1990 by PagePlus (originally for Windows 3.0), which would go on to win "Best Software" at the Computer Shopper Awards 2014. In subsequent years, these were accompanied by other software products in the "Plus" range, including DrawPlus (1994), PhotoPlus (1999), WebPlus (2000), and MoviePlus (2003). In 1996, Serif was acquired by the American company Vizacom (formerly known as Allegro New Media); however, ownership was sold back to Serif senior management in 2001. The successor to the DrawPlus product, Affinity Designer (a vector art and design package), was launched in 2014 for macOS. It was Serif's first product for macOS, and had been written from scratch specifically for it. This was followed in 2015 by the second Affinity product (and successor to PhotoPlus), Affinity Photo (a photo editing and design package). In 2016, following the release of Affinity Designer and Affinity Photo for Windows, Serif ceased development of its "Plus" product range to focus exclusively on the Affinity product range. Affinity Publisher, the successor to PagePlus and the third addition to the Affinity product line, was released in 2019. There are no current plans by Serif to replace the WebPlus and MoviePlus product lines in the Affinity range.

Products
The following are all software packages, for the following applications:

Current products
 Affinity Designer: Vector graphic design software for macOS, Windows and iPad
 Affinity Photo: Digital image editing software for macOS, Windows and iPad
 Affinity Publisher: Desktop publishing software for macOS and Windows

Legacy products (no longer for sale or maintained)
 PagePlus: Desktop publishing software for Windows (replaced by Affinity Publisher)
 DrawPlus: Graphic design software for Windows (replaced by Affinity Designer)
 PhotoPlus: Digital image editing software for Windows (replaced by Affinity Photo)

Discontinued products
 WebPlus: Website design software for Windows
 MoviePlus: Digital video editing software for Windows
 Digital Scrapbook Artist: Digital scrapbooking software for Windows
 CraftArtist: Digital scrapbooking software for Windows
 PanoramaPlus: Image stitching software for Windows
 PhotoStack: Image editing and organisation software for Windows
 AlbumPlus: Image organizer software for Windows
 Scan, Stitch, and Share: Document mosaicing software for Windows
 FontManager: Font management software for Windows

References

External links
 Affinity website
 Serif CommunityPlus support website

Companies based in Nottingham Companies established in 1987 Software companies of the United Kingdom British brands
502107
https://en.wikipedia.org/wiki/Xargs
Xargs
xargs (short for "eXtended ARGumentS") is a command on Unix and most Unix-like operating systems used to build and execute commands from standard input. It converts input from standard input into arguments to a command. Some commands such as grep and awk can take input either as command-line arguments or from the standard input. However, others such as cp and echo can only take input as arguments, which is why xargs is necessary. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system.

Examples
One use case of the xargs command is to remove a list of files using the rm command. POSIX systems have a limit (ARG_MAX) on the maximum total length of the command line, so a command such as

rm /path/*

or

rm $(find /path -type f)

may fail with an error message of "Argument list too long" (meaning that the exec system call's limit on the length of a command line was exceeded). (The latter invocation is also incorrect, as it may expand globs in the output.) This can be rewritten using the xargs command to break the list of arguments into sublists small enough to be acceptable:

find /path -type f -print | xargs rm

In the above example, the find utility feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. xargs can also be used to parallelize operations with the -P maxprocs argument, which specifies how many parallel processes should be used to execute the commands over the input argument lists. However, the output streams may not be synchronized. This can be overcome by using an --output file argument where possible, and then combining the results after processing. The following example keeps 24 processes running at a time, launching a new one whenever one finishes:

find /path -name '*.foo' | xargs -P 24 -I '{}' /cpu/bound/process '{}' -o '{}'.out

xargs often covers the same functionality as the command substitution feature of many shells, denoted by the backquote notation (`...` or $(...)). xargs is also a good companion for commands that output long lists of files such as find, locate and grep, but only if one uses -0 (or equivalently --null), since xargs without -0 deals badly with file names containing ', " and space. GNU Parallel is a similar tool that offers better compatibility with find, locate and grep when file names may contain ', ", and space (newline still requires -0).

Placement of arguments

-I: single argument
The xargs command offers options to insert the listed arguments at some position other than the end of the command line. The -I option to xargs takes a string that will be replaced with the supplied input before the command is executed. A common choice is %.

$ mkdir ~/backups
$ find /path -type f -name '*~' -print0 | xargs -0 -I % cp -a % ~/backups

The string to replace may appear multiple times in the command part. Using -I at all limits the number of lines used each time to one.

sh -c: any number
Another way to achieve a similar effect is to use a shell as the launched command, and deal with the complexity in that shell, for example:

$ mkdir ~/backups
$ find /path -type f -name '*~' -print0 | xargs -0 sh -c 'for filename; do cp -a "$filename" ~/backups; done' sh

The word sh at the end of the line is for the POSIX shell to fill in for $0, the "executable name" part of the positional parameters (argv).
If it were not present, the name of the first matched file would instead be assigned to $0 and that file would not be copied to ~/backups. Any other word can be used to fill in that blank. Since cp accepts multiple files at once, one can also simply do the following:

$ find /path -type f -name '*~' -print0 | xargs -0 sh -c 'if [ $# -gt 0 ]; then cp -a "$@" ~/backups; fi' sh

This script runs cp with all the files given to it whenever any arguments are passed. Doing so is more efficient, since only one invocation of cp is done for each invocation of sh.

Separator problem
Many Unix utilities are line-oriented. These may work with xargs as long as the lines do not contain ', ", or a space. Some of the Unix utilities can use NUL as record separator (e.g. Perl (requires -0 and \0 instead of \n), locate (requires using -0), find (requires using -print0), grep (requires -z or -Z), sort (requires using -z)). Using -0 for xargs deals with the problem, but many Unix utilities cannot use NUL as separator (e.g. head, tail, ls, echo, sed, tar -v, wc, which). People often forget this and assume xargs is also line-oriented, which is not the case (by default xargs separates on newlines and on blanks within lines; substrings with blanks must be single- or double-quoted). The separator problem is illustrated here:

# Make some targets to practice on
touch important_file
touch 'not important_file'
mkdir -p '12" records'
find . -name not\* | tail -1 | xargs rm
find \! -name . -type d | tail -1 | xargs rmdir

Running the above will cause important_file to be removed but will remove neither the directory called 12" records, nor the file called not important_file. The proper fix is to use the GNU-specific -print0 option, but tail (and other tools) do not support NUL-terminated strings:

# use the same preparation commands as above
find . -name not\* -print0 | xargs -0 rm
find \! -name . -type d -print0 | xargs -0 rmdir

When using the -print0 option, entries are separated by a null character instead of an end-of-line. This is equivalent to the more verbose command:

find . -name not\* | tr \\n \\0 | xargs -0 rm

or shorter, by switching xargs to (non-POSIX) line-oriented mode with the -d (delimiter) option:

find . -name not\* | xargs -d '\n' rm

but in general using -0 with -print0 should be preferred, since newlines in filenames are still a problem. For Unix environments where xargs supports neither the -0 nor the -d option (e.g. Solaris, AIX), the POSIX standard states that one can simply backslash-escape every character:

find . -name not\* | sed 's/\(.\)/\\\1/g' | xargs rm

GNU Parallel is an alternative to xargs that is designed to have the same options, but is line-oriented; using GNU Parallel instead, the above would work as expected. Alternatively, one can avoid using xargs at all, either by using GNU Parallel or by using the -exec functionality of find.

Operating on a subset of arguments at a time
One might be dealing with commands that can only accept one or maybe two arguments at a time. For example, the diff command operates on two files at a time. The -n option to xargs specifies how many arguments at a time to supply to the given command. The command will be invoked repeatedly until all input is exhausted. Note that on the last invocation one might get fewer than the desired number of arguments if there is insufficient input.
Use xargs to break up the input into two arguments per line:

$ echo {0..9} | xargs -n 2
0 1
2 3
4 5
6 7
8 9

In addition to running based on a specified number of arguments at a time, one can also invoke a command for each line of input with the -L 1 option. One can use an arbitrary number of lines at a time, but one is most common. Here is how one might diff every git commit against its parent:

$ git log --format="%H %P" | xargs -L 1 git diff

Encoding problem
The argument separator processing of xargs is not the only problem with using the xargs program in its default mode. Most Unix tools which are often used to manipulate filenames (for example sed, basename, sort, etc.) are text processing tools. However, Unix path names are not really text. Consider a path name /aaa/bbb/ccc. The /aaa directory and its bbb subdirectory can in general be created by different users with different environments. That means these users could have different locale setups, and that aaa and bbb do not even necessarily have to have the same character encoding. For example, aaa could be in UTF-8 and bbb in Shift JIS. As a result, an absolute path name in a Unix system may not be correctly processable as text under a single character encoding. Tools which rely on their input being text may fail on such strings. One workaround for this problem is to run such tools in the C locale, which essentially processes the bytes of the input as-is. However, this will change the behavior of the tools in ways the user may not expect (for example, some of the user's expectations about case-folding behavior may not be met).

References

External links
 Linux Xargs Command Tutorial With Examples
 Manual pages

Unix text processing utilities Unix SUS2008 utilities
55726925
https://en.wikipedia.org/wiki/Mar%C3%ADa-Esther%20Vidal
María-Esther Vidal
María-Esther Vidal Serodio is a Venezuelan professor who has been at the Computer Science Department of the Simón Bolívar University since 2005 and served as assistant dean for research and development in applied science and engineering from 2011, on leave since 2015. She leads the Semantic Web Group, which includes members from multiple fields such as databases, distributed systems and artificial intelligence, and whose research focuses on solving problems from those fields.

Career
Vidal graduated as a computer engineer from the Simón Bolívar University in 1987, earned a master's degree in computer science there in 1991, and a doctorate in computer science in 2000. From 1995 to 1999 she was a faculty research assistant at the University of Maryland Institute for Advanced Computer Studies (UMIACS). In 2011 she became director of faculty development at the Simón Bolívar University. Since 1988 she has advised and mentored more than 80 students: 65 undergraduate, 10 master's, and 7 PhD. As a PhD student, she worked with Louiqa Raschid from the University of Maryland, College Park. In 2020, Vidal received an honorific mention in Germany as one of the 50 most influential personalities in computer science engineering of the last decade, and was ranked fourth among the women researchers on the list. She is currently the director of the Scientific Data Management Group of the German National Library of Science and Technology and a member of the L3S Research Centre of Leibniz University Hannover. Vidal has published more than 160 peer-reviewed papers on the semantic web, databases, bioinformatics, and artificial intelligence, co-authored one monograph, and co-edited books and journal special issues. She has addressed the challenges of creating knowledge graphs to support precision medicine; these techniques are being applied in projects like iASiS and BigMedylitics, and for over 15 years she has participated in international projects in collaboration with Louiqa Raschid from the University of Maryland, College Park. She is also part of various editorial boards and has been general chair, co-chair, senior member, and reviewer of several scientific events and journals, a supervisor of the MSCA-ETN projects WDAqua and NoBIAS, and a visiting professor at universities such as the University of Maryland and KIT Karlsruhe.

References

External links
 Curriculum vitae (PDF)
 Research profile of Vidal at TIB

Living people Simón Bolívar University (Venezuela) faculty Venezuelan women scientists Semantic Web people Year of birth missing (living people) Venezuelan women educators
910366
https://en.wikipedia.org/wiki/Du%20%28Unix%29
Du (Unix)
du (abbreviated from disk usage) is a standard Unix program used to estimate file space usage—space used under a particular directory or files on a file system. A Windows command-line version of this program is part of the Sysinternals suite by Mark Russinovich.

History
The du utility first appeared in version 1 of AT&T UNIX. The version of du bundled in GNU coreutils was written by Torbjorn Granlund, David MacKenzie, Paul Eggert, and Jim Meyering. The command is also available for FreeDOS.

Specification
By default, the Single UNIX Specification (SUS) specifies that du is to display the file space allocated to each file and directory contained in the current directory. Links will be displayed as the size of the link file, not what is being linked to; the size of the content of directories is displayed, as expected. As du reports allocation space and not absolute file space, the amount of space on a file system shown by du may vary from that shown by df if files have been deleted but their blocks not yet freed. Also, the minfree setting, which reserves data blocks for the filesystem and super-user processes, creates a discrepancy between the total blocks and the sum of used and available blocks. The minfree setting is usually set to about 5% of the total filesystem size. For more information, see the coreutils FAQ.

Usage
du takes a single argument, specifying a pathname for du to work on; if it is not specified, the current directory is used. The SUS mandates the following options:
 -a: in addition to the default output, include information for each non-directory entry
 -c: display a grand total of the disk usage found by the other arguments
 -d #: the depth at which summing should occur. -d 0 sums at the current level, -d 1 sums at the subdirectory level, -d 2 at sub-subdirectories, etc.
 -H: calculate disk usage for link references specified on the command line
 -k: show sizes as multiples of 1024 bytes, not 512-byte units
 -L: calculate disk usage for link references anywhere
 -s: report only the sum of the usage in the current directory, not for each directory therein contained
 -x: only traverse files and directories on the device on which the pathname argument is specified
Other Unix and Unix-like operating systems may add extra options. For example, BSD and GNU du specify a -h option, displaying disk usage in a format easier to read by the user, adding units with the appropriate SI prefix (e.g. 10 MB).

Examples
Sum of directories (-s) in kilobytes (-k):

$ du -sk *
152304 directoryOne
1856548 directoryTwo

Sum of directories (-s) in human-readable format (-h: byte, kilobyte, megabyte, gigabyte, terabyte and petabyte):

$ du -sh *
149M directoryOne
1.8G directoryTwo

Disk usage of all subdirectories and files, including hidden files, within the current directory (sorted by filesize):

$ du -sk .[!.]* * | sort -n

Disk usage of all subdirectories and files, including hidden files, within the current directory (sorted by reverse filesize):

$ du -sk .[!.]* * | sort -nr

The weight (size) of each subdirectory under the current directory (-d 1) with a sum total at the end (-c), all displayed in human-readable format (-h):

$ du -d 1 -c -h

or with du from GNU:

$ du --max-depth=1 -c -h

The weight (size) of subdirectories under the root directory (-d 1, trailing /) with a sum total at the end (-c), all displayed in human-readable format (-h), without traversing into other filesystems (-x).
This is useful when /var, /tmp, or other directories are on separate storage from the root directory:

$ du -d 1 -c -h -x /

or with du from GNU:

$ du --max-depth=1 -c -h -x /

See also
 List of Unix commands
 Filelight
 Disk Usage Analyzer
 ncdu

References

External links

Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands Disk usage analysis software Unix file system-related software
20914512
https://en.wikipedia.org/wiki/Lateral%20computing
Lateral computing
Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking has been made popular by Edward de Bono; it is a technique applied to generate creative ideas and solve problems. Similarly, by applying lateral-computing techniques to a problem, it can become much easier to arrive at a computationally inexpensive, easy to implement, efficient, innovative or unconventional solution. The traditional or conventional approach to solving computing problems is either to build mathematical models or to use an IF-THEN-ELSE structure. For example, a brute-force search is used in many chess engines, but this approach is computationally expensive and sometimes arrives at poor solutions. It is for problems like this that lateral computing can be useful. The truck backup problem is a simple illustration: it is one of the difficult tasks for traditional computing techniques, yet it has been efficiently solved by the use of fuzzy logic (which is a lateral-computing technique). Lateral computing sometimes arrives at a novel solution for a particular computing problem by using models of how living beings such as humans, ants, and honeybees solve problems, of how pure crystals are formed by annealing, of the evolution of living beings, or of quantum mechanics.

From lateral thinking to lateral computing
Lateral thinking is a technique for creative thinking for solving problems. The brain, as the center of thinking, has a self-organizing information system: it tends to create patterns, and the traditional thinking process uses them to solve problems. The lateral thinking technique proposes to escape from this patterning to arrive at better solutions through new ideas. Provocative use of information processing is the basic underlying principle of lateral thinking. The provocative operator (PO) is something which characterizes lateral thinking: its function is to generate new ideas by provocation and to provide an escape route from old ideas. It creates a provisional arrangement of information. Water logic contrasts with traditional or "rock" logic: water logic has boundaries which depend on circumstances and conditions, while rock logic has hard boundaries. Water logic, in some ways, resembles fuzzy logic.

Transition to lateral computing
Lateral computing makes provocative use of information processing similar to lateral thinking. This can be explained with the example of evolutionary computing, a very useful lateral-computing technique. Evolution proceeds by change and selection: while random mutation provides change, selection is through survival of the fittest. The random mutation works as provocative information processing and provides a new avenue for generating better solutions to the computing problem. The term "lateral computing" was first proposed by Prof. CR Suthikshn Kumar, and the First World Congress on Lateral Computing, WCLC 2004, was organized with international participants during December 2004.
Lateral computing takes analogies from real-world examples such as:
 How the slow cooling of a hot gaseous state results in pure crystals (annealing)
 How neural networks in the brain solve such problems as face and speech recognition
 How simple insects such as ants and honeybees solve some sophisticated problems
 How the evolution of human beings from molecular life forms is mimicked by evolutionary computing
 How living organisms defend themselves against diseases and heal their wounds
 How electricity is distributed by grids

Differentiating factors of lateral computing:
 Does not directly approach the problem through mathematical means.
 Uses indirect models or looks for analogies to solve the problem.
 Is radically different from what is in vogue, such as using "photons" for computing in optical computing. This is rare, as most conventional computers use electrons to carry signals.
Sometimes lateral-computing techniques are surprisingly simple and deliver high-performance solutions to very complex problems. Some of the techniques in lateral computing use "unexplained jumps". These jumps may not look logical; an example is the use of the "mutation" operator in genetic algorithms.

Convention – lateral
It is very hard to draw a clear boundary between conventional and lateral computing. Over a period of time, some unconventional computing techniques become an integral part of mainstream computing, so there will always be an overlap between conventional and lateral computing. Classifying a given computing technique as conventional or lateral is therefore a tough task: the boundaries are fuzzy, and one may approach them with fuzzy sets.

Formal definition
Lateral computing is a fuzzy set of all computing techniques which use an unconventional computing approach. Hence lateral computing includes those techniques which use semi-conventional or hybrid computing. The degree of membership for lateral-computing techniques is greater than 0 in the fuzzy set of unconventional computing techniques. The following brings out some important differentiators for lateral computing.

Conventional computing
 The problem and technique are directly correlated.
 Treats the problem with rigorous mathematical analysis.
 Creates mathematical models.
 The computing technique can be analyzed mathematically.

Lateral computing
 The problem may hardly have any relation to the computing technique used.
 Approaches problems by analogies, such as the human information processing model, annealing, etc.
 Sometimes the computing technique cannot be analyzed mathematically.

Lateral computing and parallel computing
Parallel computing focuses on improving the performance of computers/algorithms through the use of several computing elements (such as processing elements). The computing speed is improved by using several computing elements. Parallel computing is an extension of conventional sequential computing. However, in lateral computing, the problem is solved using unconventional information processing, whether with sequential or parallel computing.

A review of lateral-computing techniques
There are several computing techniques which fit the lateral computing paradigm. Here is a brief description of some of them:

Swarm intelligence
Swarm intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents, interacting locally with their environment, cause coherent functional global patterns to emerge.
SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model. One interesting swarm-intelligence technique is the ant colony algorithm:
 Ants are behaviorally unsophisticated; collectively they perform complex tasks. Ants have highly developed, sophisticated sign-based communication.
 Ants communicate using pheromones; trails are laid that can be followed by other ants.
 Routing problem: ants drop different pheromones used to compute the "shortest" path from source to destination(s); a toy sketch follows this list.
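The following minimal Python sketch illustrates the pheromone mechanism on a toy graph (the graph, evaporation rate, and deposit rule are simplified assumptions chosen for illustration, not a full ant colony optimization implementation):

import random

# Toy pheromone-trail routing on a small weighted graph.
# Two routes lead from A to D: A-B-D (cost 2) and A-C-D (cost 5).
graph = {"A": {"B": 1, "C": 4}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(src, dst):
    """One ant walks from src to dst, preferring edges with more pheromone."""
    path, node = [], src
    while node != dst:
        choices = list(graph[node])
        weights = [pheromone[(node, c)] for c in choices]
        nxt = random.choices(choices, weights)[0]
        path.append((node, nxt))
        node = nxt
    return path

for _ in range(200):
    path = walk("A", "D")
    cost = sum(graph[u][v] for u, v in path)
    for edge in pheromone:                 # pheromone evaporates everywhere
        pheromone[edge] *= 0.95
    for edge in path:                      # and is deposited along the walked
        pheromone[edge] += 1.0 / cost      # path, more strongly for cheap paths

# Pheromone ends up concentrated on the short route A-B-D.
print(sorted(pheromone, key=pheromone.get, reverse=True)[:2])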
Agent-based systems
Agents are encapsulated computer systems that are situated in some environment and are capable of flexible, autonomous action in that environment in order to meet their design objectives. Agents are considered to be autonomous (independent, not controllable), reactive (responding to events), pro-active (initiating actions of their own volition), and social (communicative). Agents vary in their abilities: they can be static or mobile, and may or may not be intelligent. Each agent may have its own task and/or role. Agents, and multi-agent systems, are used as a metaphor to model complex distributed processes. Such agents invariably need to interact with one another in order to manage their inter-dependencies. These interactions involve agents cooperating, negotiating and coordinating with one another. Agent-based systems are computer programs that try to simulate various complex phenomena via virtual "agents" that represent the components of a business system. The behaviors of these agents are programmed with rules that realistically depict how business is conducted. As widely varied individual agents interact in the model, the simulation shows how their collective behaviors govern the performance of the entire system, for instance the emergence of a successful product or an optimal schedule. These simulations are powerful strategic tools for "what-if" scenario analysis: as managers change agent characteristics or "rules", the impact of the change can be easily seen in the model output.

Grid computing
By analogy with the electricity grid, a computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. The applications of grid computing include chip design, cryptographic problems, medical instrumentation, and supercomputing. Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system.

Autonomic computing
The autonomic nervous system governs our heart rate and body temperature, thus freeing our conscious brain from the burden of dealing with these and many other low-level, yet vital, functions. The essence of autonomic computing is self-management, the intent of which is to free system administrators from the details of system operation and maintenance. Four aspects of autonomic computing are:
 Self-configuration
 Self-optimization
 Self-healing
 Self-protection
This is a grand challenge promoted by IBM.

Optical computing
Optical computing uses photons rather than conventional electrons for computing. There are quite a few instances of optical computers and successful uses of them. Conventional logic gates use semiconductors, which use electrons for transporting the signals. In the case of optical computers, the photons in a light beam are used to do computation. There are numerous advantages of using optical devices for computing, such as immunity to electromagnetic interference, large bandwidth, etc.

DNA computing
DNA computing uses strands of DNA to encode the instance of the problem and to manipulate them using techniques commonly available in any molecular biology laboratory, in order to simulate operations that select the solution of the problem if it exists. Since the DNA molecule is also a code, made up of a sequence of four bases that pair up in a predictable manner, many scientists have thought about the possibility of creating a molecular computer. These computers rely on the much faster reactions of DNA nucleotides binding with their complements, a brute-force method that holds enormous potential for creating a new generation of computers that would be 100 billion times faster than today's fastest PC. DNA computing has been heralded as the "first example of true nanotechnology", and even the "start of a new era", which forges an unprecedented link between computer science and life science. Example applications of DNA computing are in the solution of the Hamiltonian path problem, a known NP-complete problem. The number of required lab operations using DNA grows linearly with the number of vertices of the graph. Molecular algorithms have been reported that solve a cryptographic problem in a polynomial number of steps. As is well known, factoring large numbers is a relevant problem in many cryptographic applications.

Quantum computing
In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1 as in a classical bit, but also in states corresponding to a blend or quantum superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. A quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state.

Reconfigurable computing
Field-programmable gate arrays (FPGAs) are making it possible to build truly reconfigurable computers. The computer architecture is transformed by on-the-fly reconfiguration of the FPGA circuitry. The optimal matching between architecture and algorithm improves the performance of the reconfigurable computer. The key feature is hardware performance combined with software flexibility. For several applications, such as fingerprint matching and DNA sequence comparison, reconfigurable computers have been shown to perform several orders of magnitude better than conventional computers.

Simulated annealing
The simulated annealing algorithm is designed by looking at how pure crystals form from a heated gaseous state while the system is cooled slowly. The computing problem is recast as a simulated annealing exercise and the solutions are arrived at. The working principle of simulated annealing is borrowed from metallurgy: a piece of metal is heated (the atoms are given thermal agitation), and then the metal is left to cool slowly. The slow and regular cooling of the metal allows the atoms to slide progressively into their most stable ("minimal energy") positions. (Rapid cooling would have "frozen" them in whatever position they happened to be at that time.) The resulting structure of the metal is stronger and more stable. By simulating the process of annealing inside a computer program, it is possible to find answers to difficult and very complex problems. Instead of minimizing the energy of a block of metal or maximizing its strength, the program minimizes or maximizes some objective relevant to the problem at hand.
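A minimal simulated-annealing sketch in Python (the objective function, cooling schedule, and neighborhood move are arbitrary choices made for the example):

import math
import random

def anneal(objective, x, temp=10.0, cooling=0.99, steps=5000):
    """Minimize objective(x) by simulated annealing (a minimal sketch)."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)        # thermal agitation
        delta = objective(candidate) - objective(x)
        # Always accept improvements; sometimes accept worsenings while hot.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if objective(x) < objective(best):
                best = x
        temp *= cooling                              # slow, regular cooling
    return best

# A bumpy one-dimensional objective; its global minimum lies near x = -0.3.
print(anneal(lambda x: x * x + 3 * math.sin(5 * x) + 3, 8.0))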
Soft computing One of the main components of lateral-computing is soft computing, which approaches problems using the human information-processing model. Soft computing comprises fuzzy logic, neuro-computing, evolutionary computing, machine learning, and probabilistic-chaotic computing. Neuro computing Instead of solving a problem by creating a non-linear equation model of it, the biological neural network analogy is used to solve the problem. The neural network is trained like a human brain to solve a given problem. This approach has become highly successful in solving some pattern recognition problems. Evolutionary computing The genetic algorithm (GA) resembles natural evolution and provides a general-purpose optimization technique. Genetic algorithms start with a population of chromosomes which represent the various solutions. The solutions are evaluated using a fitness function, and a selection process determines which solutions are used in the competition process. New solutions are created using evolutionary principles such as mutation and crossover. These algorithms are highly successful in solving search and optimization problems. Fuzzy logic Fuzzy logic is based on the fuzzy set concepts proposed by Lotfi Zadeh. The concept of degree of membership is central to fuzzy sets: fuzzy sets differ from crisp sets in that they allow an element to belong to a set to a degree (its degree of membership). This approach finds good applications in control problems, and fuzzy logic has found an enormous range of applications and a substantial market presence in consumer electronics such as washing machines, microwave ovens, mobile phones, televisions, and camcorders. Probabilistic/chaotic computing Probabilistic computing engines use, for example, probabilistic graphical models such as Bayesian networks. Such computational techniques are referred to as randomization, yielding probabilistic algorithms. When interpreted as a physical phenomenon through classical statistical thermodynamics, such techniques lead to energy savings that are proportional to the probability p with which each primitive computational step is guaranteed to be correct (or, equivalently, to the probability of error, 1 - p). Chaotic computing is based on chaos theory. Fractals Fractals are objects displaying self-similarity at different scales. Fractal generation involves small iterative algorithms. Fractals have dimensions greater than their topological dimensions; the length of a fractal is infinite, and its size cannot be measured. A fractal is described by an iterative algorithm, unlike a Euclidean shape, which is given by a simple formula. There are several types of fractals, and the Mandelbrot set is very popular. Fractals have found applications in image processing, image compression, music generation, computer games, etc. The Mandelbrot set is a fractal named after its creator, Benoit Mandelbrot. Unlike other fractals, even though the Mandelbrot set is self-similar at magnified scales, its small-scale details are not identical to the whole; that is, the Mandelbrot set is infinitely complex. Yet the process of generating it is based on an extremely simple equation. The Mandelbrot set M is a collection of complex numbers, computed by iteratively testing the Mandelbrot equation, Z(n+1) = Z(n)^2 + C, where C is a constant; if the iteration remains bounded (converges) for a chosen point, that point belongs to M.
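The conventional escape-time formulation fixes Z(0) = 0 and asks whether the iteration stays bounded for a given constant C; a hedged C sketch of that membership test follows, with an arbitrary iteration cutoff standing in for "convergence".

```c
#include <complex.h>
#include <stdio.h>

/* Escape-time test for Mandelbrot membership: iterate z = z*z + c
   from z = 0; if |z| stays bounded for max_iter steps, c is taken
   to be in the set. max_iter = 1000 is an arbitrary cutoff. */
static int in_mandelbrot(double complex c, int max_iter) {
    double complex z = 0;
    for (int i = 0; i < max_iter; i++) {
        z = z * z + c;
        if (cabs(z) > 2.0)   /* |z| > 2 guarantees divergence */
            return 0;
    }
    return 1;
}

int main(void) {
    printf("%d\n", in_mandelbrot(-1.0 + 0.0 * I, 1000)); /* 1: inside  */
    printf("%d\n", in_mandelbrot(1.0 + 1.0 * I, 1000));  /* 0: outside */
    return 0;
}
```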
Randomized algorithm A randomized algorithm makes arbitrary (random) choices during its execution, which allows a saving in execution time at the beginning of a program. The disadvantage of this method is the possibility that an incorrect solution will occur; a well-designed randomized algorithm will have a very high probability of returning a correct answer. The two categories of randomized algorithms are Monte Carlo algorithms and Las Vegas algorithms. Consider an algorithm to find the kth element of an array. A deterministic approach would be to choose a pivot element near the median of the list and partition the list around that element. The randomized approach to this problem is to choose a pivot at random, thus saving time at the beginning of the process. Like approximation algorithms, randomized algorithms can be used to solve tough NP-complete problems more quickly. An advantage over approximation algorithms, however, is that a randomized algorithm will eventually yield an exact answer if executed enough times.
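A minimal C sketch of this randomized selection (commonly called quickselect) is shown below; the array contents and the value of k are illustrative. Because it always returns the exact kth smallest element, this is a Las Vegas algorithm: only its running time, not its answer, is random.

```c
#include <stdio.h>
#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Randomized selection of the kth smallest element (0-based).
   Choosing the pivot at random avoids the cost of finding a good
   pivot deterministically; expected running time is O(n). */
static int quickselect(int *a, int lo, int hi, int k) {
    while (lo < hi) {
        /* Random pivot: the defining step of the randomized version. */
        int p = lo + rand() % (hi - lo + 1);
        swap(&a[p], &a[hi]);
        int store = lo;                /* Lomuto partition around a[hi] */
        for (int i = lo; i < hi; i++)
            if (a[i] < a[hi])
                swap(&a[i], &a[store++]);
        swap(&a[store], &a[hi]);
        if (k == store) return a[k];
        if (k < store) hi = store - 1; else lo = store + 1;
    }
    return a[lo];
}

int main(void) {
    int a[] = {7, 2, 9, 4, 1, 8, 3};
    srand(1);
    /* The 3rd smallest (k = 2) of the array above is 3. */
    printf("%d\n", quickselect(a, 0, 6, 2));
    return 0;
}
```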
Machine learning Human beings and animals learn new skills, languages, and concepts. Similarly, machine learning algorithms provide the capability to generalize from training data. There are two classes of machine learning (ML): supervised ML and unsupervised ML. One well-known machine learning technique is the backpropagation algorithm, which mimics how humans learn from examples. The training patterns are repeatedly presented to the network; the error is back-propagated and the network weights are adjusted using gradient descent. The network converges through several hundred iterative computations. Support vector machines This is another class of highly successful machine learning techniques, successfully applied to tasks such as text classification, speaker recognition, and image recognition. Example applications There are several successful applications of lateral-computing techniques. Here is a small set of applications that illustrates lateral computing: Bubble sorting: Here the computing problem of sorting is approached with an analogy of bubbles rising in water, by treating the numbers as bubbles and floating them to their natural positions. Truck backup problem: This is an interesting problem of reversing a truck and parking it at a particular location. Traditional computing techniques have found it difficult to solve this problem, but it has been successfully solved by fuzzy systems. Balancing an inverted pendulum: This problem involves balancing an inverted pendulum; it has been efficiently solved by neural networks and fuzzy systems. Smart volume control for mobile phones: The volume control in mobile phones depends on the background noise level, noise class, the hearing profile of the user, and other parameters. The measurement of noise and loudness levels involves imprecision and subjective measures. The successful use of a fuzzy logic system for volume control in mobile handsets has been demonstrated. Optimization using genetic algorithms and simulated annealing: Problems such as the traveling salesman problem have been shown to be NP-complete. Such problems are solved using algorithms that benefit from heuristics; applications include VLSI routing, partitioning, etc. Genetic algorithms and simulated annealing have been successful in solving such optimization problems. Programming The Unprogrammable (PTU): the automatic creation of computer programs for unconventional computing devices such as cellular automata, multi-agent systems, parallel systems, field-programmable gate arrays, field-programmable analog arrays, ant colonies, swarm intelligence, distributed systems, and the like. Summary Above is a review of lateral-computing techniques. Lateral-computing is based on the lateral-thinking approach and applies unconventional techniques to solve computing problems. While most problems can be solved with conventional techniques, there are problems which require lateral-computing. Lateral-computing provides the advantages of computational efficiency, low cost of implementation, and, for several problems, better solutions than conventional computing. Lateral-computing successfully tackles a class of problems by exploiting tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost. Lateral-computing techniques which use human-like information-processing models have been classified as "soft computing" in the literature. Lateral-computing is valuable when solving numerous computing problems whose mathematical models are unavailable; it provides a way of developing innovative solutions, resulting in smart systems with Very High Machine IQ (VHMIQ). This article has traced the transition from lateral thinking to lateral-computing, described several lateral-computing techniques, and then surveyed their applications. Lateral-computing aims at building a new generation of artificial intelligence based on unconventional processing. See also Calculation Computing Computationalism Real computation Reversible computation Hypercomputation Computation Computational problem Unconventional computing References Sources Proceedings of IEEE (2001); Special Issue on Industrial Innovations Using Soft Computing, September. T. Ross (2004); Fuzzy Logic With Engineering Applications, McGraw-Hill Inc Publishers. B. Kosko (1994); Fuzzy Thinking, Flamingo Publishers. E. Aarts and J. Korst (1997); Simulated Annealing and Boltzmann Machines, John Wiley and Sons Publishers. K.V. Palem (2003); Energy Aware Computing through Probabilistic Switching: A Study of Limits, Technical Report GIT-CC-03-16, May 2003. M. Sima, S. Vassiliadis, S. Cotofona, J. T. J. Van Eijndoven, and K. A. Vissers (2000); A taxonomy of custom computing machines, in Proceedings of the Progress workshop, October. J. Gleick (1998); Chaos: Making a New Science, Vintage Publishers. B. Mandelbrot (1997); The Fractal Geometry of Nature, Freeman Publishers, New York. D.R. Hofstadter (1999); Gödel, Escher, Bach: An Eternal Golden Braid, Harper Collins Publishers. R.A. Aliev and R.R. Aliev (2001); Soft Computing and Its Applications, World Scientific Publishers. Jyh-Shing Roger Jang, Chuen-Tsai Sun & Eiji Mizutani (1997); Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall Publishers.
John R. Koza, Martin A. Keane, Matthew J. Streeter, William Mydlowec, Jessen Yu, and Guido Lanza (2003); Genetic Programming IV: Routine Human-Competitive Machine Intelligence, Kluwer Academic. James Allen (1995); Natural Language Understanding, 2nd Edition, Pearson Education Publishers. R. Herken (1995); The Universal Turing Machine, Springer-Verlag, 2nd Edition. Harry R. Lewis and Christos H. Papadimitriou (1997); Elements of the Theory of Computation, 2nd Edition, Prentice Hall Publishers. M. Garey and D. Johnson (1979); Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman and Company Publishers. M. Sipser (2001); Introduction to the Theory of Computation, Thomson/Brooks/Cole Publishers. K. Compton and S. Hauck (2002); Reconfigurable Computing: A Survey of Systems and Software, ACM Computing Surveys, Vol. 34, No. 2, June 2002, pp. 171–210. D.W. Patterson (1990); Introduction to Artificial Intelligence and Expert Systems, Prentice Hall Inc. Publishers. E. Charniak and D. McDermott (1999); Introduction to Artificial Intelligence, Addison Wesley. R.L. Epstein and W.A. Carnielli (1989); Computability, Computable Functions, Logic and the Foundations of Mathematics, Wadsworth & Brooks/Cole Advanced Books and Software. T. Joachims (2002); Learning to Classify Text Using Support Vector Machines, Kluwer Academic Publishers. T. Mitchell (1997); Machine Learning, McGraw Hill Publishers. R. Motwani and P. Raghavan (1995); Randomized Algorithms, Cambridge International Series in Parallel Computation, Cambridge University Press. Sun Microsystems (2003); Introduction to Throughput Computing, Technical Report. Conferences First World Congress on Lateral Computing, WCLC 2004, IISc, Bangalore, India, December 2004 Second World Congress on Lateral Computing, WCLC 2005, PESIT, Bangalore, India Problem solving methods Computational science
35843988
https://en.wikipedia.org/wiki/Atari%20SIO
Atari SIO
The Serial Input/Output system, universally known as SIO, was a proprietary peripheral bus and related software protocol stack used on the Atari 8-bit family to provide most input/output duties for those computers. Unlike most I/O systems of the era, such as RS-232, SIO included a lightweight protocol that allowed multiple devices to be attached to a single daisy-chained port supporting dozens of devices. It also supported plug-and-play operation. SIO's designer, Joe Decuir, credits his work on the system as the basis of USB. SIO was developed in order to allow expansion without using internal card slots as in the Apple II, due to problems with the FCC over radio interference. This required it to be fairly flexible in terms of device support. Devices that used the SIO interface included printers, floppy disk drives, cassette decks, modems, and expansion boxes. Some devices had ROM-based drivers that were copied to the host computer when booted, allowing new devices to be supported without native support built into the computer itself. SIO required logic in the peripherals to support the protocols, and in some cases a significant amount of processing power was required; the Atari 810 floppy disk drive, for instance, included a MOS Technology 6507. Additionally, the large custom connector was expensive. These factors drove up the costs of the SIO system, and Decuir blames this for "sinking the system". There were unsuccessful efforts to lower the cost of the system during the 8-bit family's history. The name "SIO" properly refers only to the sections of the operating system that handled the data exchange; in Atari documentation the bus itself is simply the "serial bus" or "interface bus", although this is also sometimes referred to as SIO. In common usage, SIO refers to the entire system, from the operating system to the bus and even the physical connectors. History FCC problem The SIO system ultimately owes its existence to the FCC's rules on the allowable amount of RF interference that could leak from any device that directly generated analog television signals. These rules demanded very low amounts of leakage and had to pass an extensive testing suite. These rules were undergoing revisions during the period when Atari's Grass Valley group was designing the Colleen machine that would become the Atari 800. The Apple II, one of the few pre-built machines of that era that connected to a television, had avoided this problem by not including the RF modulator in the computer. Instead, Apple arranged a deal with a local electronics company, M&R Enterprises, to sell plug-in modulators under the name Sup'R'Mod. This meant the Apple did not, technically, generate television signals and did not have to undergo FCC testing. One of Atari's major vendors, Sears, felt this was not a suitable solution for their off-the-shelf sales, so to meet the interference requirements the designers encased the entire system in a cast-aluminum block 2 mm thick. Colleen was originally intended to be a game console, the successor to the Atari 2600. The success of the Apple II led to the system being repositioned as a home computer, and this market required peripheral devices. On machines like the Apple II, peripherals were supported by placing an adapter card in one of the machine's internal card slots, running a cable through a hole in the case, and connecting the device to that cable. A hole large enough for such a cable would mean Colleen would fail the RF tests, which presented a serious problem.
Additionally, convection cooling the cards would be very difficult. TI diversion During a visit in early 1978, a Texas Instruments (TI) salesman demonstrated a system consisting of a fibre-optic cable with transceivers molded into both ends. Joe Decuir suggested they could use this to send the video signal to an external RF modulator, which would be as simple to use as the coaxial cable one needed to run the signal to the television anyway. Now the computer could have normal slots; like the Apple II, the RF portion would be entirely external and could be tested on its own, separately from the computer. When Decuir explained his concept, the salesman's "eyes almost popped out." Unknown to the Grass Valley team, TI was at that time in the midst of developing the TI-99/4 and was facing the same problem with RF output. When Decuir later explained the idea to his boss, Wade Tuma, Tuma replied: "No, the FCC would never let us get away with that stunt." This proved to be true; TI used Decuir's idea, and when TI took it to the FCC in 1979, the agency rejected it out of hand. TI had to redesign their system, and the resulting delay meant the Atari machines reached the market first. SIO With this path to allowing card slots stymied, Decuir returned to the problem of providing expansion through an external system of some sort. By this time, considerable work had been carried out on using the Atari's POKEY chip to run a cassette deck by directly outputting sounds that would be recorded to the tape. It was realized that, with suitable modifications, the POKEY could bypass the digital-to-analog conversion hardware and drive TTL output directly. To produce a TTL digital bus, the SIO system used two of the POKEY's four sound channels to produce steady tones that represented clock signals of a given frequency. A single-byte buffer was used to send and receive data; every time the clock signal toggled, one bit from the buffer would be read or written. When all eight bits were read or written, the system generated an interrupt that triggered the operating system to read or write more data. Unlike a cassette interface, where only a single device would normally be used, an external expansion port would need to be able to support more than one device. To support this, a simple protocol was developed and several new pins added to the original simple cassette port. Most important among these was the COMMAND pin, which triggered the devices to listen for a 5-byte message that activated one of the devices on the bus and asked it for data (or sent it commands). They also added the PROCEED and INTERRUPT pins, which could be used by the devices to set bits in control registers in the host, but these were not used in the deployed system. Likewise, the timing signals generated by the POKEY were sent on the CLOCKOUT and CLOCKIN pins, although the asynchronous protocol did not use these. Description Hardware The SIO bus was implemented using a custom 13-pin D-connector arrangement (although not D-subminiature) with the male connectors on the devices and the female connectors on either end of the cables. The connectors were physically robust to allow repeated use, with very strong pins in the device socket and sprung connectors in the cables, as opposed to the friction fit of a typical D-connector. Most devices had in and out ports to allow daisy chaining of peripherals, although the Atari 410 Program Recorder had to be placed at the end of the chain and thus did not include an out port.
Communications SIO was controlled by the Atari's POKEY chip, which included a number of general-purpose timers. Four of these allowed fine control over the timing rates and were intended to be used for sound output by connecting them to a digital-to-analog (D-to-A) converter and then mixing the result into the television signal before it entered the RF modulator. These timers were re-purposed as the basis of the SIO system, used as clocks in some modes or to produce the output signals directly in others. The system included a single "shift register" that was used to semi-automate most data transfers. This consisted of a single 8-bit value, sent LSB first, that was used to buffer reads and writes. The user accessed these through two memory locations known as SEROUT for writing and SERIN for reading. These were "shadow registers", locations in RAM that mirrored registers in the various support chips like the POKEY. The data bits were framed with a single zero start bit and a single one stop bit, and no parity was used. To write data in synchronous mode, the POKEY's main timer channels were set to an appropriate clock rate, say 9600 bit/s. Any data written to the SEROUT register was then sent one bit at a time every time the signal went high, timed so the signal returned low in the middle of the bit. When all 10 bits (including the start and stop bits) had been sent, the POKEY sent a maskable interrupt to the CPU to indicate it was ready for another byte. On reading, if another byte of data was received before SERIN was read, the 3rd bit of SKSTAT was set to true to indicate the overflow. Individual bits being read were also sent to the 4th bit of SKSTAT as they arrived, allowing direct reading of the data without waiting for the framing to complete. The system officially supported speeds up to 19,200 bit/s, but this rate was chosen only because the Atari engineers' protocol analyzer topped out at that speed; the system was actually capable of much higher performance. A number of 3rd-party devices, especially floppy drives, used custom hardware and drivers to greatly increase the transmission speeds, to as much as 72,000 bit/s. Although the system had CLOCKOUT and CLOCKIN pins that could, in theory, be used for synchronous communications, in practice only the asynchronous system was used. In this case, a base speed was set in the POKEY as above, and the system would follow changes of up to 5% from this base rate. This made it much easier to work with real devices where mechanical or electrical issues caused slight variations in the rates over time. One example was the cassette deck, where tape stretch could alter the speed; another was a modem, where the remote system might not be clocked to exactly a given speed. Device control The SIO system allowed devices to be daisy chained, and thus required some way of indicating that information on the various data pins was intended for a specific device on the chain. This was accomplished with the COMMAND pin. The COMMAND pin was normally held high, and when it was pulled low, devices on the bus were required to listen for a "command frame". This consisted of a 5-byte packet: the first byte was the device ID, the second was a device-specific command number, and then came two auxiliary bytes of data that could be used by the driver for any purpose. These four were followed by a checksum byte. The COMMAND pin went high again when the frame was complete. On reception of the packet, the device specified in the first byte was expected to reply. This consisted of a single byte containing an ASCII character: "A" for Acknowledge if the packet was properly decoded and the checksum matched, "N" otherwise. For commands that exchanged data, the command frame would be followed by a "data frame" from or to the selected device. This frame would then be acknowledged by the receiver with a "C" for Complete or "E" for Error. Since every packet of 128 data bytes required another command frame before the next could be sent, throughput was affected by latency issues; the Atari 810 disk drive normally used a 19,200 bit/s speed, but was limited to about 6,000 bit/s as a result of the overhead. Devices were enumerated mechanically, typically using small DIP switches. Each class of device was given a different set of 16 potential numbers based on hexadecimal numbers: the $30 range for disk drives and the $40 range for printers, for instance. However, each driver could support as many or as few devices as it wanted; the Atari 820 printer driver supported only a single printer numbered $40, while the disk drivers could support four drives numbered $31 to $34.
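As an illustration, the following C sketch assembles such a command frame. The checksum shown, an 8-bit sum in which carries out of the top bit are folded back in, is the one commonly documented for SIO; the device ID, command byte, and auxiliary values here are illustrative, not taken from the text above.

```c
#include <stdint.h>
#include <stdio.h>

/* SIO-style 8-bit checksum, as commonly documented: a running sum in
   which any carry out of bit 7 is added back into the low byte. */
static uint8_t sio_checksum(const uint8_t *buf, int len) {
    uint16_t sum = 0;
    for (int i = 0; i < len; i++) {
        sum += buf[i];
        sum = (sum & 0xFF) + (sum >> 8);  /* fold the carry back in */
    }
    return (uint8_t)sum;
}

int main(void) {
    /* Command frame: device ID, command, two auxiliary bytes, checksum.
       $31 = first disk drive; $52 ('R') is conventionally a sector
       read, with the aux bytes giving the sector number (assumed). */
    uint8_t frame[5] = {0x31, 0x52, 0x01, 0x00, 0};
    frame[4] = sio_checksum(frame, 4);
    printf("checksum byte: $%02X\n", frame[4]);
    return 0;
}
```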
Cassette use Design of what became the SIO had started as a system for interfacing to cassette recorders using the sound hardware to generate the appropriate tones. This capability was retained in the production versions, allowing the Atari 410 and its successors to be relatively simple devices. When set to operate the cassette, the outputs from channels 1 and 2 of the POKEY were sent to the DATAOUT rather than the clock pins. The two channels were set to produce tones that were safe to record on the tape: 3995 Hz for a zero on POKEY channel 2, and 5326 Hz for a one on channel 1. In this mode, when the POKEY read bits from SERIN, any 1s resulted in channel 1 playing into the data pin, and 0s played channel 2. In this fashion, a byte of data was converted into tones on the tape. Reading, however, used a different system, as there was no A-to-D converter in the computer. Instead, the cassette decks included two narrow-band filters tuned to the two frequencies. During a read, the output of one or the other of these filters would be asserted as the bits were read off the tape. These were sent as digital data back to the host computer. Because the tape was subject to stretching and other mechanical problems that could speed or slow transport across the heads, the system used asynchronous reads and writes. Data was written in blocks of 132 bytes per record, with the first two bytes being the bit pattern "01010101 01010101". An inter-record gap with no tones between the blocks allowed the operating system to know when a new record was starting by looking for the leading zero. It then rapidly read the port and timed the transitions of the timing bits from 0 to 1 and back to determine the precise data rate. The next byte was a control byte specifying whether this was a normal record of 128 data bytes, a short block, or an end-of-file. Up to 128 bytes of data followed, ending with a checksum byte computed over everything before it. The operation was further controlled by the MOTOR pin in the SIO port, dedicated to this purpose. When this pin was low, the motor in the deck was turned off. This allowed the user to press play, or play and record, without the tape beginning to move. When the appropriate command was entered on the computer, MOTOR would be asserted and the cassette would begin to turn.
Another dedicated pin was the AUDIOIN, which was connected directly to the sound output circuits between the POKEY's D-to-A converters and the final output so that any signal on the pin mixed with the sound from the POKEY (if any) and was then sent to the television speaker. This was connected to the left sound channel in the cassette, while the right channel was connected to the data pins. This allowed users to record normal sounds on the left channel and then have them play through the television. This was often combined with direct motor control to produce interactive language learning tapes and similar programs. Some software companies would record sounds or music on this channel to make the loading process more enjoyable. See also Special input/output Notes References Bibliography Computer buses Serial buses Atari 8-bit family
57173314
https://en.wikipedia.org/wiki/Emily%20Willbanks
Emily Willbanks
Emily Willbanks (born Emily West, November 25, 1930 – February 18, 2007) was a scientist at the Los Alamos National Laboratory from 1954 to 1990. She made advancements in the fields of mathematics, computing, and data systems, using her background in physics and mathematics to contribute to defense weapons and high-performance storage systems at Los Alamos. She was instrumental in the advancement of a major weather centre in England, was involved in many classified projects for the government, and helped revolutionize mass data storage systems. Life Early years and education Emily West was born on November 25, 1930 in Fort Lauderdale, Florida. Her father, Frank M. West, was the superintendent for a private beach estate, and her mother was a homemaker. She was their only child. West attended the local public high school, and from a young age she expressed great interest in mathematics and science. West studied science at Duke University beginning in 1948. After her first year she received a scholarship for academic excellence, and she was the sole female physics major in her graduating class. She earned a B.S. in math and physics in 1952. She continued her education at the University of New Mexico, where she completed a master's degree in physics in 1957. Early career West worked as an engineering aide in mathematics at Pratt & Whitney Aircraft Co. from 1952 to 1954. Her work there involved hand calculations of heat flow and fluid dynamics for a feasibility study on nuclear-powered aircraft, in partnership with General Electric. Los Alamos National Laboratory West began working at the Los Alamos National Laboratory in 1954. Her initial work there involved hand calculations; this evolved into working with the MANIAC I computer for weapons applications. She continued working in the weapons division until the early seventies, and then worked under Roger Lazarus in the Computer Division at Los Alamos National Laboratory until she retired in 1990. Her work in the computing department involved the design and maintenance of computer storage, including a project to design a clustered file system. Her work for this project included software development and computerizing weapons data. She adapted the same software for an English weather centre database, the Meteorological Archival and Retrieval System. Personal life West met Eugene Willbanks at Los Alamos. At the time, she was working in the weapons department while Eugene worked in the computing division. They married in 1959 but had no children. Her husband died from a brain tumour in 1994. Emily Willbanks died on February 18, 2007 in Los Alamos, New Mexico. Legacy and major projects Weapons applications At the Los Alamos National Laboratory (LANL), Willbanks (née West) and a group of five or six others used their pre-written code to analyze weapons that were designed by the engineers at LANL. Their analysis produced data from simulated explosions and provided feedback to the engineers to develop more effective designs. This task required coding skills over analytical skills. Willbanks played a key role in charting and analyzing the trends in the data to ensure that it was correct. This work allowed weapon designs to be streamlined to varying parameters and enabled testing simulations to produce different yields for the different designs. Most of the codes during this time were run on the IBM machine called Stretch.
While this project relied more on mathematics and computing, Willbanks's background in physics allowed her to adapt to the terminology and concepts. Data storage system After working in the weapons department at the Los Alamos National Laboratory (LANL), Willbanks began working on a project to improve its data storage systems. This improvement was crucial because LANL could not purchase the needed storage systems from a software vendor at the time. With a team of six or seven, over a twelve-year period, Willbanks helped create a high-performance data storage system for LANL called the Clustered File System (CFS). Besides being accountable for storing classified information, the challenges she faced in designing these digital storage systems included keeping up to date with rapidly evolving software and storage hardware: most storage-device upgrades required her to develop a new interface so that existing data could be carried over to the new technology. The varying needs of multiple users were also taken into account when designing the storage systems; some users required the protection of valuable information, while others needed to share data. Along with these demands, Willbanks helped design CFS, a custom IBM-based storage system, to organize a variety of information and meet its security needs. The CFS storage system became commercially available, which led to Willbanks being recruited to collaborate with a weather centre in England. The system proved extremely useful for bomb calculations, weather data collection, and other applications; although the lack of widespread applications and constantly changing storage technology eventually made it unpopular on a commercial scale, for bomb calculations and weather systems it remained extremely useful. England's Weather Centre After the Clustered File System (CFS) software was released to the public domain, England's weather centre contacted Los Alamos National Laboratory for help using the software. The centre in Reading, located approximately 40 minutes outside London, required Willbanks's expertise for regular upgrades and maintenance of the software. Its data were stored on a Control Data machine, then a Cray machine, and eventually a Fujitsu machine before the centre abandoned most of the LANL software for a commercial IBM model. Her dedication and work led to the adaptation of the Meteorological Archival and Retrieval System (MARS). This system enabled the acquisition of large datasets from the field, including meteorological observations, analysis and forecast fields, and data from the Reanalysis project. References American women mathematicians American women physicists 1930 births 2007 deaths People from Fort Lauderdale, Florida 20th-century American mathematicians 20th-century American physicists University of New Mexico alumni Duke University alumni Los Alamos National Laboratory personnel 20th-century women mathematicians 20th-century American women 21st-century American women
54580755
https://en.wikipedia.org/wiki/Synacor
Synacor
Synacor Inc. is a technology and services company headquartered in Buffalo, New York. It provides managed portals and apps, advertising, email, authentication, and OTT video services. The company was founded in 1998 as Chek.com and changed its name to Synacor in 2001. Himesh Bhise serves as the CEO and Tim Heasley as the CFO. Beyond Buffalo, the company has offices in Boston, Dallas, London, New York City, Ottawa, Pune, Singapore, and Tokyo. In 2012, Synacor became a public company with an initial public offering (IPO) of $5.00 per share. In 2015, Synacor acquired Zimbra, an open source email, calendaring, and collaboration software suite. In the same year, Synacor acquired NimbleTV. In 2016, Synacor displaced Yahoo! as the portal provider for AT&T, but lost this business back to Yahoo! only three years later, in 2019. Synacor also provides authentication for HBO Go. The company was taken private by investment company Centre Lane Partners in 2021 and delisted from the Nasdaq Global Market. History In January 1998, George Chamoun and Darren Ascone, roommates at the University at Buffalo, founded Chek.com, a Buffalo-based email infrastructure provider. It started as an affinity-branded free email provider, allowing users to create an email account at domains like Budweiser.com and Yankeesfan.net. Additionally, Chek.com provided a Business E-Communications product, allowing companies to outsource their email hosting to Chek.com as an alternative to maintaining email and intranet systems internally. The stated aim of the outsourced email product was to allow smaller companies to present a professional image similar to that of larger, established companies. Chek.com was an early adopter of the LAMP technology stack and was a major supporter of the growing PHP community; Chek.com (and later, Synacor) hosted the official PHP website for a number of years prior to it being mirrored. In 2000, Chek.com merged with MyPersonal, a San Francisco-based portal provider, to become Synacor. After the merger, Synacor started offering an extended set of products geared towards ISPs, cable companies, and telecommunications providers. Synacor began by hosting email; the first such ISP customer was Kmart, whose agreement provided BlueLight ISP customers with access to a 'mybluelight'-branded portal and web-based email hosted by Synacor. In 2003, Synacor began to offer services to small and mid-size ISPs which allowed them to provide premium online content, similar to offerings by Yahoo at the time. Synacor claimed to manage complexities such as registration, rights management, and billing that customers experienced while operating their service. This technology eventually became Synacor's TV Everywhere product line. It also led Synacor to help shape the standards for home-based authentication through its long-standing participation in the Open Authentication Technology Committee (OATC) and the Cable and Telecommunications Association for Marketing (CTAM). Furthermore, Synacor's TV Everywhere authentication product contributed to Apple's single sign-on (SSO) feature. Synacor had originally filed for an IPO in 2007 with Deutsche Bank and Bear Stearns as underwriters; however, it withdrew the filing in October 2008. A successful IPO was filed in November 2011 and priced at the beginning of 2012. In February 2020, Synacor announced a merger with the Minnesota-based Qumu Corporation.
As part of the all-stock deal, Synacor shareholders would have held 64% ownership of the merged entity, while Qumu investors would have held 36%. In June 2020, the boards of directors of both companies mutually agreed to terminate the merger prior to execution. References Companies based in Buffalo, New York Companies formerly listed on the Nasdaq Technology companies of the United States
57304349
https://en.wikipedia.org/wiki/Actionfigure
Actionfigure
Actionfigure, formerly known as TransitScreen, is an American technology company that offers software for digital displays showing real-time transportation arrival data and other local information. In 2018, it launched a mobile application offering real-time transportation data, and in 2020 it launched a product for employers to help employees navigate their commutes. As of 2018, TransitScreen had displays in more than 1,000 buildings in 30 cities, including Washington, D.C., Boston, and Pittsburgh. Actionfigure is a SaaS platform, in which the property or business pays to access its software on a monthly, quarterly, or annual basis. History The company grew out of Arlington County's Mobility Lab, which measures the impacts of transportation demand management services, in 2011. It incorporated in 2013 as Multimodal Logic, Inc. In January 2015, the company closed its first round of seed funding of $600,000, which came from a number of investors, such as 1776 Ventures and Middle Bridge Partners. In late 2015, an additional $800,000 of seed money was raised. In April 2018, a new round of funding was announced; it was closed in May 2019 with Vancouver-based TIMIA Capital. Shortly thereafter, the company announced that its product was available in South America and Western Europe. In August 2019, Actionfigure and Captivate, a New York-based commercial real estate digital media amenity company, announced a partnership that would bring Actionfigure's real-time data to Captivate's network of multipurpose display screens. In November 2021, TransitScreen rebranded to Actionfigure, bringing its suite of software solutions under a unified identity. Products Actionfigure's displays show real-time arrival information for trains, subways, buses, streetcars, and ferries. They also show real-time availability for local bicycle-sharing systems, carsharing services, and vehicle-for-hire companies. The displays are made to correspond to the specific address where the individual screen is located. In September 2017, the company released its MobilityScore rating, which uses historical data to determine how easy it is to get around a given location without a car. It is similar to Walk Score, but measures the mobility and transit accessibility of an address rather than proximity to amenities. In November 2018, the company released its mobile application, CityMotion, now called Actionfigure Mobile. Actionfigure Mobile is location-based and shows the user the real-time availability of mobility options nearby. Like Actionfigure Screen, it is a B2B product and is not currently available for individual consumers. References Software companies based in Virginia Companies based in Arlington County, Virginia 2013 establishments in Virginia American companies established in 2013 Business software companies Transport software Software companies of the United States
14672486
https://en.wikipedia.org/wiki/Software%20testing%20outsourcing
Software testing outsourcing
Software testing outsourcing is software testing carried out by an independent company or a group of people not directly involved in the process of software development. Software testing is an essential phase of software development, but it is often viewed as a non-core activity for most organizations. Outsourcing enables an organization to concentrate on its core development activities while external software testing experts handle the independent validation work. This offers many business benefits, which include independent assessment leading to enhanced delivery confidence, reduced time to market, lower infrastructure investment, predictable software quality, de-risking of deadlines, and increased time to focus on development. Software testing outsourcing can come in different forms: full outsourcing, insourcing, or remote insourcing of the entire test process (strategy, planning, execution, and closure), often referred to as a managed testing service or dedicated testing teams; provision of additional resources for major projects; one-off tests, often related to load, stress, or performance testing; and beta user acceptance testing, utilizing specialist focus groups coordinated by an external organization. Software testing outsourcing is utilized when a company does not have the resources or capabilities in-house to address testing needs. Outsourcing can be given to organizations with expertise in many areas, including testing software for web, mobile, printing, or even fax performance. Testing companies can provide outsourcing services located in the home country of the business or at many other onshore or offshore sites. A testing partner could mean someone in the same city or in another city across the country; it could also mean onshore but rurally sourced. Near-shore options are located in the same time zone but in cheaper markets like Mexico, while offshore testing usually takes place in more distant locations such as the Caribbean, Ukraine, and India. Onshore testing – software testing companies based in the US, typically also including Canada; onshore often refers to your home country. Offshore testing – software testing companies in a country other than your home country. Near-shore – software testing companies located outside of the home country but in the same or a similar time zone. Offshore software testing is considered preferable when pricing is a key factor and when the task is simple enough for less-experienced staff working with limited direction. Offshore is also a more common choice when there can be tight coordination and time zone overlap is not an impediment. If the testing is more complicated and requires focused coordination and frequent interfacing with internal teams, onshore services will be more critical. Security and cultural alignment are also important factors that are most often satisfied by an onshore partner. Pros of onshore software testing outsourcing: On-hand information: fluid and first-hand information throughout the process. Face-to-face communication: enables on-time detection of emerging issues and efficient problem-solving. Effective communication: with no time or distance gap and no cultural differences, there are almost no misunderstandings within teams. Time-effectiveness: a real-time work model with no time zone delays ensures efficiency. Enhanced time to market: based on all of the above, speed to market is improved. Pros of offshore software testing outsourcing: Often a good choice for long-term projects, although the results are typical rather than proven.
Low costs: IT projects can be cheaper when outsourced to countries with low labor costs. Round-the-clock support: typically, offshore testing companies offer 24/7 support services. Fast scalability: access to a large pool of resources capable of fast test activation. Hybrid: offshore software testing outsourcing in execution with onshore oversight. Some companies offer an onshore, local project lead to oversee an offshore outsourced team. Advantages of the onsite-offshore outsourced testing model: If used right, this model can ensure that work goes on during every minute of the 24-hour day on a project. Direct client interaction helps in better communication and also improves the business relationship. Cost-effectiveness – offshore teams cost less than setting up the entire QA team onsite. Time zone differences must be considered and expectations managed accordingly. Top established global outsourcing cities According to Tholons Global Services - Top 50, in 2009 the top established and emerging global outsourcing cities for the testing function were: Bengaluru, India Cebu City, Philippines Shanghai, China Beijing, China Kraków, Poland Ho Chi Minh City, Vietnam Vietnam outsourcing Vietnam has become a major player in software outsourcing. Ho Chi Minh City's ability to meet clients' needs in scale and capacity, its maturing business environment, the country's stability in political and labor conditions, its increasing number of English speakers, and its high service-level maturity make it attractive to foreign interests. Vietnam's software industry has maintained an annual growth rate of 30-50% during the past 10 years. From 2002 to 2013, revenue of the software industry increased to nearly US$3 billion, and the hardware industry increased to US$36.8 billion. Many Vietnamese enterprises have been granted international certificates (CMM) for their software development. According to the Global Services Location Index 2017 by A.T. Kearney, Vietnam ranks sixth in the global software outsourcing market. Vietnam's position in the 2017 index reflects its growing popularity for business process outsourcing (BPO). Its BPO industry earned US$2 billion in 2015 and has grown annually by 20-25% over the past decade. Argentina outsourcing Argentina's software industry has experienced exponential growth in the last decade, positioning itself as one of the strategic economic activities in the country. As Argentina is just one hour ahead of North America's east coast, communication takes place in real time. Argentina's internet culture and industry are well developed: Facebook penetration in Argentina ranks third worldwide, and the country has the highest smartphone penetration in Latin America (24%). The internet's contribution to Argentina's gross national product (2.2%) ranks 10th in the world. References Software testing Offshoring Outsourcing
1115720
https://en.wikipedia.org/wiki/Chicago%20XXV%3A%20The%20Christmas%20Album
Chicago XXV: The Christmas Album
Chicago XXV: The Christmas Album is the nineteenth studio album by the American band Chicago, their twenty-fifth overall, released in 1998 on the band's Chicago Records label. It is an album of Christmas songs. The album was re-issued by Rhino Records in 2003 as What's It Gonna Be, Santa? with six additional, newly recorded tracks. Produced by Roy Bittan, the original album – featuring Chicago's interpretations of well-known Christmas classics plus one original tune (co-penned by Lee Loughnane) – was very well received upon its release in August 1998, peaking at #47 in the US and going gold during a stay of 7 weeks on the charts. After Chicago entered into a long-term partnership with Rhino Records in 2002, that label re-issued Chicago XXV: The Christmas Album that same year. It was further decided to record six additional Christmas songs – with Hot Streets and Chicago 13 producer Phil Ramone – and re-issue the whole package in 2003 under a new design, title and sequencing, entitled What's It Gonna Be, Santa?, deleting its predecessor in the process. Guitarist Keith Howland sang his first lead vocal on the track, "Jolly Old Saint Nicholas". This later release reached #102 in the US during a stay of 5 weeks on the charts. Track listings Chicago XXV: The Christmas Album What's It Gonna Be, Santa? Personnel Bill Champlin – organ, keyboards, acoustic piano, acoustic guitar, synth vibes, guitars, electric piano, programming, synth bass, lead and backing vocals, arrangements, BGV arrangements, brass arrangement on "Bethlehem" Keith Howland – guitars, keyboards, lead and backing vocals, arrangements, BGV arrangement on "Jolly Old St. Nicholas" Tris Imboden – drums Robert Lamm – acoustic piano, vibes, electric piano, clavinet, lead and backing vocals, arrangements, brass arrangements Lee Loughnane – trumpets, flugelhorn, muted trumpet, piccolo trumpet, lead and backing vocals, arrangements, brass arrangements, BGV arrangement on "Child's Prayer" James Pankow – trombone, keyboards, brass arrangements, BGV arrangement on "One Little Candle" Walter Parazaider – alto and tenor saxophones, flutes Jason Scheff – bass guitar, electric upright bass, fretless bass, keyboards, programming, lead and backing vocals, arrangements, BGV arrangements Additional personnel Roy Bittan – organ, acoustic piano, accordion, keyboards, synth bells Luis Conte – percussion John Durill – keyboard, additional arrangements on "Child's Prayer" Tim Pierce – guitars, acoustic guitars George Black – programming Larry Klimas – baritone saxophone Nick Lane – additional arrangements on "Winter Wonderland" Carmen Twillie – backing vocals on "Feliz Navidad", "Have Yourself a Merry Little Christmas" and "White Christmas"; adult choir director on "The Little Drummer Boy" Adult choir on "The Little Drummer Boy" – Alex Brown, Tamara Champlin, Alvin Chea, Gia Ciambotti, H.K. Dorsey, Gary Falcone, Edie Lehmann Boddicker, Bobbie Page, Oren Waters, Maxine Waters and Mona Lisa Young. Ryan Kelly – children's choir conductor Children's choir on "Child's Prayer" and "One Little Candle" – Amity Addrisi, Michael Amezcua, Alex Bittan, Ryan Bittan, Clark Gable, Kayley Gable, Kate Lamm, Sean Lamm, Dylan Loughnane, River Loughnane, Sarah Pankow, Brittany Scott and Jade Thacker. Production (Chicago XXV) Produced by Roy Bittan Engineered and Mixed by Ed Thacker Assisted by Posie Muliadi and Eric Ferguson Production Coordinator – Valerie Pack New Recordings for "What's It Gonna Be, Santa?"
Produced by Phil Ramone Engineered and Mixed by Ed Thacker Additional Production – Chicago and David McLees Sound Supervision – Lee Loughnane and Jeff Magid Remastering – David Donnelly Product Manager – Mike Engstrom Discographical Annotation – Gary Peterson Editorial Supervision – Cory Frye References Chicago (band) albums Rhino Records albums Albums produced by Phil Ramone Albums produced by Roy Bittan 1998 Christmas albums 2003 Christmas albums Christmas albums by American artists Rock Christmas albums
1559005
https://en.wikipedia.org/wiki/Intel%208259
Intel 8259
The Intel 8259 is a Programmable Interrupt Controller (PIC) designed for the Intel 8085 and Intel 8086 microprocessors. The initial part was the 8259; a later "A"-suffix version was upward compatible and usable with the 8086 or 8088 processor. The 8259 combines multiple interrupt input sources into a single interrupt output to the host microprocessor, extending the interrupt levels available in a system beyond the one or two levels found on the processor chip. The 8259A was the interrupt controller for the ISA bus in the original IBM PC and IBM PC AT. The 8259 was introduced as part of Intel's MCS 85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained by the PC/XT introduced in 1983. A second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC Architecture since the latter's introduction in symmetric multiprocessor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC Architecture; however, while no longer a separate chip, the 8259A interface is still provided by the Platform Controller Hub or Southbridge chipset on modern x86 motherboards. Functional description The main signal pins on an 8259 are as follows: eight interrupt request input lines named IRQ0 through IRQ7, an interrupt request output line named INTR, an interrupt acknowledgment line named INTA, and data lines D0 through D7 for communicating the interrupt level or vector offset. Other connections include CAS0 through CAS2 for cascading between 8259s. Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs. 8259s are cascaded by connecting the INT line of one slave 8259 to the IRQ line of one master 8259. There are three registers: an Interrupt Mask Register (IMR), an Interrupt Request Register (IRR), and an In-Service Register (ISR). The IRR maintains a mask of the current interrupts that are pending acknowledgement, the ISR maintains a mask of the interrupts that are pending an EOI, and the IMR maintains a mask of interrupts that should not be sent an acknowledgement. End Of Interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR; a non-specific EOI resets the highest-priority IRQ level in the ISR; and auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged. Edge and level interrupt trigger modes are supported by the 8259A, as are fixed-priority and rotating-priority modes. The 8259 may be configured to work with an 8080/8085 or an 8086/8088. On the 8086/8088, the interrupt controller will provide an interrupt number on the data bus when an interrupt occurs. The interrupt cycle of the 8080/8085 will issue three bytes on the data bus (corresponding to a CALL instruction in the 8080/8085 instruction set). The 8259A provides additional functionality compared to the 8259 (in particular buffered mode and level-triggered mode) and is upward compatible with it.
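As a concrete illustration of the EOI operations described above, here is a hedged C sketch using the conventional PC port assignments (master command port 0x20, slave command port 0xA0). The outb helper is the one declared in Linux's <sys/io.h>; actually executing port I/O like this requires ioperm/iopl privileges or kernel context, so treat this as a sketch rather than a drop-in driver.

```c
#include <sys/io.h>   /* outb(value, port); needs port-I/O privileges */

#define PIC1_CMD 0x20 /* master 8259 command port (conventional PC map) */
#define PIC2_CMD 0xA0 /* slave 8259 command port                        */

#define EOI_NONSPECIFIC 0x20  /* OCW2: non-specific EOI                 */
#define EOI_SPECIFIC    0x60  /* OCW2: specific EOI, OR'd with IR level */

/* Acknowledge interrupt 'irq' (0-15) with non-specific EOIs. IRQs 8-15
   pass through the slave, whose output is cascaded into the master's
   IR2 input, so both controllers must be acknowledged. */
static void pic_send_eoi(unsigned irq) {
    if (irq >= 8)
        outb(EOI_NONSPECIFIC, PIC2_CMD);
    outb(EOI_NONSPECIFIC, PIC1_CMD);
}

/* Specific EOI names the exact ISR level being acknowledged. */
static void pic_send_specific_eoi(unsigned irq) {
    if (irq >= 8) {
        outb(EOI_SPECIFIC | (irq & 7), PIC2_CMD);
        outb(EOI_SPECIFIC | 2, PIC1_CMD);  /* cascade enters master IR2 */
    } else {
        outb(EOI_SPECIFIC | irq, PIC1_CMD);
    }
}
```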
Programming considerations DOS and Windows Programming an 8259 in conjunction with DOS and Microsoft Windows has introduced a number of confusing issues for the sake of backwards compatibility, which extends as far back as the original PC introduced in 1981. The first issue is more or less the root of the second issue. DOS device drivers are expected to send a non-specific EOI to the 8259s when they finish servicing their device. This prevents the use of any of the 8259's other EOI modes in DOS, and excludes the differentiation between device interrupts rerouted from the master 8259 to the slave 8259. The second issue deals with the use of IRQ2 and IRQ9, which stems from the introduction of a slave 8259 in the PC/AT. The slave 8259's INT output is connected to the master's IR2. The IRQ2 line of the ISA bus, originally connected to this IR2, was rerouted to IR1 of the slave. Thus the old IRQ2 line now generates IRQ9 in the CPU. To allow backwards compatibility with DOS device drivers that still set up for IRQ2, a handler is installed by the BIOS for IRQ9 that redirects interrupts to the original IRQ2 handler. On the PC, the BIOS (and thus also DOS) traditionally maps the master 8259 interrupt requests (IRQ0-IRQ7) to interrupt vector offset 8 (INT08-INT0F) and the slave 8259 (in the PC/AT and later) interrupt requests (IRQ8-IRQ15) to interrupt vector offset 112 (INT70-INT77). This was done despite the first 32 interrupt vectors (INT00-INT1F) being reserved by the processor for internal exceptions, a restriction that was ignored in the design of the PC. Because of the vectors reserved for exceptions, most other operating systems map (at least the master) 8259 IRQs (if used on a platform) to another interrupt vector base offset. Other operating systems Since most other operating systems allow for changes in device driver expectations, other 8259 modes of operation, such as auto-EOI, may be used. This is especially important for modern x86 hardware, in which a significant amount of time may be spent on I/O address space delays when communicating with the 8259s. This also allows a number of other optimizations in synchronization, such as critical sections, in a multiprocessor x86 system with 8259s. Edge and level triggered modes Since the ISA bus does not support level-triggered interrupts, level-triggered mode may not be used for interrupts connected to ISA devices. This means that on PC/XT, PC/AT, and compatible systems the 8259 must be programmed for edge-triggered mode. On MCA systems, devices use level-triggered interrupts and the interrupt controller is hardwired to always work in level-triggered mode. On newer EISA, PCI, and later systems the Edge/Level Control Registers (ELCRs) control the mode per IRQ line, effectively making the mode of the 8259 irrelevant for such systems with ISA buses. The ELCRs are programmed by the BIOS at system startup for correct operation; they are located at 0x4d0 and 0x4d1 in the x86 I/O address space. Each register is 8 bits wide, each bit corresponding to an IRQ from the 8259s. When a bit is set, the IRQ is in level-triggered mode; otherwise, the IRQ is in edge-triggered mode.
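A short C sketch of how system software might query or set an ELCR bit follows; the port addresses and bit semantics come from the description above, while inb/outb are the Linux <sys/io.h> helpers and the same port-I/O privilege caveat applies.

```c
#include <sys/io.h>  /* inb()/outb(); port I/O requires ioperm() rights */

#define ELCR0 0x4D0  /* edge/level bits for IRQ0-7  */
#define ELCR1 0x4D1  /* edge/level bits for IRQ8-15 */

/* Return nonzero if 'irq' (0-15) is configured as level-triggered.
   Each bit of the two ELCR registers corresponds to one IRQ line:
   1 = level-triggered, 0 = edge-triggered. */
static int irq_is_level_triggered(unsigned irq) {
    unsigned short port = (irq < 8) ? ELCR0 : ELCR1;
    return (inb(port) >> (irq & 7)) & 1;
}

/* Mark 'irq' as level-triggered, as a BIOS would for a PCI interrupt. */
static void set_level_triggered(unsigned irq) {
    unsigned short port = (irq < 8) ? ELCR0 : ELCR1;
    outb(inb(port) | (1u << (irq & 7)), port);
}
```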
A similar case can occur when the unmasking of an IRQ in the 8259 and the de-assertion of the IRQ input are not properly synchronized. In many systems, the IRQ input is deasserted by an I/O write, and the processor doesn't wait until the write reaches the I/O device. If the processor continues and unmasks the 8259 IRQ before the IRQ input is deasserted, the 8259 will assert INTR again. By the time the processor recognizes this INTR and issues an acknowledgment to read the IRQ from the 8259, the IRQ input may be deasserted, and the 8259 returns a spurious IRQ7.

The second condition occurs when the master 8259's IR2 input is active while the slave 8259's IRQ lines are inactive on the falling edge of an interrupt acknowledgment. This second case will generate spurious IRQ15's, but is rare.

PC/XT and PC/AT

The PC/XT ISA system had one 8259 controller, while PC/AT and later systems had two 8259 controllers, master and slave. IRQ0 through IRQ7 are the master 8259's interrupt lines, while IRQ8 through IRQ15 are the slave 8259's interrupt lines. The labels on the pins of an 8259 are IR0 through IR7; IRQ0 through IRQ15 are the names of the ISA bus's lines to which the 8259s are attached.

See also
Advanced Programmable Interrupt Controller (APIC)
IF (x86 flag)
Interrupt handler
Interrupt latency
Non-maskable interrupt (NMI)
Programmable Interrupt Controller (PIC)

References
Gilluwe, Frank van. The Undocumented PC. A-W Developers Press, 1997.
McGivern, Joseph. Interrupt-Driven PC System Design. Annabooks, 1998.
IBM Personal System/2 Hardware Interface Technical Reference - Architectures. IBM, 1990. IBM Publication 84F8933

External links
8259A Programmable Interrupt Controller

Intel chipsets
IBM PC compatibles
Input/output integrated circuits
Interrupts
1288948
https://en.wikipedia.org/wiki/Capability%20Maturity%20Model%20Integration
Capability Maturity Model Integration
Capability Maturity Model Integration (CMMI) is a process level improvement training and appraisal program. Administered by the CMMI Institute, a subsidiary of ISACA, it was developed at Carnegie Mellon University (CMU). It is required by many U.S. Government contracts, especially in software development. CMU claims CMMI can be used to guide process improvement across a project, division, or an entire organization. CMMI defines the following maturity levels for processes: Initial, Managed, Defined, Quantitatively Managed, and Optimizing. Version 2.0 was published in 2018 (Version 1.3 was published in 2010 and is the reference model for the remaining information in this article). CMMI is registered in the U.S. Patent and Trademark Office by CMU.

Overview

Originally, CMMI addressed three areas of interest:
Product and service development – CMMI for Development (CMMI-DEV),
Service establishment, management, and delivery – CMMI for Services (CMMI-SVC), and
Product and service acquisition – CMMI for Acquisition (CMMI-ACQ).
In version 2.0 these three areas (which previously had a separate model each) were merged into a single model.

CMMI was developed by a group from industry, government, and the Software Engineering Institute (SEI) at CMU. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization. By January 2013, the entire CMMI product suite was transferred from the SEI to the CMMI Institute, a newly created organization at Carnegie Mellon.

History

CMMI was developed by the CMMI project, which aimed to improve the usability of maturity models by integrating many different models into one framework. The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industrial Association.

CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM was developed from 1987 until 1997. In 2002, CMMI version 1.1 was released; version 1.2 followed in August 2006, and version 1.3 in November 2010. Some major changes in CMMI V1.3 are the support of agile software development, improvements to high maturity practices, and alignment of the representations (staged and continuous).

According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."

Mary Beth Chrissis, Mike Konrad, and Sandy Shrum were the authorship team for the hard copy publication of CMMI for Development Version 1.2 and 1.3. The Addison-Wesley publication of Version 1.3 was dedicated to the memory of Watts Humphrey. Eileen C. Forrester, Brandon L. Buteau, and Sandy Shrum were the authorship team for the hard copy publication of CMMI for Services Version 1.3. Rawdon "Rusty" Young was the chief architect for the development of CMMI version 2.0. He was previously the CMMI Product Owner and the SCAMPI Quality Lead for the Software Engineering Institute. In March 2016, the CMMI Institute was acquired by ISACA.

CMMI topics

Representation

In version 1.3 CMMI existed in two representations: continuous and staged.
The continuous representation is designed to allow the user to focus on the specific processes that are considered important for the organization's immediate business objectives, or those to which the organization assigns a high degree of risk. The staged representation is designed to provide a standard sequence of improvements, and can serve as a basis for comparing the maturity of different projects and organizations. The staged representation also provides for an easy migration from the SW-CMM to CMMI. In version 2.0 this separation of representations was cancelled and there is now only one cohesive model.

Model framework (v1.3)

Depending on the area of interest (acquisition, services, development) used, the process areas it contains will vary. Process areas are the areas that will be covered by the organization's processes. Seventeen CMMI core process areas are present for all CMMI areas of interest in version 1.3.

Maturity levels for services

The process areas below and their maturity levels are listed for the CMMI for Services model:

Maturity Level 2 – Managed
CM – Configuration Management
MA – Measurement and Analysis
PPQA – Process and Product Quality Assurance
REQM – Requirements Management
SAM – Supplier Agreement Management
SD – Service Delivery
WMC – Work Monitoring and Control
WP – Work Planning

Maturity Level 3 – Defined
CAM – Capacity and Availability Management
DAR – Decision Analysis and Resolution
IRP – Incident Resolution and Prevention
IWM – Integrated Work Management
OPD – Organizational Process Definition
OPF – Organizational Process Focus
OT – Organizational Training
RSKM – Risk Management
SCON – Service Continuity
SSD – Service System Development
SST – Service System Transition
STSM – Strategic Service Management

Maturity Level 4 – Quantitatively Managed
OPP – Organizational Process Performance
QWM – Quantitative Work Management

Maturity Level 5 – Optimizing
CAR – Causal Analysis and Resolution
OPM – Organizational Performance Management

Models (v1.3)

CMMI best practices are published in documents called models, each of which addresses a different area of interest. Version 1.3 provides models for three areas of interest: development, acquisition, and services.
CMMI for Development (CMMI-DEV), v1.3 was released in November 2010. It addresses product and service development processes.
CMMI for Acquisition (CMMI-ACQ), v1.3 was released in November 2010. It addresses supply chain management, acquisition, and outsourcing processes in government and industry.
CMMI for Services (CMMI-SVC), v1.3 was released in November 2010. It addresses guidance for delivering services within an organization and to external customers.

Model (v2.0)

In version 2.0 DEV, ACQ and SVC were merged into a single model where each process area potentially has a specific reference to one or more of these three aspects. Trying to keep up with the industry, the model also makes explicit reference to agile aspects in some process areas.

Some key differences between the v1.3 and v2.0 models are given below; this is not an exhaustive list.
"Process Areas" have been replaced with "Practice Areas (PAs)". The latter are arranged by levels, not "Specific Goals".
Each PA is composed of a "core" (i.e., a generic and terminology-free description) and a "context-specific" (i.e., a description from the perspective of Agile/Scrum, development, services, etc.) section.
Since compliance with all practices is now compulsory, the "Expected" section has been removed.
"Generic Practices" have been put under a new area called "Governance and Implementation Infrastructure", while "Specific practices" have been omitted. Emphasis on ensuring implementation of PA's and that these are practised continuously until they become a "habit". All maturity levels focus on the keyword "performance". Two and five optional PA's from "Safety" and "Security" purview have been included. PCMM process areas have been merged. Appraisal An organization cannot be certified in CMMI; instead, an organization is appraised. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1–5) or a capability level achievement profile. Many organizations find value in measuring their progress by conducting an appraisal. Appraisals are typically conducted for one or more of the following reasons: To determine how well the organization's processes compare to CMMI best practices, and to identify areas where improvement can be made To inform external customers and suppliers of how well the organization's processes compare to CMMI best practices To meet the contractual requirements of one or more customers Appraisals of organizations using a CMMI model must conform to the requirements defined in the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals, A, B and C, which focus on identifying improvement opportunities and comparing the organization's processes to CMMI best practices. Of these, class A appraisal is the most formal and is the only one that can result in a level rating. Appraisal teams use a CMMI model and ARC-conformant appraisal method to guide their evaluation of the organization and their reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan improvements for the organization. The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal method that meets all of the ARC requirements. Results of a SCAMPI appraisal may be published (if the appraised organization approves) on the CMMI Web site of the SEI: Published SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability Determination), assessments etc. This approach promotes that members of the EPG and PATs be trained in the CMMI, that an informal (SCAMPI C) appraisal be performed, and that process areas be prioritized for improvement. More modern approaches, that involve the deployment of commercially available, CMMI-compliant processes, can significantly reduce the time to achieve compliance. SEI has maintained statistics on the "time to move up" for organizations adopting the earlier Software CMM as well as CMMI. These statistics indicate that, since 1987, the median times to move from Level 1 to Level 2 is 23 months, and from Level 2 to Level 3 is an additional 20 months. Since the release of the CMMI, the median times to move from Level 1 to Level 2 is 5 months, with median movement to Level 3 another 21 months. These statistics are updated and published every six months in a maturity profile. The Software Engineering Institute's (SEI) team software process methodology and the use of CMMI models can be used to raise the maturity level. A new product called Accelerated Improvement Method (AIM) combines the use of CMMI and the TSP. Security To address user security concerns, two unofficial security guides are available. 
Considering the Case for Security Content in CMMI for Services has one process area, Security Management. Security by Design with CMMI for Development, Version 1.3 has the following process areas:
OPSD – Organizational Preparedness for Secure Development
SMP – Secure Management in Projects
SRTS – Security Requirements and Technical Solution
SVV – Security Verification and Validation
While they do not affect maturity or capability levels, these process areas can be reported in appraisal results.

Applications

The SEI published a study showing that 60 organizations measured increases of performance in the categories of cost, schedule, productivity, quality and customer satisfaction. The median increase in performance varied between 14% (customer satisfaction) and 62% (productivity). However, the CMMI model mostly deals with what processes should be implemented, and not so much with how they can be implemented. These results do not guarantee that applying CMMI will increase performance in every organization. A small company with few resources may be less likely to benefit from CMMI; this view is supported by the process maturity profile (page 10). Of the small organizations (<25 employees), 70.5% are assessed at level 2: Managed, while 52.8% of the organizations with 1,001–2,000 employees are rated at the highest level (5: Optimizing).

Turner & Jain (2002) argue that although it is obvious there are large differences between CMMI and agile software development, both approaches have much in common. They believe neither way is the 'right' way to develop software, but that there are phases in a project where one of the two is better suited. They suggest one should combine the different fragments of the methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum and CMMI brings more adaptability and predictability than either one alone. David J. Anderson (2005) gives hints on how to interpret CMMI in an agile manner.

CMMI Roadmaps, which are a goal-driven approach to selecting and deploying relevant process areas from the CMMI-DEV model, can provide guidance and focus for effective CMMI adoption. There are several CMMI roadmaps for the continuous representation, each with a specific set of improvement goals. Examples are the CMMI Project Roadmap, the CMMI Product and Product Integration Roadmaps, and the CMMI Process and Measurements Roadmaps. These roadmaps combine the strengths of both the staged and the continuous representations.

The combination of the project management technique earned value management (EVM) with CMMI has been described (Solomon, 2002). In a similar vein, Extreme Programming (XP), a software engineering method, has been evaluated against CMM/CMMI (Nawrocki et al., 2002). For example, the XP requirements management approach, which relies on oral communication, was evaluated as not compliant with CMMI.

Organizations can be appraised against CMMI using two different approaches: staged and continuous. The staged approach yields appraisal results as one of five maturity levels. The continuous approach yields one of four capability levels. The differences in these approaches are felt only in the appraisal; the best practices are equivalent, resulting in equivalent process improvement results.
See also Capability Immaturity Model Capability Maturity Model Enterprise Architecture Assessment Framework LeanCMMI People Capability Maturity Model Process area (CMMI) Software Engineering Process Group References Official sources SEI reports SEI Web pages SCAMPI Appraisal Results. The complete SEI list of published SCAMPI appraisal results. External links Maturity models Software development process Standards Systems engineering Carnegie Mellon University software
65781810
https://en.wikipedia.org/wiki/SONiC%20%28operating%20system%29
SONiC (operating system)
SONiC (Software for Open Networking in the Cloud) is a free and open source network operating system based on Linux and developed by Microsoft and the Open Compute Project. SONiC includes the networking software components necessary for a fully functional L3 device and was designed to meet the requirements of a cloud data center. It allows cloud operators to share the same software stack across hardware from different switch vendors.

Overview

SONiC was developed and open sourced by Microsoft in 2017. The software decouples network software from the underlying hardware and is built on the Switch Abstraction Interface (SAI) switch-programming API. It runs on network switches and ASICs from multiple vendors. Notable supported network features include Border Gateway Protocol (BGP), remote direct memory access (RDMA), QoS, and various other Ethernet/IP technologies.

The SONiC community includes cloud providers, service providers, and silicon and component suppliers, as well as networking hardware OEMs and ODMs. It has more than 850 members. Companies using and/or contributing to SONiC include Alibaba Group, Arista Networks, Broadcom, Dell, Cisco Systems, Comcast, Juniper, Nokia, Nvidia-Mellanox and VMware. SONiC is used in Microsoft's Azure networking services.

The SONiC network operating system was presented at the ACM SIGCOMM 2nd Asia-Pacific Workshop on Networking 2018 (APNET 2018) in Beijing, China. The source code is licensed under a mix of open source licenses including the GNU General Public License and the Apache License, and is available on GitHub.

See also
Open Compute Project

References

Further reading
SONiC: Software for Open Networking in the Cloud

External links
 – Documentation
 – Scripts which perform an installable binary image build for SONiC

Computing platforms
Free and open-source software
Linux
Microsoft free software
Microsoft operating systems
Network operating systems
Software using the Apache license
Software using the GPL license
2017 software
4949847
https://en.wikipedia.org/wiki/Data%20Protection%20API
Data Protection API
DPAPI (Data Protection Application Programming Interface) is a simple cryptographic application programming interface available as a built-in component in Windows 2000 and later versions of Microsoft Windows operating systems. In theory, the Data Protection API can enable symmetric encryption of any kind of data; in practice, its primary use in the Windows operating system is to perform symmetric encryption of asymmetric private keys, using a user or system secret as a significant contribution of entropy. A detailed analysis of DPAPI's inner workings was published in 2011 by Bursztein et al.

For nearly all cryptosystems, one of the most difficult challenges is "key management" - in part, how to securely store the decryption key. If the key is stored in plain text, then any user that can access the key can access the encrypted data. If the key is to be encrypted, another key is needed, and so on. DPAPI allows developers to encrypt keys using a symmetric key derived from the user's logon secrets, or in the case of system encryption, using the system's domain authentication secrets.

The DPAPI keys used for encrypting the user's RSA keys are stored under the %APPDATA%\Microsoft\Protect\{SID} directory, where {SID} is the Security Identifier of that user. The DPAPI key is stored in the same file as the master key that protects the user's private keys. It usually is 64 bytes of random data.

Security properties

DPAPI doesn't store any persistent data for itself; instead, it simply receives plaintext and returns ciphertext (or vice versa).

DPAPI security relies upon the Windows operating system's ability to protect the master key and RSA private keys from compromise, which in most attack scenarios is most highly reliant on the security of the end user's credentials. A main encryption/decryption key is derived from the user's password by the PBKDF2 function. Individual binary large objects can be encrypted in such a way that a salt is added and/or an external user-prompted password (aka "Strong Key Protection") is required. The use of a salt is a per-implementation option - i.e. under the control of the application developer - and is not controllable by the end user or system administrator.

Delegated access can be given to keys through the use of a COM+ object. This enables IIS web servers to use DPAPI.

Use of DPAPI by Microsoft software

While not universally implemented in all Microsoft products, the use of DPAPI by Microsoft products has increased with each successive version of Windows. However, many applications from Microsoft and third-party developers still prefer to use their own protection approach or have only recently switched to use DPAPI. For example, Internet Explorer versions 4.0-6.0, Outlook Express and MSN Explorer used the older Protected Storage (PStore) API to store saved credentials such as passwords, whereas Internet Explorer 7 protects stored user credentials using DPAPI.
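At the programming level, applications reach DPAPI through the Win32 functions CryptProtectData and CryptUnprotectData. The following is a minimal C round-trip sketch; error handling is reduced to return checks, the optional entropy, prompt, and flag parameters are left at their simplest values, and the MSVC-specific pragma is only one way to link the required library.

#include <windows.h>
#include <dpapi.h>   /* CryptProtectData / CryptUnprotectData */
#include <stdio.h>
#pragma comment(lib, "crypt32.lib")  /* MSVC; otherwise link crypt32 */

int main(void)
{
    BYTE secret[] = "example secret";
    DATA_BLOB in = { sizeof(secret), secret };
    DATA_BLOB enc = { 0 }, dec = { 0 };

    /* Encrypt under the calling user's logon secret. Passing
       CRYPTPROTECT_LOCAL_MACHINE instead of 0 for dwFlags would tie
       the blob to the machine rather than the user. */
    if (!CryptProtectData(&in, L"demo", NULL, NULL, NULL, 0, &enc))
        return 1;

    /* Any process running as the same user can decrypt the blob. */
    if (!CryptUnprotectData(&enc, NULL, NULL, NULL, NULL, 0, &dec))
        return 1;

    printf("%s\n", (char *)dec.pbData);

    /* Output buffers are allocated by the API and freed by the caller. */
    LocalFree(enc.pbData);
    LocalFree(dec.pbData);
    return 0;
}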
Beyond saved credentials in Internet Explorer, Microsoft products and components that rely on DPAPI include:
Picture password, PIN and fingerprint in Windows 8
Encrypting File System in Windows 2000 and later
SQL Server Transparent Data Encryption (TDE) Service Master Key encryption
Internet Explorer 7, both in the standalone version available for Windows XP and in the integrated versions available in Windows Vista and Windows Server 2008
Microsoft Edge
Windows Mail and Windows Live Mail
Outlook for S/MIME
Internet Information Services for SSL/TLS
Windows Rights Management Services client v1.1 and later
Windows 2000 and later for EAP/TLS (VPN authentication) and 802.1x (WiFi authentication)
Windows XP and later for Stored User Names and Passwords (aka Credential Manager)
.NET Framework 2.0 and later for System.Security.Cryptography.ProtectedData
Microsoft.Owin (Katana) authentication by default when self hosting (including cookie authentication and OAuth tokens)

References

External links
Windows Data Protection API (DPAPI) white paper by NAI Labs
Data encryption with DPAPI
How To: Use DPAPI (User Store) from ASP.NET 1.1 with Enterprise Services
System.Security.Cryptography.ProtectedData in .NET Framework 2.0 and later
Discussion of the use of MS BackupKey Remote Protocol by DPAPI to protect user secrets
The Windows PStore

Microsoft application programming interfaces
Cryptographic software
Microsoft Windows security technology
Windows 2000
28844498
https://en.wikipedia.org/wiki/Cyberwarfare%20in%20the%20United%20States
Cyberwarfare in the United States
Cyberwarfare is the use of computer technology to disrupt the activities of a state or organization, especially the deliberate attacking of information systems for strategic or military purposes. As a major developed economy, the United States is highly dependent on the Internet and therefore greatly exposed to cyber attacks. At the same time, the United States has substantial capabilities in both defense and power projection thanks to comparatively advanced technology and a large military budget. Cyber warfare presents a growing threat to physical systems and infrastructures that are linked to the internet. Malicious hacking from domestic or foreign enemies remains a constant threat to the United States. In response to these growing threats, the United States has developed significant cyber capabilities.

The United States Department of Defense recognizes the use of computers and the Internet to conduct warfare in cyberspace as a threat to national security, but also as a platform for attack. The United States Cyber Command centralizes command of cyberspace operations, organizes existing cyber resources and synchronizes defense of U.S. military networks. It is an armed forces Unified Combatant Command. A 2021 report by the International Institute for Strategic Studies placed the United States as the world's foremost cyber superpower, taking into account its cyber offense, defense, and intelligence capabilities.

The Department of Defense Cyber Strategy

In April 2015, the U.S. Department of Defense (DoD) published its latest Cyber Strategy, building upon the previous DoD Strategy for Operating in Cyberspace published in July 2011. The DoD Cyber Strategy focuses on building capabilities to protect, secure, and defend its own DoD networks, systems and information; defend the nation against cyber attacks; and support contingency plans. This includes being prepared to operate and continue to carry out missions in environments impacted by cyber attacks.

The DoD outlines three cyber missions:
Defend DoD networks, systems, and information.
Defend the United States and its interests against cyber attacks of significant consequence.
Provide integrated cyber capabilities to support military operations and contingency plans.

In addition, the Cyber Strategy emphasizes the need to build bridges to the private sector, so that the best talent and technology the United States has to offer is at the DoD's disposal.

The Five Pillars

The five pillars form the basis of the Department of Defense's strategy for cyber warfare. The first pillar is to recognize that the new domain for warfare is cyberspace and that it is similar to the other elements in the battlespace. The key objectives of this pillar are to build up technical capabilities and accelerate research and development to provide the United States with a technological advantage. The second pillar is proactive defense, as opposed to passive defense. Two examples of passive defense are computer hygiene and firewalls. The balance of attacks requires active defense, using sensors to provide a rapid response to detect and stop a cyber attack on a computer network. This would provide military tactics to backtrace, hunt down and attack an enemy intruder. The third pillar is critical infrastructure protection (CIP), ensuring the protection of critical infrastructure by developing warning systems to anticipate threats.
The fourth pillar is the use of collective defense, which would provide the capability of early detection and incorporate it into the cyber warfare defense structure. The goal of this pillar is to explore all options in the face of a conflict, and to minimize loss of life and destruction of property. The fifth pillar is building and maintaining international alliances and partnerships to deter shared threats, and remaining adaptive and flexible to build new alliances as required. This is focused on "priority regions, to include the Middle East, Asia-Pacific, and Europe".

Trump Administration's National Cyber Strategy

Shortly after his election, U.S. President Donald Trump pledged to deliver an extensive plan to improve U.S. cybersecurity within 90 days of his inauguration. Three weeks after the designated 90-day mark, he signed an executive order that claimed to strengthen government networks. Under the new executive order, federal-agency leaders are to be held responsible for breaches on their networks, and federal agencies are to follow the National Institute of Standards and Technology Framework for Improving Critical Infrastructure Cybersecurity in consolidating risk management practices. In addition, the federal departments were to examine the cyber defense abilities of agencies within 90 days, focusing on "risk mitigation and acceptance choices" and evaluating needs for funding and sharing technology across departments. Experts in cybersecurity later claimed that the order was "not likely" to have a major impact.

In September, President Trump signed the National Cyber Strategy, "the first fully articulated cyber strategy for the United States since 2003." John Bolton, the National Security Advisor, claimed in September 2018 that the Trump administration's new "National Cyber Strategy" had replaced restrictions on the use of offensive cyber operations with a legal regime that enables the Defense Department and other relevant agencies to operate with greater authority to penetrate foreign networks to deter hacks on U.S. systems. Describing the new strategy as an endeavor to "create powerful deterrence structures that persuade the adversary not to strike in the first place," Bolton added that decision-making for launching attacks would be moved down the chain of command from requiring the president's approval.

The Defense Department, in its strategy document released in September 2018, further announced that it would "defend forward" U.S. networks by disrupting "malicious cyber activity at its source" and endeavor to "ensure there are consequences for irresponsible cyber behavior" by "preserving peace through strength."

The National Cyber Strategy has also garnered criticism that evaluating acts of cyberwarfare against the United States remains ambiguous, as current U.S. law does not specifically define what constitutes an illegal cyber act that transcends justifiable computer activity. The legal status of most information security research in the United States is governed by the 1986 Computer Fraud and Abuse Act, which has been derided as "poorly drafted and arbitrarily enforced" for enabling prosecution of useful information security research methods such as Nmap or Shodan. Because even needed services fall under the prohibition, top-level information security experts find it challenging to improve the infrastructure of cyberdefense.
Cyberattack as an act of war

In 2011, the White House published an "International Strategy for Cyberspace" that reserved the right to use military force in response to a cyberattack. In 2013, the Defense Science Board, an independent advisory committee to the U.S. Secretary of Defense, went further, stating that "The cyber threat is serious, with potential consequences similar in some ways to the nuclear threat of the Cold War," and recommending, in response to the "most extreme case" (described as a "catastrophic full spectrum cyber attack"), that "Nuclear weapons would remain the ultimate response and anchor the deterrence ladder."

Attacks on other nations

Iran

In June 2010, Iran was the victim of a cyber attack when its nuclear facility in Natanz was infiltrated by the cyber-worm 'Stuxnet', said to be the most advanced piece of malware ever discovered, which significantly raised the profile of cyberwarfare. It destroyed perhaps over 1,000 nuclear centrifuges and, according to a Business Insider article, "[set] Tehran's atomic program back by at least two years." Despite a lack of official confirmation, Gary Samore, White House Coordinator for Arms Control and Weapons of Mass Destruction, made a public statement in which he said, "we're glad they [the Iranians] are having trouble with their centrifuge machine and that we—the US and its allies—are doing everything we can to make sure that we complicate matters for them", offering "winking acknowledgement" of US involvement in Stuxnet.

China

In 2013, Edward Snowden, a former systems administrator for the Central Intelligence Agency (CIA) and a counterintelligence trainer at the Defense Intelligence Agency (DIA), revealed that the United States government had hacked into Chinese mobile phone companies to collect text messages and had spied on Tsinghua University, one of China's biggest research institutions, as well as home to one of China's six major backbone networks, the China Education and Research Network (CERNET), from where internet data from millions of Chinese citizens could be mined. He said U.S. spy agencies had been watching China and Hong Kong for years.

According to classified documents provided by Edward Snowden, the National Security Agency (NSA) has also infiltrated the servers in the headquarters of Huawei, China's largest telecommunications company and the largest telecommunications equipment maker in the world. The plan was to exploit Huawei's technology so that when the company sold equipment to other countries—including both allies and nations that avoid buying American products—the NSA could roam through their computer and telephone networks to conduct surveillance and, if ordered by the president, offensive cyberoperations.

Russia

In June 2019, Russia said that its electrical grid could be under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.

Others

In 1982, a computer control system stolen from a Canadian company by Soviet spies caused a Soviet gas pipeline to explode. It has been alleged that code for the control system had been modified by the CIA to include a logic bomb which changed the pump speeds to cause the explosion, but this is disputed. A 1 April 1991 article in InfoWorld Magazine, "Meta-Virus Set to Unleash Plague on Windows 3.0 Users" by John Gantz, was purported to be an extremely early example of cyber warfare between two countries.
In fact the "AF/91 virus" was an April Fools Joke that was misunderstood and widely re-reported as fact by credulous media. Cyber threat information sharing The Pentagon has had an information sharing arrangement, the Defense Industrial Base Cybersecurity and Information Assurance (DIBCIA) program, in place with some private defense contractors since 2007 to which access was widened in 2012. A number of other information sharing initiatives such as the Cyber Intelligence Sharing and Protection Act (CISPA) and Cybersecurity Information Sharing Act (CISA) have been proposed, but failed for various reasons including fears that they could be used to spy on the general public. United States Cyber Command The United States Cyber Command (USCYBERCOM) is a United States Armed Forces Unified Combatant Command. USCYBERCOM plans, coordinates, integrates, synchronizes and conducts activities to: defend Department of Defense information networks and; prepare to conduct "full spectrum military cyberspace operations" to ensure US/Allied freedom of action in cyberspace and deny the same to adversaries. Army The Army Cyber Command (ARCYBER) is an Army component command for the U.S. Cyber Command. ARCYBER has the following components: Army Network Enterprise Technology Command / 9th Army Signal Command Portions of 1st Information Operations Command (Land) United States Army Intelligence and Security Command will be under the operational control of ARCYBER for cyber-related actions. New cyber authorities have been granted under National Security Presidential Memorandum (NSPM) 13; persistent cyber engagements at Cyber command are the new norm for cyber operations. Marine Corps United States Marine Corps Forces Cyberspace Command is a functional formation of the United States Marine Corps to protect infrastructure from cyberwarfare. Air Force The Sixteenth Air Force (16 AF) is the United States Air Force component of United States Cyber Command (USCYBERCOM). It has the following components: 67th Network Warfare Wing 688th Information Operations Wing 689th Combat Communications Wing The F-15 and C-130 systems are being hardened from cyber attack as of 2019. Navy The Navy Cyber Forces (CYBERFOR) is the type of some commanders for the U.S. Navy's global cyber workforce. The headquarters is located at Joint Expeditionary Base Little Creek-Fort Story. CYBERFOR provides forces and equipment in cryptology/signals intelligence, cyber, electronic warfare, information operations, intelligence, networks, and space. In September 2013, the United States Naval Academy will offer undergraduate students the opportunity, to major in Cyber Operations for the United States. Fleet Cyber Command is an operating force of the United States Navy responsible for the Navy's cyber warfare programs. Tenth Fleet is a force provider for Fleet Cyber Command. The fleet components are: Naval Network Warfare Command Navy Cyber Defense Operations Command Naval Information Operation Commands Combined Task Forces Timeline Systems in the US military and private research institutions were penetrated from March 1998 for almost two years in an incident called Moonlight Maze. The United States Department of Defense traced the trail back to a mainframe computer in the former Soviet Union but the sponsor of the attacks is unknown and Russia denies any involvement. Titan Rain was the U.S. government's designation given to a series of coordinated attacks on American computer systems since 2003. 
The attacks were labeled as Chinese in origin, although their precise nature (i.e., state-sponsored espionage, corporate espionage, or random hacker attacks) and the attackers' real identities (i.e., masked by proxy, zombie computer, spyware/virus infected) remain unknown.

In 2007, the United States government suffered "an espionage Pearl Harbor" in which an unknown foreign power ... broke into all of the high tech agencies, all of the military agencies, and downloaded terabytes of information.

In 2008, a hacking incident occurred at a U.S. military facility in the Middle East. United States Deputy Secretary of Defense William J. Lynn III had the Pentagon release a document which described a "malicious code" on a USB flash drive that spread undetected on both classified and unclassified Pentagon systems, establishing a digital beachhead from which data could be transferred to servers under foreign control. "It was a network administrator's worst fear: a rogue program operating silently, poised to deliver operational plans into the hands of an unknown adversary. This ... was the most significant breach of U.S. military computers ever and it served as an important wake-up call", Lynn wrote in an article for Foreign Affairs.

Operation Buckshot Yankee was conducted by the United States in response to the 2008 breach, which was allegedly carried out by Russia. The operation lasted three years, starting in October 2008 when the breach was first detected. It included attempts to recognize and mitigate the malicious code (Agent.btz), which had spread to military computers around the world. The team conducting the operation requested permission to use more offensive means of combating the code, but this was denied by senior officials. Operation Buckshot Yankee was a catalyst for the formation of Cyber Command.

On 9 February 2009, the White House announced that it would conduct a review of the nation's cyber security to ensure that the Federal government of the United States' cyber security initiatives are appropriately integrated, resourced and coordinated with the United States Congress and the private sector.

On 1 April 2009, U.S. lawmakers pushed for the appointment of a White House cyber security "czar" to dramatically escalate U.S. defenses against cyber attacks, crafting proposals that would empower the government to set and enforce security standards for private industry for the first time.

On 7 April 2009, the Pentagon announced it had spent more than $100 million in the preceding six months responding to and repairing damage from cyber attacks and other computer network problems.

From December 2009 to January 2010, a cyber attack, dubbed Operation Aurora, was launched from China against Google and over 20 other companies. Google said the attacks originated from China and that it would "review the feasibility" of its business operations in China following the incident. According to Google, at least 20 other companies in various sectors had been targeted by the attacks. McAfee spokespersons claimed that "this is the highest profile attack of its kind that we have seen in recent memory."

In February 2010, the United States Joint Forces Command released a study which included a summary of the threats posed by the internet: "The open and free flow of information favored by the West will allow adversaries an unprecedented ability to gather intelligence."
On 19 June 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting Cyberspace as a National Asset Act of 2010", which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the President emergency powers over parts of the Internet. However, all three co-authors of the bill issued a statement that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".

In August 2010, the U.S. for the first time publicly warned about the Chinese military's use of civilian computer experts in clandestine cyber attacks aimed at American companies and government agencies. The Pentagon also pointed to an alleged China-based computer spying network dubbed GhostNet that had been revealed in a research report the previous year. The Pentagon stated that the People's Liberation Army was using "information warfare units" to develop viruses to attack enemy computer systems and networks, and that those units include civilian computer professionals. Commander Bob Mehal would monitor the PLA's buildup of its cyberwarfare capabilities and "will continue to develop capabilities to counter any potential threat."

In 2010, American General Keith B. Alexander endorsed talks with Russia over a proposal to limit military attacks in cyberspace, representing a significant shift in U.S. policy.

In 2011, as part of the Anonymous attack on HBGary Federal, information was revealed about private companies such as Endgame Systems, which designs offensive software for the Department of Defense. It was shown that Endgame Systems job applicants had previously "managed team of 15 persons, responsible for coordinating offensive computer network operations for the United States Department of Defense and other federal agencies."

In October 2012, the Pentagon was to host contractors who "want to propose revolutionary technologies for understanding, planning and managing cyberwarfare. It is part of an ambitious program that the Defense Advanced Research Projects Agency, or DARPA, calls Plan X, and the public description talks about 'understanding the cyber battlespace', quantifying 'battle damage' and working in DARPA's 'cyberwar laboratory.'"

Starting in September 2012, denial of service attacks were carried out against the New York Stock Exchange and a number of banks including J.P. Morgan Chase. Credit for these attacks was claimed by a hacktivist group called the Qassam Cyber Fighters, who have labeled the attacks Operation Ababil. The attacks had been executed in several phases and were restarted in March 2013.

In 2013, the first Tallinn Manual on the International Law Applicable to Cyber Warfare was published. This publication was the result of an independent study to examine and review laws governing cyber warfare, sponsored by the NATO Cooperative Cyber Defence Centre of Excellence in 2009.

In February 2013, White House Presidential Executive Order (E.O.) 13636, "Improving Critical Infrastructure Cybersecurity", was published. This executive order highlighted the policies needed to improve and coordinate cybersecurity, identification of critical infrastructure, reduction of cyber risk, and information sharing with the private sector, and to ensure that civil liberties and privacy protections are incorporated.

In January 2014, White House Presidential Policy Directive 28 (PPD-28) on "Signals Intelligence Activities" was published.
This presidential policy directive highlighted the principles, limitations of use, process of collection, safeguarding of personal information, and transparency related to the collection and review of cyber intelligence signal activities.

In August 2014, "gigabytes" of sensitive data were reported stolen from JPMorgan Chase (see 2014 JPMorgan Chase data breach), and the company's internal investigation was reported to have found that the data was sent to a "major Russian city." The FBI was said to be investigating whether the breach was in retaliation for sanctions the United States had imposed on Russia in relation to the 2014 Russian military intervention in Ukraine.

On 29 May 2014, iSIGHT Partners, a global provider of cyber threat intelligence, uncovered a "long-term" and "unprecedented" cyber espionage campaign that was "the most elaborate cyber espionage campaign using social engineering that has been uncovered to date from any nation". Labelled "Operation Newscaster", it targeted senior U.S. military and diplomatic personnel, congresspeople, journalists, lobbyists, think tankers and defense contractors, including a four-star admiral.

In December 2014, Cylance Inc. published an investigation of the so-called "Operation Cleaver", which targeted over 50 unnamed leading enterprises around the world, including in the United States. The Federal Bureau of Investigation tacitly acknowledged the operation and "warned businesses to stay vigilant and to report any suspicious activity spotted on the companies' computer systems".

In December 2014, in response to a hack on the US-based company Sony (see Sony Pictures hack) believed to be perpetrated by North Korea, the US government created new economic sanctions on North Korea and listed the country as a state sponsor of terrorism. After the hack, there was an internet blackout over most of North Korea allegedly caused by the US, but there was no definitive evidence to support that claim.

In January 2015, the terrorist group ISIS hacked United States Central Command and took over its Twitter and YouTube accounts. They distributed sensitive information obtained during the attack on various social media platforms.

In April 2015, the Department of Defense Cyber Strategy was updated and published. The original DoD Strategy for Operating in Cyberspace had been published in July 2011.

In 2015, the United States Office of Personnel Management (OPM) was the victim of what has been described by federal officials as among the largest breaches of government data in the history of the United States, in which an estimated 21.5 million records were stolen. Information targeted in the breach included personally identifiable information such as Social Security numbers, as well as names, dates and places of birth, and addresses, and likely involved the theft of detailed security-clearance-related background information.

In June 2015, the US Department of Defense (DoD) included a chapter dedicated to cyber warfare in the DoD Law of War Manual. See the Cyber Warfare section on p. 994.

In 2016, Cyber Command mounted computer-network attacks on ISIS under Operation Glowing Symphony with the intent to disrupt internal communication, manipulate data, and undermine confidence in the group's security. A particular emphasis was placed on locking key figures out of their accounts, deleting files of propaganda, and making it all look like general IT trouble instead of an intentional attack.
This operation prompted an internal debate in the American government about whether or not to alert their allies that they would be attacking servers located within other countries.

In March 2018, the Office of Foreign Assets Control sanctioned two Russian intelligence agencies, the Federal Security Service (FSB) and the Main Intelligence Directorate (GRU), for committing "destructive cyber-attacks." The attacks include the NotPetya attack, an assault that the White House and the British government stated in February 2018 had been conducted by the Russian military, and which the United States Treasury described as "the most destructive and costly cyber-attack in history."

In March 2018, the United States Justice Department charged nine Iranians with stealing scientific secrets on behalf of Iran's Revolutionary Guard Corps. The defendants "stole more than 31 terabytes of academic data and intellectual property from universities, and email accounts of employees at private sector companies, government agencies, and non-governmental organizations."

In September 2018, the United States Justice Department published a criminal complaint against Park Jin Hyok, a professional hacker alleged to be working for North Korea's military intelligence bureau, for his role in three cyber-attacks: the attack against Sony Pictures in 2014, the theft of $81m from the central bank of Bangladesh in 2016, and the WannaCry 2.0 ransomware attack against hundreds of thousands of computers.

In September 2018, the White House "authorized offensive cyber operations" against foreign threats as a result of loosened restrictions on the use of digital weapons in line with the president's directive, National Security Presidential Memorandum 13 (NSPM 13). This allows the military to carry out such attacks with a shortened approval process.

In October 2018, the United States Cyber Command launched the still-classified Operation Synthetic Theology. A team of experts was deployed to Macedonia, Ukraine, and Montenegro to identify Russian agents interfering in the election. The team was also gathering intelligence on Russia's cyber capabilities and attacking the Internet Research Agency, a "Kremlin-backed troll farm in St. Petersburg".

Beginning at least by March 2019, persistent cyber operations were applied by the United States against Russia's power grid, seemingly per National Security Presidential Memorandum 13 (September 2018).

In June 2019, White House National Security Adviser John Bolton announced that U.S. offensive cyber operations would be expanded to include "economic cyber intrusions". These comments appear to reference China's alleged theft of information and data from U.S. corporations.

In June 2019, President Trump ordered a cyber attack against Iranian weapons systems in retaliation for the shooting down of a US drone in the Strait of Hormuz and two mine attacks on oil tankers. The attacks disabled Iranian computer systems controlling Iran's rocket and missile launchers. Iran's Islamic Revolutionary Guard Corps (IRGC) was specifically targeted.
See also Air Force Cyber Command (Provisional) Computer insecurity Cyber spying Cyberstrategy 3.0 Cyber terrorism Cyberwarfare by Russia Defense Information Systems Network Denial-of-service attack Electronic warfare Espionage Hacker (computer security) iWar Information warfare List of cyber attack threat trends Penetration testing Proactive Cyber Defence Siberian pipeline sabotage Signals intelligence Chinese Intelligence Operations in the United States Chinese Information Operations and Warfare Military-digital complex Economic and Industrial Espionage U.S. Cyber Command Army Cyber Command Fleet Cyber Command Air Forces Cyber Command Marine Corps Forces Cyberspace Command References Further reading Obama Order Sped Up Wave of Cyberattacks Against Iran with diagram, 1 June 2012 Electronic warfare Hacking (computer security) Military technology Military of the United States Internet in the United States
65489784
https://en.wikipedia.org/wiki/Yanxi%20Liu
Yanxi Liu
Yanxi Liu () is a Chinese-American computer scientist specializing in computer vision. She is known for her research on computational symmetry, computational regularity, and the uses of symmetry and regularity in computer vision, as well as on feature selection for motion tracking. She is a professor of computer science at Pennsylvania State University, where she directs the Motion Capture Lab for Smart Health and co-directs the Lab for Perception, Action and Cognition.

Education and career

Liu has a bachelor's degree from Beijing Normal University. She earned a Ph.D. at the University of Massachusetts Amherst in 1990. Her dissertation, Symmetry Groups in Robotic Assembly Planning, was supervised by Robin Popplestone. After postdoctoral research at LIFIA/IMAG, part of the Laboratoire d'Informatique de Grenoble in Grenoble, France, and at DIMACS at Rutgers University, she became a research assistant professor at the University of Massachusetts Amherst in 1993. She moved to the Robotics Institute of Carnegie Mellon University in 1996 as a research scientist, and worked there for ten years before moving to Pennsylvania State University.

Book

With Hagit Hel-Or, Craig S. Kaplan, and Luc Van Gool, Liu is the coauthor of the book Computational Symmetry in Computer Vision and Computer Graphics (Now Publishers, 2009).

Recognition

Liu was a keynote speaker at DICTA 2016, and the program chair of the 2017 Conference on Computer Vision and Pattern Recognition (CVPR).

References

External links
Home page

Year of birth missing (living people)
Living people
American computer scientists
Chinese computer scientists
Women computer scientists
Computer vision researchers
Beijing Normal University alumni
University of Massachusetts Amherst alumni
Pennsylvania State University faculty
809161
https://en.wikipedia.org/wiki/Algonquin%20College
Algonquin College
Algonquin College of Applied Arts and Technology is a publicly funded English-language college located in Ottawa, Ontario, Canada. The college serves the National Capital Region and the outlying areas of Eastern Ontario, Western Quebec, and Upstate New York. The college has three campuses, all in Ontario: a primary campus located in Ottawa, and secondary campuses located in Perth and Pembroke. The college offers bachelor's degrees, diplomas, and certificates in a range of disciplines and specialties. It has been ranked among the Top 50 Research Colleges in Canada and has been recognized as one of Canada's top innovation leaders. The enabling legislation is the Ministry of Training, Colleges and Universities Act. It is a member of Polytechnics Canada.

History

The college was established during the formation of Ontario's college system in 1967. Colleges of Applied Arts and Technology were established on May 21, 1965, when the Ontario system of public colleges was created. The founding institutions were the Eastern Ontario Institute of Technology (established in 1957) and the Ontario Vocational Centre Ottawa (established in 1965 at the Woodroffe Campus and known as OVC).

The original 8-acre site on Woodroffe Avenue was donated to the city by Mr. and Mrs. Frank Ryan. The Ottawa architecture firm of Burgess, McLean & MacPhadyen designed the midcentury academic complex, with open-ended blocks alternately faced with long glass expanses in a semi-gambrel formation that make up the curtain walls, and precast aggregate panels. The corporate campus, or modernist academic acropolis, was a model that spread across North America in the early 1960s. The entrance is via a deeply recessed terrace that is overhung with small white ceramic tiles and vintage can lights. The long walls are bumped out to float over the foundation, and the foundation plantings keep the blocks from appearing stark.

The first Principal of the Ontario Vocational Centre (OVC) was Kenneth G. Shoultz. Principal Shoultz took on the leadership of OVC in 1965 after working as a technical studies teacher and then as an inspector for the Ontario Department of Education. K.G. Shoultz continued on as the first Dean of the Technical Centre after OVC was amalgamated with Algonquin College in 1967. Algonquin College is named after the Algonquin First Nations Peoples, who were the original inhabitants of the area.

In 1964, the Rideau Campus was established. "Satellite" campuses in Pembroke, Hawkesbury, Perth, Carleton Place and Renfrew were established in the late 1960s. The Vanier School of Nursing became a part of the Woodroffe Campus when nursing programs began to be offered at the college. In 1973, the School of Prescott-Russell joined the Algonquin family and the Colonel By Campus was created through the acquisition of St. Patrick's College.

With the creation of La Cité Collégiale, 1990 marked the beginning of Algonquin as an English college. The Hawkesbury campus was transferred to La Cité Collégiale, and the Renfrew, Colonel By, and Carleton Place campuses were progressively closed. The latest closure was in August 2002, when the Rideau Campus closed and its programs were moved to the Advanced Technology Centre on the Woodroffe Campus.

Expanding its academic purview, the college offers a variety of degree programs taught by faculty with a wide range of academic and technical experience.
This includes the Bachelor of Interior Design (Honours), Bachelor of Public Safety (Honours), Bachelor of Early Learning and Community Development (Honours), Bachelor of Commerce (E-Supply Chain Management), and several others.

Woodroffe and Pembroke Campus Expansion

The DARE District and AC Library

In 2016, Algonquin College launched a $44.9-million building renovation project set to be complete by spring/summer of 2018. This renovation took place in the college's original 'C' building, which houses most of the administration. The purpose of this significant renovation was to improve the campus library and to provide a range of collaborative spaces for students, staff, and faculty to grow and learn. The new building has been called the DARE District, with DARE standing for Discovery, Applied Research, and Entrepreneurship. The DARE District also holds the new Institute for Indigenous Entrepreneurship, which provides Indigenous Algonquin College students and alumni a collaborative space to access the resources they need in order to develop or create businesses. This renovation has contributed to the environmental sustainability of the college's research and innovation infrastructure by transforming the northern wing of C building into a high-performance green building.

ACCE and Robert C. Gillett Student Commons

Opened in the fall of 2011, the Algonquin Centre for Construction Excellence, designed by Edward J. Cuhaci & Associates Architects in joint venture with Diamond Schmitt Architects, houses 600 additional construction seats and provides space for thousands more students studying in related programs. The Leadership in Energy and Environmental Design (LEED) Platinum-certified building showcases a teaching laboratory for best practices in sustainable construction. The new facility integrates the relocated bus station and a new below-grade transit roadway (yet to be completed) to the main campus via a $4 million pedestrian bridge constructed across Woodroffe Avenue.

Opened in the fall of 2012, the Student Commons project is the result of a continued partnership between the College and its Students' Association. Unlike at most Ontario colleges, the Algonquin College Students' Association operates many College services, ranging from varsity athletics to the Algonquin Fitness Zone. Committed to securing additional social and study space for students, the SA Board of Directors, through consultation with its members, approved designating part of its activity fee to secure $30 million to fund the new Student Commons. Recognizing this opportunity to improve and centralize student support services, the College's Board of Governors approved the contribution of an additional $22 million in funding for the project.

The Algonquin College Mobile Learning Centre is a computer lab, designed by Edward J. Cuhaci & Associates Architects, that delivers a collaborative learning environment using mobile and cloud computing technology.

Algonquin College Waterfront Pembroke Campus

Opened in fall 2012, the expansion of the Pembroke Campus added more than 300 full-time student spaces. These spaces are housed in a modernist building located on the Ottawa River in Pembroke, Ontario. The new waterfront campus is seen as a new beginning for the College, the City of Pembroke, and all of Renfrew County. The new facility allows the College to grow and better meet the labour market needs of Renfrew County's employers well into the future.
Programs Algonquin focuses on the arts and technology, with an emphasis on applied theory and practical experience. There are over 19,000 full-time students in more than 180 programs. There are 155 Ontario college programs, 18 apprenticeship programs, 40 co-op programs, 6 collaborative degree programs and 6 bachelor's degree programs. Some of these degrees are offered through direct collaborative partnerships with Carleton University and the University of Ottawa. Algonquin offers the following bachelor's degree programs: Bachelor of Interior Design (Honours); Bachelor of Commerce (E-Supply Chain Management) (Honours); Bachelor of Hospitality and Tourism Management (Honours); Bachelor of Public Safety (Honours); Bachelor of Information Technology – Network Technology; Bachelor of Science in Nursing; Bachelor of Early Learning and Community Development (Honours); and Bachelor of Building Science (Honours). The college's Woodroffe Campus has a fully functional (though non-broadcast) television studio with an adjoining control room, located in N Building. This is reserved for the students of the Broadcasting-Television program. Notable graduates from this program include Jon Cassar, director of the TV series 24, and comedian Tom Green. The college used to have a second television studio, which now houses the Theatre Arts program. The college has one fully functional broadcast radio station run entirely by the students of the Broadcasting-Radio program, CKDJ-FM, as well as an internet station, AIR - Algonquin, which is also broadcast as AIR AM 1700 on the AM band. The Algonquin College Animation Program is a three-year advanced diploma with its main focus on performance-based animation, whether in 3D or traditional animation. All students also learn Toon Boom's Harmony software. The program celebrated its 20th anniversary in 2009–10, and its curriculum is taught in India, China and South Africa, with negotiations under way with Dubai, Chile and others. The faculty of the program are veterans of the animation industry, all of whom have been at least departmental supervisors, many with over 20 years' experience in the industry. Since the introduction of the three-year curriculum, graduates of the program have gone on to varied jobs in the animation industry, with over 93% of graduates finding work in their chosen field; they include Trent Correy, who has worked on three Oscar-winning motion pictures, including Zootopia, as well as on Moana. Student films have been screened at various festivals, been featured on AWN TV (Charged), and won the ELAN award for best student film in 2009 (Snared). The Algonquin College Public Relations program is a two-year diploma in which students have raised money for local not-for-profit organizations including the John Howard Society, LiveWorkPlay, and Harmony House Women's Shelter. Since 1990, the Public Relations program has raised over $300,000 for charity. The Pembroke Campus is well known for its outdoor training programs, which attract students from across Canada. These programs include Outdoor Adventure, Outdoor Adventure Naturalist and Forestry Technician. In 2012, a new Waterfront Campus opened in downtown Pembroke. 
International Campuses Algonquin College has four international campuses through its international offshore partnerships: Manav Rachna International University (MRIU) in Faridabad, India; Algonquin College (Orient Education Services Co) in Al-Naseem, Jahra, Kuwait; Hotelski Educativni Centar (HEC) in Montenegro; and (JMI) in Nanjing, China. Residence In August 2003, the Woodroffe Campus Residence Complex opened, providing housing for 1,050 students. There is also an abundance of off-campus housing in the area. Most students commute from throughout the National Capital Region by Ottawa's city transit, OC Transpo, or by car. Full-time students have a transit pass included in their tuition fees to facilitate off-campus living and reduce the demand for parking on campus. The school's residence is located a short walk from Baseline Station, where students can take route 95 or route 94 to the downtown core. There is also a clustering of apartment buildings and rental townhouses near the College, called Deerfield, where many second-year students live. The Pembroke Campus has a housing registry. Partnerships Algonquin has formed strategic partnerships with select universities to offer collaborative degrees. These include the Bachelor of Information Technology - Interactive Multimedia and Design with Carleton University; the Bachelor of Information Technology - Network Technology with Carleton University; and the Bachelor of Science in Nursing with the University of Ottawa. Studies take place at Algonquin College and the partnering university, and collaborative degrees are conferred by the university. Algonquin has developed articulation agreements with universities to assist qualified Algonquin graduates to attain specific degrees in shorter periods. Graduates are subject to the admission requirements of the university granting the degree. On February 16, 2017, Algonquin College announced a new partnership with The Ottawa Hospital in health research, innovation and training. The partnership, signed by Algonquin College President Cheryl Jensen and The Ottawa Hospital's Executive Vice-President of Research, will be focused on digital health, clinical trials and biotherapeutics manufacturing. The partnership will run for five years before requiring renewal. Algonquin College has a partnership with Shopify, specifically Shopify U, which has added the study of graphic design to its course list. The partnership allows students to attend classes at the downtown Ottawa Shopify office and then practice their newly learned skills by helping local businesses. Internationally, the college has several partnerships with institutions in other countries to transfer expertise through technical assistance and training programs. Scholarships Algonquin College joined Project Hero, a scholarship program co-founded by General (Ret'd) Rick Hillier for the families of fallen Canadian Forces members. The Government of Canada sponsors an Indigenous Bursaries Search Tool that lists over 680 scholarships, bursaries, and other incentives offered by governments, universities, and industry to support Aboriginal post-secondary participation. Algonquin College bursaries for Aboriginal, First Nations and Métis students include: Peter Wintonick Bursary; Ottawa Police Service's Thomas G. Flanagan Scholarship; MKI Travel and Hospitality Bursary. 
Military The Diploma in Military Arts and Sciences (DMASc) provides Non-Commissioned Members (NCMs) of the Canadian Forces an online program made possible by a partnership between OntarioLearn (of which Algonquin College is a consortium member), the Royal Military College of Canada (RMC), and the Canadian Defence Academy. Under an RMC and Algonquin College articulation agreement, all graduates of this diploma program who apply to the RMC will be admitted into the Bachelor of Military Arts and Sciences degree program with advanced standing. In 2006, Algonquin College was approached by the Canadian Forces Support Training Group (CFSTG) to explore the feasibility of developing and delivering a program to satisfy the training requirements exclusively for Canadian Forces Geomatics Technicians. The goal was to increase the number of CF graduates produced by the School of Military Mapping. Students in the Geomatics Technician program earn a college-approved certificate in Geomatics. Algonquin College also grants a provincially approved Geomatics Technician Diploma to students who successfully graduate from the Geomatics Technician Training and have completed a small number of approved additional courses. Sports Algonquin College's sports teams are known as the Algonquin Thunder, and Thor is the college's mascot. Algonquin is a member of the OCAA and the CCAA. Varsity teams compete in six sports at the provincial level within the OCAA. The men's and women's teams in basketball, soccer, and volleyball can qualify to compete for a national championship as members of the CCAA. Funding is provided by the Students' Association. Algonquin Times The student newspaper of Algonquin College is the Algonquin Times, founded in 1986. It is produced every two weeks during the fall and winter semesters by journalism and advertising students. Funding is provided by the Students' Association. Glue Magazine First created and distributed in 2003, Glue Magazine is published twice a year, in September and January. The publication covers common student concerns such as money, food, friends, gaming, and more. The magazine is a collaborative effort between Algonquin College's Journalism and Advertising Marketing Communications students, who use it to further their skills in editing and in managing promotional material and advertisements. Glue magazine is circulated at three main Ottawa post-secondary campuses: Algonquin College, Carleton University, and the University of Ottawa. Services Available to the Public Algonquin College offers a variety of services to the public at a discount compared to off-campus providers. Providing these services allows students to put the theory learned in a classroom setting into hands-on practice. The services available for use by the public are: Hair Salon services: The hair salon at Algonquin College offers adult haircuts (for both men and women), children's haircuts, hair colouring and highlighting, perms, scalp therapy, hair relaxing, and extensions. These services are provided by the students enrolled in the Hairstyling program. Massage services: Members of the public are given complete massage therapy care by students in the Massage Therapy program, which includes an assessment of pain and discomfort, a massage treatment, hydrotherapy of deep moist heat or cold, and information on self-care. 
Dental services: Provided by the students registered in the Dental Assistant and Dental Hygienist programs, the services available are restorative services, dental cleanings, preventative dental services for both adults and children, and tooth whitening treatments. Students are supervised by registered dental hygienists and dentists at all times. Restaurant International: Casual fine dining delivered in the on-site restaurant by students in the Culinary Arts program. Catering services Pet Adoption: Services provided through Algonquin College by the SPCA to make pets available for adoption. Facility veterinarians and student veterinary technicians and veterinary assistants ensure the pets made available for adoption are neutered, micro-chipped and vaccinated at the College. Notable alumni and faculty Abdiweli Sheikh Ahmed, Prime Minister of Somalia Michael Barrett, Member of Parliament for Leeds—Grenville—Thousand Islands and Rideau Lakes Jason Blaine, country music singer Jon Cassar, Emmy-winning producer and director of the TV series 24 Zdeno Chára, former Boston Bruins captain and current New York Islanders player Frank Cole, documentary filmmaker James Cybulski, TSN reporter Janice Dean, Fox News weather specialist Ben Delaney, sledge hockey player Jon Dore, comedian Rammy Elkhori, educator Tom Green, comedian Ricardo Larrivée, television host and food writer Chris Lovasz, internet personality and member of The Yogscast Massari, Canadian singer Neil Macdonald, CBC Washington Bureau Chief Norm Macdonald, comedian Ian Millar, Olympic medal-winning equestrian Larry O'Brien, former Mayor of Ottawa and technology entrepreneur Dan O'Toole, SportsCentre anchor, former Fox Sports Live anchor Anthony Rota, Member of Parliament for Nipissing—Timiskaming, Speaker of the House of Commons Graham Sucha, Member of the Alberta Legislative Assembly, Calgary-Shaw Tim Tierney, City of Ottawa Councillor, Beacon Hill-Cyrville See also Higher education in Ontario List of colleges in Ontario References External links Algonquin Times homepage Educational institutions established in 1967 Colleges in Ontario 1967 establishments in Ontario
668160
https://en.wikipedia.org/wiki/AltGr%20key
AltGr key
AltGr (also Alt Graph) is a modifier key found on many computer keyboards (rather than a second Alt key found on US keyboards). It is primarily used to type characters that are not widely used in the territory where the keyboard is sold, such as foreign currency symbols, typographic marks and accented letters. On a typical Windows-compatible PC keyboard, the AltGr key, when present, takes the place of the right-hand Alt key: if not engraved as such, that key may still be remapped to behave as though it is (or AltGr may be emulated using a chord such as Ctrl+Alt). In macOS, the Option key has functions similar to the AltGr key. AltGr is used similarly to the Shift key: it is held down while another key is struck in order to obtain a character other than the one that the latter normally produces. AltGr and Shift can also sometimes be combined to obtain yet another character. For example, on the US-International keyboard layout, the C key can be used to insert four different characters: → c (lower case — first level) → C (upper case — second level) → © (copyright sign — third level) → ¢ (cent sign — fourth level) Meaning IBM states that AltGr is an abbreviation for alternate graphic. The AltGr key is used as an additional 'shift' key, providing a third and a fourth (when Shift is also pressed) grapheme for most keys. Most of these are accented variants of the letters on the keys, but there are also additional typographical symbols and punctuation marks. Some languages, such as Bengali, use this key when their alphabet has too many letters for a standard keyboard. On early home computers the alternate graphemes were primarily box-drawing characters. Function by default national keyboard In most of the keyboard diagrams, the symbol produced when holding down AltGr is shown in blue in the lower-right corner of the key. If different, the symbol for Shift+AltGr is shown in the upper-right. Bangladesh Belgium The Windows version of the Belgian keyboard may only support a subset of these characters. Several of the AltGr combinations are themselves dead keys, which are followed by another letter to produce an accented version of that letter. Brazil Some notes The combination results in the (obsolete) symbol ₢ for the former Brazilian currency, the Brazilian cruzeiro. The , , combinations are useful as a replacement for the "/?" key, which is physically absent on non-Brazilian keyboards. Some software (e.g. Microsoft Word) will map to ® and to ™, but this is not standard behavior and was likely an accident owing to the fact that the combinations and were intended. Windows interprets as . France On AZERTY keyboards, AltGr enables the user to type the following characters: → € → ¤ → ~ (a dead key: then → õ) → # → { → [ → | → ` (a dead key: then → ò) → \ → ^ (generally not dead: then → ^o, not ô) → @ → ] → } Germany On German keyboards, AltGr enables the user to type the following characters, which are indicated on the keyboard: → ² → ³ → { → [ → ] → } → \ → ~ → @ → € → | → µ Windows 8 introduced the ability to produce ẞ (capital ß) by pressing AltGr+Shift+ß. Even though this is usually not indicated on the physical keyboard—potentially due to a lack of space, since the ß-key already has three different levels ( → "ß", → "?", and, as shown above, → "\")—it can be seen in the Windows On-Screen Keyboard by selecting the necessary keys with the German keyboard layout selected. (Some newer types of German keyboards offer the fixed assignment → ẞ.) 
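The level-selection mechanism shared by all of these layouts can be sketched in a few lines of code. The following is a minimal, hypothetical model of a layout lookup, not any real operating system's API; the table encodes only the US-International C-key example given in the introduction, not a complete national layout.

```python
# Minimal sketch of a four-level keyboard lookup: (key, Shift, AltGr) -> character.
# The table below is illustrative only; it encodes the US-International 'C' key.

LAYOUT = {
    # key: (base, shift, altgr, shift+altgr)
    "c": ("c", "C", "\u00a9", "\u00a2"),  # c, C, © (copyright sign), ¢ (cent sign)
}

def resolve(key: str, shift: bool = False, altgr: bool = False) -> str:
    """Return the character a key produces under the given modifiers."""
    base, shifted, third, fourth = LAYOUT[key]
    if altgr and shift:
        return fourth   # fourth level: Shift+AltGr
    if altgr:
        return third    # third level: AltGr
    if shift:
        return shifted  # second level: Shift
    return base         # first level: unmodified

if __name__ == "__main__":
    for shift in (False, True):
        for altgr in (False, True):
            print(f"shift={shift!s:5} altgr={altgr!s:5} -> {resolve('c', shift, altgr)}")
```

Real keyboard drivers add further state on top of this lookup (dead keys, Caps Lock, key remapping), but the four levels reduce to exactly this kind of table indexed by the Shift and AltGr flags.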
Greece On Greek keyboards, AltGr enables the user to type the following characters: Digits row → ² → ³ → £ → § → ¶ → ¤ → ¦ → ° → ± → ½ Top letters row → € → ® → ¥ → « → » → ¬ Middle letters row → ΅ (a dead key: then → ΐ) Bottom letters row → © Some of these key combinations also result in different characters if the polytonic layout is used. Israel Hebrew On Hebrew keyboards, AltGr enables the user to type the following characters: → ₪ → € There are several combinations using AltGr that activate Hebrew vowels. Yiddish Using a Hebrew keyboard, one may write in Yiddish as the two languages share many letters. However, Yiddish has some additional digraphs and a symbol not otherwise found in Hebrew, which are entered via AltGr. → ‎ → ‎ → ‎ → Italy On Italian keyboards, AltGr enables the user to type the following characters: → € → € → @ → # → [ → ] → { → } There is an alternate layout, which differs only in the disposition of the characters accessible through AltGr and includes the tilde and the curly brackets. Latvia The following letters can be input in the Latvian keyboard layout using AltGr: Lowercase letters → ā → č → ē → ģ → ī → ķ → ļ → ņ → ō → ŗ → š → ū → ž Uppercase letters → Ā → Č → Ē → Ģ → Ī → Ķ → Ļ → Ņ → Ō → Ŗ → Š → Ū → Ž North Macedonia On Macedonian keyboards, AltGr enables the user to type the following characters: → € → Ђ → ђ → [ → ] → Ћ → ћ → @ → { → } → § The Netherlands Digits row → ¹ and ¡ → ² → ³ → £ and ¤ → € → ¼ → ½ → ¾ → ‘ → ’ → ¥ → × and ÷ Top letters row → ä and Ä → å and Å → é and É → ® → þ and Þ (Icelandic and Old English thorn) → ü and Ü → ú and Ú → í and Í → ó and Ó → ö and Ö → « → » → ¬ and ¦ Middle letters row (Home row) → á and Á → ß (German eszett aka sharp s) and § → ð and Ð (Icelandic edh) → ø and Ø → ¶ and ° → ´ and ¨ Bottom letters row → æ and Æ → © and ¢ → ñ and Ñ → µ → ç and Ç → ¿ Nordic countries and Estonia The keyboard layouts in the Nordic countries (Denmark (DK), Faroe Islands (FO), Finland (FI), Norway (NO) and Sweden (SE) as well as in Estonia (EE)) are largely similar to each other. Generally the AltGr key can be used to create the following characters: → @ → £ → $ → € → µ → { → [ → ] → } → ~ (excluding EE) Other AltGr combinations are peculiar to just some of the countries: → \ (EE, FI, SE) → | (EE, FI, SE) → \ (DK, FO) → | (DK, FO) → ´ (NO) → ~ (FO) → ¨ (FO) → ^ (FO) → € (NO, DK, FO, SE, sometimes FI) → š (EE, sometimes FI) → ž (EE, sometimes FI) → § (EE) → ½ (EE) Finnish multilingual The Finnish multilingual keyboard standard adds many new characters to the traditional layout via the AltGr key (in the standard's layout diagram, the blue characters can be written with the AltGr key; several dead-key diacritics, shown in red, are also available as AltGr combinations). Poland Typewriters in Poland used a QWERTZ layout specifically designed for the Polish language, with accented characters obtainable directly. When personal computers became available worldwide in the 1980s, commercial importing into Poland was not supported by its communist government, so most machines in Poland were brought in by private individuals. Most had US keyboards, and various methods were devised to make special Polish characters available. 
An established method was to use AltGr in combination with the relevant Latin base letter to obtain a precomposed character with a diacritic; note the exceptional combination using x instead of the base letter z, as the Latin base letter has been reserved for another combination: → ą → ć → ę → ł → ń → ó → ś → € → ź → ż At the time of the political transformation and opening of commercial import channels this practice was so widespread that it was adopted as the de facto standard. Nowadays most PCs in Poland have standard US keyboards and use the AltGr method to enter Polish diacritics. This layout is referred to as Polish programmers' layout () or simply Polish layout. Another layout is still used on typewriters, mostly by professional typists. Computer keyboards with this layout are available, though difficult to find, and supported by a number of operating systems; they are known as Polish typists' layout (). Older Polish versions of Microsoft Windows used this layout, describing it as Polish layout. On current versions it is referred to as Polish (214). Romania The keymap with the AltGr key: â ß € r ț y u î o § „ ” ă ș đ f g h j k ł ; z x © v b n m « » Russia Since release 1903, versions of Windows 10 have the binding: → ₽ (Ruble sign) South Slavic Latin On South Slavic Latin keyboards (used in Croatia, Slovenia, Bosnia and Herzegovina, Montenegro and Serbia), the following letters and special characters are created using AltGr: → \ → | → € → ÷ → × → [ → ] → ł → Ł → ß → ¤ → @ → { → } → § → < → > → ~ → ˇ → ^ → ˘ → ° → ˛ → ` → ˙ → ´ → ˝ → ¨ → ¸ Turkey In Turkish keyboard variants the AltGr can be used to display the following characters: → æ → ß → € → ₺ → @ → i a → ã a → ä a → á a → à United Kingdom and Ireland → á and Á → é and É → í and Í → ó and Ó → ú and Ú → € → \ → ¦ In UK and Ireland keyboard layouts, only two alternative use symbols are printed on most keyboards, which require the AltGr key to function. These are: € the euro sign. Located on the "4/$" key. ¦ the broken bar symbol. Located on the "`/¬" key, to the immediate left of "1". Using the AltGr key on Linux produces many foreign characters and international symbols, e.g. ¹²³€½{[]}@łe¶ŧ←↓→øþæßðđŋħjĸł«»¢“”nµΩŁE®Ŧ¥↑ıØÞƧЪŊĦJ&Ł<>©‘’Nº×÷· (If reconfigured as a compose key, an even larger repertoire is available). With the UK extended keyboard setting (below), Chrome OS offers a large repertoire of symbols and precomposed characters. Scotland and Wales For the diacritics used by Welsh (ŵ and ŷ) and Scots Gaelic (à, è, ì, ò and ù), the UK extended keyboard setting is needed. This makes available (for circumflex accent) and (for grave accent) as dead keys. UK extended keyboard layout The UK-Extended keyboard mapping (available with Microsoft Windows, Linux and ChromeOS) allows many characters with diacritical marks (including those used in other European countries) to be generated by using the AltGr key or dead keys in combination with others. Notes: Dotted circle (◌) is used here to indicate a dead key. The (grave accent) key is the only one that acts as a free-standing dead key and thus does not respond as shown on the key-cap. All others are invoked by AltGr. (°) is a degree sign; (º) is a masculine ordinal indicator. For a complete list of the characters available using dead keys, see QWERTY#Chrome OS. United States Most keyboards sold in the US do not have an (engraved) key. 
However, if there is a right-hand Alt key, it will act as AltGr if a layout using AltGr is installed (conversely, a foreign keyboard's AltGr key will act like the right-hand Alt key if the standard US keyboard layout is installed). On some compact keyboards, like those of netbooks, the right-hand Alt key may be missing altogether. Microsoft Windows (and some other OSs) emulate the AltGr key with Ctrl+Alt: typing Ctrl+Alt plus a character key is the same as AltGr plus that key. Microsoft recommends that this combination not be used as part of a keyboard shortcut, as users attempting to type AltGr characters will instead trigger the shortcut. US-International Microsoft provides a US-International keyboard layout that uses the AltGr key (or the right-hand Alt, or Ctrl+Alt) to produce more characters: Red characters are dead keys; for example, ä can be entered with AltGr+Q. Other operating systems such as Linux and Chrome OS follow this layout but increase the repertoire of glyphs provided. X Window System In the X Window System (Linux, BSD, Unix), AltGr can often be used to produce additional characters with almost every key on the keyboard. Furthermore, with some keys, AltGr will produce a dead key; for example on a UK keyboard, AltGr+; can be used to add an acute accent to a base letter, and AltGr+[ can be used to add a trema: AltGr+; followed by E → é; AltGr+[ followed by O → Ö. This use of dead keys enables one to type a wide variety of precomposed characters that combine various diacritics with either uppercase or lowercase letters, achieving a similar effect to the Compose key. Keyboard maps Below are some diagrams and examples of country-specific key maps. For the diagrams, the grey symbols are the standard characters, and the coloured symbols show the characters produced with Shift, with AltGr, and with Shift+AltGr. Danish keyboard The Danish keymap features the following key combinations: → Ω → ø → µ Italian keyboard The Italian keymap includes, among other combinations, the following: → ħ → ~ → ` → × Norwegian keyboard Swedish keyboard See also Modifier key Option key Shift key Dead key Escape character Compose key Windows Alt keycodes Precomposed character References Computer keys
14928693
https://en.wikipedia.org/wiki/1868%20Thersites
1868 Thersites
1868 Thersites is a large Jupiter trojan from the Greek camp, approximately 70 kilometers in diameter. Discovered during the Palomar–Leiden survey at Palomar in 1960, it was later named after the warrior Thersites from Greek mythology. The presumed carbonaceous C-type asteroid is among the 50 largest Jupiter trojans and has a rotation period of 10.48 hours. Discovery Thersites was discovered on 24 September 1960, by Dutch astronomer couple Ingrid and Cornelis van Houten at Leiden, on photographic plates taken by Tom Gehrels at the Palomar Observatory in California. On the same day, the group discovered another Jupiter trojan, 1869 Philoctetes. The body's observation arc begins with a precovery taken at Palomar in March 1954, more than 6 years prior to its official discovery observation. Palomar–Leiden survey The provisional survey designation "P-L" stands for Palomar–Leiden, named after the Palomar and Leiden observatories, which collaborated on the fruitful Palomar–Leiden survey in the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope, also known as the 48-inch Schmidt Telescope, and shipped the photographic plates to Cornelis and Ingrid van Houten at Leiden Observatory, where astrometry was carried out. The trio are credited with the discovery of several thousand minor planets. Orbit and classification Thersites is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the Gas Giant's orbit in a 1:1 resonance (see Trojans in astronomy). It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.7–5.9 AU once every 12 years and 3 months (4,478 days; semi-major axis of 5.32 AU). Its orbit has an eccentricity of 0.11 and an inclination of 17° with respect to the ecliptic. Physical characteristics The Trojan asteroid has been assumed to be a carbonaceous C-type asteroid. Rotation period In July 1994, the first rotational lightcurve of Thersites was obtained from photometric observations by Italian astronomer Stefano Mottola using the former Dutch 0.9-metre Telescope at ESO's La Silla Observatory in northern Chile. Lightcurve analysis gave a well-defined rotation period and brightness amplitude. The best-rated lightcurve, by Robert Stephens at the Center for Solar System Studies from June 2016, gave a period of 10.48 hours and an amplitude of 0.27 magnitude. A follow-up observation in 2017 gave a similar period of 10.412 hours. Diameter and albedo According to the space-based surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Thersites has a low albedo of 0.055 and measures 78.9 and 68.2 kilometers in diameter, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057, and calculates an intermediate diameter of 70.08 kilometers with an absolute magnitude of 9.5. Naming This minor planet was named from Greek mythology after Thersites, a Greek warrior who wanted to abandon the siege of Troy during the Trojan War and head home. The given name also refers to the fact that the asteroid was discovered farthest from the Lagrangian point. The official naming citation was published by the Minor Planet Center on 1 June 1975. 
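The quoted orbital period is consistent with the quoted semi-major axis via Kepler's third law (for an orbit around the Sun, with the period P in years and the semi-major axis a in astronomical units):

```latex
P = a^{3/2} = (5.32)^{3/2} \approx 12.3~\text{yr} \approx 4480~\text{days}
```

which matches the stated period of 12 years and 3 months (4,478 days) to within the rounding of the semi-major axis.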
Notes References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 001868 Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels Minor planets named from Greek mythology 2008 Named minor planets 19600924
22168662
https://en.wikipedia.org/wiki/Callhyccoda
Callhyccoda
Callhyccoda is a genus of moths of the family Noctuidae erected by Emilio Berio in 1935. Species Callhyccoda indecora Hacker, 2019 Sierra Leone Callhyccoda mirei Herbulot & Viette, 1952 Chad, Ethiopia, Somalia, Djibouti, Arabia Callhyccoda namibiensis Hacker, 2019 Namibia Callhyccoda nigrofalcata Hacker, 2019 Tanzania Callhyccoda ochrata Hacker, Fiebig & Stadie, 2019 Uganda Callhyccoda paolii (Berio, 1937) Somalia, Ethiopia Callhyccoda viriditrina Berio, 1935 Sudan, Somalia, Ethiopia, Kenya References Hadeninae
53034622
https://en.wikipedia.org/wiki/Machine%20translation%20of%20sign%20languages
Machine translation of sign languages
The machine translation of sign languages has been possible, albeit in a limited fashion, since 1977, when a research project successfully matched English letters from a keyboard to ASL manual alphabet letters simulated on a robotic hand. These technologies translate signed languages into written or spoken language, and written or spoken language to sign language, without the use of a human interpreter. Sign languages possess different phonological features than spoken languages, which has created obstacles for developers. Developers use computer vision and machine learning to recognize specific phonological parameters and epentheses unique to sign languages, and speech recognition and natural language processing allow interactive communication between hearing and deaf people. Limitations Sign language translation technologies are limited in the same way as spoken language translation. None can translate with 100% accuracy. In fact, sign language translation technologies are far behind their spoken language counterparts. This is in large part because signed languages have multiple articulators. Where spoken languages are articulated through the vocal tract, signed languages are articulated through the hands, arms, head, shoulders, torso, and parts of the face. This multi-channel articulation makes translating sign languages very difficult. An additional challenge for sign language MT is the fact that there is no formal written form for signed languages. There are notation systems, but no writing system has been adopted widely enough by the international Deaf community that it could be considered the 'written form' of a given sign language. Signed languages are instead recorded in various video formats. There is, for example, no gold-standard parallel corpus large enough for statistical machine translation (SMT). History The history of automatic sign language translation started with the development of hardware such as finger-spelling robotic hands. In 1977, a finger-spelling hand project called RALPH (short for "Robotic Alphabet") created a robotic hand that could translate letters into finger-spelling. Later, the use of gloves with motion sensors became mainstream, and projects such as the CyberGlove and VPL Data Glove were born. The wearable hardware made it possible to capture the signers' hand shapes and movements with the help of computer software. However, with the development of computer vision, wearable devices were replaced by cameras due to their efficiency and the fewer physical restrictions they place on signers. To process the data collected through the devices, researchers implemented neural networks such as the Stuttgart Neural Network Simulator for pattern recognition in projects such as the CyberGlove. Researchers also use many other approaches for sign recognition. For example, Hidden Markov Models are used to analyze data statistically, and GRASP and other machine learning programs use training sets to improve the accuracy of sign recognition. Fusion of non-wearable technologies such as cameras and Leap Motion controllers has been shown to increase the ability of automatic sign language recognition and translation software. 
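As a rough illustration of the Hidden-Markov-Model approach mentioned above, the following sketch scores a short sequence of observed handshape symbols against a per-sign model using the Viterbi algorithm and picks the best match. It is a toy example: the signs, states, and probabilities are invented for illustration and are not drawn from any real sign-language corpus or system.

```python
import math

def viterbi_log_score(obs, states, start_p, trans_p, emit_p):
    """Return the log-probability of the best state path explaining `obs`."""
    v = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}
    for symbol in obs[1:]:
        v = {
            s: max(v[prev] + math.log(trans_p[prev][s]) for prev in states)
               + math.log(emit_p[s][symbol])
            for s in states
        }
    return max(v.values())

# Hypothetical two-state models over three observed handshapes: flat, fist, point.
SIGNS = {
    "HELLO": dict(
        states=["onset", "hold"],
        start_p={"onset": 0.9, "hold": 0.1},
        trans_p={"onset": {"onset": 0.4, "hold": 0.6},
                 "hold":  {"onset": 0.1, "hold": 0.9}},
        emit_p={"onset": {"flat": 0.7, "fist": 0.2, "point": 0.1},
                "hold":  {"flat": 0.8, "fist": 0.1, "point": 0.1}},
    ),
    "NO": dict(
        states=["onset", "hold"],
        start_p={"onset": 0.9, "hold": 0.1},
        trans_p={"onset": {"onset": 0.5, "hold": 0.5},
                 "hold":  {"onset": 0.2, "hold": 0.8}},
        emit_p={"onset": {"flat": 0.1, "fist": 0.3, "point": 0.6},
                "hold":  {"flat": 0.1, "fist": 0.6, "point": 0.3}},
    ),
}

def recognize(observations):
    """Pick the sign whose model gives the observation sequence the highest score."""
    return max(SIGNS, key=lambda sign: viterbi_log_score(observations, **SIGNS[sign]))

if __name__ == "__main__":
    print(recognize(["flat", "flat", "flat"]))   # -> HELLO
    print(recognize(["point", "fist", "fist"]))  # -> NO
```

Real systems work the same way in outline, but the observation symbols come from glove sensors or computer-vision features rather than hand-labelled handshapes, and the model parameters are learned from training data.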
Technologies VISICAST http://www.visicast.cmp.uea.ac.uk/Visicast_index.html eSIGN project http://www.visicast.cmp.uea.ac.uk/eSIGN/index.html The American Sign Language Avatar Project at DePaul University http://asl.cs.depaul.edu/ Spanish to LSE SignAloud SignAloud is a technology incorporating a pair of gloves, made by a group of students at the University of Washington, that transliterate American Sign Language (ASL) into English. In February 2015 Thomas Pryor, a hearing student from the University of Washington, created the first prototype for this device at Hack Arizona, a hackathon at the University of Arizona. Pryor continued to develop the invention and in October 2015 brought Navid Azodi onto the SignAloud project for marketing and help with public relations. Azodi's background is in business administration, while Pryor's is in engineering. In May 2016, the duo told NPR that they were working more closely with people who use ASL so that they could better understand their audience and tailor their product to those users' actual rather than assumed needs. However, no further versions have been released since then. The invention was one of seven to win the Lemelson-MIT Student Prize, which seeks to award and applaud young inventors. Their invention fell under the "Use it!" category of the award, which includes technological advances to existing products. They were awarded $10,000. The gloves have sensors that track the user's hand movements and then send the data to a computer system via Bluetooth. The computer system analyzes the data and matches it to English words, which are then spoken aloud by a digital voice. The gloves cannot take written English input and produce glove-movement output, nor hear spoken language and sign it to a deaf person, which means they do not provide reciprocal communication. The device also does not incorporate facial expressions and other nonmanual markers of sign languages, which may alter the actual interpretation from ASL. ProDeaf ProDeaf (WebLibras) is computer software that can translate both text and voice into Portuguese Libras (Portuguese Sign Language) "with the goal of improving communication between the deaf and hearing." There is currently a beta edition in production for American Sign Language as well. The original team began the project in 2010 with a combination of experts including linguists, designers, programmers, and translators, both hearing and deaf. The team originated at the Federal University of Pernambuco (UFPE) from a group of students involved in a computer science project. The group had a deaf team member who had difficulty communicating with the rest of the group. In order to complete the project and help the teammate communicate, the group created Proativa Soluções and has been moving forward ever since. The current beta version in American Sign Language is very limited. For example, there is a dictionary section and the only word under the letter 'j' is 'jump'. If the device has not been programmed with a word, the digital avatar must fingerspell it. The last update of the app was in June 2016, but ProDeaf has been featured in over 400 stories across the country's most popular media outlets. The application cannot read sign language and turn it into words or text, so it serves only as one-way communication. Additionally, the user cannot sign to the app and receive an English translation in any form, as English is still in the beta edition. 
Kinect Sign Language Translator Since 2012, researchers from the Chinese Academy of Sciences and specialists in deaf education from Beijing Union University in China have been collaborating with the Microsoft Research Asia team to create the Kinect Sign Language Translator. The translator consists of two modes: translator mode and communication mode. The translator mode is capable of translating single words from sign into written words and vice versa. The communication mode can translate full sentences, and the conversation can be automatically translated with the use of a 3D avatar. The translator mode can also detect the postures and hand shapes of a signer as well as the movement trajectory using the technologies of machine learning, pattern recognition, and computer vision. The device also allows for reciprocal communication because the speech recognition technology allows spoken language to be translated into sign language and the 3D modeling avatar can sign back to deaf people. The original project was started in China, based on translating Chinese Sign Language. In 2013, the project was presented at the Microsoft Research Faculty Summit and a Microsoft company meeting. Currently, this project is also being worked on by researchers in the United States to implement American Sign Language translation. As of now, the device is still a prototype, and the accuracy of translation in the communication mode is still not perfect. SignAll SignAll is an automatic sign language translation system provided by Dolphio Technologies in Hungary. The team is "pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between individuals with hearing who use spoken English and deaf or hard of hearing individuals who use ASL." The system of SignAll uses Kinect from Microsoft and other web cameras with depth sensors connected to a computer. The computer vision technology can recognize the handshape and the movement of a signer, and the system of natural language processing converts the collected data from computer vision into a simple English phrase. The developer of the device is deaf, and the rest of the project team consists of many engineers and linguist specialists from deaf and hearing communities. The technology has the capability of incorporating all five parameters of ASL, which helps the device accurately interpret the signer. SignAll has been endorsed by many companies including Deloitte and LT-innovate and has created partnerships with Microsoft Bizspark and Hungary's Renewal. MotionSavvy MotionSavvy was the first sign-language-to-voice system. The device was created in 2012 by a group from the Rochester Institute of Technology / National Technical Institute for the Deaf and "emerged from the Leap Motion accelerator AXLR8R." The team used a tablet case that leverages the power of the Leap Motion controller. The entire six-person team consisted of deaf students from the school's deaf-education branch. The device is currently one of only two reciprocal communication devices solely for American Sign Language. It allows deaf individuals to sign to the device, which then interprets the signing, or vice versa, taking spoken English and interpreting it into American Sign Language. The device is shipping for $198. Some other features include the ability to interact, live-time feedback, sign builder, and crowdsign. The device has been reviewed by everyone from technology magazines to Time. 
Wired said, "It wasn’t hard to see just how transformative a technology like [UNI] could be" and that "[UNI] struck me as sort of magical." Katy Steinmetz at TIME said, "This technology could change the way deaf people live." Sean Buckley at Engadget mentioned, "UNI could become an incredible communication tool." References Sign language Applications of computer vision Gesture recognition
9266795
https://en.wikipedia.org/wiki/Scalability%20testing
Scalability testing
Scalability testing is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capabilities. Performance, scalability and reliability testing are usually grouped together by software quality analysts. The main goals of scalability testing are to determine the user limit for the web application and to ensure the end-user experience, under a high load, is not compromised. One example is checking whether a web page can be accessed in a timely fashion, with a limited delay in response. Another goal is to check whether the server can cope, i.e., whether it will crash under a heavy load. Depending on the application being tested, different parameters are tested. If a webpage is being tested, the highest possible number of simultaneous users would be tested. The attributes that are tested also depend on the application; these can include CPU usage, network usage or user experience. Successful testing will expose most of the issues, which could be related to the network, database or hardware/software. Creating a scalability test When creating a new application, it is difficult to accurately predict the number of users in 1, 2 or even 5 years. Although an estimate can be made, it is not a definite number. An issue with an increasing number of users is that it can create new areas of failure. For example, if you have 100,000 new visitors, it's not just access to the application that could be a problem; you might also experience issues with the database where you need to store all the data of these new customers. Increment loads This is why, when creating a scalability test, it is important to scale up in increments. These steps can be split into small, medium and high loads. We must scale up in increments, as each stage tests a different aspect: small loads ensure the system functions as it should on a basic level, medium loads test that the system can function at its expected level, and high loads test that the system can cope with a high load. Test environment The environment should be constant throughout testing in order to provide accurate and reliable results. If the testing is a success, we should see a proportional change in performance. For example, if we double the users on the system, we should see a drop in performance of 50%. Alternatively, if measuring system statistics such as memory or CPU usage over time, the graph may not be proportional, as users are not being plotted on either axis. Outcomes of scalability testing Once we have collected the data from the various stages, we can plot the results on graphs; the form of the graph varies depending on what is being plotted. Unproportional outcome In Figure 1, we can see a graph showing a resource's usage (in this case, memory) over time. The graph is not proportional, but the test can still be considered a pass: initially there is a ramp-up phase as the system begins to run, but as more users are added, there is little change in memory usage. This means that the current memory capacity can cope with all 3 stages of the test. Proportional outcome In Figure 2, we can see a more proportional increase, comparing the number of users to the time taken to execute a report. With a low load of 20 users, the average time is 5.5 seconds; as we increase the load to medium (40 users) and a high load (60 users), the average time increases to 9.5 and 18 seconds respectively. 
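A minimal sketch of such an incremental load test is shown below: it ramps through small, medium, and high loads and records the average response time at each stage, mirroring the 20/40/60-user example above. The target URL, load levels, and request count are placeholders, not part of any standard tool; adjust them to the system under test.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"  # placeholder: the system under test
LOAD_STAGES = {"small": 20, "medium": 40, "high": 60}  # concurrent users per stage
REQUESTS_PER_USER = 5

def timed_request(url: str) -> float:
    """Issue one GET request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def run_stage(users: int) -> float:
    """Simulate `users` concurrent users and return the mean response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [
            pool.submit(timed_request, TARGET_URL)
            for _ in range(users * REQUESTS_PER_USER)
        ]
        times = [f.result() for f in futures]
    return sum(times) / len(times)

if __name__ == "__main__":
    for name, users in LOAD_STAGES.items():  # scale up in increments
        avg = run_stage(users)
        print(f"{name:>6} load ({users} users): avg response {avg:.2f}s")
```

Keeping the test environment constant between stages, as described above, is what makes the per-stage averages comparable; in practice a dedicated load-testing tool would add warm-up periods, error counting, and percentile statistics on top of this skeleton.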
In some cases, there may be changes that have to be made to the server software or hardware. Once the necessary upgrades have been made, we must re-run the tests to ensure the upgrades have been effective in addressing the issues previously raised. When we have a proportional outcome, there are no bottlenecks as we scale up and increase the load the system is placed under. Vertical and horizontal scaling As a result of scalability testing, upgrades can be required to software and hardware. These upgrades can be split into vertical or horizontal scaling. Vertical scaling, also known as scaling up, is the process of replacing a component with a device that is generally more powerful or improved; for example, replacing a processor with a faster one. Horizontal scaling, also known as scaling out, is setting up another server, for example, to run in parallel with the original so that they share the workload. Advantages and disadvantages There are advantages and disadvantages to both methods of scaling. Although scaling up may be simpler, the addition of hardware resources can result in diminishing returns. This means that every time we upgrade the processor, for example, we do not always get the same level of benefit as from the previous change. However, horizontal scaling can be extremely expensive: we must take into account not only the cost of entire systems such as servers, but also their regular maintenance costs. References External links Designing Distributed Applications with Visual Studio .NET: Scalability Software testing
65213838
https://en.wikipedia.org/wiki/1966%20USC%20Trojans%20baseball%20team
1966 USC Trojans baseball team
The 1966 USC Trojans baseball team represented the University of Southern California in the 1966 NCAA University Division baseball season. The Trojans played their home games at Bovard Field. The team was coached by Rod Dedeaux in his 25th year at USC. The Trojans won the California Intercollegiate Baseball Association championship and the District VIII Playoff to advance to the College World Series, where they were defeated by the Ohio State Buckeyes. Roster Schedule ! style="" | Regular Season |- valign="top" |- align="center" bgcolor="#ccffcc" | 1 || February 11 || || Bovard Field • Los Angeles, California || 9–1 || 1–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 2 || February 19 || || Bovard Field • Los Angeles, California || 15–14 || 2–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 3 || February 19 || San Fernando Valley State || Bovard Field • Los Angeles, California || 10–5 || 3–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 4 || February 25 || || Bovard Field • Los Angeles, California || 14–10 || 4–0 || 0–0 |- align="center" bgcolor="#ffcccc" | 5 || February 28 || || Bovard Field • Los Angeles, California || 14–10 || 4–1 || 0–0 |- |- align="center" bgcolor="#ccffcc" | 6 || March 1 || at San Fernando Valley State || Matador Field • Northridge, California || 6–2 || 5–1 || 0–0 |- align="center" bgcolor="#ccffcc" | 7 || March 4 || at || Campus Diamond • Santa Barbara, California || 7–3 || 6–1 || 1–0 |- align="center" bgcolor="#ccffcc" | 8 || March 5 || Santa Barbara || Bovard Field • Los Angeles, California || 10–0 || 7–1 || 2–0 |- align="center" bgcolor="#ccffcc" | 9 || March 7 || at Long Beach State || Blair Field • Long Beach, California || 2–0 || 8–1 || 2–0 |- align="center" bgcolor="#ccffcc" | 10 || March 11 || || Bovard Field • Los Angeles, California || 4–3 || 9–1 || 3–0 |- align="center" bgcolor="#ccffcc" | 11 || March 12 || || Bovard Field • Los Angeles, California || 6–0 || 10–1 || 4–0 |- align="center" bgcolor="#ccffcc" | 12 || March 12 || California || Bovard Field • Los Angeles, California || 6–0 || 11–1 || 5–0 |- align="center" bgcolor="#ccffcc" | 13 || March 15 || || Bovard Field • Los Angeles, California || 5–3 || 12–1 || 5–0 |- align="center" bgcolor="#ccffcc" | 14 || March 18 || at California || Edwards Field • Berkeley, California || 4–2 || 13–1 || 6–0 |- align="center" bgcolor="#ccffcc" | 15 || March 19 || at Santa Clara || Buck Shaw Stadium • Santa Clara, California || 9–6 || 14–1 || 7–0 |- align="center" bgcolor="#ccffcc" | 16 || March 19 || at Santa Clara || Buck Shaw Stadium • Santa Clara, California || 3–2 || 15–1 || 8–0 |- align="center" bgcolor="#ccffcc" | 17 || March 22 || || Bovard Field • Los Angeles, California || 13–0 || 16–1 || 8–0 |- align="center" bgcolor="#ccffcc" | 18 || March 26 || || Bovard Field • Los Angeles, California || 3–1 || 17–1 || 9–0 |- align="center" bgcolor="#ffcccc" | 19 || March 26 || Stanford || Bovard Field • Los Angeles, California || 0–4 || 17–2 || 9–1 |- align="center" bgcolor="#ccffcc" | 20 || March 27 || Santa Clara || Bovard Field • Los Angeles, California || 7–3 || 18–2 || 10–1 |- align="center" bgcolor="#ccffcc" | 21 || March 29 || at Cal State Los Angeles || Unknown • Los Angeles, California || 4–0 || 19–2 || 10–1 |- |- align="center" bgcolor="#ccffcc" | 22 || April 1 || || Bovard Field • Los Angeles, California || 14–3 || 20–2 || 10–1 |- align="center" bgcolor="#ccffcc" | 23 || April 2 || Pepperdine || Bovard Field • Los Angeles, California || 6–0 || 21–2 || 10–1 |- align="center" bgcolor="#ccffcc" | 24 || April 2 || 
Pepperdine || Bovard Field • Los Angeles, California || 11–0 || 22–2 || 10–1 |- align="center" bgcolor="#ccffcc" | 25 || April 4 || vs || Unknown • Unknown || 5–1 || 23–2 || 10–1 |- align="center" bgcolor="#ffcccc" | 26 || April 5 || vs San Diego State || Unknown • Unknown || 1–3 || 23–3 || 10–1 |- align="center" bgcolor="#ccffcc" | 27 || April 5 || vs || Unknown • Unknown || 12–0 || 24–3 || 10–1 |- align="center" bgcolor="#ccffcc" | 28 || April 6 || vs || Unknown • Unknown || 12–7 || 25–3 || 11–1 |- align="center" bgcolor="#ccffcc" | 29 || April 6 || vs San Diego State || Unknown • Unknown || 10–5 || 26–3 || 11–1 |- align="center" bgcolor="#ffcccc" | 30 || April 12 || || Bovard Field • Los Angeles, California || 3–4 || 26–4 || 11–1 |- align="center" bgcolor="#ffcccc" | 31 || April 15 || at California || Edwards Field • Berkeley, California || 2–7 || 26–5 || 11–2 |- align="center" bgcolor="#ffcccc" | 32 || April 16 || at Stanford || Sunken Diamond • Stanford, California || 3–5 || 26–6 || 11–3 |- align="center" bgcolor="#ccffcc" | 33 || April 16 || at Stanford || Sunken Diamond • Stanford, California || 3–2 || 27–6 || 12–3 |- align="center" bgcolor="#ccffcc" | 34 || April 19 || || Bovard Field • Los Angeles, California || 3–0 || 28–6 || 12–3 |- align="center" bgcolor="#ccffcc" | 35 || April 22 || || Bovard Field • Los Angeles, California || 4–2 || 29–6 || 12–3 |- align="center" bgcolor="#ffcccc" | 36 || April 26 || at Santa Barbara || Campus Diamond • Santa Barbara, California || 1–2 || 29–7 || 12–4 |- align="center" bgcolor="#ccffcc" | 37 || April 29 || || Bovard Field • Los Angeles, California || 2–1 || 30–7 || 12–4 |- |- align="center" bgcolor="#ccffcc" | 38 || May 3 || Santa Barbara || Bovard Field • Los Angeles, California || 6–0 || 31–7 || 12–4 |- align="center" bgcolor="#ccffcc" | 39 || May 6 || at || Sawtelle Field • Los Angeles, California || 13–10 || 32–7 || 13–4 |- align="center" bgcolor="#ccffcc" | 40 || May 7 || UCLA || Bovard Field • Los Angeles, California || 3–2 || 33–7 || 14–4 |- align="center" bgcolor="#ccffcc" | 41 || May 10 || at Cal Poly Pomona || Unknown • Pomona, California || 7–1 || 34–7 || 14–4 |- align="center" bgcolor="#ccffcc" | 42 || May 13 || UCLA || Bovard Field • Los Angeles, California || 13–11 || 35–7 || 15–4 |- align="center" bgcolor="#ccffcc" | 43 || May 14 || at UCLA || Sawtelle Field • Los Angeles, California || 4–1 || 36–7 || 16–4 |- |- ! 
style="" | Postseason |- valign="top" |- align="center" bgcolor="#ccffcc" | 44 || May 23 || Cal Poly Pomona || Bovard Field • Los Angeles, California || 11–7 || 37–7 || 16–4 |- align="center" bgcolor="#ccffcc" | 45 || May 24 || || Bovard Field • Los Angeles, California || 4–3 || 38–7 || 16–4 |- align="center" bgcolor="#ccffcc" | 46 || May 25 || Washington State || Bovard Field • Los Angeles, California || 7–4 || 39–7 || 16–4 |- |- align="center" bgcolor="#ccffcc" | 47 || June 13 || vs North Carolina || Omaha Municipal Stadium • Omaha, Nebraska || 6–2 || 40–7 || 16–4 |- align="center" bgcolor="#ffcccc" | 48 || June 14 || vs Ohio State || Omaha Municipal Stadium • Omaha, Nebraska || 2–6 || 40–8 || 16–4 |- align="center" bgcolor="#ccffcc" | 49 || June 15 || vs Arizona || Omaha Municipal Stadium • Omaha, Nebraska || 8–4 || 41–8 || 16–4 |- align="center" bgcolor="#ccffcc" | 50 || June 16 || vs Ohio State || Omaha Municipal Stadium • Omaha, Nebraska || 5–1 || 42–8 || 16–4 |- align="center" bgcolor="#ffcccc" | 51 || June 17 || vs Ohio State || Omaha Municipal Stadium • Omaha, Nebraska || 0–1 || 42–9 || 16–4 |- | Awards and honors Shelly Andrens Honorable Mention All-CIBA Oscar Brown Second Team All-American The Sports Network First Team All-CIBA Armando DeCastro Honorable Mention All-CIBA Justin Dedeaux Second Team All-CIBA Pat Harrison Second Team All-CIBA John Herbst Honorable Mention All-CIBA Fred Shuey Second Team All-CIBA Steve Sogge Second Team All-American The Sports Network First Team All-CIBA John Stewart Second Team All-American American Baseball Coaches Association College World Series All-Tournament Team First Team All-CIBA References USC Trojans baseball seasons USC Trojans baseball College World Series seasons USC Pac-12 Conference baseball champion seasons
15183570
https://en.wikipedia.org/wiki/MacBook%20Air
MacBook Air
The MacBook Air is a line of notebook computers developed and manufactured by Apple Inc. It consists of a full-size keyboard, a machined aluminum case, and, in the more modern versions, a thin, light structure. The Air was originally positioned above the previous MacBook line as a premium ultraportable. Since then, the original MacBook's discontinuation in 2011 and lowered prices on subsequent iterations have made the Air Apple's entry-level notebook. In the current product line, the MacBook Air is situated below the performance-range MacBook Pro. The Intel-based MacBook Air was introduced in January 2008 with a 13.3-inch screen, and was promoted as the world's thinnest notebook, opening a laptop category known as the ultrabook family. Apple released a second-generation MacBook Air in October 2010, with a redesigned tapered chassis and standard solid-state storage, and added a smaller 11.6-inch version. Later revisions added Intel Core i5 or i7 processors and Thunderbolt. The third generation was released in October 2018, with reduced dimensions, a Retina display, and combination USB-C/Thunderbolt 3 ports for data and power. An updated model was released in February 2020 with the Magic Keyboard and an option for an Intel Core i7 processor. In November 2020, Apple released the first MacBook Air with Apple silicon, based on the Apple M1 processor. Intel-based First generation (Unibody) Steve Jobs introduced the MacBook Air during Apple's keynote address at the 2008 Macworld conference on January 15, 2008. The first-generation MacBook Air was a 13.3" model, initially promoted as the world's thinnest notebook at 1.9 cm (a previous record holder, 2005's Toshiba Portege R200, was 1.98 cm high). It featured a custom Intel Merom CPU and Intel GMA GPU whose package was 40% the size of the standard chip package. It also featured an anti-glare LED-backlit display, a full-size keyboard, and a large trackpad that responded to multi-touch gestures such as pinching, swiping, and rotating. The MacBook Air was the first subcompact notebook offered by Apple since the 12" PowerBook G4 was discontinued in 2006. It was also Apple's first computer with an optional solid-state drive. It was Apple's first notebook since the PowerBook 2400c without a built-in removable media drive. To read optical disks, users could either purchase an external USB drive such as Apple's SuperDrive or use the bundled Remote Disc software to wirelessly access the drive of another computer that has the program installed. The MacBook Air also did without a FireWire port, Ethernet port, line-in, and a Kensington Security Slot. On October 14, 2008, a new model was announced with a low-voltage Penryn processor and Nvidia GeForce graphics. Storage capacity was increased to a 128 GB SSD or a 120 GB HDD, and the micro-DVI video port was replaced by the Mini DisplayPort. A mid-2009 version featured slightly higher battery capacity and a faster Penryn CPU. Second generation (Tapered Unibody) In October 2010, Apple released a redesigned 13.3-inch model with a tapered enclosure, higher screen resolution, improved battery, a second USB port, stereo speakers, and standard solid-state storage. An 11.6-inch model was introduced, offering reduced cost, weight, battery life, and performance relative to the 13.3-inch model, but better performance than typical netbooks of the time. Both 11-inch and 13-inch models had an analog audio output/headphone minijack supporting Apple earbuds with a microphone. The 13-inch model received an SDXC-capable SD Card slot. 
On July 20, 2011, Apple released updated models, which also became Apple's entry-level notebooks due to lowered prices and the discontinuation of the white MacBook around the same time. The mid-2011 models were upgraded with Sandy Bridge dual-core Intel Core i5 and i7 processors, Intel HD Graphics 3000, backlit keyboards and Thunderbolt, and Bluetooth was upgraded to v4.0. Maximum storage options were increased to 256 GB. This revision also replaced the Exposé (F3) key with a Mission Control key, and the Dashboard (F4) key with a Launchpad key. On June 11, 2012, Apple updated the line with Intel Ivy Bridge dual-core Core i5 and i7 processors, HD Graphics 4000, faster memory and flash storage speeds, USB 3.0, an upgraded 720p FaceTime camera, and a thinner MagSafe 2 charging port. On June 10, 2013, Apple updated the line with Haswell processors, Intel HD Graphics 5000, and 802.11ac Wi-Fi. The standard memory was upgraded to 4 GB, with a maximum configuration of 8 GB. Storage started at a 128 GB SSD, with options for 256 GB and 512 GB. The Haswell processors considerably improved battery life over the previous generation: the 11-inch model is capable of 9 hours and the 13-inch model of 12 hours, and a team of reviewers exceeded the expected battery life ratings during their test. In March 2015, the models were refreshed with Broadwell processors, Intel HD Graphics 6000, Thunderbolt 2, and faster storage and memory. In 2017, the 13-inch model received a processor speed increase from 1.6 GHz to 1.8 GHz and the 11-inch model was discontinued. The 2017 model remained available for sale after Apple launched the next generation in 2018. It was discontinued in July 2019. Before its discontinuation it was Apple's last notebook with USB Type-A ports, a non-Retina display, and a backlit rear Apple logo. Third generation (Retina) In October 2018, Apple released the third-generation MacBook Air, with Amber Lake processors, a 13.3-inch Retina display with a resolution of 2560×1600 pixels, Touch ID, and two combination USB-C 3.1 Gen 2/Thunderbolt 3 ports plus one audio jack. The screen displays 48% more color and its bezels are 50% narrower than the previous generation's, and the computer occupies 17% less volume. Thickness was reduced to 15.6 mm and weight to 1.25 kg (2.75 pounds). It was available in three finishes: silver, space gray, and gold. Unlike the previous generation, this model could not be configured with an Intel Core i7 processor, possibly because Intel never released the i7-8510Y CPU that would have been used. The base 2018 model came with 8 GB of 2133 MHz LPDDR3 RAM, a 128 GB SSD, and an Intel Core i5 processor (1.6 GHz base clock, with Turbo Boost up to 3.6 GHz) with Intel UHD Graphics 617. Apple released updated models in July 2019 with True Tone display technology and an updated butterfly keyboard using the same components as the mid-2019 MacBook Pro. A test found that the 256 GB SSD in the 2019 model has a 35% lower read speed than the 256 GB SSD in the 2018 model, though the write speed is slightly faster. Updated models released in March 2020 added Ice Lake Intel Core i3, i5 and i7 processors, updated graphics, and support for 6K output to run the Pro Display XDR and other 6K monitors, and replaced the butterfly keyboard with a Magic Keyboard design similar to that initially found in the 2019 16-inch MacBook Pro. 
Apple silicon
On November 10, 2020, Apple announced an updated Retina MacBook Air with an Apple-designed M1 processor, launched alongside an updated Mac Mini and 13-inch MacBook Pro as the first Macs with Apple's new line of custom ARM-based Apple silicon processors. The device uses a fanless design, a first for the MacBook Air. It also adds support for Wi-Fi 6, USB4/Thunderbolt 3, and wide color (P3). The M1 MacBook Air can drive only one external display, unlike the preceding Intel-based model, which could drive two 4K displays. The FaceTime camera remains 720p, but Apple advertises an improved image signal processor for higher-quality video.
Supported operating systems
Supported macOS releases
macOS Monterey, the current release of macOS, can run with working Wi-Fi and graphics acceleration on unsupported MacBook Air computers by means of a compatible patch utility.
Boot Camp–supported Windows versions (Intel models only)
Note: There is no Boot Camp support for Apple silicon models.
See also
Comparison of Macintosh models
MacBook (12-inch)
MacBook family
3157334
https://en.wikipedia.org/wiki/Internet%20Governance%20Forum
Internet Governance Forum
The Internet Governance Forum (IGF) is a multistakeholder governance group for policy dialogue on issues of Internet governance. It brings together all stakeholders in the Internet governance debate, whether they represent governments, the private sector or civil society, including the technical and academic community, on an equal basis and through an open and inclusive process. The establishment of the IGF was formally announced by the United Nations Secretary-General in July 2006. It was first convened in October–November 2006 and has held an annual meeting since then.
History and development of the Internet Governance Forum
WSIS Phase I, WGIG, and WSIS Phase II
The first phase of the World Summit on the Information Society (WSIS), held in Geneva in December 2003, failed to agree on the future of Internet governance, but did agree to continue the dialogue and requested the United Nations Secretary-General to establish a multi-stakeholder Working Group on Internet Governance (WGIG). Following a series of open consultations in 2004 and 2005, and after reaching a clear consensus among its members, the WGIG proposed the creation of the IGF as one of four proposals made in its final report. Paragraph 40 of the WGIG report stated: "(t)he WGIG identified a vacuum within the context of existing structures, since there is no global multi-stakeholder forum to address Internet-related public policy issues. It came to the conclusion that there would be merit in creating such a space for dialogue among all stakeholders. This space could address these issues, as well as emerging issues, that are cross-cutting and multidimensional and that either affect more than one institution, are not dealt with by any institution or are not addressed in a coordinated manner".
The WGIG report was one of the inputs to the second phase of the World Summit on the Information Society held in Tunis in 2005. The idea of the Forum was also proposed by Argentina, as stated in its proposal made during the last Prepcom 3 in Tunis: "In order to strengthen the global multistakeholder interaction and cooperation on public policy issues and developmental aspects relating to Internet governance we propose a forum. This forum should not replace existing mechanisms or institutions but should build on the existing structures on Internet governance, should contribute to the sustainability, stability and robustness of the Internet by addressing appropriately public policy issues that are not otherwise being adequately addressed excluding any involvement in the day to day operation of the Internet. It should be constituted as a neutral, non-duplicative and non-binding process to facilitate the exchange of information and best practices and to identify issues and make known its findings, to enhance awareness and build consensus and engagement. Recognizing the rapid development of technology and institutions, we propose that the forum mechanism periodically be reviewed to determine the need for its continuation."
The second phase of WSIS, held in Tunis in November 2005, formally called for the creation of the IGF and set out its mandate. Paragraph 72 of the Tunis Agenda called on the UN Secretary-General to convene a meeting of the new multi-stakeholder forum, to be known as the IGF.
The Tunis WSIS meeting did not reach an agreement on any of the other WGIG proposals, which generally focused on new oversight functions for the Internet that would reduce or eliminate the special role the United States plays in Internet governance through its contractual oversight of ICANN. The US Government's position during the lead-up to the Tunis WSIS meeting was flexible on the principle of global involvement, very strong on the principle of multi-stakeholder participation, but inflexible on the need for US control to remain for the foreseeable future in order to ensure the "security and stability of the Internet".
2005 mandate
The mandate for the IGF is contained in the 2005 WSIS Tunis Agenda. The IGF was mandated to be principally a discussion forum for facilitating dialogue between the Forum's participants. The IGF may "identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations," but does not have any direct decision-making authority. The mandate encourages different stakeholders, particularly those from developing countries, to strengthen their engagement. In paragraph 72(h), the mandate focused on capacity-building for developing countries and the drawing out of local resources. This effort has been reinforced, for instance, through the Diplo Foundation's Internet Governance Capacity Building Programme (IGCBP), which allowed participants from different regions to benefit from valuable resources with the help of regional experts in Internet governance.
Formation of the IGF
The United Nations published its endorsement of a five-year mandate for the IGF in April 2006. There were two rounds of consultations regarding the convening of the first IGF:
16–17 February 2006 – The first round of consultations was held in Geneva. The transcripts of the two-day consultations are available on the IGF site.
19 May 2006 – The second round of consultations was open to all stakeholders and was coordinated for the preparations of the inaugural IGF meeting. The meeting chairman was Nitin Desai, the United Nations Secretary-General's Special Adviser for Internet Governance.
The convening of the IGF was announced on 18 July 2006, with the inaugural meeting of the Forum to be held in Athens, Greece from 30 October to 2 November 2006.
2011 mandate renewal and improvements process
In the lead-up to the completion of the first five-year mandate of the IGF in 2010, the UN initiated a process of evaluating the continuation of the IGF, resulting in a United Nations General Assembly resolution to continue the IGF for a further five years (2011–2015). In addition to the renewed mandate, another UN body, the Commission on Science and Technology for Development (CSTD), established a Working Group on Improvements to the IGF (CSTDWG), which first met in February 2011, held five working group meetings, completed its work in early 2012, and issued a report to the Commission for consideration during its 15th session, held 21–25 May 2012 in Geneva. The Working Group report made 15 recommendations in five specific areas (the number of recommendations per area is shown in parentheses):
Shaping of the outcomes of IGF meetings (2);
Working modalities of the IGF, including open consultations, the Multi-stakeholder Advisory Group (MAG) and the Secretariat (3);
Funding of the IGF (3);
Broadening participation and capacity-building (4); and
Linking the IGF to other Internet governance-related entities (3).
At its meeting held from 21 to 25 May 2012, the CSTD made the following recommendations to the Economic and Social Council regarding Internet governance and the Internet Governance Forum, which the Council accepted at its meeting on 24 July 2012:
25. Takes note that the CSTD Working Group on improvements to the Internet Governance Forum successfully completed its task;
26. Takes note with appreciation of the report of the Working Group on improvements to the Internet Governance Forum and expresses its gratitude to all its members for their time and valuable efforts in this endeavour as well as to all member states and other relevant stakeholders that have submitted inputs to the Working Group consultation process;
35. Urges the Secretary-General to ensure the continued functioning of the IGF and its structures in preparation for the seventh meeting of the Internet Governance Forum, to be held from 6 to 9 November 2012 in Baku, Azerbaijan and future meetings of the Internet Governance Forum;
36. Notes the necessity to appoint the Special Advisor to the Secretary-General on Internet Governance and the Executive Coordinator to the IGF.
2015 mandate renewal
The second five-year mandate of the IGF ended in 2015. On 16 December 2015, the United Nations General Assembly adopted the outcome document on the 10-year review of the implementation of the outcomes of the World Summit on the Information Society. Among other things, the outcome document stresses the need to promote greater participation and engagement in Internet governance discussions, involving governments, the private sector, civil society, international organizations, the technical and academic communities, and all other relevant stakeholders. It acknowledges the role the Internet Governance Forum (IGF) has played as a multistakeholder platform for discussion of Internet governance issues, and it extends the existing mandate of the IGF as set out in paragraphs 72 to 78 of the Tunis Agenda for a third period, this time of ten years. During the ten-year period, the IGF should continue to show progress on working modalities and on the participation of relevant stakeholders from developing countries.
IGF Retreat, July 2016
After the UN General Assembly extended the IGF's mandate for ten additional years in December 2015, but before the December 2016 IGF meeting in Mexico, an IGF Retreat was held on 14–16 July 2016 in Glen Cove, New York, to focus on "Advancing the 10-Year Mandate of the Internet Governance Forum". At the time that the IGF mandate was extended, the UN General Assembly called for "progress on working modalities and the participation of relevant stakeholders from developing countries" and "accelerated implementation of recommendations in the report of the UN Commission on Science and Technology for Development (CSTD) Working Group on Improvements to the IGF." The retreat was thus framed by the mandates of the Tunis Agenda and the WSIS+10 review. It also aimed to build on the report of the CSTD Working Group on improvements to the IGF and the many years of reflection by the MAG and the IGF community on improving the working methods of the IGF. The retreat was to focus on "how" the IGF could best work to deliver its role and how it could best be supported. As it focused on the "how", it would not try to carry out the substantive discussions that are to happen in the IGF itself.
The retreat reached the following understandings:
In addition to its renewal of the IGF's mandate in December 2015, the UN General Assembly expressed expectations, specifically the need to show progress on working modalities and the participation of relevant stakeholders from developing countries, as well as the accelerated implementation of the recommendations of the CSTD Working Group on improvements to the IGF. There was also recognition that improvements have been and continue to be made on an ongoing basis.
The future relevance of the IGF is not assured, being dependent inter alia on increased voluntary funding to the multi-donor extra-budgetary IGF Trust Fund Project of the UN that funds the IGF Secretariat, and on increased participation from a balanced and diverse set of stakeholders.
Other fora are emerging for those wishing to engage in discussions about Internet governance. This suggested that the IGF's distinctiveness and value within this range of alternatives would need to remain sufficient to maintain participation levels from governments and the private sector in particular.
A few participants felt that the MAG does not engage all parts of the community who want to take part in the discussion on Internet governance, and that the IGF itself as well as the various intersessional activities could address this.
The IGF has evolved over the years and is now seen by many as much more than an annual forum. Increasingly, it is seen as an ecosystem including national and regional IGFs, intersessional work, best practice fora, dynamic coalitions and other activities.
More could be done to take a strategic, long-term view of the role and activities of the IGF, such as through a predictable multi-year programme of work. Even if not undertaken generally, it might be possible to reinvigorate the IGF by taking a longer-term view of particular issues, dedicating time and resources to progressing discussions and achieving concrete outcomes on these over time. It might also be possible to move towards a continuous, predictable process for programming the work of the IGF.
The IGF's innovative and unconventional multistakeholder structure and culture, compared with other UN processes, is generally felt to be one of its strengths. However, it also makes it more difficult to integrate the IGF with other UN processes, and to fit the IGF and its institutional arrangements comfortably into expectations of multistakeholder processes. One of the challenges therefore is how to reconcile its bottom-up approach and stakeholder expectations with other multilateral processes within the UN system.
The role of the MAG, in particular whether the MAG is expected or authorized to take on responsibilities beyond the programming of the annual IGF meetings, needs to be clarified in order to pursue significant innovations in the IGF.
It was generally felt that the IGF Secretariat is under-resourced and hence lacks capacity for its current responsibilities, let alone additional activities.
Organizational structure
Following an open consultation meeting called in February 2006, the UN Secretary-General established an Advisory Group (now known as the Multistakeholder Advisory Group, or MAG) and a Secretariat as the main institutional bodies of the IGF.
Multistakeholder Advisory Group (MAG)
The Advisory Group, now known as the Multistakeholder Advisory Group (MAG), was established by the then UN Secretary-General, Kofi Annan, on 17 May 2006 to assist in convening the first IGF, held in Athens, Greece. The MAG's mandate has been renewed or extended each year to provide assistance in the preparations for each upcoming IGF meeting. The MAG meets for two days three times each year, in February, May and September. All three meetings take place in Geneva and are preceded by a one-day Open Consultations meeting. The details of the MAG's operating principles and selection criteria are contained in the summary reports of its meetings.
The MAG was originally made up of 46 members, but membership grew first to 47, then 50, and eventually 56. Members are drawn from governments, the commercial private sector and civil society, including the academic and technical communities. The MAG tries to renew roughly one third of the members within each stakeholder group each year. In 2011, because there were only three new MAG members in 2010, it was suggested that two thirds of each group's membership be renewed in 2012, and in fact 33 new members were appointed to the 56-member group.
The first MAG chairman was Nitin Desai, an Indian economist and former UN Under-Secretary-General for Economic and Social Affairs from 1992 to 2003. He also served as the Secretary-General's Special Adviser for the World Summit on the Information Society, later Special Advisor for Internet Governance. In 2007 Nitin Desai and Brazilian diplomat Hadil da Rocha Vianna served as co-chairs of the MAG. In 2008, 2009 and 2010 Nitin Desai served as MAG chair. In 2011 Alice Munyua, the Chair of the Kenyan IGF Steering Committee, was MAG chair. In 2012 Elmir Valizada, Deputy Minister of Communications and Information Technology, Azerbaijan, was MAG chair. In 2013 Ashwin Sasongko, Director General of ICT Application, Ministry of Communication and Information Technology (CIT), Indonesia, served as Honorary Chair, with Markus Kummer, Vice-President for Public Policy of the Internet Society, as interim chair of the MAG. In 2014 and 2015 Jānis Kārkliņš, Ambassador-at-Large for the Government of Latvia, former Assistant Director-General of Communication and Information of UNESCO, Latvian Ambassador to France, Andorra, Monaco and UNESCO, and participant in the World Summit on the Information Society, served as MAG chair. In 2016 United Nations Secretary-General Ban Ki-moon appointed Lynn St. Amour of the United States as the new MAG chair. St. Amour is President and CEO of Internet-Matters, an independent, not-for-profit online safety organization, and served from 2001 to 2014 as President and CEO of the Internet Society. In 2019 United Nations Secretary-General António Guterres appointed Anriette Esterhuysen of the Republic of South Africa as the new MAG chair. Before the appointment, Esterhuysen was the Executive Director of the Association for Progressive Communications.
Secretariat
The Secretariat, based in the United Nations Office in Geneva, assists and coordinates the work of the Multistakeholder Advisory Group (MAG). The Secretariat also hosts internships and fellowships. The Secretariat's Executive Coordinator position is currently vacant; Chengetai Masango is IGF Programme and Technology Manager. Until 31 January 2011 the IGF Secretariat was headed by Executive Coordinator Markus Kummer.
Kummer was also Executive Coordinator of the Secretariat of the UN Working Group on Internet Governance (WGIG). On 1 February 2011 he joined the Internet Society as its Vice President for Public Policy.
Activities at the IGF
The following activities take place during IGF meetings: main or focus sessions, workshops, Dynamic Coalition meetings, Best Practice Forums, side meetings, host country sessions, 'flash' sessions, open forums, inter-regional dialogue sessions, newcomers track sessions, lightning sessions, unconference sessions, pre-events, and the IGF Village.
Main or focus sessions
The first IGF meeting in Greece in 2006 was organized around the main themes of openness, security, diversity, and access. For IGF Brazil in 2007, a new theme, critical Internet resources, was introduced. From 2009 through 2012 there were six standard themes: (i) Internet governance for development, (ii) Emerging issues, (iii) Managing critical Internet resources, (iv) Security, openness, and privacy, (v) Access and diversity, and (vi) Taking stock and the way forward.
For IGF Indonesia in 2013 the six main themes were: (i) Access and Diversity - Internet as an engine for growth and sustainable development; (ii) Openness - Human rights, freedom of expression and free flow of information on the Internet; (iii) Security - Legal and other frameworks: spam, hacking and cyber-crime; (iv) Enhanced cooperation; (v) Principles of multi-stakeholder cooperation; and (vi) Internet governance principles.
For IGF Turkey in 2014 the eight main themes were: (i) Policies Enabling Access; (ii) Content Creation, Dissemination and Use; (iii) Internet as an Engine for Growth and Development; (iv) IGF and The Future of the Internet Ecosystem; (v) Enhancing Digital Trust; (vi) Internet and Human Rights; (vii) Critical Internet Resources; and (viii) Emerging Issues.
For IGF Brazil in 2015 the eight main themes were: (i) Cybersecurity and Trust; (ii) Internet Economy; (iii) Inclusiveness and Diversity; (iv) Openness; (v) Enhancing Multistakeholder Cooperation; (vi) Internet and Human Rights; (vii) Critical Internet Resources; and (viii) Emerging Issues.
For IGF Mexico in 2016 a less formal and more bottom-up approach was used to develop the meeting's main themes. The nine themes that emerged were: (i) Sustainable Development and the Internet Economy; (ii) Access and Diversity; (iii) Gender and Youth Issues; (iv) Human Rights Online; (v) Cybersecurity; (vi) Multistakeholder Cooperation; (vii) Critical Internet Resources; (viii) Internet Governance Capacity Building; and (ix) Emerging Issues that may affect the future of the open Internet.
Workshops
Each year starting in 2007, the IGF has hosted a number of workshops in formats such as workshops with panels, roundtables, and capacity-building sessions.
Examples of workshops held at IGF meetings include:
Universalization of the Internet - How to reach the next billion (Expanding the Internet)
Low cost sustainable access
Multilingualization
Implications for development policy
Managing the Internet (Using the Internet)
Critical Internet resources
Arrangements for Internet governance
Global cooperation for Internet security and stability
Taking stock and the way forward
Emerging issues
Internet Governance and RPKI
Spectrum for Democracy and Development
Internet Regulation for Improved Access in Emerging Markets
Understanding Internet Infrastructure: an Overview of Technology and Terminology
Freedom of expression and freedom from hate on-line (Young People Combating Hate Speech On-line)
Protecting the rule of law in the online environment
Evaluating Internet Freedom Initiatives: What works?
DNSSEC for ccTLDs: Securing National Domains
Media pluralism and freedom of expression in the Internet age
An industry-led approach for making the Internet a better place for kids
Best Common Practices for Building Internet Capacity
Law Enforcement via Domain Names: Caveats to DNS Neutrality
Defining the Successful Factors of Different Models for Youth Participation in Internet Governance
How to engage users on Internet Policies?
New gTLDs: Implications and Potential for Community Engagement, Advocacy and Development
Human Rights, Internet Policy and the Public Policy Role of ICANN
Innovative application of ICTs to facilitate child protection online
EURid/UNESCO World Report on IDN Deployment 2012 – opportunities and challenges associated with IDNs and online multilingualism
The Benefits of Using Advanced Mobile Technologies and Global Spectrum Harmonization
Empowering Internet Users – which tools?
Dynamic coalitions
The most tangible result of the first IGF in Athens was the establishment of a number of so-called Dynamic Coalitions. These coalitions are relatively informal, issue-specific groups of stakeholders interested in the particular issue. Most coalitions allow participation by anyone interested in contributing. Thus, these groups gather not only academics and representatives of governments, but also members of civil society interested in taking part in the debates and engaging in the coalitions' work.
Active Dynamic Coalitions:
Accessibility and Disability
Child Online Safety
Core Internet Values
Freedom of Expression and Freedom of the Media on the Internet (FOEonline)
Gender and Internet Governance
Internet and Climate Change
Internet of Things
Internet Rights and Principles / Internet Bill of Rights
Network Neutrality
Platform Responsibility
Public Access in Libraries
Youth Coalition on Internet Governance
Inactive Dynamic Coalitions:
Access and Connectivity for Remote, Rural and Dispersed Communities
Access 2 Knowledge (A2K@IGF)
Framework of Principles for the Internet
Global Localization Platform
Linguistic Diversity
Online Collaboration
Online Education
Open Standards
Privacy
Social Media and Legal Issues
Stop Spam Alliance
Best practice forums
Starting in 2014, these sessions demonstrate some of the best practices that have been adopted with regard to the key IGF themes and the development and deployment of the Internet. The sessions provide an opportunity to discuss what constitutes a "best practice", to share relevant information and experiences, to build consensus around best practices that can then be transferred to other situations, and to strengthen capacity-building activities.
The five Best Practice Forums held during IGF 2014 were: Developing Meaningful Multistakeholder Mechanisms; Regulation and Mitigation of Unwanted Communications (Spam); Establishing and Supporting CERTs for Internet Security; Creating an Enabling Environment for the Development of Local Content; and Online Child Safety and Protection.
'Flash' sessions
A flash session provides an opportunity for presenters and organisers to spark participants' interest in specific reports, case studies, best practices, methodologies, tools, etc. that have already been implemented or are in the process of implementation. Participants have an opportunity to ask very specific questions. Flash sessions are generally shorter than other types of sessions. Flash sessions held at IGF 2014 were: the Internet and Jurisdiction Project; and Crowd-Sourced Solutions to Bridge the Gender Digital Divide.
Open forums
All major organizations dealing with Internet governance-related issues are given a 90-minute time slot, at their request, to hold an Open Forum in order to present and discuss their activities during the past year and allow for questions and discussions. Examples of recent Open fora include:
Consultation on the ten-year review of WSIS (CSTD)
The Economics of an Open Internet (OECD)
Governmental Advisory Committee (GAC) Open Forum (ICANN)
ICANN Open Forum
Internet & Jurisdiction Policy Network Open Forum
ISOC@IGF: Dedicated to an open accessible Internet (Internet Society)
South Korea's effort to advance the Internet environment including IPv6 deployment (MSIP and KISA)
Launch of Revised Guidelines for Industry on Child Online Protection (ITU and UNICEF)
Measuring what and how: Capturing the effects of the Internet we want (World Wide Web Foundation)
Multi-stakeholder Consultation on UNESCO's Comprehensive Study on the Internet (UNESCO)
Protecting Human Rights Online (Freedom Online Coalition)
Your Internet, Our Aim: Guide Internet Users to Their Human Rights (Council of Europe)
Regional, national, and youth initiatives
A number of regional, national, and youth initiatives hold separate meetings throughout the year and an inter-regional dialogue session at the annual IGF meeting. EuroDIG, established in 2008, was the first regional IGF initiative.
Youth IGF initiatives:
Youth IGF Movement
Youth IGF Project
Youth Observatory
Asia Pacific Youth IGF
German Youth IGF
Youth IGF of Hong Kong
Youth IGF of Latin America and the Caribbean (LACIGF)
Netherlands Youth IGF
Youth IGF of Turkey
Regional IGF initiatives:
African IGF (AfIGF)
Arab IGF
Asia Pacific IGF (APrIGF)
Caribbean IGF (CIGF)
Central Africa IGF
Central Asia IGF (CAIGF)
Commonwealth IGF
East Africa IGF (EA-IGF)
European Dialogue on Internet Governance (EuroDIG)
Latin American and Caribbean IGF (LAC IGF)
Macao IGF
Persian IGF
Southern Africa IGF
South Eastern European Dialogue on Internet Governance (SEEDIG)
West Africa IGF
National IGF initiatives exist in: Afghanistan, Argentina, Armenia, Australia, Austria, Azerbaijan, Bangladesh, Barbados, Belarus, Benin, Bosnia and Herzegovina, Brazil, Canada, Chad, Colombia, Croatia, Denmark, Dominican Republic, Ecuador, Estonia, Finland, Georgia, Germany, Ghana, Indonesia (ID-IGF), Italy, Japan, Kenya, Malawi, Malta, Mexico, Moldova, Mozambique, Nepal, Netherlands, New Zealand, Nigeria, Panama, Paraguay, Peru, Poland, Portugal, Russia, Slovenia, South Africa, Spain, Sri Lanka, Switzerland, Togo, Trinidad and Tobago, Tunisia, Uganda, Ukraine, the United Kingdom, the United States, Uruguay, and Zimbabwe.
Lightning sessions
At IGF 2016, Lightning sessions were introduced as quicker, more informal versions of full-length workshops or presentations. The 20-minute sessions took place during lunch breaks in a shaded outdoor plaza in front of the venue. Examples of the 23 Lightning sessions held in 2016 include:
Are Tribunals re-inventing Global Internet Governance?
Sharing research on tech-facilitated crimes against children
Research and Policy Advocacy Tools for #WomensRightsOnline
Internet users' data and their unlawful use
Governance of Cyber Identity
Unveiling Surveillance Practices in Latin America
Redefining Broadband Affordability for a more Inclusive Internet
Holding algorithms accountable to protect fundamental rights
Human Rights Online: Internet Access and minorities
Anonymity vs Hate speech? Conflict Management & Human Rights on the Internet
Electronic voting: Is not digital the future of democracy?
Unconference sessions
At IGF 2016, Unconference sessions were introduced. The 20- to 40-minute talks are not pre-scheduled; participants reserve a speaking slot by signing up on a scheduling board on a first-come, first-served basis on the day of the Unconference. Five Unconference talks took place at IGF 2016:
Freedom of Expression and Religion in Asia: Desecrating Expression – Launch of a Report
#africaninternetrights - a best practice policy
Derecho de videojuegos (video game law) y Ciberseguridad: "El Nuevo Internet of Toys" ("The New Internet of Toys") [Super Lawyer Bros.]
Free Trade Agreements and IG in Latin America
Violencia Digital (Digital Violence) in the World
Newcomers track
Introduced at IGF 2016, the Newcomers track helps participants attending the IGF meeting for the first time to understand the IGF processes, fosters the integration of all new-coming stakeholders into the IGF community, and aims to make each participant's first IGF experience as productive and welcoming as possible. Newcomer sessions held in 2016 included:
What is the IGF?
Newcomers Mentor Session
'Knowledge cafes':
Private sector and Technical community at the IGF: What is the role of these stakeholder groups within the IGF and ways for engagement?
Governments and IGOs at the IGF: What's the role of these stakeholder groups in the IGF processes and ways for engagement?
The role of Civil Society within the IGF: work modalities and ways for engagement
Wrap up: Taking Stock and How to engage in the IGF 2017 community intersessional work
IGF Village
The IGF Village provides booths and meeting areas where participants may present their organizations and hold informal meetings.
Pre-events
Examples of pre-events held the day before the IGF Turkey meeting in 2014 include:
A Safe, Secure, Sustainable Internet and the Role of Stakeholders
Collaborative Leadership Exchange on Multistakeholder Participation
Commercial Law Development Program (CLDP) Supported Delegations Pre-Conference Seminar
Empowering Grassroots Level Organizations Through the .ORG Top Level Domain
Global Internet Governance Academic Network (GigaNet)
Governance in a Mobile Social Web – Finding the Markers
IGF Support Association
Integration of Diasporas and Displaced People Through ICT
Multilingualism Applied in Africa
NETmundial + Book Release – Beyond NETmundial: The Roadmap for Institutional Improvements to the Global Internet
Sex, Rights and Internet Governance
Supporting Innovation on Internet Development in the global south through evaluation, research communication and resource mobilization
UN Commission on Science and Technology for Development (CSTD) 10-year review of WSIS - Arab Perspective
IGF meetings
Four-day IGF meetings have been held in the latter part of each year starting in 2006.
IGF I — Athens, Greece 2006
The first meeting of the IGF was held in Athens, Greece from 30 October to 2 November 2006. The overall theme for the meeting was "Internet Governance for Development". The agenda was structured along five broad themes: (i) Openness - Freedom of expression, free flow of information, ideas and knowledge; (ii) Security - Creating trust and confidence through collaboration; (iii) Diversity - Promoting multilingualism and local content; (iv) Access - Internet connectivity, policy and cost; and (v) Emerging issues, with capacity-building as a cross-cutting priority.
Setting the scene: The moderator recalled that ten years earlier a similar gathering would have been attended mainly by engineers and academics from North America and Europe, whereas this meeting had much broader participation, both in terms of geography and of stakeholder groups. One panellist remarked that four years earlier many of the people assembled in the meeting room would not have spoken to one another. One of the moderators called the panel sessions a giant experiment and a giant brainstorming exercise. He also recalled the Secretary-General's comment that the IGF entered uncharted waters in fostering a dialogue among all stakeholders as equals. The innovative format was generally accepted and well received, and some commentators called it a true breakthrough in multi-stakeholder cooperation. Several speakers noted that the IGF was not the beginning of this process but the middle of it: much had already been achieved in the WSIS process, and the IGF must build on that. It was remarked that all stakeholders have roles to play in the IGF and need to share experiences, perspectives, and best practices. The theme of development was emphasized, with several speakers asking what the IGF could do for the billions who do not yet have access. The main message of this session was that no single stakeholder could act alone, and that all therefore needed to work together on Internet governance issues in development. To conclude, it was felt that for the IGF to have value, participants would have to leave Athens with a clear view of how to move forward.
Openness - Freedom of expression, free flow of information, ideas and knowledge: This session focused on the free flow of information and freedom of information on the one hand, and access to information and knowledge on the other. Much of the discussion was devoted to finding the right balance between freedom of expression and responsible use of that freedom, and the balance between protecting copyright and ensuring access to knowledge.
Security - Creating trust and confidence through collaboration: There was a generally held view that the growing significance of the Internet in economic and social activities raised continuing and complex security issues. One of the key issues was the way in which responses to growing security threats depend on the implementation of processes of authentication and identification. Such processes can only be effective where there is a trusted third party that can guarantee both authentication and identification. This raised the question of who could effectively act as a trusted third party: the state or the private sector. There was a widely held view that the best approach to resolving security issues is based on "best practices" and multi-stakeholder cooperation in an international context. However, there was concern about the degree to which information was shared in a timely manner and in a common format (particularly with developing countries). There was a debate as to whether market-based solutions, which stimulate innovation, or a public-goods model would deliver better security measures across the Internet. For some, the public-goods approach offered the opportunity for the widespread adoption of best practice across all countries. A counter view was that innovative solutions were required and that these could only be provided by market-based activities. There was a wide-ranging but inconclusive debate about the role of open standards in shaping security solutions.
Diversity - Promoting multilingualism and local content: There was strong agreement that multilingualism is a driving requirement for diversity on the Internet, and that the discussion was not about the "digital divide" but rather about the "linguistic divide". There was recognition that diversity extended beyond linguistic diversity to cover populations challenged by lack of literacy in the dominant language or by disability. UNESCO drew attention to the Universal Declaration on Cultural Diversity, mentioning that its purpose was to support the expression of culture and identity through the diversity of languages. Participants raised the issue of software, pointing out that market forces were sometimes not strong enough to provide countries with software in the languages they required. During the discussion on internationalized domain names (IDNs), it was generally felt that internationalizing domain names without endangering the stability and security of the Internet remained one of the largest challenges.
Access - Internet connectivity, policy and cost: Increasing access remains one of the great challenges facing the Internet community. A theme that emerged was that the introduction of competition and the removal of barriers to competition were of fundamental importance. It was recognized that Africa faced particularly complex problems with regard to access to ICT resources. It was widely expected that wireless technologies could change the access market landscape.
There was a broad convergence of views that the most appropriate level at which to address issues of access was the national level, where most policy development and implementation takes place.
Emerging issues: The session included video link-ups with remote participants at locations in Chile, Mexico, and Peru. There was a sense of a growing digital divide, due in large part to a lack of access, which in turn was due to high costs. Access, according to several of the panelists, should be a fundamental human right, because without it the young cannot grow up to truly live in the modern world. The hope was expressed that the IGF would enable youth to get more involved in Internet governance issues.
Other events: A total of 36 workshops were held in parallel to the main sessions. Reports from these workshops were made available on the IGF website.
IGF II — Rio de Janeiro, Brazil 2007
The second meeting of the IGF was held in Rio de Janeiro on 12–15 November 2007. The overall theme for the meeting was "Internet Governance for Development". The main sessions were organized around five themes: (i) Critical Internet resources; (ii) Access; (iii) Diversity; (iv) Openness; and (v) Security.
Opening ceremony / Opening session: The multi-stakeholder approach was highlighted by many speakers and panelists during the opening session, including the message from UN Secretary-General Ban Ki-moon, which was read by the UN Under-Secretary-General for Economic and Social Affairs, Mr. Sha Zukang. Mr. Ban Ki-moon's message assured participants that it was not a UN goal to take over Internet governance, but that the UN would offer an opportunity to bring people with similar interests together to reach their common goals. Mr. Sha Zukang concluded that the IGF was a unique experience because "it brings together people who normally do not meet under the same roof." The nature and prospects of the IGF were also discussed, as the Chairman aptly summarized: "Several participants underlined that the IGF was not only a space for dialogue, but also a medium that should encourage fundamental change at the local level to empower communities, build capacity and skills, enable the Internet's expansion, thereby contributing to economic and social development."
Critical Internet resources: This was a new session. It covered issues pertaining to the infrastructure of the Internet, including the roles of ICANN and of governments in shaping policies.
Access: The issue of "access" is about how to get the next billion users online in the years to come. Initiatives with this goal are reminiscent of pilot projects in Africa where laptops were given to children under an open-source software agreement.
Diversity: "Diversity" calls for multilingualism on the Net. Promotion of multilingualism would increase the number of users whose main language is not English. In order to open the Net to a diverse population, internationalized domain names (IDNs) were added to meet the language needs of other users.
Openness: The strong support for closed software was viewed unfavorably by some, because of long-lasting agreements between governments and large software companies. Such agreements were criticized, as they bind different entities to proprietary or closed-source technologies. Many believed that the shift from closed to open software could only happen with the full-scale participation of both the private and public sectors.
As such, many people feared the Internet turning into a "private" network if there were too much insistence on the use of closed technologies. Talks on open standards, open architecture and open software are clear indicators of what the issue of openness is all about. Lawrence Lessig's book Free Culture discusses openness on the Internet in more depth.
Security: Internet security questions on the agenda related to: cybercrime, cyber-terrorism, protection of individuals and automatic processing of personal data, action against trafficking in human beings, and protection of children against sexual exploitation and sexual abuse. The meeting called for international cooperation and coordinated action to counter cybercrime because of its trans-national dimension. Recommendations pointed to the responsibility of governments to raise awareness among Internet users, and to ICANN, because of its responsibility for the Domain Name System, with regard to controlling illegal online content so as to protect children from Internet pornography.
Emerging issues: This session identified four key issues that should be addressed in the Forum: (i) demand- and supply-side initiatives (by Robert Pepper), who brought into the debate the economic concepts of demand and supply applied to Internet governance; (ii) social, cultural and political issues of Web 2.0 (by Andrew Keen); (iii) access, particularly in Africa (by Nii Quaynor); and (iv) innovation, research and development (by Robert Kahn). On the demand side, there were interesting proposals, such as the need to educate Internet users through capacity-building, people's ability to control their web identities (part of educating users about the Internet), local content in local languages (strengthening local communities), and improving public policies (but not over-regulating, for example by prohibiting or limiting access to VoIP, which can suppress demand). On the supply side, there was the common concern of extending Internet access to more users, but also consideration of "the opportunities created by the release of spectrum through the switch to digital broadcasting were highlighted. Some speakers suggested that such spectrum could be used to support new broadband networks and support new investment and innovative services, while others held the view that this would not be a sustainable solution". Another challenge was discussing emerging issues in a global forum spanning different perspectives, for example the realities of developed and developing countries, democratic and non-democratic political regimes, and so on.
Taking stock and the way forward: There was broad agreement that the meeting had been a success; the richness of the debate, the number of workshops, the multi-stakeholder format, the diversity of opinions, and the number and range of delegates were all cited as indicators of success. There was clear support for the multi-stakeholder process, and many comments as to how the dialogue of the IGF, freed from the constraints of negotiations and decision-making, allowed ideas to be freely exchanged and debated. Some concern was expressed that the link between the workshops and the main sessions was not as clear or as strong as might have been expected. It was suggested that participation from users could be increased and that attention needed to be given to ensuring effective remote participation in the meeting. Some commentators spoke of the need for greater diversity in participation and, for example, the need for greater gender balance on the panels.
Young people also needed to be better represented. Development was a key topic of discussion during the Rio meeting. It would remain an important aspect for future discussion, together with the issue of bridging the digital divide, a key topic for discussion at IGF Hyderabad and one reflected in that meeting's theme, "Internet for All."
Other events: 84 self-organized events took place in parallel to the main sessions: 36 workshops, 23 best practice forums, 11 dynamic coalition meetings, 8 open forums, and 6 events covering other issues. Of these, 11 were devoted to openness and freedom of expression, 12 to development and capacity-building, 9 to access, 10 to critical Internet resources, 6 to diversity, 17 to other issues, and 19 to security. Of the security sessions, 9 spotlighted the protection of children and the issue of child pornography on the Internet.
IGF III — Hyderabad, India 2008
The third meeting of the IGF was held in Hyderabad, India from 3 to 6 December 2008. The overall theme for the meeting was "Internet for All". The meeting was held in the aftermath of the terrorist attacks in Mumbai. The participants expressed their sympathies to the families of the victims and to the Government and the people of India. The five main sessions were organized around the themes: (i) Reaching the next billion, (ii) Promoting cyber-security and trust, (iii) Managing critical Internet resources, (iv) Emerging issues - the Internet of tomorrow, and (v) Taking stock and the way forward. The meeting was attended by 1,280 participants from 94 countries.
Opening ceremony / Opening session: During the opening session, nine speakers representing all stakeholder groups addressed the meeting. A common thread through all the speeches was the recognition of the importance of the meeting's overall motto, "Internet for All". It was noted that the Internet was bringing great potential for economic and social benefit to the world. At the same time, speakers also pointed out that there was a need to guard against the problems the Internet could bring when used for harmful purposes. Speakers noted the opportunity the IGF provided for a dialogue between all stakeholders and a mutual exchange of ideas. It allowed participants to build partnerships and relationships that otherwise might not occur. The IGF was appreciated for its open multi-stakeholder model, with examples of new national and regional IGF initiatives illustrating the spread of the multi-stakeholder ideal and its value in policy discussion.
Reaching the next billion: This session included two panels: (i) Realizing a Multilingual Internet, and (ii) Access, Reaching the Next Billions.
Promoting cyber-security and trust: This topic was covered in two panel discussions, one on the "Dimensions of Cybersecurity and Cyber-crime", and the second on "Fostering Security, Privacy and Openness", followed by an open dialogue.
Managing critical Internet resources: This theme was covered in two panel discussions, one on the "Transition from IPv4 to IPv6", and the second on "Global, Regional and National Arrangements". These were followed by an open dialogue.
Emerging issues - the Internet of tomorrow: The goal for this session was to identify important topics that had not been discussed in the IGF to date. The moderator asked the participants to propose and discuss issues the IGF should consider in the next year at the IGF in Egypt and beyond.
Suggestions included: the growing popularity of social networks and user-generated content; looking at the situation of the last billion in addition to the next billion; the impact of policy frameworks on creativity and innovation; the implications of the global nature of the Internet for jurisdiction and legislation; the challenges of providing an environmentally sustainable Internet; a new multilateral treaty including positive obligations to ensure the ongoing functioning of the Internet; making existing treaties work, rather than creating new treaties; and building trust.
Taking stock and the way forward: This session attempted to address three questions: (i) considering the IGF itself, what should the format and modalities of the Forum be going forward, bearing in mind that the IGF was not a negotiating forum; (ii) what suggestions for the 2009 IGF meeting should the MAG consider in terms of the substance of the agenda; and (iii) a review of the desirability of continuing the IGF beyond its initial five-year mandate.
Closing session: A common thread throughout all the speeches at the closing session was the recognition that the Hyderabad meeting had been a success and that the IGF had proved its usefulness as a space for multi-stakeholder dialogue. Mr. Jainder Singh, Secretary of the Department of Information Technology in the Ministry for Communications and Information Technology of the Government of India, in his closing remarks expressed the gratitude of the people and the Government of India to all participants for coming to Hyderabad and for participating in the third meeting of the Internet Governance Forum. By being in Hyderabad in spite of the terrorist acts in Mumbai, participants had demonstrated their solidarity with the people of India in facing this menace. He made the point that the Internet today stood at a threshold, where both limitless opportunities and daunting threats lay ahead. The challenge was to seize the opportunities and exploit them to the fullest while containing, if not eliminating, the threats. It was clear that achieving these objectives would be possible only through concerted and collaborative action by governments, businesses, civil society organizations and academia. The IGF as a forum held great promise as a platform to forge precisely such a grand coalition for universal good.
Other events: The meeting included 87 other events that ran in parallel to the main sessions: 61 workshops, 9 best practice forums, 10 Dynamic Coalition meetings and 7 open forums. Of the 61 workshops, 8 were devoted to the issue of access, 5 to diversity, 14 to openness, 8 to security, 8 to critical Internet resources, 11 to development and capacity-building, and 7 to other issues. Five workshops and other meetings were cancelled following the events in Mumbai. Reports were received from a number of regional and national IGF initiatives, other related events, and other meetings.
IGF IV — Sharm El Sheikh, Egypt 2009
Egypt hosted the fourth IGF meeting from 15 to 18 November 2009 in Sharm El Sheikh. The overall theme for the meeting was "Internet Governance – Creating Opportunities for All". IGF IV marked the beginning of a new multi-stakeholder process. The main sessions on the agenda were: (i) Managing critical Internet resources; (ii) Security, openness and privacy; (iii) Access and diversity; (iv) Internet governance in light of the WSIS principles; (v) Taking stock and the way forward: the desirability of the continuation of the Forum; and (vi) Emerging issues: impact of social networks.
A key focus of IGF 2009 was encouraging youth participation in Internet governance issues.
Opening ceremony / Opening session: In all, 20 speakers addressed the participants during the opening ceremony and opening session. Mr. Sha Zukang, Under-Secretary-General for Economic and Social Affairs, explained that the IGF worked through voluntary cooperation, not legal compulsion. IGF participants came to the Forum to discuss, to exchange information and to share best practices with each other. While the IGF did not have decision-making abilities, it informed and inspired those who did. The Under-Secretary-General also reminded the meeting that the Tunis Agenda specifically called on the Secretary-General "to examine the desirability of the continuation of the Forum, in formal consultation with Forum participants, within five years of its creation, and to make recommendations to the UN membership in this regard" and encouraged all participants to contribute fully to the consultations. In his keynote address, Sir Tim Berners-Lee, creator of the World Wide Web and Director of the World Wide Web Consortium (W3C), emphasized the importance of a single Web that could be shared and used by all. He noted the importance of the Web in enhancing the lives of people with disabilities. He said the W3C championed open standards that were royalty-free so they could be openly shared. He also announced the launch of the World Wide Web Foundation, an international non-profit organization that would strive to advance the Web as a medium that empowered people. A common thread through all the speeches was the endorsement of the IGF as a platform for fostering dialogue. Eleven speakers specifically supported an extension of the IGF mandate.
Internet governance – Setting the scene: This session was intended to help newcomers and other participants understand the IGF and find their way around the programme.
Managing critical Internet resources: The session focused on four main topics: (i) the transition from IPv4 to IPv6; (ii) the importance of new TLDs and IDNs for development; (iii) the Affirmation of Commitments and the IANA contract, and recent developments in the relationship between ICANN and the United States government; and (iv) enhanced cooperation generally and the internationalization of critical Internet resource management.
Security, openness, and privacy: The importance of privacy was discussed in the light of the new social network phenomenon and the fact that children were the easiest targets, since they were at the same time the most vulnerable and most trusting group and the earliest adopters of new technology. It was noted that in addition to the rights to freedom of expression and privacy, security was also an important right. The problems of establishing a culture of trust, and of separating valid security countermeasures from those that would be established in order to collect data for control and suppression, were raised. Another challenge involved contextual integrity in data aggregation, and the role of powerful corporate and national entities in the use and abuse of such data. A further challenge concerned the fact that rights were currently protected by the constitutional nation state, yet people lived in a borderless global network. This meant there was a need for a human rights perspective beyond technological and commercial developments. The discussion also touched on anonymity. Eliminating anonymity on the Internet would be very hard, as would designing an Internet architecture that did not permit anonymity.
It was also commented that anonymity, as a fundamental property of the Internet, was a social good, a political good, and an economic good.
Access and diversity: Access and diversity can be considered two sides of the same coin; they are issues that affect hundreds of millions of people not yet involved in the Internet conversation, and of particular concern were diversity in language and diversity concerning disability. Access includes financial access, the relevance of literacy to access, political access, linguistic access, and access by the disabled. Desirable access to the Internet was further defined as being connected at the right speed and linked to the right content at the right time and place. Issues concerned with infrastructure were now secondary, because advances had been made, specifically with mobile phones and Internet penetration in many parts of the world. Many agreed that progress had been made regarding infrastructure, notably that submarine fibre cable systems had been built and provided increased bandwidth and higher-quality connectivity. However, it was noted that landlocked countries still struggled to access coastal Internet cables, and that broadband access was still limited and costs were still high. Spectrum management was identified as a major and fundamental component of access.
Internet governance in the light of the WSIS principles: The IGF was created as a product of the WSIS, and was mandated by the Tunis Agenda to promote and assess, on an ongoing basis, the embodiment of the WSIS principles in the Internet governance process. The session was to determine whether the WSIS principles had been taken into consideration in the governance of the Internet. The session was divided into two main segments. The first concentrated on the principles adopted in Geneva and Tunis, particularly paragraph 29. The second was devoted to a debate on how Internet governance influenced the evolution of an inclusive, non-discriminatory, development-oriented information society, with reference to paragraph 31 of the Tunis Agenda. After discussion of many topics, the chair emphasized two main points: (i) a serious and sincere effort had been made by many to adhere to the WSIS principles in the Internet governance ecosystem, but there was still a lot of work to be done to get everybody on board and to adhere to all of the WSIS principles; and (ii) there was a need for more serious engagement of the developing countries in IGF activities. The chair called on governments from developing countries to get more involved in IGF activities, to make use of this forum, to make their voices heard, and to have their opinions on Internet-related issues debated.
Taking stock and looking forward – on the desirability of the continuation of the Forum: Forty-five speakers and nine written statements supported a continuation of the Forum. Many speakers emphasized the usefulness of the IGF as a platform for dialogue, free from the pressures of negotiations. A majority of speakers and written submissions supported an extension of the mandate as it was, that is, to continue the IGF as a multi-stakeholder platform that brings people together to discuss issues, exchange information and share best practices, but not to make decisions, nor to have highly visible outputs.
The other speakers, while supporting a continuation of the IGF along similar lines to its current form, called for some change, ranging from small operational improvements to major changes in its functioning, such as adding provisions that would allow it to produce outputs, recommendations and decisions on a multi-stakeholder consensus basis, or financing the IGF through the regular UN budget. Most of those who supported the continuation of the forum favored an extension of at least another five-year term. Two speakers, while welcoming the success of the IGF and not opposing an extension, said it had not met expectations as regards ‘enhanced cooperation’ in the area of Internet governance. They also linked the IGF to unilateral control of critical Internet resources, an issue that needed to be addressed in the future. Egypt, the host country, supported the continuation of the forum, while stressing at the same time the need to review its modalities of work and to increase the institutional and financial capacity of its secretariat. The Chairman, Mr. Sha Zukang, Under-Secretary-General for Economic and Social Affairs, concluded the meeting by stating that he would now report back to the Secretary-General on the discussions held in Sharm El Sheikh and that the Secretary-General would then make his recommendations to the UN Membership, as requested by the Tunis Agenda.
Emerging issues – Impact of social networks: This session focused on the development of social media and explored whether these developments required the modification of traditional policy approaches, in particular regarding privacy and data protection, rules applicable to user-generated content and copyrighted material, as well as freedom of expression and illegal content.
Closing session: Several speakers, representing all stakeholder groups, addressed the closing session. Common to all the speeches was the recognition that Internet governance needed to be based on multi-stakeholder cooperation. As one speaker pointed out, the lack of multi-stakeholder involvement in the past had often led to ill-informed decision-making. Mr. Sha Zukang, Under-Secretary-General for Economic and Social Affairs, in his concluding remarks stressed the centrality of the principle of inclusiveness and the need for continued discussions on public policy issues related to the Internet. He recalled that he would present a report to the Secretary-General on the consultation on the desirability of the continuation of the Forum, as mandated by the Tunis Agenda. The Secretary-General would then communicate his recommendations to the UN Membership. All other speakers expressed their support for an extension of the mandate and emphasized the value of the IGF as a platform for multi-stakeholder dialogue. In his concluding address, the Chairman of the Fourth IGF Meeting, Mr. Tarek Kamel, said that he was confident that this message, representing the views of all stakeholders, would be conveyed to the Secretary-General.
Other events: Parallel to the main sessions, more than 100 workshops, best practice forums, dynamic coalition meetings and open forums were held.
Preparing the young generations in the digital age — A shared responsibility: The First Lady of Egypt, H.E. Ms. Suzanne Mubarak, President and Founder of the Suzanne Mubarak Women's International Peace Movement, addressed Forum participants in a special session. Her address focused on youth empowerment, and the safety of children and young people on the Internet.
She reminded the Forum that the Internet would continue to be a reflection of the global reality we lived in. As the divisions between transparency and privacy were erased, and as the walls between physical and virtual reality faded away, we would continue to feel the reverberations of those challenges on the net through more discrimination, more violence, and more instability. It was for this reason that we should work harder to ensure that the focus of Internet governance became more people-centered, and that the Internet became a catalyst for human development. In closing, she outlined her vision of the Internet of tomorrow, which held the real promise that we would be able to look at our computer or mobile screens and see a world where people lived in dignity, security and peace. Ms. Hoda Baraka, First Deputy to the Minister of Communications and Information Technology of the Arab Republic of Egypt, then moderated an international panel that commented on the issues raised by the First Lady.
Regional perspectives: Session panelists brought together different regional experiences as they had emerged from various regional and national meetings, discussed how their different priorities were linked, and identified the commonalities and differences of each region. Speakers presenting on the East African and European IGFs noted that they were not held as preparatory meetings for the global IGF, but had independent value, designed to identify local needs and priorities and to seek local solutions. Each regional IGF had a different structure. The Caribbean IGF held its fifth annual meeting in August, noting it had existed longer than the global meeting. Access, cybercrime and cybersecurity were noted as priorities by all the regional representatives. The Latin America and Caribbean as well as the European regional meetings stressed the importance of privacy. Presenters from the floor informed the Forum about national IGF initiatives that had taken place in Spain and the United States. The US meeting also included a youth panel.
IGF V — Vilnius, Lithuania 2010
The fifth IGF meeting was held in Vilnius, Lithuania on 14–17 September 2010. The overall theme for the meeting was "Developing the future together". The meeting was organized around six themes: (i) Internet governance for development, (ii) Emerging issues: cloud computing, (iii) Managing critical Internet resources, (iv) Security, openness, and privacy, (v) Access and diversity, and (vi) Taking stock and the way forward.
Opening ceremony: Mr. Jomo Kwame Sundaram, Assistant Secretary-General for Economic Development at UNDESA, noted that while Internet use was increasing, it was growing faster in the developed world than in developing regions and that the digital divide was growing instead of shrinking.
Internet governance for development: This session explored the possible effects of global Internet governance arrangements on the development of the Internet in developing countries. Almost all speakers made it clear that they supported the continuation of the IGF. Several speakers mentioned the importance of ‘the Internet way’, a decentralized, open and inclusive multi-stakeholder collaboration that allowed for innovation and creativity at the edges. The importance of maintaining focus on the expansion of the Internet to the billions of users who did not yet have access was emphasized by several speakers.
As part of this general theme, it was pointed out that a factor to consider over the coming days was that, as the number of Internet users grows worldwide, emerging economies will soon have more Internet users than the European Union and the United States combined.
Emerging issues — Cloud computing: This session provided an overview of the Internet governance considerations related to cloud computing from both the policy and the technical standpoints. Challenges include security, privacy, expense, and differences in policy between countries on what can be done with undisclosed personal data. The assertion was made that the cloud should be protected by the same safeguards against public and private interference as data stored today on desktops or local hard drives. Cloud computing was seen as linked to the Internet of things, which was viewed as an emerging issue for future IGF meetings.
Managing critical Internet resources: This session discussed four themes: (i) the status of IPv6 availability around the world, with examples and cases; (ii) the internationalization of critical Internet resources management and enhanced cooperation; (iii) the importance of new TLDs and IDNs for development; and (iv) maintaining Internet services in situations of disaster and crisis.
Security, openness, and privacy: This session looked at: (i) issues related to social media, (ii) the nature and characteristics of Internet networks, technologies, and standards, and (iii) international cooperation and collaboration on security, privacy and openness. The point was made by many speakers that new actors had entered the media system, so that the traditional means of regulating the media were no longer applicable. Media now included search engines as well as social networks. However, a representative from a social network company said it was a mistake to think the Internet was an unregulated space, when many laws and regulations existed. Online companies had to respect and work with regulators and different authorities on a daily basis. A UNESCO commission report on policy approaches that shaped freedom of expression on the Internet had found that, with increased access to information in cyberspace, censorship and filtering were carried out not only by governments but also by private companies. The Budapest Convention was mentioned as one of the tools that addressed cybercrime standards and norms. It had the force of law, could potentially be applied worldwide, and had been drafted with the participation of non-European countries. In closing, the paramount importance of making the Internet safe for children and youngsters was noted.
Access and diversity: This session focused on access to infrastructure and access to content, and considered a range of issues from geo-location, the global reach of social networks and the linkages between access to knowledge and security solutions, both in terms of hardware and software. The need for continued broadband expansion was seen as crucial by several of the speakers. The importance of inexpensive but powerful wireless handsets and other devices was also seen as a critical ingredient in achieving global access. The tools that would enable hardware and software developers to develop networks and devices according to universal design principles were also necessary. The biggest drivers of connectivity were poverty, education and geographic location, with people in developing countries less likely to have access than those in developed countries.
For a multilingual Internet three things were needed: internationalization of domain names, the availability of local content, and localization of applications and tools. The first of these was in the process of being met with the introduction of IDN ccTLDs, so that Web sites could be named in local scripts and languages. The increase in the use of filters installed to block content considered illegal or harmful was also discussed. The need to balance autonomy with protection of the public good was also raised, and it was argued that filtering had a negative impact on access to knowledge, particularly by students.
Taking stock of Internet governance and the way forward: While speakers acknowledged that there was still much work to be done, the discussions had matured and moved from basic explanations to good practices and deployment issues. On some issues, like the internationalization of critical Internet resources, speakers felt that progress had been made. While several speakers talked about the need for a more results-oriented IGF, others saw in the IGF practice of not negotiating outcomes one of its strengths, as it allowed for open discussions free from the pressure of negotiations. Several people used the example of the multi-stakeholder dialogue and sharing of information and good practices as proof of the IGF's viability. Papers such as the Inventory of Good Practices, posted on the IGF Web site shortly before the Vilnius meeting, were mentioned as examples of more tangible results. The increased participation of young people in the 2010 IGF meeting was seen as a positive development. In his closing remarks, the Session Chair concluded by observing that power was devolving from governments to other actors through interconnected networks and that the IGF was part of this trend.
Closing session: One speaker commented that while the IGF provides a forum for dialogue, it had not yet begun to make recommendations to the organizations involved in Internet governance, as had been the expectation of some at the time of the Tunis Agenda. Before closing the meeting, Mr. Rimvydas Vaštakas, Vice Minister of Transport and Communications of Lithuania, said that the Government of Lithuania would make its voice heard in the forthcoming debate of the United Nations General Assembly, adding that it was important to renew the IGF mandate as a platform for non-binding multi-stakeholder dialogue.
Other events: 113 workshops, best practice forums, dynamic coalition meetings and open forums were scheduled in parallel with the main sessions.
Setting the scene: The objective of this session was to provide participants with the historical context of the IGF and an introduction to the main issues of the Vilnius meeting. The session began with brief presentations by the editor and five of the experts who authored background papers on the principal themes of the meeting.
Regional perspectives: The main aim of this session was to compare the various regional initiatives, to explore their differences, to find commonalities and to improve the linkages with the global IGF. Included in the discussion were the East Africa IGF, West African IGF, Latin America IGF, Caribbean IGF, Asia Pacific Regional IGF, Arab region IGF, the Pan-European dialogue on Internet governance (EuroDIG), and the Commonwealth IGF.
IGF VI — Nairobi, Kenya 2011
The sixth IGF meeting was held in Nairobi, Kenya on 27–30 September 2011, at the United Nations Office (UNON).
The overall theme for the meeting was "Internet as a catalyst for change: access, development, freedoms and innovation". The meeting was organized around the traditional six themes: (i) Internet governance for development, (ii) Emerging issues, (iii) Managing critical Internet resources, (iv) Security, openness, and privacy, (v) Access and diversity, and (vi) Taking stock and the way forward.
Opening ceremony / Opening session: Ms. Alice Munyua, Chair of the Kenya Internet Governance Steering Committee, highlighted the importance attached to Internet governance for development (IG4D) and expressed the hope that the Internet governance development agenda would permeate all conversations at this sixth meeting of the IGF. She stressed that, in keeping with the traditions of the IGF, the meeting outputs would not be formal recommendations, but multi-stakeholder dialogues. These dialogues should inform other international processes and particularly the domestic policy issues of all those concerned with Internet governance.
Internet governance for development (IG4D): This session highlighted the significance of Internet governance for development, not as a fringe activity but as a core element of the development agenda, linking new forms of access, economic developments, innovations and new freedoms and human rights. The significance of the mobile Internet was stressed. The growth in the diffusion and adoption of broadband, and hence access to the Internet, has led many to see access to the Internet as a human right; the right to development and the right to the Internet are conjoined: as the Internet becomes one of the key engines of economic and social transformation and growth, access to it comes to be seen as an inalienable human right. An Internet governance framework for development should focus not only on access to infrastructure but also on access to freedoms of expression and association.
Emerging issues: This session focused on the question "Is governance different for the mobile Internet from the wired Internet?" The question is of particular importance to developing countries, where the mobile Internet now connects individuals and businesses to services, markets and information previously beyond reach. The mobile Internet must now become more robust: when people are connected, they should be protected against the failure of the systems they have come to rely on for critical, life-affecting services such as banking, health, and education. The importance of spectrum allocation and management was also recognized. It was noted that the functionality of mobile devices was often locked, which seemed to make the current mobile Internet a more closed environment than the wired Internet. The audience was asked if this invited less innovation than would be achieved if the mobile environment were more open.
Managing critical Internet resources: This session focused on three fundamental issues: (i) the DNS system and the role of different stakeholders, with specific reference to new gTLDs; (ii) the re-bid of the contract to operate the functions of the Internet Assigned Numbers Authority (IANA); and (iii) the mechanisms to secure and reinforce multi-stakeholder participation in critical Internet resources, especially by stakeholders from emerging economies. Other issues, such as capacity building and IPv6, were incorporated into the broader discussion.
Security, openness, and privacy: This session discussed the cross-border Internet governance issues that are encountered at the intersection of security, openness, privacy, and human rights. Concerns were raised about increasing government interventions and regulations and the future implications of events such as the ‘Arab Spring’ and the WikiLeaks controversy of the previous year. It was agreed that states must be able to protect their citizens, but must also ensure their freedom of expression. Service providers and other intermediaries must keep user safety and freedom of expression in mind, but must act within the rule of law, and the safety of users must remain a top priority.
Access and diversity: This session explored the ways in which access to the Internet can be understood as a human right. There was profound questioning over the difference between ensuring the universality of access to the Internet and recognizing the Internet as a human right. Access is inextricably linked with the concept of accessibility. It was observed that there were over 1 billion people in the world with disabilities and that many of these were highly vulnerable people with relatively low incomes. As a consequence, access without accessibility is meaningless. Affordability was seen as a major barrier to both access and accessibility. It is important to extend the debate beyond issues of connectivity and to focus also on issues such as freedom of expression and freedom of association.
Taking stock and the way forward: This session reflected on the experiences of the participants and allowed the stakeholders to discuss what went well during the week, what did not go as well, and finally, what could and should be done to make the 2012 IGF even better. It was seen as important by many that both the theme of the meeting and the discussions in workshops had adequately incorporated the ideas of Internet governance for development. Youth participation needed to be strengthened, both physically and remotely, and youth needed to be included in all aspects of the IGF and at all levels, not only in ‘youth’-centered workshops and sessions. Though holding the Forum in Africa for the first time did increase developing country participation, the inclusion of developing country participants, women, and persons with disabilities, among others, must continue to be strengthened and improved each year.
Regional dialogues: Regional dialogue sessions were held to inform delegates of the way in which national and regional IGF activities had been addressing key issues, to give participants a cross-regional perspective, and to allow representatives of the regional and national meetings to inform others of concerns and topics beyond those included in the main programme for IGF 2011. The following national and regional groups were represented: East Africa, Uganda, United Kingdom, Commonwealth, West Africa, Asia-Pacific, Europe, Southern Africa, Canada, Russia, Japan, Latin America and the Caribbean, United States, Pacific, Sweden, Rwanda, Central Africa, Finland and the European Youth Forum. Though youth involvement varied among the IGFs present, there was a universal call among all of the IGFs that this involvement needed to increase and that engaging young people in creative and new ways was crucial to the success of the national and regional IGFs.
Several common issues among national and regional IGFs were identified, including: cyber-crime, child protection, cross-border issues, law enforcement standards and principles, the role of ICTs (and social networks in particular) in natural disasters and social uprisings, cloud computing, mobile technology development, and IPv6 compliance.
Closing session: The period between the fifth and sixth meetings of the IGF saw tangible examples of the importance of human rights as an integral part of the Internet governance agenda, such as during the so-called ‘Arab Spring’. It was suggested that human rights should form the core concept of the theme for the next IGF meeting. Clear and specific calls were made for the host country to inform the United Nations Secretary-General and the General Assembly of the need to ensure that all stakeholders, on an equal and collaborative footing, are integral to any process on the future of Internet governance, and that the Tunis Agenda should continue to be the reference point and guide for the responses of the UN to issues of Internet governance.
Other events: 122 workshops, best practice forums, dynamic coalition meetings and open forums were held in parallel with the main sessions. "Feeder" workshops created feedback loops between the main sessions and the different events being held on related subjects.
IGF VII — Baku, Azerbaijan 2012
The seventh IGF meeting was held in Baku, Azerbaijan on 6–9 November 2012. The overall theme for the meeting was: "Internet Governance for Sustainable Human, Economic and Social Development". The meeting was organized around the traditional six themes: (i) Internet governance for development, (ii) Emerging issues, (iii) Managing critical Internet resources, (iv) Security, openness, and privacy, (v) Access and diversity, and (vi) Taking stock and the way forward.
Opening session: The necessity of the multi-stakeholder model in handling Internet governance issues was affirmed and continually stressed throughout the session. Dr. Hamadoun Touré, Secretary-General of the International Telecommunication Union, assured participants that the ITU did not want to control the Internet, but rather wanted to re-affirm its commitment to ensuring the Internet's sustainability using the multi-stakeholder model. A universal call was made by the speakers to strengthen efforts to ensure the protection of basic human rights and fundamental freedoms in the online world. An underlying message was delivered regarding the importance of putting appropriate regulations in place to assure a safe and secure Internet for young people and the generations to come, while still guaranteeing the basic principles of human rights.
Internet governance for development (IG4D): This session was divided into three clusters. The first cluster examined the ‘Pending Expansion of the Top Level Domain Space’. The second cluster examined the ‘Enabling Environment’, where panelists explored various ways to attract investment in infrastructure and encourage innovation and growth of ICT services, including mobile technology, while understanding how these technologies can best be employed to address development challenges. The third and final cluster examined Internet infrastructure from developing countries' experiences and how new technologies and the global Internet governance mechanisms address limitations, offer opportunities and enable development.
This session highlighted the significance of Internet governance for development, not as a fringe activity but as a core element of the development agenda. An important message to take to the next IGF was to bring more specific case studies and concrete actions to the forum.
Emerging issues: The first half of the session examined the extent to which Internet-based services today offer new and radically different opportunities to help families, social groups, communities and broader structures in society organize and re-organize themselves when challenged by natural disaster or strife. The second half of the session explored a range of questions and issues related to the free flow of information, freedom of expression, and other human rights and fundamental freedoms, and their respective balances with intellectual property rights. New regulations might not be necessary to provide improved privacy and safety, as consumer protection laws are already in place in many parts of the world. These existing laws, together with education and outreach to new consumers of online content, especially those using mobile devices, were said to be crucial in assuring privacy and safety. It was agreed that certain new cyber-threats, such as identity theft, needed special attention and innovative regulatory and legal policy solutions.
Managing critical Internet resources: This session focused on three main issues: (i) the initial round of applications in ICANN's New gTLD Program; (ii) proposals for the development of secondary markets for IP addresses; and (iii) issues raised by Internet-related proposals for the revision of the International Telecommunication Regulations at the upcoming World Conference on International Telecommunications (WCIT). The WCIT is a conference organized by the International Telecommunication Union (ITU) to discuss the modification of the International Telecommunication Regulations (ITRs). The WCIT negotiations would not be multi-stakeholder, as only governments could speak and vote on the outcomes. The process was not well understood by many in the ICT sector, but had recently received a lot of publicity suggesting current Internet operational and governance models might be under threat. The session broadly agreed that adoption of some of the national proposals for revision of the ITRs would constitute a form of global Internet governance and could negatively impact the Internet.
Security, openness, and privacy: This session examined and questioned a wide range of rapidly emerging controversial issues relevant to, and impacting, online and offline security, privacy, and notions of identity as they relate to concepts of human rights and fundamental freedoms. There were no easy answers, other than that education was absolutely essential. Internet users of all ages need to be trained on the risks of going online, on basic human responsibilities, and on the fact that the same unwritten rules of how we should treat one another offline should also apply online. A conclusion that emerged was that the inclusion of youth in formulating policies on all Internet governance issues was absolutely essential.
Access and diversity: This session addressed five main topics: (i) infrastructure, (ii) the mobile Internet and innovation, (iii) human empowerment, (iv) the free flow of information, and (v) multilingualism.
The session chair presented research findings that a 10 per cent increase in broadband penetration can lead to a 3.2 per cent increase in a country's GDP, along with a 2 per cent productivity increase. She noted that broadband Internet can play an important role in boosting the economy of a country as well as the well-being of its citizens.
Taking stock and the way forward: This session reflected on the experiences of the participants at IGF 2012 and allowed the stakeholders to discuss observations and conclusions stemming from the workshops and main sessions that took place in Baku. Speakers from all stakeholder groups recommended that the IGF should be used to advance the work done over the past year in other fora to further discussions on enhanced cooperation. The pending recommendations of the CSTD working group on improvements to the IGF were brought up as a point of guidance for improving and planning future meetings. Integrating the discussions of the national and regional IGF initiatives into the annual meetings should also be a priority, as a means to capture the activity of the broader IGF community that takes place between the annual global gatherings. Recent initiatives by various government and non-government actors to set principles and new frameworks, and the positive and negative implications that such initiatives might have, were discussed. Delegates counted more than 25 different sets of principles that exist in some form or another, as proposals or drafts, some coming from groups of states, others unilaterally. Some are proposed by organizations like the OECD or the Council of Europe, some represent government-led initiatives such as Brazil's multi-stakeholder developed Internet Bill of Rights, and others are developed by civil society organizations. It was mostly agreed that the IGF should continue its role as a non-binding discussion platform, but it was emphasized that the discussions and the trending topics of the annual meeting should be documented and disseminated into other Internet governance fora in a more effective way.
Closing session: The speakers noted that the IGF had successfully evolved and progressed from previous years. Speakers made reference to other upcoming international high-level gatherings where Internet governance policy issues would be discussed and existing frameworks and regulatory measures would be reviewed. A strong call was made by the civil society representative for the IGF to continue to be a forum that promotes human rights and fundamental freedoms on the Internet. Representatives of the Internet and business communities emphasized the importance of the multi-stakeholder, bottom-up Internet governance model to ensure that the Internet fairly advances social and economic development around the world.
Other events: A record number of workshops, dynamic coalition meetings, open fora, and other events were held in parallel with the main sessions. The topics addressed ranged from issues related to cyber-security and child protection online, the rise of social networks, the use of ‘big data’ and various aspects of human rights as they related to the Internet, among many others.
IGF VIII — Bali, Indonesia 2013
The eighth IGF meeting was held in Bali, Indonesia from 22 to 25 October 2013. 135 focus sessions, workshops, open forums, flash sessions, and other meetings took place over the four-day event. The overarching theme for the meeting was: "Building Bridges - Enhancing Multistakeholder Cooperation for Growth and Sustainable Development". The meeting was organized around six sub-themes: (i) Access and Diversity - Internet as an engine for growth and sustainable development; (ii) Openness - Human rights, freedom of expression and free flow of information on the Internet; (iii) Security - Legal and other frameworks: spam, hacking and cyber-crime; (iv) Enhanced cooperation; (v) Principles of multi-stakeholder cooperation; and (vi) Internet governance principles.
An elephant in the room — Government-led Internet surveillance: In the context of the recent revelations about government-led Internet surveillance activities, IGF 2013 was marked by many discussions about the need to ensure better protection of all citizens in the online environment and to reach a proper balance between actions driven by national security concerns and the respect for internationally recognized human rights, such as the right to privacy and freedom of expression. Several focus sessions and workshops touched upon these issues and focused on the need to rebuild the trust of Internet users, which had been seriously affected by these actions. It was underlined throughout the week that any Internet surveillance practices motivated by security concerns should only happen within a truly democratic framework, ensuring their adequacy, proportionality, due process, and judicial oversight.
Opening ceremony and session: Mr. Thomas Gass, Assistant Secretary-General for Policy Coordination and Inter-Agency Affairs of the United Nations Department of Economic and Social Affairs (UN DESA), formally opened the 8th Internet Governance Forum (IGF). Mr. Gass stressed that the United Nations Secretary-General was committed to the multistakeholder model for Internet governance championed by the IGF and to the long-term sustainability of the forum, with the hope that the forum's mandate would be extended beyond 2015, when the broader WSIS review process would be taking place. Mr. Gass emphasized the importance of ensuring that our global Internet is one that promotes peace and security, enables development and ensures human rights. As the international community strives to accelerate the achievement of the Millennium Development Goals by 2015, and as it shapes the Post-2015 Development Agenda that focuses on sustainable development, expanding the benefits of ICTs, through a global, inter-operable and robust Internet, will be crucial. H.E. Tifatul Sembiring, Minister of Communications and Information Technology (MCIT) of the Republic of Indonesia, assumed the chairmanship of the meeting and welcomed all participants to Indonesia and the island of Bali. In a video address, Mr.
Hamadoun Touré, Secretary-General of the International Telecommunication Union (ITU), stressed that from the beginning the ITU had been firmly committed to the IGF, which he said was a great example of multistakeholder action. The Secretary-General also encouraged the IGF stakeholders to join the many World Summit on the Information Society (WSIS) review activities that the ITU was spearheading over the next year. The representative from Brazil invited IGF stakeholders to participate in a "summit" focused on Internet governance issues to be held in the first half of 2014.
Role of governments in multistakeholder cooperation: This session was a panel discussion of the role of governments in multistakeholder cooperation on Internet governance issues. The chair explained that the session topic was inspired by a formal International Telecommunication Union (ITU) opinion on the Role of Governments proposed by the Government of Brazil at the World Telecommunications Policy Forum (WTPF) in Geneva in May 2013. It was underlined that while the concept of multistakeholder cooperation is widely recognized as a vital feature of Internet policy processes, Brazil's intervention at the WTPF was intended to remind everyone that the roles and responsibilities of different stakeholders, particularly of governments, were far from well understood or agreed. A panelist noted in his introductory remarks that Brazil's WTPF opinion prompted serious reconsideration by many stakeholders. He noted that his own government's deliberations after the WTPF had identified four areas where government played an important role. As the morning's discussion continued, these four areas of government activity were reinforced by both the panel and the audience, and were met with broad support:
Government enables and facilitates the building of ICT infrastructure and the development of competition frameworks and policies that support private sector investment.
Government creates domestic legal frameworks that are intended to legally reinforce the idea that what is illegal offline is also illegal online. As the legal frameworks have to be updated in order to keep them consistent with the evolution of the Internet, partnerships with the private sector and civil society are needed in order to make such reviews possible and to address the challenges of top-down legislation, which may prove too slow, unwieldy, and bureaucratic. By working together, all stakeholders are able to develop more comprehensive public policy concerning the Internet.
Government, among other stakeholders, plays an important role in preserving free expression, cultural diversity, and gender equality on the Internet, and in supporting people's ability to access and engage with the Internet, through support for education and skills development. A panelist noted that a human rights framework underpins our use of the Internet and our access to it, and that governments should be the guardians of these global commitments, a statement agreed to by many in the discussion.
Government can help to support multistakeholder processes and partnerships, but is not their leader. The example of the Brazilian Internet Steering Committee (CGI.br) was mentioned by both panelists and members of the audience as a successful example of such a partnership.
It was recognized that governments often have a careful role to play in balancing competing interests in policy processes.
The aim is to achieve bottom-up, transparent and inclusive Internet governance related decision-making processes in which governments work in genuine partnership with all stakeholders. One area where governments have an especially important role to play is human rights. Indeed, government has a responsibility and duty to protect human rights, including freedom of expression. Not only was this not contested in the room, it clearly found broad support. It was noted that human rights issues were not on the IGF agenda seven years ago, but have emerged as a fundamental issue in current Internet governance discussions. The issue of government surveillance was raised by a number of members of the audience, and there was broad recognition from the panel that governments should 'practice what they preach' when talking about openness and transparency on the Internet. It was felt by many that trust in the Internet had been significantly eroded by recent events. There was agreement that the evolution of the different parts of the overall system for Internet governance must continue, and a number of participants mentioned the recent Montevideo Statement on the Future of Internet Cooperation from leading Internet technical organizations. There was agreement on and support for a greater and clearer role for governments, but it was emphasized that this increased role should not come at the expense of other actors' contributions. Governments must not push others from the tent. A speaker suggested that the IGF might become a policy equivalent of the bottom-up IETF, which produces Internet technical standards. This idea was met with some agreement; however, it was noted that if this were to be the goal, the IGF should be ready to add a layer that allows it to actually draft policy documents. Currently, the IGF does not create anything like Internet drafts and RFCs. This discussion remains open and is being dealt with by a dedicated working group on enhanced cooperation convened by the UN's Commission on Science and Technology for Development (CSTD).
Internet governance principles: This session was organized with invited experts and audience members seated in a roundtable format with moderated discussion. The session had three aims: to provide an overview of the principles developed and adopted by various governmental and non-governmental groups over the past few years; to discuss the similarities, overlaps, areas of consensus, differences and disagreements with regard to those various principles; and to develop ideas for moving towards a common framework of multistakeholder principles based on the existing initiatives and projects. The moderators noted that, in preparing for the session, they had found a high degree of commonality (perhaps 80%) in the more than 25 documents, declarations, resolutions and statements they had identified which defined principles for Internet governance. Beginning the discussion, the Organisation for Economic Co-operation and Development (OECD) noted three key principles from an overall package of 14 that had been agreed by the OECD Council: openness, flexibility, and a multistakeholder approach. The Council also noted that Internet policy must be grounded in respect for human rights and the rule of law. However, given the special role of governments in some policy areas, such as security and stability and critical infrastructure, these areas could not be left to the private sector and civil society alone.
The Council of Europe also emphasized the need for respect for human rights and the rule of law, for multistakeholder governance arrangements, and for the equal and full participation of all stakeholders. In all, member states of the Council of Europe had agreed to a package of ten principles. The Seoul Conference on Cyberspace, which took place the weekend before IGF 2013, noted that progress had been made towards agreeing on principles and widely accepted norms for behavior in cyberspace, but that agreement had still not been reached on international "rules of the road" or a set of standards of behavior. The Chairman of the Seoul Conference noted that differences of emphasis remained on how to reconcile and accommodate different national legal practices, policies and processes. However, the 87 countries present in Seoul adopted the Seoul Framework, and that in itself was an important step. The IGF Dynamic Coalition on Internet Rights and Principles introduced a document they had produced as a Charter of Human Rights. The Charter has twenty-one clauses based on ten broad principles that summarize its intent: universality, accessibility, neutrality, freedom of expression, life, liberty and security, privacy, diversity, standards and regulation, and governance. The Charter is a live document, still undergoing changes. A speaker from Brazil noted how the principles developed by CGI.br, the multistakeholder body responsible for Internet policy and governance activities in the country, were now close to being adopted as part of proposed legislation. The legislation, the "Marco Civil da Internet", would guarantee civil rights online and in the use of the Internet. The session heard about Open Stand, a set of principles developed to guide global Internet standards activities. They were developed after discussion between the IEEE, IETF, IAB, and ISOC as a new concept, in contrast to some of the more inter-governmental models that currently exist. The principles are based on respectful cooperation, specifically between standards organizations, each respecting the autonomy, integrity, processes and intellectual property rights of the other organizations. The principles support interoperability at all levels. A government representative responded to these various examples from Internet principles projects, noting that Internet governance should promote international peace, sustainable development and shared understanding and cooperation. He reminded the session that there are two types of human rights: civil and political rights; and economic, social, and cultural rights. The right to development is essential to Internet governance. There was widespread support for the principles mentioned by various panelists, but there were also notes of caution. For example, one person mentioned that these principles must reflect national principles, norms and culture and not be imposed from outside. As an example, it was noted that the African Union Cybercrime Convention makes references to human rights, but also proposes the criminalization of any blasphemous speech. Having a set of broadly agreed multistakeholder principles is not the end of the road, but a starting point for further work. As a final question, panelists were asked if they and their organizations involved in producing their respective principles proposals would be willing to come together under the umbrella of the IGF to create a coherent global set of principles. The answer was a resounding "Yes".
Principles of multistakeholder cooperation: This session was organized as an open discussion facilitated by the two moderators, with no designated panelists, just interaction with the audience. The goal of the session was to explore and work towards key principles which should be the basis of a multistakeholder forum or policy-making process. The moderators introduced the work of the "IGF Working Group on multistakeholder principles", which had looked at the many principles documents developed by various international processes. From these the working group compiled a set of key common principles, which were introduced as the basis for discussion: (i) open and inclusive processes; (ii) engagement, described as processes that enable all stakeholders to engage and to participate; (iii) participation and contribution, described as the ability to participate in and contribute to decision making; (iv) transparency in processes and decision making, including how decisions are made and how input is reflected; (v) accountability, described as mechanisms for checks and balances in decision making; and (vi) consensus-based approaches for decision making that should reflect how input from the multistakeholder processes is incorporated. These were not suggested as the only principles, or as principles that could not be challenged, but they had been identified as common among the many principles documents reviewed. Throughout the session, speakers from different stakeholder groups endorsed these core principles, either as being central to statements they had developed or as having been an integral part of the discussions they had held on multistakeholder cooperation. An important note of caution was raised by a speaker who reminded the session that these new processes were not a replacement for established democratic processes and representation of the public interest; the appropriate instruments of democracy must be maintained. Another discussant noted that while principles were an important guide, they should remain flexible and able to adapt, and not become rules, lest transparency, inclusiveness and responsiveness to changing situations be put at risk. The U.K. had recently established a "Multistakeholder Advisory Group on Internet Governance", called MAGIG, composed of approximately 40 representatives from across the parts of the administration that address Internet issues, together with representatives of appropriate stakeholders. The discussion suggested that there was consensus on the broad set of principles, with some notes of caution concerning the imperative of diversity and geographical representation, the need for common language, and a common understanding of how those principles can be implemented and work in practice. Considering the way forward, the session heard a comment that it was necessary to look at actual practices and how those can be mapped to the principles, and how principles are being followed in multistakeholder processes. The IGF Working Group on Multistakeholder Principles will continue to work towards identifying key multistakeholder principles and best practices in their implementation, and looks forward to further inputs from all stakeholders.
Security, legal and other frameworks — spam, hacking and cyber-crime: This session aimed to produce clear takeaways on legal and other frameworks for addressing the controversial problems of spam, hacking, and cyber-crime at local, regional, national, and global levels.
This session carried forward some of the critical concerns with spam that had been raised at WCIT-12 in Dubai the previous year, as well as problems countries face in understanding the complexity of cyber hacking, cybersecurity, and cyber-crime. The first part of the discussion examined spam and its emerging challenges, and opportunities for capacity building to exchange expertise on mitigation and prevention with countries and communities interested in establishing spam mitigation initiatives. Participants in the meeting, and those following remotely, examined the roles that the multistakeholder community plays in possible technical solutions, examples of sound regulatory approaches, and the need for legal frameworks and law enforcement responses to address the growing issue of spam, in particular in developing countries. There was consensus among the participants that while spam may be ill-defined as unwanted or unsolicited electronic communication or email, it is the delivery mechanism whereby malware, botnets, and phishing attacks infect unsuspecting users. Cooperation amongst all actors responsible for the prevention of such acts, as well as the importance of public-private partnerships and cross-border synergy amongst governments, the technical community, the private sector, and law enforcement, was noted in the work being performed in industry groups. The Internet Society's Combating Spam Project was cited for bringing together technical experts and organizations, such as the Messaging Anti-Abuse Working Group (MAAWG), the London Action Plan, and groups within the GSMA, to work with developing countries to address the ever-shifting nature of spam attacks from a global perspective. The second part of the discussion addressed the inherent fear and lack of trust in the Internet that exists in many parts of the world. While the media often paints an optimistic picture of the potential for economic and social growth that the Internet holds, in many developing countries this is simply not the case. Many users there are hesitant to communicate and innovate online because of the prevalence of spam and the threat of hacking and cyber-crime. A participant from a small island developing state explained how his country had become a prime target for malicious online activity, as an example of the risks such countries face. In this regard, the sharing of best practices and capacity building activities were seen as extremely important in helping to prevent spam, hacking and cyber-crime in these recently connected areas of the world. Participants agreed that producing data and statistics to measure the scope of the problem in these areas was of great importance in identifying the areas of need. The Messaging Anti-Abuse Working Group (MAAWG) and the London Action Plan (LAP) were both mentioned as strong multistakeholder global initiatives that are working actively on prevention measures for harmful activities on the web. The Budapest Convention on Cyber-Crime was also said to be a strong starting point and groundwork for international cooperation efforts. The IETF is heavily involved in work related to securing networks and in implementing the proper infrastructure. Computer Emergency Response Teams (CERTs) at the national level have been very helpful both in prevention efforts and in mitigating the effects of harmful attacks after the fact. Many emphasized the need to strike a balance between keeping the Internet both open and secure.
Efforts to secure networks should not stifle innovation by fragmenting network flows of information.
Access/Diversity — Internet as an engine for growth and sustainable development: The session discussed how the World Summit on the Information Society (WSIS) decisions could feed into a review of the Millennium Development Goals (MDGs), and how technology could become an integral part of the post-2015 Sustainable Development agenda. The chair and moderator reminded participants that October 24 is UN Day, so it was an appropriate day to discuss the Millennium Development Goals (MDGs), the WSIS goals, and the correlation and interplay between them. 2015 is an important year, as it is when the international community will review its progress towards achieving the goals adopted at the Millennium Summit in 2000. It also marks WSIS+10, which will entail an evaluation of the action lines adopted at Tunis in 2005. The session began with a presentation on Indonesia's response to and implementation of the MDGs. Discussion reviewed Indonesia's successes and also areas where more hard work was required, such as in lowering the rates of infant and maternal mortality. The speaker introduced the post-2015 Sustainable Development agenda and the three pillars the agenda proposes: economic development, social inclusion, and environmental sustainability. The next presenter, joining the session remotely, provided a history of the MDGs, describing the implementation of some of the issues and the development of the Sustainable Development Goals, which are set to become the main conceptual framework for development in the 21st century. He stated that collaboration across all sectors involved in the wider development process would help deliver the agenda while working in silos would not; this was met with strong agreement. A video was shown reminding the audience that the MDGs are really about people, sharing real examples of development activities that have been enabled by the Internet or made much more effective by it. A number of speakers and members of the audience noted the limited reference to technology in the MDGs, and that this must be updated in future international goals to reflect the ever-increasing importance of information and communications technologies (ICTs) in development. The meeting agreed that the benefits of ICTs were cross-cutting. ICTs are general purpose technologies, which makes them enabling technologies, much as the combustion engine or power generation enabled whole sectors to develop. Work produced by the UN Broadband Commission suggests that when governments act alone, implementation tends to move more slowly and with less innovation than if the private sector and others were involved. Similarly, when broadband roll-out is left strictly to the private sector, there are gaps that are not filled. A presenter commented that he had been told that the successor document to the MDGs included only two references to the Internet. There was a tendency within governments for the departments responsible for ICT policy to be different from those responsible for WSIS and UN arrangements, and the two did not necessarily communicate. The session moved on to the goal of making recommendations to fulfill the aims of the WSIS and to make the connection to the broader Sustainable Development Goals, as both processes were to be reviewed in 2015. The Sustainable Development Goals Working Group will produce goals on water, energy, jobs, education and health.
Gender is expected to be a goal or to be cross-cutting, and there might be other topics such as oceans, forests, and peace and security. The session noted the importance of how ICTs will be included in the development of these global goals. A speaker noted the value of data collection, and how information about the full impact of the Internet (for instance, in the sharing economy that has developed, the caring economy and the app economy) is not being properly captured, documented and quantified in terms of the benefits produced. The panel agreed on the significant value of improved data gathering and dissemination. Another speaker noted the importance of other infrastructures, particularly power, that are platforms essential to providing ICTs. Another participant commented on the need to share best practices and the need to communicate what works and past successes. The session was informed of a potential repository of materials from IGFs, regional events and other fora, a new initiative called "Friends of IGF". Launched this year in Bali, the Friends of IGF website project has collected the conversations, videos, transcripts, presentations and other materials from IGFs over the past few years and has made them all available in one place. Such a site might be a very useful shared resource. At the Seoul Cyberspace Conference earlier in October, the U.K. government presented a 'next steps' paper which attempted to generate greater consensus around Internet governance principles and how they should lead into model policies as part of a global capacity building agenda. A mind-map of the different topics, challenges and possible solutions was created during the session to provide a visual overview of the dialogue. A key conclusion was that there is a need to strengthen the presence of ICTs within the post-2015 process, particularly the Sustainable Development Goals. Two clear takeaways from the session were the need to promote the collection and dissemination of new data and the need to share success stories and good practices. An important lesson from the MDG process was the need to be more concrete in the formulation of goals, so as to be able to measure progress. It must be made clear that money goes where the goals are, and that when targets are not met there must be transparency about the outcome. Important questions were raised about data collection and how best to collect, analyze and share data in the future. This area, amongst others, is one where the Internet has clear strengths and where it can contribute to the accomplishment of the wider development objective.
Human rights, freedom of expression, and the free flow of information on the Internet: To the pleasure of many participants, for the first time in the history of the IGF a dedicated plenary session focused on human rights, freedom of expression and the free flow of information on the Internet. The highly interactive roundtable discussion touched upon many of the key issues addressed in the related workshops held prior to the session. Access to and use of the Internet from a human rights perspective were at the forefront of discussions. Key points were made related to a wide range of violations of rights and particular groups being affected, including journalists, human rights defenders, and sexual rights activists. The ways in which governments had responded with legislation to challenges posed by the Internet, as well as new jurisprudence, new case law, and new forms of defamation, were also discussed throughout the proceedings.
One commonality in the discussions was the desire to balance openness in Internet standards with calls for "reasonable limitations online". Some fascinating regional perspectives provided depth and scope to the broader discussions. Speakers addressed emerging issues and concerns, including civil suits against individuals for expression on Twitter. Another source of concern, especially for speakers from developing countries, was copyright suits by technology providers that are seen as "overriding protections provided by the law", with one speaker describing the enforcement of copyright as limiting people's access to essential knowledge. "Unbalanced copyright frameworks" were also described from the perspective of public library service providers, with one speaker saying that licensing systems of the digital age are bringing restrictions that "end up defeating the purposes of the Internet", as sometimes the public can only access information that public library systems "can afford to pay for". Others warned against setting up a false dichotomy between copyright and freedom of expression. One speaker reported back from a vibrant workshop on the popular issue of Net Neutrality. The workshop agreed that openness and neutrality are essential features of the Internet that have to be fostered to ensure the free flow of information. They also agreed that both openness and neutrality are the features that make the Internet a key driver for innovation, as well as a great human rights enabler. Finally, they agreed that at present there are some traffic management techniques that can jeopardize this open and neutral architecture and can have negative effects on human rights; thus, net neutrality should not be considered just from a competition perspective, but also from a human rights perspective. In closing, everyone in the session agreed that human rights and freedom of expression online should remain high atop the growing list of issues central to the ongoing IGF discussions. Some key takeaways and next steps from the session's rapporteur are attached to the IGF chair's summary as an annex.
Emerging Issues – Internet Surveillance: In response to the high level of interest generated by recent revelations about extensive Internet surveillance programs in different countries, the traditional IGF emerging issues session addressed in depth the hot topic of Internet surveillance. Two moderators introduced a panel of five presenters and four commenters and proposed to address the community policy questions in five main baskets:
Infrastructure and the basic functionality of the Internet
Privacy protection and the other human rights issues related to Internet surveillance
Focus on security, and situations when surveillance is justified and under what conditions
Data protection and the economic concerns
Ethics and the potential impact of surveillance on trust in the Internet
The moderator suggested that issues of law enforcement procedures and international law would underlie many of the discussions. In their opening remarks all the panelists noted the severity of the problem and its importance to the international community. In response to the many reports of U.S. intelligence gathering practices, the session heard that the U.S. administration, directed by the President, had begun processes of extensive review and reform. Some participants noted the difference between gathering information for intelligence and security purposes and intelligence collection for the purpose of repression and persecution of citizens.
A speaker providing a U.S. business perspective stated that his company, in common with other ICT companies affected by government requests to access and monitor user data, did not accept blanket requests for access. However, they were subject to the rule of law and treated each individual request from the government on its merits. He also commented that the surveillance revelations were a major problem for the Internet industry; if users did not trust a company's products they would go elsewhere. A comment from a remote participant referred to reports that U.S. cloud companies can expect to lose business from non-U.S. customers to the tune of many billions of dollars, with the overall negative impact on the IT industry even greater because of this loss of trust. A speaker from the Internet technical community echoed these concerns about the loss of trust in Internet products and services. He pointed out that there was an understanding that intelligence activities targeted individuals and groups, but the very large scale of the alleged monitoring shocked and surprised many. This observation about the massive scale of the monitoring was shared by many, and led to questions about the central role of a single country in many aspects of the Internet: from the control of infrastructure and the success and global spread of commercial services, to positions of oversight over critical Internet functions. Concern over these issues was one of the motivations behind the proposed Internet governance summit to be held in Brazil in April 2014. A commenter noted that Brazil intends for the meeting to be a "Summit" in the sense that it will be high level and will have authority enough to make decisions. Comments about building more Internet exchange points and adding more connectivity also received support. Keeping traffic local would avoid transiting networks that might be monitored, and it would increase speed, lower costs and enable local Internet businesses to grow. Open source solutions were mentioned as being useful to assure users of the reliability of the tools they used, and additional efforts with open source would be worth pursuing. Any response that tried to create national or regional Internets would risk fragmenting the Internet and would most likely harm opportunities for innovation. A global and open Internet is still needed.
Open microphone session: To wrap up the IGF, an open microphone session was held to provide an opportunity for all participants to address any issue of concern, allowing the Multistakeholder Advisory Group (MAG) to receive feedback from participants with regard to the proceedings that took place throughout the week. The MAG and the IGF Secretariat will take note of all comments made during the session, as well as comments received from an open call for comments on the 8th IGF, and take them into account when planning future meetings. There was an interesting discussion about the value of the IGF for government stakeholders in particular. Government representatives spoke about how the IGF teaches them how the multistakeholder model can be strengthened and further developed, how the Internet can be used to benefit developing countries, and lessons about the importance of respecting human rights and freedom of expression both online and offline. It is a useful platform where governments can interact with all other stakeholder groups. The importance of continued outreach to new stakeholders about the IGF process was stressed.
Links to important media outlets should be strengthened to improve the forum's global visibility and reach. Capacity building opportunities and e-participation at IGF events need to continue to improve to attract new stakeholders.
Closing ceremony and session: Many speakers praised the IGF for its significant progress in 'evolving' in step with other Internet governance processes. A number of steps were taken in the preparatory process, in line with the recommendations of the CSTD working group, to ensure this. It was emphasized that the broad support received for the 8th IGF needed to be catalyzed to bring more stable and sustainable funding and overall support for the IGF Secretariat. Three important announcements were made by the governments of Turkey, Brazil, and Mexico to close the meeting. Representatives from each country announced their intentions to host future IGF meetings: in Turkey in 2014, in Brazil in 2015 and in Mexico in 2016. Mexico's announcement was of course contingent on the mandate of the IGF being extended beyond its second five-year term, which would end in 2015.
IGF IX — Istanbul, Turkey 2014
The ninth IGF meeting was held in Istanbul, Turkey from 2 to 5 September 2014. The meeting included 135 sessions and 14 pre-events. The overarching theme for the meeting was: "Connecting Continents for Enhanced Multi-stakeholder Internet Governance". The meeting was organized around eight sub-themes: (i) Policies Enabling Access; (ii) Content Creation, Dissemination and Use; (iii) Internet as an Engine for Growth and Development; (iv) IGF and The Future of the Internet Ecosystem; (v) Enhancing Digital Trust; (vi) Internet and Human Rights; (vii) Critical Internet Resources; and (viii) Emerging Issues.
Opening ceremony and session: Mr. Thomas Gass, Assistant Secretary-General for Policy Coordination and Inter-Agency Affairs of the United Nations Department of Economic and Social Affairs (UNDESA), formally opened the ninth IGF. Mr. Gass stressed that the United Nations Secretary-General was committed to the multistakeholder model for Internet governance championed by the IGF and to the long-term sustainability of the forum. Lütfi Elvan, Minister of Transport, Maritime Affairs and Communications of the Republic of Turkey, assumed the role of chair of the meeting and welcomed all participants. Mr. Elvan suggested that an "Internet Universal Declaration" be prepared in a multistakeholder fashion as an additional concrete output of the IGF. After his speech, Mr. Elvan passed the role of chair to Mr. Tayfun Acarer, Chairman of the Board and President of the Information and Communication Technologies Authority (ICTA) of the Republic of Turkey. Mr. Acarer expressed his appreciation for the opportunity to host the ninth IGF in Istanbul and stressed the importance of enabling access to information resources in helping to bridge the digital divide. Many speakers made an urgent call to strengthen the IGF and provide it with further financial and political sustainability to safeguard the progress that has been made in creating an ecosystem where the Internet can go on flourishing in the future. Mr. Virgilio Fernandes Almeida, National Secretary for Information Technology Policies at the Brazilian Ministry of Science and Technology, invited all participants to the tenth IGF in 2015, hosted by Brazil.
Policies Enabling Access, Growth and Development on the Internet: There were 1 billion Internet users when the Tunis Agenda was adopted in 2005. Nine years later, there are approximately
7 billion mobile subscriptions and approximately 3 billion Internet users. Home Internet access is near saturation in developed countries, but stands at only 31% in developing countries. Public Internet access, infrastructure sharing, and access as a human right for the socially disadvantaged, vulnerable groups and persons with disabilities are critical access issues that need global attention. The session was conducted as a roundtable with 22 invited speakers, 13 of them from developing countries and two from international organizations. Nearly half the participants were women. Highlights of the interactive discussion included:
Many stressed that the concerns over Internet access and inclusivity go beyond connectivity and infrastructure issues and must incorporate the role of social inclusion in the debate, including users with disabilities and marginalized groups.
One speaker noted that there is an important and complex relationship between access to networks, the development of local content and information knowledge flows. This was echoed in comments from the floor, which acknowledged that there was a strong correlation between growth in local content and the development of network infrastructure, and that open government and open data policies around the world provided strong examples of how developers in the public arena are able to leverage the public to generate new information society services.
The need to place more emphasis on multilingualism online was also acknowledged by the panel.
Local and small enterprises need to be involved in policy discussions. One speaker also noted that youth empowerment in the policy formation debate is imperative in spurring economic and social development.
The importance of standardizing how access levels are calculated was noted. It was suggested that an action to take from the session would be to do more work looking at the different methodologies for calculating access levels and providing more transparency for these debates.
Digital competencies and media literacy were seen by many participants as essential to Internet growth.
It was agreed that the involvement of governments in promoting and supporting infrastructure expansion through planning was imperative; however, there were differences of opinion about how the implementation of these plans should be monitored.
Network Neutrality: Towards a Common Understanding of a Complex Issue: Network neutrality was one of the most contentious issues, as was also witnessed at NETmundial in April 2014. At NETmundial there were "diverging views as to whether or not to include the specific term as a principle in the outcomes". However, NETmundial participants agreed on the need to continue the discussion regarding network neutrality and recommended that this discussion "be addressed at forums such as the IGF". The session looked at the issue from different perspectives – technical, economic, social and human rights – as well as from two cross-cutting perspectives, developmental and regulatory. The discussions showed that all these issues are intertwined and multifaceted. Given the differences between developing and developed country perspectives, there was a sense that the search for a one-size-fits-all policy solution would not be the best way to proceed globally.
While there was a divergence of views on many issues, such as the concept of appropriate network management, the impact on innovation, or zero-rating, there was also convergence of views on the importance of enhancing users' experience and the need to avoid the blocking of legal content. The Dynamic Coalition on Network Neutrality will continue the discussions leading up to the 2015 meeting, but the view was also held that there was a need to develop a process that allowed the entire IGF community to weigh in and validate the findings of the Dynamic Coalition.
Evolution of the Internet Governance Ecosystem and the Role of the IGF: As the Internet continues to grow and its benefits reach more people, more stakeholders are entering the Internet governance debates, with the aim of addressing concerns they have about the use and potential misuse of the Internet. Existing organizations, such as UN agencies, examine, at the request of governments, their roles in relation to Internet-related issues, while newer organizations that follow more of a "bottom-up" governance approach, such as the Internet Corporation for Assigned Names and Numbers (ICANN), now co-exist alongside intergovernmental organizations. In addition, since 2006 the IGF has been a platform for stakeholders to come together on an equal footing to discuss, exchange ideas and share good practices with each other. While many are embracing the engagement of stakeholders more directly in decisions and governance, others maintain that more intergovernmental involvement in the Internet is needed, especially on public policy issues. NETmundial and the Internet Assigned Numbers Authority (IANA) stewardship transition were noted as signs that Internet governance had reached a pivotal moment in its development.
IANA Functions: NTIA's Stewardship Transition and ICANN's Accountability Process: This session was a response to two developments in the first half of 2014: 1) the announcement by the United States National Telecommunications and Information Administration (NTIA) in March 2014 that it would transition its stewardship of the IANA function to the global multistakeholder community; and 2) prompted by that announcement, a call by many in the ICANN community to examine ICANN's accountability in the absence of its historical contractual relationship with the United States Government. Both these issues also appeared in the NETmundial Multistakeholder Statement of São Paulo as issues with relevance to the broader Internet governance ecosystem. Members of the IANA Stewardship Coordination Group (ICG), a group of representatives from a wide range of communities with an interest in IANA, are working to collate the proposals developed by those communities into a single document that will be sent to the NTIA, outlining how NTIA's stewardship could be replaced by a global multistakeholder model. The ICG plans to have proposals submitted by the different sectors of the community by the end of December 2014, with the intention of having the new stewardship mechanism agreed to by the community, accepted by the NTIA, and in place before the September 2015 date for the renewal of the IANA contract. Concerns were expressed that when the NTIA was no longer the authority reassigning the IANA contract, some future ICANN Board might overstep its boundaries.
Taking Stock and Open Microphone Sessions: This session reflected on the main outputs of the IGF main sessions.
Participants identified issues that could lend themselves to ongoing inter-sessional work and discussed appropriate ways to pursue this work. Some other overall suggestions were considered regarding the role of the IGF in the evolving Internet governance ecosystem.
Closing Session: Several speakers, representing all stakeholder groups, addressed the Closing Session. Everyone expressed gratitude to the host country and to all those who had participated in and made the ninth IGF a success. Speakers reaffirmed the importance of the multistakeholder process and cooperation, and emphasized the importance of dialogue. Mr. Hartmut Glaser, the Executive Secretary of the Brazilian Internet Steering Committee, invited participants to the tenth IGF, 10–14 November 2015, in João Pessoa, Brazil. The representative of the United Mexican States extended an invitation to all participants to attend the eleventh IGF Meeting in Mexico in 2016, subject to the extension of the IGF mandate.
Best Practice Forums (BPFs): Best practice forums were held on the following topics:
Developing Meaningful Multistakeholder Mechanisms
Regulation and Mitigation of Unwanted Communications (Spam)
Establishing and Supporting CERTs for Internet Security
Creating an Enabling Environment for the Development of Local Content
Online Child Safety and Protection
Dynamic Coalitions: The following Dynamic Coalitions, informal, issue-specific groups comprising members of various stakeholder groups, met during IGF 2014:
Dynamic Coalition on Gender and Internet Governance
Dynamic Coalition on Public Access in Libraries
Dynamic Coalition on Network Neutrality
Dynamic Coalition on Child Online Safety: "Disrupting and Reducing the Availability of Child Sex Abuse Materials on the Internet - How Can Technology Help?"
Dynamic Coalition on Internet and Climate Change
Dynamic Coalition on Accessibility and Disability
Dynamic Coalition on the Internet of Things
Dynamic Coalition on Platform Responsibility
Internet Rights and Principles Dynamic Coalition: "The IRPC Charter of Human Rights and Principles for the Internet: Five Years On"
Youth Coalition on Internet Governance
Dynamic Coalition on Core Internet Values
Dynamic Coalition on Freedom of Expression and Freedom of the Media on the Internet: "Battle for Free User Generated Content"
Open Forums: Open Forums focus on an organization's activities during the past year and allow time for questions and discussions. Governments can also hold an open forum to present their Internet governance related activities. The following Open Forums were held:
ICANN Governmental Advisory Committee (GAC) Open Forum
Internet Society Open Forum: "ISOC @IGF: Dedicated to an Open Accessible Internet"
Council of Europe Open Forum: "Your Internet, Our Aim: Guide Internet Users to Their Human Rights!"
The Freedom Online Coalition Open Forum: "Protecting Human Rights Online"
UNCTAD Open Forum: "Consultation on CSTD ten-year review of WSIS"
UNESCO Open Forum: "Multistakeholder Consultation on UNESCO's Comprehensive Study on the Internet"
ICANN Open Forum
Ministry of Science, ICT and Future Planning (MSIP)/Korea Internet & Security Agency (KISA) Open Forum: "Korea's Effort to Advance Internet Environment including IPv6 Deployment"
Organisation for Economic Co-operation and Development (OECD) Open Forum: "The Economics of an Open Internet"
ITU-UNICEF Open Forum: "Launch of Revised Guidelines for Industry on Child Online Protection, by ITU and UNICEF"
World Wide Web Foundation Open Forum: "Measuring What and How: Capturing the Effects of the Internet We Want"
Host country sessions:
Child Online Protection: Roles and Responsibilities, Best Practices and Challenges
Perspectives on Internet Governance Research and Scholarship
Policies for Enabling Broadband: Special Focus on OTTs and Level Playing Fields
National & International Information Sharing Model in Cybersecurity & CERTs
High Level Leaders Meeting: Turkey convened a meeting on the topic of "Capacity Building for Economic Development". Thirty-three high-level leaders, including a deputy prime minister, ministers and deputy ministers, representatives of international organizations, presidents of regulatory bodies, and leaders of entities from civil society, the private sector and the technical community spoke on this important topic.
National and Regional IGF Roundtable: The national/regional IGF initiatives session was an interactive session that engaged coordinators and participants from the national and regional IGF initiatives and others interested or engaged in the initiatives. It was clear during the session that there is great diversity in the ways that the national and regional IGFs conduct their respective engagements. One size does not fit all. The need to work together was acknowledged. There were suggestions on how inter-sessional work could be done using the national and regional initiatives.
Side meetings:
Enhancing ICANN Accountability and Governance Town Hall Meeting
WSIS+10 High-Level Event – Information Session
Seed Alliance Awards Ceremony
Geneva Internet Conference: "Promising 2014 - a head start to the decisive 2015"
Geneva Internet Platform: Where Internet meets diplomacy
Council of Europe: "A Human Rights Perspective on ICANN's Policies and Procedures"
10 Years of Internet Governance Book - 10 Thousand Copies - 10 Languages
Privacy and the Right to be Forgotten
Internet and Climate Change
Launch of the GISWatch Report 2014
APrIGF Multistakeholder Steering Group Open Meeting
Friends of the IGF (FoIGF)
Flash Sessions: The following Flash Sessions were held:
Internet and Jurisdiction Project
Crowd-Sourced Solutions to Bridge the Gender Digital Divide
Workshops: 89 workshops were held, each exploring detailed issues related to the main themes of the IGF.
Pre-events: The following pre-events were held:
Pre-Conference Seminar for CLDP Supported Delegations
Collaborative Leadership Exchange on Multistakeholder Participation
Sex, Rights and Internet Governance
Global Internet Governance Academic Network (GigaNet) - 9th Annual Symposium
NETmundial: Looking Back, Learning Lessons and Mapping the Road Ahead (including a book launch - Beyond NETmundial: The Roadmap for Institutional Improvements to the Global Internet Governance Ecosystem)
Integration of Diasporas and Displaced People Through ICT
Consultation on CSTD Ten-year Review of WSIS: Latin America and the Caribbean perspective
IGF Support Association
Empowering Grassroots Level Organizations Through the .NGO Top Level Domain
A Safe, Secure, Sustainable Internet and the Role of Stakeholders
Supporting Innovation on Internet Development in the Global South through Evaluation, Research, Communication and Resource Mobilization
Multilingualism Applied in Africa
Governance in a Mobile Social Web – Finding the Markers
IGF X — João Pessoa, Brazil 2015
The tenth IGF meeting was held in João Pessoa, Brazil from 10 to 13 November 2015. The meeting included more than 150 sessions and 21 pre-events. The overarching theme for the meeting was: "Evolution of Internet Governance: Empowering Sustainable Development". The meeting was organized around eight sub-themes: (i) Cybersecurity and Trust; (ii) Internet Economy; (iii) Inclusiveness and Diversity; (iv) Openness; (v) Enhancing Multistakeholder Cooperation; (vi) Internet and Human Rights; (vii) Critical Internet Resources; and (viii) Emerging Issues.
Opening Ceremony and Opening Session: Brazilian Minister of Communications André Figueiredo reminded participants that in developing countries, access to the Internet for those not yet connected to the information society remains the most pressing issue. Strong statements of support for the renewal of the IGF's mandate were made by several delegations, including Turkey, the European Commission, the United States, Japan, and China, recognizing the invaluable multistakeholder synergy it brings to the discussion on Internet governance.
IGF WSIS+10 Consultations: This session brought together a diverse and inclusive group of stakeholders on an equal footing to address and comment on the UNGA's Overall Review of the Implementation of WSIS Outcomes Draft Outcome Document, released on 4 November 2015. The presence of the two co-facilitators of the High-Level review process enriched the deliberations, and H.E. Mr. Janis Mazeiks, Permanent Representative of the Republic of Latvia, and H.E. Mrs. Lana Zaki Nusseibeh, Permanent Representative of the United Arab Emirates, confirmed that a report on the consultations held at the IGF would act as an input into the High-Level review of the UNGA set to take place on 17–18 December.
Internet Economy and Sustainable Development: Deliberations coming from the IGF on issues related to the Internet economy and sustainable development could serve as valuable inputs to the draft WSIS outcome document. UN agencies such as UNDESA, ITU, UNESCO, and UNCTAD can feed IGF discussions into efforts to align WSIS action lines with individual Sustainable Development Goals (SDGs). It was stressed that the Internet and ICTs can support all 17 SDGs and that the IGF can contribute to enabling citizens across local economies to better understand the potential of ICTs and Internet access.
Other recommendations coming from the session included:
Creating more awareness about the SDGs, the IGF, multistakeholder mechanisms and how the Internet can help achieve the SDGs at regional and national levels, through different stakeholders and governments.
Inducing more investment into Internet innovation to serve the SDGs, through both public funds and venture capital incentives, among other channels.
Further engaging local small and medium-sized enterprises (SMEs) in localized results serving the SDGs, from local content to solutions serving different SDGs.
Improving policies serving access, privacy and security of the Internet.
Engaging more women and youth.
Fostering Internet entrepreneurship.
Extending the Internet economy to marginalized groups and least developed countries (LDCs).
Augmenting local content.
Increasing knowledge sharing, capacity building and the preparation of youth for future employment.
Transforming the digital divide into social inclusion.
IGF Policy Options and Best Practices for Connecting the Next Billion: The inter-sessional work on "Policy Options for Connecting the Next Billion" was presented and discussed. The work of the IGF Best Practice Forums was also presented, and it was suggested that moving forward BPF work could perhaps be fed into consultations through the National and Regional IGF initiatives.
Enhancing Cybersecurity and Building Digital Trust: Recognizing the crucial need to enhance cybersecurity and build trust, this main session held valuable discussions with stakeholders coming from government, the private sector and civil society. The general consensus coming from the session was that cybersecurity is everyone's problem and everyone should be aware that the cyber world is a potentially unsafe place; that a culture of cybersecurity is needed at different levels; that individual action to make the Internet safer should be encouraged; and that there is a need for a comprehensive approach to tackling cybercrime and building trust, such as the introduction of security elements when developing cyber products and services. Participants also stressed that education plays a critical role in addressing cybercrime issues and should be expanded to involve all levels of society. The involvement of government, the private sector, civil society and other stakeholders in handling cybersecurity was stressed as fundamental in terms of sharing best practices, sharing the results of critical assessments and identifying globally accepted standards of cybersecurity. All stakeholders must understand, respect and trust each other's expertise and competences.
A Dialogue on 'Zero Rating' and Net Neutrality: Zero Rating (ZR) services provide a mobile broadband subscriber with access to select content, without that access counting against the subscriber's data cap. Two questions were posed to the speakers: whether ZR assists in connecting the unconnected by offering Internet access to those who cannot afford it, and whether ZR is a violation of net neutrality when it does not offer access to the "full Internet". The positions heard from expert speakers and session participants on ZR were extremely diverse. Some think ZR is a direct violation of network neutrality, while others do not even consider it a network neutrality issue. The national regulators who participated in the session described completely different approaches to ZR. ZR is only one means of connecting more people to the Internet.
The discussion also covered other means of increasing access, such as the use of municipal Wi-Fi and wireless community networks. Further research is needed on this complex subject.
Human Rights on the Internet: The session focused on three major areas of discussion: (i) freedom of expression, privacy, and assembly; (ii) access, human rights and development; and (iii) emerging issues. There is a growing recognition that human rights concerns extend beyond enabling access to topics including: how the Internet enables sustainable development; hate speech; protecting journalists and citizen journalists to ensure freedom of expression online; preventing the radicalization of youth; the protection and promotion of privacy; the relationship between surveillance and privacy; the importance of protecting women's and LGBT communities' rights online and offline by addressing online abuse and gender-based violence; and private sector responsibilities in promoting and protecting human rights online.
The NETmundial Statement and the Evolution of the Internet Governance Ecosystem: The NETmundial Multistakeholder Statement covers a wide range of Internet governance issues that are of great relevance to the IGF. The session took stock of how those issues are being advanced by the broader Internet governance community 18 months after the São Paulo meeting. Participants (in person and remotely) raised the following issues:
The NETmundial Statement is still up to date and valuable in all of its recommendations.
There was a general sense among the speakers of the importance of promoting the NETmundial principles in all tracks and spheres that form the Internet governance ecosystem. It is necessary, however, to analyse the meaning of those normative propositions according to the different local and regional contexts.
Human rights and shared values have become a permanent item on the work agenda of Internet technical fora and organizations.
International trade and cybersecurity (and their overlap with Internet governance) are critical areas for the advance of multistakeholder participation.
There is a need to consider the opinions of people with disabilities in order to implement the provisions of the NETmundial Statement regarding accessibility.
The NETmundial methodology is unequivocally one of the main reasons for its success. That methodology has to be studied and used to enhance the methodologies applied at the IGF.
Strong evidence, good arguments and high-quality debate make a great difference for societal self-determination.
One of the issues that led to the NETmundial Meeting was mass surveillance. That topic has still not been dealt with satisfactorily.
Child protection is still a matter of concern.
It is disappointing that there is little or no mention of the NETmundial Meeting in the context of the WSIS+10 process.
Closing Ceremony: The IGF by its nature is an inclusive environment, as are the national and regional IGFs. Speakers urged delegates to leverage that inclusiveness and continue to strive for greater participation, particularly from developing countries, in IGF processes. By doing this, it was said, participants can help foster an open Internet that has seen tremendous growth and innovation, provides an engine for economic growth, and serves as a platform for expressing ideas, thought and creativity. Many speakers expressed great thanks to CGI Brazil and to the local and host country government officials and supporting staff.
Ms. Yolanda Martínez, Head of the Digital Government Unit, Secretariat of Public Administration of Mexico, offered on behalf of the Government of Mexico to host the 11th IGF in 2016, pending the renewal of the IGF mandate [following IGF 2015, the UN General Assembly extended the IGF's mandate in December 2015 for ten years].
IGF Best Practice Forums (BPFs): Best practice forums were held on the following topics:
Online Abuse and Gender-Based Violence Against Women
Enabling Environments to Establish Successful IXPs
Creating an Enabling Environment for IPv6 Adoption
Establishing and Supporting Computer Security Incident Response Teams (CSIRTs) for Internet Security
Regulation and Mitigation of Unsolicited Communications
Strengthening Multistakeholder Participation Mechanisms
Dynamic Coalitions: Dynamic Coalitions (DCs) are informal, issue-specific groups comprising members of various stakeholder groups. At IGF 2015, Dynamic Coalitions were for the first time featured in a main session. A proposal that found broad support was to create a DC Coordination Group. The main task of the group would be to develop a charter for all DCs with common principles and rules of procedure they would agree to adhere to, such as having open lists and open archives. The group would also look at areas of overlap and duplication and aim to create synergies among the DCs. The following Dynamic Coalitions met during IGF 2015: on Accessibility and Disability, on Accountability of Internet Governance Venues (new), on Blockchain Technologies, on Child Online Safety (new), on Core Internet Values, on Freedom of Expression and Freedom of the Media (new), on Gender and Internet Governance, on the Internet of Things, on Internet Rights and Principles, on Network Neutrality, on Platform Responsibility, on Public Access in Libraries, and the Youth Coalition on Internet Governance.
Open Forums: Open Forums focus on an organization's activities during the past year and allow time for questions and discussions. Governments can also hold an open forum to present their Internet governance related activities.
The following Open Forums were held:
Asia-Pacific Regional Internet Registry (APNIC): The Internet Number Community and their related organizations
Association for Progressive Communications (APC): Networking globally and acting locally: 25 years of working for an Internet for all
Commonwealth Telecommunications Organisation: Commonwealth Internet Governance Forum
Council of Europe: An enabling environment for Internet freedom
Digital Infrastructure Association (DINL): The Public Core of the Internet – Towards a framework for sustainable interaction between governments and the Internet ecosystem
DiploFoundation and Geneva Internet Platform: Geneva Internet Platform and DiploFoundation — ideas, words and actions
European Broadcasting Union in partnership with EuroDIG organizers: Messages from Europe
European Commission & Global Internet Policy Observatory (GIPO): Progress of Global Internet Policy Observatory – Open debate on usability and inclusivity of the platform
Freedom Online Coalition: Protecting Human Rights Online
National ICT Ministry of Paraguay: Digital E-Gov Initiatives in Paraguay
Institute of Electrical and Electronics Engineers (IEEE): Advancing Technology for an Open Internet
International Telecommunication Union (ITU): Fostering SMEs in the ICT Sector – The new global ICT Entrepreneurship Initiative
Internet Corporation for Assigned Names and Numbers (ICANN): ICANN Open Forum
Internet Society (ISOC): Bringing people together around the world
Ministry of Education of Cuba: Internet as a pathway from school to exercise the human right of access to information
Office of the United Nations High Commissioner for Human Rights, jointly with the Council of Europe: The right to privacy in the digital age
Organisation for Economic Co-operation and Development (OECD): Digital Economy for Innovation, Growth and Social Prosperity – towards the 2016 OECD Ministerial
UN Conference on Trade and Development (UNCTAD): UNCTAD Open Forum
UN Educational, Scientific and Cultural Organization (UNESCO): Keystones to Foster Inclusive Knowledge Societies – Launching UNESCO's Comprehensive Study on the Internet
World Intellectual Property Organization (WIPO): WIPO Open Forum
Pre-events and other sessions:
Should education 3.0 and children be part of Internet governance?
Internet Governance, Security and Privacy in 2030
Global Commission on Internet Governance
Diplo_GIP Digital Watch
Inter-regional dialogue session
IGF XI — Guadalajara, Mexico 2016
The eleventh IGF meeting was held in Guadalajara, Mexico, from 6 to 9 December 2016. The meeting included 205 sessions as well as 24 pre-events (7 host country and ceremonial sessions; 8 main sessions; 96 workshops; 31 open forums; 4 individual Best Practice Forum sessions; 14 individual Dynamic Coalition sessions; 23 lightning sessions; 5 unconference sessions; 17 sessions classified "other"; and 24 pre-events). Experimental Lightning and Unconference sessions were held for the first time. A newcomers track helped participants attending the IGF meeting for the first time to understand the IGF processes, fostered the integration of new stakeholders into the IGF community, and aimed to make participants' first IGF experience as productive and welcoming as possible. The overarching theme for the meeting was: "Enabling Inclusive and Sustainable Growth".
The meeting addressed a broad range of themes and issues including, but not limited to, Sustainable Development and the Internet Economy; Access and Diversity; Gender and Youth Issues; Human Rights Online; Cybersecurity; Multistakeholder Cooperation; Critical Internet Resources; Internet governance capacity building; and Emerging Issues that may affect the future of the open Internet.
IGF XII — Geneva, Switzerland 2017
The twelfth IGF meeting took place in Geneva, Switzerland, from 18 to 21 December 2017. The programme included 4 host country and ceremonial sessions; 8 main/special sessions; 99 workshops; 45 open forums; 4 individual BPF sessions; 15 individual DC sessions; 8 individual NRIs sessions; 13 sessions classified as "other"; 24 lightning sessions; and 40 Day 0 events; for a total of 260 sessions in the overall programme (220 if Day 0 events are not counted). 55 booths were featured in the IGF Village. The overarching meeting theme was "Shape Your Digital Future!". The meeting addressed a broad range of issues including the future of global cooperation on digital governance; the impact of digitization on democracy, public trust and public opinion; the Internet and the Sustainable Development Goals; access and diversity; the digital transformation and its socio-economic and labour impacts; youth and gender challenges pertaining to the Internet; the protection and promotion of human rights online; cybersecurity; intended and unintended global impacts of local interventions; the need to enhance multistakeholder cooperation; critical Internet resources; Internet governance capacity-building; and other emerging issues that enhance and affect the future of the open Internet.
IGF XIII — Paris, France 2018
The thirteenth IGF meeting took place in Paris, from 12 to 14 November 2018. In addition to the Opening and Closing Sessions, the IGF 2018 programme featured 8 main/special sessions; 71 workshops; 27 open forums; 5 individual best practice forum (BPF) sessions; 15 individual dynamic coalition (DC) sessions; 5 individual national, regional, and youth (NRIs) collaborative sessions; 14 sessions classified as "other"; and 24 lightning sessions; for a total of 171 sessions in the overall programme. IGF XIII was held as part of the Paris Digital Week which, in addition to the IGF, featured the inaugural events of the Paris Peace Forum and the GovTech Summit. UN Secretary-General (SG) António Guterres addressed the IGF, marking the first time in the Forum's history that a Secretary-General attended in person. French President Emmanuel Macron addressed the IGF at the opening ceremony and launched the "Paris Call for Trust and Security in Cyberspace", a framework for regulating the Internet and fighting back against cyber attacks, hate speech and other cyber threats. Eight themes formed the backbone of the 2018 agenda: (i) Cybersecurity, Trust and Privacy; (ii) Development, Innovation and Economic Issues; (iii) Digital Inclusion and Accessibility; (iv) Human Rights, Gender and Youth; (v) Emerging Technologies; (vi) Evolution of Internet Governance; (vii) Media and Content; and (viii) Technical and Operational Issues.
IGF XIV — Berlin, Germany 2019
The fourteenth IGF meeting took place in Berlin, from 25 to 29 November 2019.
IGF XV — online, 2020
The fifteenth annual meeting of the Internet Governance Forum (IGF) was hosted online by the United Nations under the overarching theme: Internet for human resilience and solidarity. The first phase was hosted from 2 to 6 November and the second from 9 to 17 November 2020.
The IGF 2020 meeting page is at https://www.intgovforum.org/vIGF/
IGF XVI — Katowice, Poland 2021
The 16th annual IGF meeting was hosted by the Government of Poland in Katowice from 6 to 10 December, under the overarching theme: Internet United (see https://www.gov.pl/web/igf2021-en).
Upcoming IGF meetings
IGF XVII — Addis Ababa, Ethiopia 2022
The 17th annual IGF meeting will be hosted by the Government of Ethiopia in Addis Ababa.
IGF XVIII — Japan 2023
The 18th annual IGF meeting will be hosted by the Government of Japan. A specific location has not been communicated.
IGF Attendance
Onsite attendance
Onsite attendance at the first IGF meeting in 2006 was estimated to be around one thousand participants and has since grown to between 1,500 and 2,200 participants from over 100 countries. In recent years participants have typically been roughly 60% men and 40% women. Participants are drawn from civil society, governments, the private sector, the technical community, the media, and intergovernmental organizations.
IGF I — Athens, Greece 2006: Attendance was estimated to be around one thousand participants.
IGF II — Rio de Janeiro, Brazil 2007: There were over 2,100 registered participants prior to the meeting, of which 700 came from civil society, 550 from government, 300 from business entities, 100 from international organizations, and 400 representing other categories. The meeting was attended by 1,363 participants from 109 countries. Over 100 members of the press attended.
IGF III — Hyderabad, India 2008: The meeting was held in the aftermath of terrorist attacks in Mumbai. While these tragic events led to some cancellations, the overall attendance of 1,280 participants from 94 countries, 133 of them media representatives, was close to that of the second annual meeting.
IGF IV — Sharm El Sheikh, Egypt 2009: With more than 1,800 participants from 112 countries, the Sharm meeting had the largest attendance of any IGF to date. 96 governments were represented. 122 media representatives were accredited.
IGF V — Vilnius, Lithuania 2010: With close to 2,000 badges issued and 1,461 participants, attendance at the Vilnius meeting was similar to that of the 2009 meeting in Sharm El Sheikh.
IGF VI — Nairobi, Kenya 2011: More than 2,000 participants attended, the highest attendance of any IGF meeting held so far. 125 governments were represented. 68 media representatives were accredited. The approximate nationality distribution was: African (53%), WEOG-Western European and Others Group (29%), Asian (11%), GRULAC-Latin American and Caribbean Group (4%) and Eastern Europe (3%).
IGF VII — Baku, Azerbaijan 2012: More than 1,600 delegates representing 128 different countries attended, with a particularly strong presence from civil society, the most highly represented stakeholder group at the forum. Participation was regionally diverse and the participation of women at the forum increased significantly from previous years. Youth representation and activity were also cited as a notable achievement.
IGF VIII — Bali, Indonesia 2013: Nearly 1,500 delegates representing 111 different countries convened in Bali. Once again civil society was the largest represented stakeholder group at the forum.
IGF IX — Istanbul, Turkey 2014: More than 2,400 delegates representing 144 different countries convened in Istanbul.
Once again civil society was the largest represented stakeholder group at the forum with 779 participants, followed by the private sector with 581, governments with 571, the technical community with 266, the media with 110, and intergovernmental organizations with 96. The approximate regional distribution was: Turkey (31%), Africa (8%), WEOG-Western European and Others (32%), Asia Pacific (17%), GRULAC-Latin American and Caribbean Group (6%) and Eastern Europe (6%).
IGF X — João Pessoa, Brazil 2015: More than 2,130 delegates representing 112 different countries convened in João Pessoa. Once again civil society was the largest represented stakeholder group at the forum with 44% of the participants, followed by governments with 22%, the private sector with 12%, the technical community with 10%, the media with 8%, and intergovernmental organizations with 4%. The approximate regional distribution was: Brazil (49%), Africa (5%), WEOG-Western European and Others (26%), Asia Pacific (8%), GRULAC-Latin American and Caribbean Group (9%) and Eastern Europe (3%). 62% of the participants were men and 38% were women.
IGF XI — Jalisco, Mexico 2016: The program included 229 sessions attended by more than 2,000 onsite participants from 123 countries. Once again civil society was the largest represented stakeholder group at the forum with 45% of the participants, followed by governments with 21%, the private sector with 15%, the technical community with 14%, the media with 3%, and intergovernmental organizations with 3%. The approximate regional distribution was: Africa (7%), WEOG-Western European and Others (27%), Asia Pacific (13%), GRULAC-Latin American and Caribbean Group (51%) and Eastern Europe (3%). 60% of the participants were men and 40% were women.
IGF XII — Geneva, Switzerland 2017: The program included 220 sessions attended by more than 2,000 onsite participants from 142 countries. Once again civil society was the largest represented stakeholder group at the forum with 45% of the participants, followed by governments with 20%, the private sector with 15%, the technical community with 14%, the media with 0.4%, and intergovernmental organizations with 6%. The approximate regional distribution was: Africa (11%), WEOG-Western European and Others (46%), Asia Pacific (18%), GRULAC-Latin American and Caribbean Group (12%) and Eastern Europe (8%). 57% of the participants were men and 43% were women.
IGF XIII — Paris, France 2018: The program included 171 sessions attended by more than 1,600 onsite participants from 143 countries. Civil society was the largest represented stakeholder group at the forum with 45% of the participants, followed by governments with 16%, the private sector with 20%, the technical community with 11%, the media with 1%, and intergovernmental organizations with 7%. The approximate regional distribution was: Africa (25%), WEOG-Western European and Others (38%), Asia Pacific (16%), GRULAC-Latin American and Caribbean Group (9%), Eastern Europe (6%), and Intergovernmental Organizations (6%). 57% of the participants were men and 43% were women.
Remote participation
The Remote Participation Working Group (RPWG) has worked closely with the IGF Secretariat since 2008 to allow remote participants across the globe to take part in IGF meetings.
IGF I — Athens, Greece 2006: Remote participants were able to take part via blogs, chat rooms, email, and text messaging.
IGF II — Rio de Janeiro, Brazil 2007: The entire meeting was webcast and transcribed in real time.
Video and text records were made available on the IGF Web site.
IGF III — Hyderabad, India 2008: The entire meeting was webcast in real time using high-quality video, audio streaming, and live chat. There were 522 remote participants from around the world who joined the main sessions and workshops. Remote hubs were also introduced, with remote moderators leading discussions in their region. Most of the hubs were able to discuss pertinent local and domestic Internet governance issues. The remote hubs were located in Buenos Aires (Argentina), Belgrade (Serbia), São Paulo (Brazil), Pune (India), Lahore (Pakistan), Bogotá (Colombia), and Barcelona and Madrid (Spain). The platform used for remote participation was DimDim. The text transcripts of the main sessions, and the video and audio records of all workshops and other meetings, were made available through the IGF Web site.
IGF IV — Sharm El Sheikh, Egypt 2009: The entire meeting was webcast, with video streaming provided from the main session room and audio streaming provided from all workshop meeting rooms. The proceedings of the main sessions were transcribed and displayed in the main session hall in real time and streamed to the Web. Remote hubs in 11 locations around the world allowed remote participation. The text transcripts of the main sessions, and the video and audio records of all workshops and other meetings, were made available through the IGF Web site. Webex was used as the remote participation platform.
IGF V — Vilnius, Lithuania 2010: The entire meeting was webcast, with video streaming provided from the main session room and all nine other meeting rooms. All proceedings were transcribed and displayed in the meeting rooms in real time and streamed to the Web. Remote hubs in 32 locations around the world provided the means for more than 600 people who could not travel to the meeting to participate actively in the forum and contribute to discussions. The text transcripts as well as the video and audio records of all official meetings are archived on the IGF Web site.
IGF VI — Nairobi, Kenya 2011: All the main sessions and workshops had real-time transcription. The entire meeting was webcast, with video streaming provided from the main session room and audio streaming provided from all workshop meeting rooms. Remote hubs were established in 47 locations and provided the means for more than 823 people to participate and contribute to discussions. 38 remote participants/panelists took part via video or audio, and approximately 2,500 connections were made throughout the week from 89 countries. The text transcripts and video of all meetings were made available through the IGF Website.
IGF VII — Baku, Azerbaijan 2012: Real-time transcription was available. The entire meeting was webcast and remote participation was offered, which doubled the active participation in main sessions, workshops, and other events. 49 expert remote participants and panelists participated in various sessions via video and audio. 52 different remote 'hubs' allowed remote participants to gather together to follow the proceedings in Baku online. There was also an increase in social media activity, allowing discussions to begin prior to the start of the meeting, continue between sessions and during breaks throughout the week, and extend after delegates left Baku to return home. There were thousands of 'tweets' about the forum each day, which reached millions of followers.
IGF VIII — Bali, Indonesia 2013: Real-time transcription was available.
The entire meeting was webcast and remote participation more than doubled the in-person participation. Approximately 1,704 connections were made to the meetings remotely by participants from 83 different countries. All webcast videos were immediately uploaded to YouTube after the sessions ended, allowing for additional public viewership. There were approximately 25 remote hubs and more than 100 remote presenters. Millions of interested individuals followed the proceedings on Twitter.
IGF IX — Istanbul, Turkey 2014: There were nearly 1,300 remote participants. Real-time transcription was available. The entire meeting was webcast and all webcast videos were uploaded to YouTube after sessions ended, allowing for additional public viewership. Flickr, Facebook, Twitter, and Tumblr were all widely used. Twitter messages using the hashtag #IGF2014 reached more than 4 million people each day.
IGF X — João Pessoa, Brazil 2015: Approximately 50 remote hubs were organized around the world, with an estimated 2,000 active participants online. Real-time transcription was available. The entire meeting was webcast and all webcast videos were uploaded to YouTube after sessions ended, allowing for additional public viewership. Flickr, Facebook, Twitter, and Tumblr were all widely used.
IGF XI — Jalisco, Mexico 2016: 45 remote hubs were organized around the world, with 2,000 stakeholders participating online. The largest numbers of online participants came from the following countries: United States, Mexico, Nigeria, Brazil, India, Cuba, United Kingdom, China, Japan, Tunisia and Argentina.
IGF XII — Geneva, Switzerland 2017: 32 remote hubs were organized around the world, with 1,661 stakeholders participating online. The largest numbers of online participants came from the following countries: United States, Switzerland, Nigeria, China, India, Brazil, France, United Kingdom and Mexico.
IGF XIII — Paris, France 2018: Approximately 1,400 people from 101 different countries participated online, with the majority coming from France, the United States, Brazil, Nigeria, the United Kingdom, India, Iran, Bangladesh, and Germany. There were 35 remote hubs organized around the world representing all regions (42% from Africa and 22% from each of the Latin America and Caribbean and Asia-Pacific regions), with an active online presence, video-sharing and live comments.
See also
Internet governance
References
External links
Internet Governance Forum, official website
Internet Governance Forum Support Association, helps to provide support and funding for the IGF Secretariat and related activities.
Friends of the IGF, a comprehensive, searchable archive of video proceedings of past IGF events.
Internet Society at the IGF, information on Internet Society contributions to the IGF and its IGF Ambassador Programme.
Association for Progressive Communications (APC) on the IGF, recommendations and publications from the civil society network for social justice and sustainable development.
Meetings
Organizations established in 2006
Information and communication technologies for development
Internet governance organizations
Organizations established by the United Nations
1839854
https://en.wikipedia.org/wiki/Desktop%20organizer
Desktop organizer
Desktop organizer software applications automatically create useful organizational structures from heterogeneous desktop content, including email, files, contacts, companies, RSS news feeds, photos, music and chat sessions. The organization is based on a combination of automated scanning of metadata, similar to data mining, and manual tagging of content. The metadata stored in applications is correlated based on a structure for the data type handled by the organizer tool. For example, the email address of the sender of an email allows the email to be filed in a virtual folder for the author and for the company the author works for, or a music file is filed by the musician and album label. The resulting visualization simplifies the use of desktop content to navigate, search, and use related information stored on the desktop computer. The data in desktop organizer tools is normally stored in a database rather than the computer's file system in order to produce virtual folders, where the same item can appear in multiple folders to the user based on its relationship to each folder.
Desktop organizers are related to desktop search because both sets of tools allow users to locate desktop resources. The primary differences between the two are that desktop organizers perform post-search functionality related to the primary purpose of the organizer, offer manual taxonomy creation and tagging by the desktop user, and help gather additional related resources for the taxonomy or related content from Internet resources.
Communications organizers
Organization tools for contacts and correspondence involve the tracking and management of information stored in multiple communications tools. With the rise of computer-based communications, including email, VoIP applications like Skype, chat, web browsers, blogs, RSS and CRM, content relating to companies and contacts is often spread across multiple applications. Desktop communications organizers collect and correlate the information stored in these applications. Common features of communications organizers include:
Connectivity through scanners and listeners to communications tools including email, chat, bookmarks, and VoIP
RSS newsfeed subscriptions
Filing of desktop files and documents
Connectivity to desktop search or built-in desktop search capabilities
Virtual folders to locate the same item in multiple locations
Workflow utilities to mark items for follow-up and annotate items
A minimal sketch of the database-backed virtual folder approach appears after this list.
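The following is a minimal, hypothetical Python sketch of the idea described above: items are stored in a database, and a "virtual folder" is just a query over their metadata, so the same item can surface in several folders at once. The table layout, function name and sample data are invented for illustration; no particular product works exactly this way.

import sqlite3

# Items live in one database table rather than in file-system folders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, subject TEXT, sender_name TEXT, sender_email TEXT)")
conn.executemany(
    "INSERT INTO items (subject, sender_name, sender_email) VALUES (?, ?, ?)",
    [("Q3 report", "Ada Lovelace", "ada@example-corp.com"),
     ("Lunch?", "Ada Lovelace", "ada@example-corp.com"),
     ("Invoice", "Bob Smith", "bob@example-corp.com")])

def virtual_folder(metadata_expr, value):
    # A folder is only a query over metadata; nothing is copied or
    # moved, so one item can appear in many folders at once.
    return [row[0] for row in conn.execute(
        "SELECT subject FROM items WHERE " + metadata_expr + " = ?", (value,))]

# Ada's two messages appear under her author folder...
print(virtual_folder("sender_name", "Ada Lovelace"))
# ...and again under the folder for everyone at example-corp.com,
# derived from the domain part of the sender's email address.
print(virtual_folder("substr(sender_email, instr(sender_email, '@') + 1)", "example-corp.com"))

Because folders are derived rather than stored, deleting a folder definition never deletes an item, and new folders can be added without touching the stored data.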
Desktop organizers are related to desktop search, because both sets of tools allow users to locate desktop resources. The primary differences between the two are that desktop organizers perform post-search functions related to the organizer's primary purpose, offer manual taxonomy creation and tagging by the desktop user, and help gather additional related resources, for the taxonomy or for related content, from Internet sources.

Communications organizers

Organizing contacts and correspondence involves tracking and managing information stored in multiple communications tools. With the rise of computers for communications, including email, VoIP applications such as Skype, chat, web browsers, blogs, RSS and CRM, content relating to companies and contacts is often spread across multiple applications. Desktop communications organizers collect and correlate the information stored in these applications. Common features of communications organizers include:

Connectivity, through scanners and listeners, to communications tools including email, chat, bookmarks and VoIP
RSS newsfeed subscriptions
Filing of desktop files and documents
Connectivity to desktop search, or built-in desktop search capabilities
Virtual folders that locate the same item in multiple places
Workflow utilities to mark items for follow-up and to annotate items

Picture organizers

Also referred to as image viewers, picture organizers provide utilities to file and view pictures in folders and to tag pictures with metadata for future use. Picture organizers may also integrate with photo sharing sites that likewise organize pictures, but through a social network. There are two classes of picture organizers:

Automatic picture organizers. These software packages read data embedded in digital pictures and use it to create an organization structure automatically. Every digital picture records the date on which it was taken, and it is this piece of information that serves as the basis for automatic picture organization. The user usually has little or no control over the automatically created structure. Some tools create the structure on the hard drive (a physical structure), while others create a virtual structure that exists only within the tool.

Manual picture organizers. This kind of software provides a direct view of the folders present on a user's hard disk. Sometimes also referred to as image viewers, they only let the user see the pictures and provide no automatic organization features. They give maximum flexibility and show exactly what the user has created on the hard drive, but they rely on the user having a method of his or her own for organizing pictures. There are currently two main methods of organizing pictures manually, tag-based and folder-based; while not mutually exclusive, the two differ in methodology, outcome and purpose.

Many commercial image organization packages now offer both automatic and manual organization features, and a comparison of image viewers shows that many freely available packages offer most of the organization features found in commercial software. Not all image viewers, however, offer organizational tools. Popular picture organizers include Google's Picasa, DigiKam, Adobe Systems's Elements, Apple's iPhoto, Phase One's Media Pro 1 and Novell's F-Spot. Common organizational tools provided by picture organizers are:

Organization by date or date range
Tagging of pictures with attributes including location and people
Storing the same picture in multiple virtual folders
Rating of pictures

Music organizers

Music carries unique attributes such as artist, album, genre, era and song title that are used to organize songs. Desktop organization of music is mostly embedded in audio and media players such as Amarok, Rhythmbox, Banshee, MediaMonkey, Songbird, Apple's iTunes and Microsoft's Windows Media Player and Zune. Organization is used to create playlists or to arrange the media collection into a folder hierarchy by artist, album or genre. Another type of music organizer lets a user organize a media collection directly, without listening to the music: by connecting to databases of professionally organized and tagged music and comparing the local collection against them, MP3 collections can be completed with metadata corrections when the tags in the original items are incomplete (a short sketch of this tag-driven grouping follows at the end of this article).

Personal information managers
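The tag-driven grouping described under "Music organizers" above can be sketched in a few lines. The following is a minimal illustration, not code from any of the players named; it assumes the third-party mutagen library is installed for reading embedded tags, and the "Music" directory is hypothetical.

from collections import defaultdict
from pathlib import Path

import mutagen  # third-party tag reader: pip install mutagen (an assumption)

def group_collection(root):
    """Group audio files by (artist, album) from embedded tags alone --
    the organizer reads metadata without ever playing the music."""
    groups = defaultdict(list)
    missing = []  # items whose tags are incomplete
    for path in Path(root).rglob("*.mp3"):
        audio = mutagen.File(path, easy=True)  # easy=True: plain-text keys
        artist = (audio.get("artist") or [None])[0] if audio else None
        album = (audio.get("album") or [None])[0] if audio else None
        if artist and album:
            groups[(artist, album)].append(path.name)
        else:
            missing.append(path.name)
    return groups, missing

groups, missing = group_collection("Music")  # hypothetical directory
for (artist, album), tracks in sorted(groups.items()):
    print(f"{artist} / {album}: {len(tracks)} track(s)")
print("needs metadata enrichment:", missing)

Files whose tags are incomplete land in the missing list; those are exactly the items that a lookup against a professionally tagged reference database would be asked to repair.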
9676991
https://en.wikipedia.org/wiki/Nicola%20Salmoria
Nicola Salmoria
Nicola Salmoria is an Italian software developer. He is the original developer of MAME, an emulator designed to recreate the hardware of arcade machines in software. In December 2002, he graduated from the University of Siena with a laurea in mathematics; his thesis was about MAME. Before gaining recognition as the author of MAME, he was active in the Amiga software development scene, producing utility programs such as NewIcons. He has defeated numerous encryption schemes, including the CPS-2 program ROM encryption (together with Andreas Naive), the Kabuki (sound) program ROM encryption, and the graphics ROM encryption in the later Neo Geo games. He is also a founding member of the JP1 remote project. He became less and less involved with MAME development over the years, and his last contributions date from 2009. Since 2012, he has been developing puzzle games for iOS devices, and in 2013 he started writing reviews of puzzle games on his own blog.

References

External links

Salmoria's blog about MAME
Salmoria's blog about logic puzzles

Italian computer programmers
Living people
Year of birth missing (living people)
University of Siena alumni
33688447
https://en.wikipedia.org/wiki/Schindler%20%26%20Schill%20GmbH
Schindler & Schill GmbH
Schindler & Schill GmbH is a German software company, founded in 2008 in Regensburg by two experts in Windows-based software. The company also trades as EasyLogix.

Company Portrait

Günther Schindler has been the company's CEO since its foundation. The first project was the development of Windows drivers for USB devices, followed by GerberLogix, a free Gerber viewer. The most extensive project to date is PCB-Investigator, a complete CAD (computer-aided design) system for PCB (printed circuit board) analysis. Other areas in which EasyLogix is experienced are the development of semi-automatic and automatic trading systems, add-ons for Microsoft Office products, software engineering with OOP/OOD, and database applications. EasyLogix also runs a university program and offers clients the possibility of participating in the development process. In 2011, EasyLogix joined the IPC-2581 Consortium, a group of PCB design and supply-chain companies that want to establish IPC-2581 in the industry.

Products

PCB-Investigator

PCB-Investigator is a CAD (computer-aided design) software package for PCB (printed circuit board) development and manufacturing. The foundation of the project was the combination of hardware acceleration with software rendering: using GDI, PCB-Investigator is optimized for Windows but performs its own graphics data processing. On this basis a software interface was added to the program, which allows users to create their own analysis algorithms or to program new import/export options. The next step was a full-featured plug-in system, which lets PCB-Investigator be assembled into customized packages suitable for application areas ranging from AOI (automated optical inspection) to boundary scan. A special function of PCB-Investigator is the ability to place pictures behind the CAD data. Furthermore, PCB-Investigator offers an embedded mode that gives every department in the PCB development process license-free access to all the data it needs. Supported formats are ODB++, DXF, Catia, SolidWorks, X-File, BOM, Gerber, Excellon, Sieb & Meyer and GenCAD 1.4; support for IPC-2581 is forthcoming. The tool is commercial software, but a free test version is available; it is used worldwide by electronics engineers and designers, and is free for universities.

GerberLogix

GerberLogix is a free, multifunctional Gerber viewer. It is an advancement of EasyLogix's Online Gerber Viewer and offers Gerber274x and Excellon import with additional functions (a minimal sketch of the parsing behind such viewers follows at the end of this article):

automatic format recognition
different drawing modes
high-resolution image export
import and display of pictures behind the real CAD data
multiple selection possibilities
transparent drawing
different measuring modes

System requirements are Microsoft Windows 7/Vista/XP/2003 and the Microsoft .NET Framework 2.0. The software is free for non-commercial use; a license can be acquired for commercial use.

Online Gerber Viewer

The Online Gerber Viewer is a free web service, developed by EasyLogix, for the graphical presentation of Gerber274x and Excellon data. No installation on the computer is necessary: the selected data are transmitted to the server, converted into images and made available online for editing. The viewer can also be integrated into a company network.

See also

Comparison of EDA software

References

External links

Electronic design automation companies
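To make concrete what a Gerber viewer such as GerberLogix or the Online Gerber Viewer must do first, here is a minimal sketch of parsing a few RS-274X drawing commands. It is an illustration of the public file format only, not EasyLogix code; it handles just a toy subset of the format, and the sample data is invented.

import re

# Tiny RS-274X (Gerber) reader: pulls out the drawing operations a viewer
# must render. Illustrative only -- real Gerber has many more commands
# (apertures, arcs, polarity, macros) that a production tool handles.
COORD = re.compile(r"(?:X(-?\d+))?(?:Y(-?\d+))?D0([123])\*")

def parse_gerber(text):
    divisor = 1     # set by the %FSLAX..Y..*% coordinate-format statement
    unit = "in"
    x = y = 0.0     # coordinates are modal: omitted axes keep the last value
    ops = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("%FSLA"):          # e.g. %FSLAX34Y34*%
            decimals = int(re.search(r"X\d(\d)", line).group(1))
            divisor = 10 ** decimals
        elif line.startswith("%MO"):          # units: %MOMM*% or %MOIN*%
            unit = "mm" if "MM" in line else "in"
        elif m := COORD.fullmatch(line):      # D01 draw, D02 move, D03 flash
            if m.group(1): x = int(m.group(1)) / divisor
            if m.group(2): y = int(m.group(2)) / divisor
            ops.append(({"1": "draw", "2": "move", "3": "flash"}[m.group(3)],
                        x, y))
    return unit, ops

sample = """%FSLAX34Y34*%
%MOMM*%
%ADD10C,0.100*%
D10*
X0Y0D02*
X50000Y0D01*
X50000Y25000D01*
M02*
"""
print(parse_gerber(sample))
# -> ('mm', [('move', 0.0, 0.0), ('draw', 5.0, 0.0), ('draw', 5.0, 2.5)])

A real viewer would go on to stroke each draw operation with the selected aperture (the %ADD10C,0.100*% line defines aperture 10 as a 0.1 mm circle, which this sketch ignores) and rasterize the result for display.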
31333982
https://en.wikipedia.org/wiki/List%20of%20Lepidoptera%20of%20Greece
List of Lepidoptera of Greece
Lepidoptera of Greece consist of both the butterflies and moths recorded from Greece, including Crete, the Greek mainland and the Aegean Islands (including the Cyclades and Dodecanese). Butterflies Hesperiidae Carcharodus alceae (Esper, 1780) Carcharodus floccifera (Zeller, 1847) Carcharodus lavatherae (Esper, 1783) Carcharodus orientalis Reverdin, 1913 Carcharodus stauderi Reverdin, 1913 Carterocephalus palaemon (Pallas, 1771) Erynnis marloyi (Boisduval, 1834) Erynnis tages (Linnaeus, 1758) Gegenes nostrodamus (Fabricius, 1793) Gegenes pumilio (Hoffmannsegg, 1804) Hesperia comma (Linnaeus, 1758) Muschampia proto (Ochsenheimer, 1808) Muschampia tessellum (Hübner, 1803) Ochlodes sylvanus (Esper, 1777) Pelopidas thrax (Hübner, 1821) Pyrgus alveus (Hübner, 1803) Pyrgus armoricanus (Oberthur, 1910) Pyrgus carthami (Hübner, 1813) Pyrgus cinarae (Rambur, 1839) Pyrgus malvae (Linnaeus, 1758) Pyrgus serratulae (Rambur, 1839) Pyrgus sidae (Esper, 1784) Spialia orbifer (Hübner, 1823) Spialia phlomidis (Herrich-Schäffer, 1845) Thymelicus acteon (Rottemburg, 1775) Thymelicus hyrax (Lederer, 1861) Thymelicus lineola (Ochsenheimer, 1808) Thymelicus sylvestris (Poda, 1761) Lycaenidae Agriades dardanus (Freyer, 1844) Aricia agestis (Denis & Schiffermüller, 1775) Aricia anteros (Freyer, 1838) Aricia artaxerxes (Fabricius, 1793) Callophrys rubi (Linnaeus, 1758) Celastrina argiolus (Linnaeus, 1758) Cupido minimus (Fuessly, 1775) Cupido osiris (Meigen, 1829) Cupido alcetas (Hoffmannsegg, 1804) Cupido argiades (Pallas, 1771) Cupido decolorata (Staudinger, 1886) Cyaniris semiargus (Rottemburg, 1775) Eumedonia eumedon (Esper, 1780) Favonius quercus (Linnaeus, 1758) Freyeria trochylus (Freyer, 1845) Glaucopsyche alexis (Poda, 1761) Iolana iolas (Ochsenheimer, 1816) Kretania eurypilus (Freyer, 1851) Kretania psylorita (Freyer, 1845) Kretania sephirus (Frivaldzky, 1835) Lampides boeticus (Linnaeus, 1767) Leptotes pirithous (Linnaeus, 1767) Lycaena alciphron (Rottemburg, 1775) Lycaena candens (Herrich-Schäffer, 1844) Lycaena dispar (Haworth, 1802) Lycaena ottomanus (Lefebvre, 1830) Lycaena phlaeas (Linnaeus, 1761) Lycaena thersamon (Esper, 1784) Lycaena thetis Klug, 1834 Lycaena tityrus (Poda, 1761) Lycaena virgaureae (Linnaeus, 1758) Lysandra bellargus (Rottemburg, 1775) Lysandra coridon (Poda, 1761) Neolysandra coelestina (Eversmann, 1843) Phengaris alcon (Denis & Schiffermüller, 1775) Phengaris arion (Linnaeus, 1758) Plebejidea loewii (Zeller, 1847) Plebejus argus (Linnaeus, 1758) Plebejus argyrognomon (Bergstrasser, 1779) Plebejus idas (Linnaeus, 1761) Polyommatus admetus (Esper, 1783) Polyommatus andronicus Coutsis & Gavalas, 1995 Polyommatus aroaniensis (Brown, 1976) Polyommatus damon (Denis & Schiffermüller, 1775) Polyommatus iphigenia (Herrich-Schäffer, 1847) Polyommatus nephohiptamenos (Brown & Coutsis, 1978) Polyommatus orphicus Kolev, 2005 Polyommatus ripartii (Freyer, 1830) Polyommatus daphnis (Denis & Schiffermüller, 1775) Polyommatus amandus (Schneider, 1792) Polyommatus dorylas (Denis & Schiffermüller, 1775) Polyommatus eros (Ochsenheimer, 1808) Polyommatus escheri (Hübner, 1823) Polyommatus icarus (Rottemburg, 1775) Polyommatus thersites (Cantener, 1835) Polyommatus timfristos Lukhtanov, Vishnevskaya & Shapoval, 2016 Pseudophilotes bavius (Eversmann, 1832) Pseudophilotes vicrama (Moore, 1865) Satyrium acaciae (Fabricius, 1787) Satyrium ilicis (Esper, 1779) Satyrium ledereri (Boisduval, 1848) Satyrium pruni (Linnaeus, 1758) Satyrium spini (Denis & Schiffermüller, 1775) Satyrium w-album (Knoch, 1782) 
Scolitantides orion (Pallas, 1771) Tarucus balkanica (Freyer, 1844) Thecla betulae (Linnaeus, 1758) Turanana panagea (Herrich-Schäffer, 1851) Turanana taygetica (Rebel, 1902) Zizeeria karsandra (Moore, 1865) Riodinidae Hamearis lucina (Linnaeus, 1758) Nymphalidae Aglais io (Linnaeus, 1758) Aglais urticae (Linnaeus, 1758) Apatura ilia (Denis & Schiffermüller, 1775) Apatura iris (Linnaeus, 1758) Apatura metis Freyer, 1829 Aphantopus hyperantus (Linnaeus, 1758) Araschnia levana (Linnaeus, 1758) Arethusana arethusa (Denis & Schiffermüller, 1775) Argynnis paphia (Linnaeus, 1758) Argynnis pandora (Denis & Schiffermüller, 1775) Boloria graeca (Staudinger, 1870) Boloria dia (Linnaeus, 1767) Boloria euphrosyne (Linnaeus, 1758) Brenthis daphne (Bergstrasser, 1780) Brenthis hecate (Denis & Schiffermüller, 1775) Brintesia circe (Fabricius, 1775) Charaxes jasius (Linnaeus, 1767) Chazara briseis (Linnaeus, 1764) Coenonympha arcania (Linnaeus, 1761) Coenonympha glycerion (Borkhausen, 1788) Coenonympha leander (Esper, 1784) Coenonympha orientalis Rebel, 1910 Coenonympha pamphilus (Linnaeus, 1758) Coenonympha rhodopensis Elwes, 1900 Coenonympha thyrsis (Freyer, 1845) Danaus chrysippus (Linnaeus, 1758) Erebia aethiops (Esper, 1777) Erebia cassioides (Reiner & Hochenwarth, 1792) Erebia epiphron (Knoch, 1783) Erebia euryale (Esper, 1805) Erebia ligea (Linnaeus, 1758) Erebia medusa (Denis & Schiffermüller, 1775) Erebia melas (Herbst, 1796) Erebia oeme (Hübner, 1804) Erebia ottomana Herrich-Schäffer, 1847 Erebia rhodopensis Nicholl, 1900 Fabriciana adippe (Denis & Schiffermüller, 1775) Fabriciana niobe (Linnaeus, 1758) Euphydryas aurinia (Rottemburg, 1775) Hipparchia fagi (Scopoli, 1763) Hipparchia syriaca (Staudinger, 1871) Hipparchia fatua Freyer, 1844 Hipparchia statilinus (Hufnagel, 1766) Hipparchia christenseni Kudrna, 1977 Hipparchia cretica (Rebel, 1916) Hipparchia mersina (Staudinger, 1871) Hipparchia pellucida (Stauder, 1923) Hipparchia senthes (Fruhstorfer, 1908) Hipparchia volgensis (Mazochin-Porshnjakov, 1952) Hyponephele lupinus (O. 
Costa, 1836) Hyponephele lycaon (Rottemburg, 1775) Issoria lathonia (Linnaeus, 1758) Kirinia climene (Esper, 1783) Kirinia roxelana (Cramer, 1777) Lasiommata maera (Linnaeus, 1758) Lasiommata megera (Linnaeus, 1767) Lasiommata petropolitana (Fabricius, 1787) Libythea celtis (Laicharting, 1782) Limenitis camilla (Linnaeus, 1764) Limenitis populi (Linnaeus, 1758) Limenitis reducta Staudinger, 1901 Maniola chia Thomson, 1987 Maniola halicarnassus Thomson, 1990 Maniola jurtina (Linnaeus, 1758) Maniola megala (Oberthur, 1909) Maniola telmessia (Zeller, 1847) Melanargia galathea (Linnaeus, 1758) Melanargia larissa (Geyer, 1828) Melanargia russiae (Esper, 1783) Melitaea arduinna (Esper, 1783) Melitaea athalia (Rottemburg, 1775) Melitaea aurelia Nickerl, 1850 Melitaea cinxia (Linnaeus, 1758) Melitaea didyma (Esper, 1778) Melitaea phoebe (Denis & Schiffermüller, 1775) Melitaea trivia (Denis & Schiffermüller, 1775) Minois dryas (Scopoli, 1763) Neptis rivularis (Scopoli, 1763) Neptis sappho (Pallas, 1771) Nymphalis antiopa (Linnaeus, 1758) Nymphalis polychloros (Linnaeus, 1758) Nymphalis xanthomelas (Esper, 1781) Pararge aegeria (Linnaeus, 1758) Polygonia c-album (Linnaeus, 1758) Polygonia egea (Cramer, 1775) Proterebia afra (Fabricius, 1787) Pseudochazara amymone Brown, 1976 Pseudochazara anthelea (Hübner, 1824) Pseudochazara cingovskii Gross, 1973 Pseudochazara geyeri (Herrich-Schäffer, 1846) Pseudochazara graeca (Staudinger, 1870) Pseudochazara orestes De Prins & van der Poorten, 1981 Pyronia cecilia (Vallantin, 1894) Pyronia tithonus (Linnaeus, 1767) Satyrus ferula (Fabricius, 1793) Speyeria aglaja (Linnaeus, 1758) Vanessa atalanta (Linnaeus, 1758) Vanessa cardui (Linnaeus, 1758) Ypthima asterope (Klug, 1832) Papilionidae Archon apollinus (Herbst, 1798) Iphiclides podalirius (Linnaeus, 1758) Papilio alexanor Esper, 1800 Papilio machaon Linnaeus, 1758 Parnassius apollo (Linnaeus, 1758) Parnassius mnemosyne (Linnaeus, 1758) Zerynthia cerisy (Godart, 1824) Zerynthia cretica (Rebel, 1904) Zerynthia polyxena (Denis & Schiffermüller, 1775) Pieridae Anthocharis cardamines (Linnaeus, 1758) Anthocharis damone Boisduval, 1836 Anthocharis gruneri Herrich-Schäffer, 1851 Aporia crataegi (Linnaeus, 1758) Colias alfacariensis Ribbe, 1905 Colias aurorina Herrich-Schäffer, 1850 Colias caucasica Staudinger, 1871 Colias croceus (Fourcroy, 1785) Colias erate (Esper, 1805) Euchloe penia (Freyer, 1851) Euchloe ausonia (Hübner, 1804) Gonepteryx cleopatra (Linnaeus, 1767) Gonepteryx farinosa (Zeller, 1847) Gonepteryx rhamni (Linnaeus, 1758) Leptidea duponcheli (Staudinger, 1871) Leptidea sinapis (Linnaeus, 1758) Pieris balcana Lorkovic, 1970 Pieris brassicae (Linnaeus, 1758) Pieris ergane (Geyer, 1828) Pieris krueperi Staudinger, 1860 Pieris mannii (Mayer, 1851) Pieris napi (Linnaeus, 1758) Pieris rapae (Linnaeus, 1758) Pontia chloridice (Hübner, 1813) Pontia edusa (Fabricius, 1777) Moths Adelidae Adela croesella (Scopoli, 1763) Adela mazzolella (Hübner, 1801) Adela paludicolella Zeller, 1850 Adela repetitella Mann, 1861 Cauchas anatolica (Rebel, 1902) Cauchas leucocerella (Scopoli, 1763) Cauchas rufifrontella (Treitschke, 1833) Nematopogon pilella (Denis & Schiffermüller, 1775) Nematopogon robertella (Clerck, 1759) Nemophora barbatellus (Zeller, 1847) Nemophora dumerilella (Duponchel, 1839) Nemophora fasciella (Fabricius, 1775) Nemophora metallica (Poda, 1761) Nemophora minimella (Denis & Schiffermüller, 1775) Nemophora raddaella (Hübner, 1793) Alucitidae Alucita hexadactyla Linnaeus, 1758 Alucita huebneri Wallengren, 
1859 Alucita major (Rebel, 1906) Alucita palodactyla Zeller, 1847 Alucita pectinata Scholz & Jackh, 1994 Alucita zonodactyla Zeller, 1847 Argyresthiidae Argyresthia glaucinella Zeller, 1839 Argyresthia hilfiella Rebel, 1910 Argyresthia pruniella (Clerck, 1759) Argyresthia spinosella Stainton, 1849 Autostichidae Amselina cedestiella (Zeller, 1868) Amselina emir (Gozmány, 1961) Amselina kasyi (Gozmány, 1961) Amselina virgo (Gozmány, 1959) Apatema apolausticum Gozmány, 1996 Apatema mediopallidum Walsingham, 1900 Apatema sutteri Gozmány, 1997 Apatema whalleyi (Popescu-Gorj & Capuse, 1965) Aprominta aga Gozmány, 1962 Aprominta aperitta Gozmány, 1997 Aprominta argonauta Gozmány, 1964 Aprominta atricanella (Rebel, 1906) Aprominta bifasciata (Staudinger, 1870) Aprominta designatella (Herrich-Schäffer, 1855) Aprominta gloriosa Gozmány, 1959 Aprominta pannosella (Rebel, 1906) Aprominta reisseri Gozmány, 1959 Aprominta separata Gozmány, 1961 Aprominta tectaphella (Rebel, 1916) Aprominta xena Gozmány, 1959 Charadraula cassandra Gozmány, 1967 Deroxena venosulella (Moschler, 1862) Dysspastus baldizzonei Gozmány, 1977 Dysspastus ios Gozmány, 2000 Dysspastus musculina (Staudinger, 1870) Dysspastus undecimpunctella (Mann, 1864) Holcopogon bubulcellus (Staudinger, 1859) Nukusa cinerella (Rebel, 1941) Oecia oecophila (Staudinger, 1876) Oegoconia ariadne Gozmány, 1988 Oegoconia caradjai Popescu-Gorj & Capuse, 1965 Oegoconia deauratella (Herrich-Schäffer, 1854) Oegoconia novimundi (Busck, 1915) Oegoconia uralskella Popescu-Gorj & Capuse, 1965 Pantacordis pantsa (Gozmány, 1963) Pantacordis scotinella (Rebel, 1916) Symmoca attalica Gozmány, 1957 Symmoca christenseni Gozmány, 1982 Symmoca signatella Herrich-Schäffer, 1854 Symmoca signella (Hübner, 1796) Symmoca sutteri Gozmány, 2000 Symmoca vitiosella Zeller, 1868 Syringopais temperatella (Lederer, 1855) Batrachedridae Batrachedra parvulipunctella Chrétien, 1915 Bedelliidae Bedellia somnulentella (Zeller, 1847) Blastobasidae Blastobasis phycidella (Zeller, 1839) Tecmerium perplexum (Gozmány, 1957) Brachodidae Brachodes beryti (Stainton, 1867) Brachodes compar (Staudinger, 1879) Brachodes nana (Treitschke, 1834) Brachodes pumila (Ochsenheimer, 1808) Brachodes tristis (Staudinger, 1879) Brahmaeidae Lemonia balcanica (Herrich-Schäffer, 1847) Lemonia dumi (Linnaeus, 1761) Lemonia taraxaci (Denis & Schiffermüller, 1775) Bucculatricidae Bucculatrix albedinella (Zeller, 1839) Bucculatrix albella Stainton, 1867 Bucculatrix cretica Deschka, 1991 Bucculatrix infans Staudinger, 1880 Bucculatrix phagnalella Walsingham, 1908 Bucculatrix ulmella Zeller, 1848 Carposinidae Carposina scirrhosella Herrich-Schäffer, 1854 Choreutidae Anthophila fabriciana (Linnaeus, 1767) Choreutis nemorana (Hübner, 1799) Choreutis pariana (Clerck, 1759) Prochoreutis myllerana (Fabricius, 1794) Prochoreutis stellaris (Zeller, 1847) Tebenna micalis (Mann, 1857) Tebenna pretiosana (Duponchel, 1842) Cimeliidae Axia nesiota Reisser, 1962 Coleophoridae Augasma aeratella (Zeller, 1839) Coleophora achilleae Baldizzone, 2001 Coleophora acrisella Milliere, 1872 Coleophora adjectella Hering, 1937 Coleophora adspersella Benander, 1939 Coleophora aestuariella Bradley, 1984 Coleophora afrosarda Baldizzone & Kaltenbach, 1983 Coleophora alashiae Baldizzone, 1996 Coleophora albella (Thunberg, 1788) Coleophora albicostella (Duponchel, 1842) Coleophora albidella (Denis & Schiffermüller, 1775) Coleophora albilineella Toll, 1960 Coleophora alcyonipennella (Kollar, 1832) Coleophora aleramica Baldizzone & Stubner, 2007 
Coleophora alticolella Zeller, 1849 Coleophora amethystinella Ragonot, 1855 Coleophora anatipenella (Hübner, 1796) Coleophora asteris Muhlig, 1864 Coleophora badiipennella (Duponchel, 1843) Coleophora ballotella (Fischer v. Röslerstamm, 1839) Coleophora basimaculella Mann, 1864 Coleophora bilineatella Zeller, 1849 Coleophora bilineella Herrich-Schäffer, 1855 Coleophora breviuscula Staudinger, 1880 Coleophora calycotomella Stainton, 1869 Coleophora chamaedriella Bruand, 1852 Coleophora christenseni Baldizzone, 1983 Coleophora cnossiaca Baldizzone, 1983 Coleophora coarctataephaga Toll, 1961 Coleophora colutella (Fabricius, 1794) Coleophora congeriella Staudinger, 1859 Coleophora conspicuella Zeller, 1849 Coleophora conyzae Zeller, 1868 Coleophora corsicella Walsingham, 1898 Coleophora coxi Baldizzone & van der Wolf, 2007 Coleophora crepidinella Zeller, 1847 Coleophora cuprariella Lienig & Zeller, 1864 Coleophora currucipennella Zeller, 1839 Coleophora deauratella Lienig & Zeller, 1846 Coleophora dentiferella Toll, 1952 Coleophora depunctella Toll, 1961 Coleophora deviella Zeller, 1847 Coleophora dianthi Herrich-Schäffer, 1855 Coleophora dignella Toll, 1961 Coleophora discordella Zeller, 1849 Coleophora drymidis Mann, 1857 Coleophora epijudaica Amsel, 1935 Coleophora eupepla Gozmány, 1954 Coleophora filaginella Fuchs, 1881 Coleophora flaviella Mann, 1857 Coleophora follicularis (Vallot, 1802) Coleophora fretella Zeller, 1847 Coleophora frischella (Linnaeus, 1758) Coleophora galbulipennella Zeller, 1838 Coleophora genistae Stainton, 1857 Coleophora graeca Baldizzone, 1990 Coleophora granulatella Zeller, 1849 Coleophora halophilella Zimmermann, 1926 Coleophora hartigi Toll, 1944 Coleophora helianthemella Milliere, 1870 Coleophora helichrysiella Krone, 1909 Coleophora hemerobiella (Scopoli, 1763) Coleophora hospitiella Chrétien, 1915 Coleophora ibipennella Zeller, 1849 Coleophora jerusalemella Toll, 1942 Coleophora juncicolella Stainton, 1851 Coleophora kautzi Rebel, 1933 Coleophora klimeschiella Toll, 1952 Coleophora kroneella Fuchs, 1899 Coleophora laconiae Baldizzone, 1983 Coleophora lassella Staudinger, 1859 Coleophora lebedella Falkovitsh, 1982 Coleophora limosipennella (Duponchel, 1843) Coleophora lineolea (Haworth, 1828) Coleophora longicornella Constant, 1893 Coleophora luteolella Staudinger, 1880 Coleophora lutipennella (Zeller, 1838) Coleophora maritimella Newman, 1863 Coleophora mausolella Chrétien, 1908 Coleophora mayrella (Hübner, 1813) Coleophora medelichensis Krone, 1908 Coleophora meridionella Rebel, 1912 Coleophora minoica Baldizzone, 1983 Coleophora nesiotidella Baldizzone & v.d. 
Wolf, 2000 Coleophora nigridorsella Amsel, 1935 Coleophora nikiella Baldizzone, 1983 Coleophora niveicostella Zeller, 1839 Coleophora nutantella Muhlig & Frey, 1857 Coleophora obtectella Zeller, 1849 Coleophora ochrea (Haworth, 1828) Coleophora ochripennella Zeller, 1849 Coleophora ochroflava Toll, 1961 Coleophora olympica Baldizzone, 1983 Coleophora onobrychiella Zeller, 1849 Coleophora ononidella Milliere, 1879 Coleophora onopordiella Zeller, 1849 Coleophora oriolella Zeller, 1849 Coleophora ornatipennella (Hübner, 1796) Coleophora paramayrella Nel, 1993 Coleophora parthenica Meyrick, 1891 Coleophora parvicuprella Baldizzone & Tabell, 2006 Coleophora patzaki Baldizzone, 1983 Coleophora pennella (Denis & Schiffermüller, 1775) Coleophora peribenanderi Toll, 1943 Coleophora pseudodianthi Baldizzone & Tabell, 2006 Coleophora pyrrhulipennella Zeller, 1839 Coleophora quadristraminella Toll, 1961 Coleophora qulikushella Toll, 1959 Coleophora salicorniae Heinemann & Wocke, 1877 Coleophora saxicolella (Duponchel, 1843) Coleophora semicinerea Staudinger, 1859 Coleophora serinipennella Christoph, 1872 Coleophora serpylletorum Hering, 1889 Coleophora serratulella Herrich-Schäffer, 1855 Coleophora soffneriella Toll, 1961 Coleophora spartana Baldizzone, 2010 Coleophora stramentella Zeller, 1849 Coleophora taeniipennella Herrich-Schäffer, 1855 Coleophora tamesis Waters, 1929 Coleophora taurica Baldizzone, 1994 Coleophora tauricella Staudinger, 1880 Coleophora taygeti Baldizzone, 1983 Coleophora therinella Tengstrom, 1848 Coleophora thymi Hering, 1942 Coleophora tricolor Walsingham, 1889 Coleophora trifolii (Curtis, 1832) Coleophora tyrrhaenica Amsel, 1951 Coleophora valesianella Zeller, 1849 Coleophora variicornis Toll, 1952 Coleophora versurella Zeller, 1849 Coleophora vicinella Zeller, 1849 Coleophora virgatella Zeller, 1849 Coleophora zelleriella Heinemann, 1854 Goniodoma auroguttella (Fischer v. Röslerstamm, 1841) Goniodoma limoniella (Stainton, 1884) Goniodoma nemesi Capuse, 1970 Cosmopterigidae Alloclita recisella Staudinger, 1859 Anatrachyntis badia (Hodges, 1962) Ascalenia vanella (Frey, 1860) Coccidiphila gerasimovi Danilevsky, 1950 Cosmopterix athesiae Huemer & Koster, 2006 Cosmopterix coryphaea Walsingham, 1908 Cosmopterix crassicervicella Chrétien, 1896 Cosmopterix lienigiella Zeller, 1846 Cosmopterix pararufella Riedl, 1976 Cosmopterix pulchrimella Chambers, 1875 Eteobalea albiapicella (Duponchel, 1843) Eteobalea anonymella (Riedl, 1965) Eteobalea dohrnii (Zeller, 1847) Eteobalea intermediella (Riedl, 1966) Eteobalea isabellella (O. G. 
Costa, 1836) Eteobalea serratella (Treitschke, 1833) Eteobalea sumptuosella (Lederer, 1855) Hodgesiella rebeli (Krone, 1905) Limnaecia phragmitella Stainton, 1851 Pancalia leuwenhoekella (Linnaeus, 1761) Pancalia nodosella (Bruand, 1851) Pancalia schwarzella (Fabricius, 1798) Pyroderces argyrogrammos (Zeller, 1847) Pyroderces caesaris Gozmány, 1957 Ramphis libanoticus Riedl, 1969 Sorhagenia lophyrella (Douglas, 1846) Sorhagenia reconditella Riedl, 1983 Vulcaniella cognatella Riedl, 1990 Vulcaniella grabowiella (Staudinger, 1859) Vulcaniella grandiferella Sinev, 1986 Vulcaniella klimeschi (Riedl, 1966) Vulcaniella pomposella (Zeller, 1839) Vulcaniella rosmarinella (Walsingham, 1891) Cossidae Acossus terebra (Denis & Schiffermüller, 1775) Cossus cossus (Linnaeus, 1758) Danielostygia persephone Reisser, 1962 Dyspessa aphrodite Yakovlev & Witt, 2007 Dyspessa salicicola (Eversmann, 1848) Dyspessa ulula (Borkhausen, 1790) Parahypopta caestrum (Hübner, 1808) Paropta paradoxus (Herrich-Schäffer, 1851) Phragmacossia albida (Erschoff, 1874) Phragmacossia minos Reisser, 1962 Phragmataecia castaneae (Hübner, 1790) Stygia mosulensis Daniel, 1965 Stygoides colchica (Herrich-Schäffer, 1851) Zeuzera pyrina (Linnaeus, 1761) Crambidae Achyra nudalis (Hübner, 1796) Aeschremon disparalis (Herrich-Schäffer, 1851) Agriphila beieri Błeszyński, 1953 Agriphila brioniellus (Zerny, 1914) Agriphila cyrenaicellus (Ragonot, 1887) Agriphila dalmatinellus (Hampson, 1900) Agriphila geniculea (Haworth, 1811) Agriphila indivisellus (Turati & Zanon, 1922) Agriphila inquinatella (Denis & Schiffermüller, 1775) Agriphila latistria (Haworth, 1811) Agriphila paleatellus (Zeller, 1847) Agriphila selasella (Hübner, 1813) Agriphila tersellus (Lederer, 1855) Agriphila tolli (Błeszyński, 1952) Agriphila trabeatellus (Herrich-Schäffer, 1848) Agriphila tristella (Denis & Schiffermüller, 1775) Agrotera nemoralis (Scopoli, 1763) Anania coronata (Hufnagel, 1767) Anania crocealis (Hübner, 1796) Anania funebris (Strom, 1768) Anania hortulata (Linnaeus, 1758) Anania lancealis (Denis & Schiffermüller, 1775) Anania stachydalis (Germar, 1821) Anania testacealis (Zeller, 1847) Anania verbascalis (Denis & Schiffermüller, 1775) Anarpia incertalis (Duponchel, 1832) Ancylolomia disparalis Hübner, 1825 Ancylolomia palpella (Denis & Schiffermüller, 1775) Ancylolomia pectinatellus (Zeller, 1847) Ancylolomia tentaculella (Hübner, 1796) Angustalius malacellus (Duponchel, 1836) Anthophilopsis baphialis (Staudinger, 1871) Antigastra catalaunalis (Duponchel, 1833) Aporodes floralis (Hübner, 1809) Calamotropha aureliellus (Fischer v. 
Röslerstamm, 1841) Calamotropha hackeri Ganev, 1985 Calamotropha hierichuntica Zeller, 1867 Calamotropha paludella (Hübner, 1824) Cataclysta lemnata (Linnaeus, 1758) Cataonia erubescens (Christoph, 1877) Catoptria acutangulellus (Herrich-Schäffer, 1847) Catoptria casalei Bassi, 1999 Catoptria confusellus (Staudinger, 1882) Catoptria dimorphellus (Staudinger, 1882) Catoptria falsella (Denis & Schiffermüller, 1775) Catoptria fibigeri Ganev, 1987 Catoptria fulgidella (Hübner, 1813) Catoptria gozmanyi Błeszyński, 1956 Catoptria languidellus (Zeller, 1863) Catoptria margaritella (Denis & Schiffermüller, 1775) Catoptria myella (Hübner, 1796) Catoptria mytilella (Hübner, 1805) Catoptria olympica Ganev, 1983 Catoptria pinella (Linnaeus, 1758) Chilo luteellus (Motschulsky, 1866) Cholius luteolaris (Scopoli, 1772) Chrysocrambus craterella (Scopoli, 1763) Chrysocrambus linetella (Fabricius, 1781) Chrysoteuchia culmella (Linnaeus, 1758) Cornifrons ulceratalis Lederer, 1858 Crambus lathoniellus (Zincken, 1817) Crambus pascuella (Linnaeus, 1758) Crambus perlella (Scopoli, 1763) Crambus pratella (Linnaeus, 1758) Crambus uliginosellus Zeller, 1850 Cybalomia pentadalis (Lederer, 1855) Cynaeda dentalis (Denis & Schiffermüller, 1775) Cynaeda gigantea (Wocke, 1871) Dentifovea fulvifascialis (Christoph, 1887) Diasemia reticularis (Linnaeus, 1761) Diasemiopsis ramburialis (Duponchel, 1834) Dolicharthria bruguieralis (Duponchel, 1833) Dolicharthria metasialis (Rebel, 1916) Dolicharthria punctalis (Denis & Schiffermüller, 1775) Dolicharthria stigmosalis (Herrich-Schäffer, 1848) Donacaula forficella (Thunberg, 1794) Donacaula mucronella (Denis & Schiffermüller, 1775) Donacaula niloticus (Zeller, 1867) Duponchelia fovealis Zeller, 1847 Ecpyrrhorrhoe diffusalis (Guenée, 1854) Ecpyrrhorrhoe rubiginalis (Hübner, 1796) Elophila nymphaeata (Linnaeus, 1758) Elophila rivulalis (Duponchel, 1834) Epascestria pustulalis (Hübner, 1823) Ephelis cruentalis (Geyer, 1832) Euchromius bella (Hübner, 1796) Euchromius bleszynskiellus Popescu-Gorj, 1964 Euchromius ocellea (Haworth, 1811) Euchromius rayatellus (Amsel, 1949) Euchromius superbellus (Zeller, 1849) Euchromius vinculellus (Zeller, 1847) Euclasta splendidalis (Herrich-Schäffer, 1848) Eudonia angustea (Curtis, 1827) Eudonia delunella (Stainton, 1849) Eudonia lacustrata (Panzer, 1804) Eudonia laetella (Zeller, 1846) Eudonia mercurella (Linnaeus, 1758) Eudonia murana (Curtis, 1827) Eudonia phaeoleuca (Zeller, 1846) Eudonia speideli Leraut, 1982 Eurrhypis cacuminalis (Eversmann, 1843) Eurrhypis guttulalis (Herrich-Schäffer, 1848) Eurrhypis pollinalis (Denis & Schiffermüller, 1775) Evergestis aenealis (Denis & Schiffermüller, 1775) Evergestis caesialis (Herrich-Schäffer, 1849) Evergestis desertalis (Hübner, 1813) Evergestis extimalis (Scopoli, 1763) Evergestis frumentalis (Linnaeus, 1761) Evergestis infirmalis (Staudinger, 1871) Evergestis isatidalis (Duponchel, 1833) Evergestis limbata (Linnaeus, 1767) Evergestis mundalis (Guenée, 1854) Evergestis nomadalis (Lederer, 1871) Evergestis serratalis (Staudinger, 1871) Evergestis sophialis (Fabricius, 1787) Evergestis subfuscalis (Staudinger, 1871) Glaucocharis euchromiella (Ragonot, 1895) Heliothela wulfeniana (Scopoli, 1763) Hellula undalis (Fabricius, 1781) Hodebertia testalis (Fabricius, 1794) Hydriris ornatalis (Duponchel, 1832) Hyperlais argillacealis (Zeller, 1847) Hyperlais dulcinalis (Treitschke, 1835) Hyperlais nemausalis (Duponchel, 1834) Loxostege aeruginalis (Hübner, 1796) Loxostege deliblatica Szent-Ivany & 
Uhrik-Meszaros, 1942 Loxostege manualis (Geyer, 1832) Loxostege sticticalis (Linnaeus, 1761) Loxostege turbidalis (Treitschke, 1829) Mecyna asinalis (Hübner, 1819) Mecyna flavalis (Denis & Schiffermüller, 1775) Mecyna lutealis (Duponchel, 1833) Mecyna subsequalis (Herrich-Schäffer, 1851) Mecyna trinalis (Denis & Schiffermüller, 1775) Mesocrambus candiellus (Herrich-Schäffer, 1848) Metacrambus carectellus (Zeller, 1847) Metaeuchromius lata (Staudinger, 1870) Metasia carnealis (Treitschke, 1829) Metasia ophialis (Treitschke, 1829) Metasia rosealis Ragonot, 1895 Metasia suppandalis (Hübner, 1823) Metaxmeste phrygialis (Hübner, 1796) Metaxmeste schrankiana (Hochenwarth, 1785) Neocrambus wolfschlaegeri (Schawerda, 1937) Nomophila noctuella (Denis & Schiffermüller, 1775) Ostrinia nubilalis (Hübner, 1796) Palpita vitrealis (Rossi, 1794) Paracorsia repandalis (Denis & Schiffermüller, 1775) Parapoynx stagnalis (Zeller, 1852) Parapoynx stratiotata (Linnaeus, 1758) Paratalanta hyalinalis (Hübner, 1796) Pediasia contaminella (Hübner, 1796) Pediasia fascelinella (Hübner, 1813) Pediasia jucundellus (Herrich-Schäffer, 1847) Pediasia luteella (Denis & Schiffermüller, 1775) Pediasia matricella (Treitschke, 1832) Platytes cerussella (Denis & Schiffermüller, 1775) Pleuroptya balteata (Fabricius, 1798) Pleuroptya ruralis (Scopoli, 1763) Psammotis pulveralis (Hübner, 1796) Pyrausta aerealis (Hübner, 1793) Pyrausta aurata (Scopoli, 1763) Pyrausta castalis Treitschke, 1829 Pyrausta cingulata (Linnaeus, 1758) Pyrausta despicata (Scopoli, 1763) Pyrausta obfuscata (Scopoli, 1763) Pyrausta purpuralis (Linnaeus, 1758) Pyrausta sanguinalis (Linnaeus, 1767) Pyrausta trimaculalis (Staudinger, 1867) Pyrausta virginalis Duponchel, 1832 Scirpophaga praelata (Scopoli, 1763) Scoparia ambigualis (Treitschke, 1829) Scoparia basistrigalis Knaggs, 1866 Scoparia dicteella Rebel, 1916 Scoparia ganevi Leraut, 1985 Scoparia graeca Nuss, 2005 Scoparia ingratella (Zeller, 1846) Scoparia manifestella (Herrich-Schäffer, 1848) Scoparia perplexella (Zeller, 1839) Scoparia pyralella (Denis & Schiffermüller, 1775) Scoparia staudingeralis (Mabille, 1869) Scoparia subfusca Haworth, 1811 Sitochroa palealis (Denis & Schiffermüller, 1775) Sitochroa verticalis (Linnaeus, 1758) Tegostoma comparalis (Hübner, 1796) Thisanotia chrysonuchella (Scopoli, 1763) Thyridiphora furia (Swinhoe, 1884) Titanio normalis (Hübner, 1796) Titanio venustalis (Lederer, 1855) Udea austriacalis (Herrich-Schäffer, 1851) Udea bipunctalis (Herrich-Schäffer, 1851) Udea confinalis (Lederer, 1858) Udea ferrugalis (Hübner, 1796) Udea fimbriatralis (Duponchel, 1834) Udea fulvalis (Hübner, 1809) Udea institalis (Hübner, 1819) Udea languidalis (Eversmann, 1842) Udea numeralis (Hübner, 1796) Udea olivalis (Denis & Schiffermüller, 1775) Udea prunalis (Denis & Schiffermüller, 1775) Udea rhododendronalis (Duponchel, 1834) Uresiphita gilvata (Fabricius, 1794) Usgentia vespertalis (Herrich-Schäffer, 1851) Xanthocrambus saxonellus (Zincken, 1821) Douglasiidae Klimeschia cinereipunctella (Turati & Fiori, 1930) Klimeschia transversella (Zeller, 1839) Tinagma anchusella (Benander, 1936) Tinagma klimeschi Gaedike, 1987 Tinagma ocnerostomella (Stainton, 1850) Drepanidae Asphalia ruficollis (Denis & Schiffermüller, 1775) Cilix asiatica O. 
Bang-Haas, 1907 Cilix glaucata (Scopoli, 1763) Cymatophorina diluta (Denis & Schiffermüller, 1775) Drepana falcataria (Linnaeus, 1758) Falcaria lacertinaria (Linnaeus, 1758) Habrosyne pyritoides (Hufnagel, 1766) Sabra harpagula (Esper, 1786) Tethea ocularis (Linnaeus, 1767) Tethea or (Denis & Schiffermüller, 1775) Thyatira batis (Linnaeus, 1758) Watsonalla binaria (Hufnagel, 1767) Watsonalla cultraria (Fabricius, 1775) Watsonalla uncinula (Borkhausen, 1790) Elachistidae Agonopterix adspersella (Kollar, 1832) Agonopterix alstromeriana (Clerck, 1759) Agonopterix arenella (Denis & Schiffermüller, 1775) Agonopterix assimilella (Treitschke, 1832) Agonopterix atomella (Denis & Schiffermüller, 1775) Agonopterix cnicella (Treitschke, 1832) Agonopterix comitella (Lederer, 1855) Agonopterix furvella (Treitschke, 1832) Agonopterix graecella Hannemann, 1976 Agonopterix inoxiella Hannemann, 1959 Agonopterix irrorata (Staudinger, 1870) Agonopterix leucadensis (Rebel, 1932) Agonopterix nanatella (Stainton, 1849) Agonopterix nervosa (Haworth, 1811) Agonopterix pallorella (Zeller, 1839) Agonopterix propinquella (Treitschke, 1835) Agonopterix purpurea (Haworth, 1811) Agonopterix rotundella (Douglas, 1846) Agonopterix rutana (Fabricius, 1794) Agonopterix scopariella (Heinemann, 1870) Agonopterix straminella (Staudinger, 1859) Agonopterix subpropinquella (Stainton, 1849) Agonopterix thapsiella (Zeller, 1847) Agonopterix yeatiana (Fabricius, 1781) Anchinia laureolella Herrich-Schäffer, 1854 Blastodacna atra (Haworth, 1828) Blastodacna hellerella (Duponchel, 1838) Blastodacna vinolentella (Herrich-Schäffer, 1854) Cacochroa corfuella Lvovsky, 2000 Cacochroa permixtella (Herrich-Schäffer, 1854) Depressaria absynthiella Herrich-Schäffer, 1865 Depressaria albipunctella (Denis & Schiffermüller, 1775) Depressaria badiella (Hübner, 1796) Depressaria beckmanni Heinemann, 1870 Depressaria chaerophylli Zeller, 1839 Depressaria daucella (Denis & Schiffermüller, 1775) Depressaria depressana (Fabricius, 1775) Depressaria discipunctella Herrich-Schäffer, 1854 Depressaria douglasella Stainton, 1849 Depressaria floridella Mann, 1864 Depressaria hofmanni Stainton, 1861 Depressaria marcella Rebel, 1901 Depressaria tenebricosa Zeller, 1854 Depressaria ultimella Stainton, 1849 Depressaria velox Staudinger, 1859 Depressaria veneficella Zeller, 1847 Depressaria hirtipalpis Zeller, 1854 Dystebenna stephensi (Stainton, 1849) Elachista antonia Kaila, 2007 Elachista atrisquamosa Staudinger, 1880 Elachista catalana Parenti, 1978 Elachista dalmatiensis Traugott-Olsen, 1992 Elachista deceptricula Staudinger, 1880 Elachista gangabella Zeller, 1850 Elachista graeca Parenti, 2002 Elachista grotenfelti Kaila, 2012 Elachista modesta Parenti, 1978 Elachista neapolisella Traugott-Olsen, 1985 Elachista nedaella Traugott-Olsen, 1985 Elachista nuraghella Amsel, 1951 Elachista occulta Parenti, 1978 Elachista pollutella Duponchel, 1843 Elachista rudectella Stainton, 1851 Elachista skulei Traugott-Olsen, 1992 Elachista subalbidella Schlager, 1847 Elachista sutteri Kaila, 2002 Elachista anatoliensis Traugott-Olsen, 1990 Elachista kalki Parenti, 1978 Elachista christenseni Traugott-Olsen, 2000 Elachista falirakiensis Traugott-Olsen, 2000 Elachista gleichenella (Fabricius, 1781) Elachista helia Kaila & Sruoga, 2014 Elachista infuscata Frey, 1882 Elachista kosteri Traugott-Olsen, 1995 Elachista martinii O. 
Hofmann, 1898 Elachista minuta (Parenti, 2003) Elachista occidentalis Frey, 1882 Elachista pigerella (Herrich-Schäffer, 1854) Elachista rufocinerea (Haworth, 1828) Ethmia aurifluella (Hübner, 1810) Ethmia bipunctella (Fabricius, 1775) Ethmia candidella (Alphéraky, 1908) Ethmia chrysopyga (Zeller, 1844) Ethmia distigmatella (Erschoff, 1874) Ethmia fumidella (Wocke, 1850) Ethmia haemorrhoidella (Eversmann, 1844) Ethmia iranella Zerny, 1940 Ethmia mariannae Karsholt & Kun, 2003 Ethmia pusiella (Linnaeus, 1758) Ethmia quadrinotella (Mann, 1861) Ethmia terminella T. B. Fletcher, 1938 Exaeretia conciliatella (Rebel, 1892) Exaeretia nigromaculata Hannemann, 1989 Haplochrois albanica (Rebel & Zerny, 1932) Haplochrois gelechiella (Rebel, 1902) Haplochrois ochraceella (Rebel, 1903) Heinemannia festivella (Denis & Schiffermüller, 1775) Hypercallia citrinalis (Scopoli, 1763) Luquetia orientella (Rebel, 1893) Orophia sordidella (Hübner, 1796) Perittia echiella (de Joannis, 1902) Perittia minitaurella Kaila, 2009 Perittia mucronata (Parenti, 2001) Perittia ravida Kaila, 2009 Stephensia staudingeri Nielsen & Traugott-Olsen, 1981 Epermeniidae Epermenia aequidentellus (E. Hofmann, 1867) Epermenia chaerophyllella (Goeze, 1783) Epermenia insecurella (Stainton, 1854) Epermenia petrusellus (Heylaerts, 1883) Epermenia strictellus (Wocke, 1867) Epermenia iniquellus (Wocke, 1867) Epermenia ochreomaculellus (Milliere, 1854) Epermenia pontificella (Hübner, 1796) Ochromolopis ictella (Hübner, 1813) Ochromolopis staintonellus (Milliere, 1869) Erebidae Acantholipes regularis (Hübner, 1813) Amata kruegeri (Ragusa, 1904) Amata phegea (Linnaeus, 1758) Apopestes spectrum (Esper, 1787) Araeopteron ecphaea Hampson, 1914 Arctia caja (Linnaeus, 1758) Arctia festiva (Hufnagel, 1766) Arctia villica (Linnaeus, 1758) Arctornis l-nigrum (Muller, 1764) Autophila asiatica (Staudinger, 1888) Autophila banghaasi Boursin, 1940 Autophila dilucida (Hübner, 1808) Autophila libanotica (Staudinger, 1901) Autophila limbata (Staudinger, 1871) Autophila anaphanes Boursin, 1940 Autophila ligaminosa (Eversmann, 1851) Callimorpha dominula (Linnaeus, 1758) Calliteara pudibunda (Linnaeus, 1758) Calymma communimacula (Denis & Schiffermüller, 1775) Calyptra thalictri (Borkhausen, 1790) Catephia alchymista (Denis & Schiffermüller, 1775) Catocala brandti Hacker, 1999 Catocala coniuncta (Esper, 1787) Catocala conversa (Esper, 1783) Catocala dilecta (Hübner, 1808) Catocala disjuncta (Geyer, 1828) Catocala diversa (Geyer, 1828) Catocala electa (Vieweg, 1790) Catocala elocata (Esper, 1787) Catocala eutychea Treitschke, 1835 Catocala hymenaea (Denis & Schiffermüller, 1775) Catocala lupina Herrich-Schäffer, 1851 Catocala nupta (Linnaeus, 1767) Catocala nymphaea (Esper, 1787) Catocala nymphagoga (Esper, 1787) Catocala promissa (Denis & Schiffermüller, 1775) Catocala puerpera (Giorna, 1791) Catocala separata Freyer, 1848 Catocala sponsa (Linnaeus, 1767) Chelis maculosa (Gerning, 1780) Clytie syriaca (Bugnion, 1837) Colobochyla salicalis (Denis & Schiffermüller, 1775) Coscinia striata (Linnaeus, 1758) Cybosia mesomella (Linnaeus, 1758) Cymbalophora pudica (Esper, 1785) Cymbalophora rivularis (Menetries, 1832) Diacrisia sannio (Linnaeus, 1758) Diaphora luctuosa (Hübner, 1831) Diaphora mendica (Clerck, 1759) Dicallomera fascelina (Linnaeus, 1758) Drasteria cailino (Lefebvre, 1827) Dysauxes ancilla (Linnaeus, 1767) Dysauxes famula (Freyer, 1836) Dysauxes punctata (Fabricius, 1781) Dysgonia algira (Linnaeus, 1767) Dysgonia torrida (Guenée, 1852) Eilema caniola 
(Hübner, 1808) Eilema complana (Linnaeus, 1758) Eilema costalis (Zeller, 1847) Eilema depressa (Esper, 1787) Eilema lurideola (Zincken, 1817) Eilema muscula (Staudinger, 1899) Eilema palliatella (Scopoli, 1763) Eilema pseudocomplana (Daniel, 1939) Eilema pygmaeola (Doubleday, 1847) Eilema rungsi Toulgoët, 1960 Eilema sororcula (Hufnagel, 1766) Eublemma amoena (Hübner, 1803) Eublemma candidana (Fabricius, 1794) Eublemma cochylioides (Guenée, 1852) Eublemma minutata (Fabricius, 1794) Eublemma ochreola (Staudinger, 1900) Eublemma ostrina (Hübner, 1808) Eublemma panonica (Freyer, 1840) Eublemma parva (Hübner, 1808) Eublemma polygramma (Duponchel, 1842) Eublemma pudorina (Staudinger, 1889) Eublemma purpurina (Denis & Schiffermüller, 1775) Eublemma rosea (Hübner, 1790) Eublemma scitula Rambur, 1833 Eublemma straminea (Staudinger, 1892) Eublemma viridula (Guenée, 1841) Eublemma zillii Fibiger, Ronkay & Yela, 2010 Euclidia mi (Clerck, 1759) Euclidia glyphica (Linnaeus, 1758) Euclidia triquetra (Denis & Schiffermüller, 1775) Euplagia quadripunctaria (Poda, 1761) Euproctis chrysorrhoea (Linnaeus, 1758) Euproctis similis (Fuessly, 1775) Exophyla rectangularis (Geyer, 1828) Grammodes bifasciata (Petagna, 1787) Grammodes stolida (Fabricius, 1775) Herminia tarsicrinalis (Knoch, 1782) Honeyania ragusana (Freyer, 1844) Hypena lividalis (Hübner, 1796) Hypena munitalis Mann, 1861 Hypena obesalis Treitschke, 1829 Hypena obsitalis (Hübner, 1813) Hypena palpalis (Hübner, 1796) Hypena proboscidalis (Linnaeus, 1758) Hypena rostralis (Linnaeus, 1758) Hypenodes anatolica Schwingenschuss, 1938 Hypenodes nesiota Rebel, 1916 Hyphantria cunea (Drury, 1773) Idia calvaria (Denis & Schiffermüller, 1775) Laelia coenosa (Hübner, 1808) Laspeyria flexula (Denis & Schiffermüller, 1775) Leucoma salicis (Linnaeus, 1758) Lithosia quadra (Linnaeus, 1758) Lygephila amasina (Staudinger, 1878) Lygephila craccae (Denis & Schiffermüller, 1775) Lygephila lusoria (Linnaeus, 1758) Lygephila procax (Hübner, 1813) Lygephila viciae (Hübner, 1822) Lymantria dispar (Linnaeus, 1758) Lymantria monacha (Linnaeus, 1758) Macrochilo cribrumalis (Hübner, 1793) Metachrostis dardouini (Boisduval, 1840) Metachrostis velocior (Staudinger, 1892) Metachrostis velox (Hübner, 1813) Micronoctua karsholti Fibiger, 1997 Miltochrista miniata (Forster, 1771) Minucia lunaris (Denis & Schiffermüller, 1775) Nodaria nodosalis (Herrich-Schäffer, 1851) Ocneria eos Reisser, 1962 Ocneria ledereri (Milliere, 1869) Ocneria rubea (Denis & Schiffermüller, 1775) Ocnogyna loewii (Zeller, 1846) Ocnogyna parasita (Hübner, 1790) Odice arcuinna (Hübner, 1790) Odice suava (Hübner, 1813) Ophiusa tirhaca (Cramer, 1773) Orectis massiliensis (Milliere, 1864) Orectis proboscidata (Herrich-Schäffer, 1851) Orgyia antiqua (Linnaeus, 1758) Paidia cinerascens (Herrich-Schäffer, 1847) Paidia minoica de Freina, 2006 Paidia rica (Freyer, 1858) Pandesma robusta (Walker, 1858) Paracolax tristalis (Fabricius, 1794) Parascotia detersa (Staudinger, 1891) Parascotia fuliginaria (Linnaeus, 1761) Parasemia plantaginis (Linnaeus, 1758) Parocneria detrita (Esper, 1785) Parocneria terebinthi (Freyer, 1838) Pechipogo plumigeralis Hübner, 1825 Pelosia muscerda (Hufnagel, 1766) Pelosia obtusa (Herrich-Schäffer, 1852) Pericyma albidentaria (Freyer, 1842) Phragmatobia fuliginosa (Linnaeus, 1758) Phragmatobia placida (Frivaldszky, 1835) Phytometra viridaria (Clerck, 1759) Polypogon tentacularia (Linnaeus, 1758) Raparna conicephala (Staudinger, 1870) Rhypagla lacernaria (Hübner, 1813) Rhyparia purpurata 
(Linnaeus, 1758) Rivula sericealis (Scopoli, 1763) Rivula tanitalis Rebel, 1912 Schrankia costaestrigalis (Stephens, 1834) Scoliopteryx libatrix (Linnaeus, 1758) Setina irrorella (Linnaeus, 1758) Simplicia rectalis (Eversmann, 1842) Spilosoma lubricipeda (Linnaeus, 1758) Spilosoma lutea (Hufnagel, 1766) Spilosoma urticae (Esper, 1789) Tathorhynchus exsiccata (Lederer, 1855) Tyria jacobaeae (Linnaeus, 1758) Utetheisa pulchella (Linnaeus, 1758) Watsonarctia deserta (Bartel, 1902) Zanclognatha lunalis (Scopoli, 1763) Zanclognatha zelleralis (Wocke, 1850) Zebeeba falsalis (Herrich-Schäffer, 1839) Zekelita ravalis (Herrich-Schäffer, 1851) Zekelita antiqualis (Hübner, 1809) Zethes insularis Rambur, 1833 Eriocraniidae Dyseriocrania subpurpurella (Haworth, 1828) Euteliidae Eutelia adoratrix (Staudinger, 1892) Eutelia adulatrix (Hübner, 1813) Gelechiidae Acompsia cinerella (Clerck, 1759) Acompsia ponomarenkoae Huemer & Karsholt, 2002 Agnippe lunaki (Rebel, 1941) Altenia elsneriella Huemer & Karsholt, 1999 Altenia modesta (Danilevsky, 1955) Altenia scriptella (Hübner, 1796) Altenia wagneriella (Rebel, 1926) Anacampsis malella Amsel, 1959 Anacampsis obscurella (Denis & Schiffermüller, 1775) Anacampsis scintillella (Fischer von Röslerstamm, 1841) Anacampsis timidella (Wocke, 1887) Anarsia lineatella Zeller, 1839 Anarsia spartiella (Schrank, 1802) Apodia bifractella (Duponchel, 1843) Aproaerema anthyllidella (Hübner, 1813) Aristotelia brizella (Treitschke, 1833) Aristotelia decurtella (Hübner, 1813) Aristotelia ericinella (Zeller, 1839) Aristotelia subericinella (Duponchel, 1843) Aroga aristotelis (Milliere, 1876) Aroga balcanicola Huemer & Karsholt, 1999 Aroga velocella (Duponchel, 1838) Athrips amoenella (Frey, 1882) Athrips rancidella (Herrich-Schäffer, 1854) Atremaea lonchoptera Staudinger, 1871 Brachmia blandella (Fabricius, 1798) Bryotropha affinis (Haworth, 1828) Bryotropha arabica Amsel, 1952 Bryotropha azovica Bidzilia, 1997 Bryotropha desertella (Douglas, 1850) Bryotropha domestica (Haworth, 1828) Bryotropha dryadella (Zeller, 1850) Bryotropha figulella (Staudinger, 1859) Bryotropha hendrikseni Karsholt & Rutten, 2005 Bryotropha hulli Karsholt & Rutten, 2005 Bryotropha plebejella (Zeller, 1847) Bryotropha sabulosella (Rebel, 1905) Bryotropha sattleri Nel, 2003 Bryotropha senectella (Zeller, 1839) Bryotropha sutteri Karsholt & Rutten, 2005 Bryotropha tachyptilella (Rebel, 1916) Bryotropha terrella (Denis & Schiffermüller, 1775) Carpatolechia aenigma (Sattler, 1983) Carpatolechia decorella (Haworth, 1812) Carpatolechia fugitivella (Zeller, 1839) Caryocolum alsinella (Zeller, 1868) Caryocolum amaurella (M. 
Hering, 1924) Caryocolum baischi Huemer & Karsholt, 2010 Caryocolum blandella (Douglas, 1852) Caryocolum blandelloides Karsholt, 1981 Caryocolum blandulella (Tutt, 1887) Caryocolum cauligenella (Schmid, 1863) Caryocolum confluens Huemer, 1988 Caryocolum crypticum Huemer, Karsholt & Mutanen, 2014 Caryocolum fibigerium Huemer, 1988 Caryocolum hispanicum Huemer, 1988 Caryocolum junctella (Douglas, 1851) Caryocolum leucomelanella (Zeller, 1839) Caryocolum marmorea (Haworth, 1828) Caryocolum moehringiae (Klimesch, 1954) Caryocolum mucronatella (Chrétien, 1900) Caryocolum peregrinella (Herrich-Schäffer, 1854) Caryocolum provinciella (Stainton, 1869) Caryocolum proxima (Haworth, 1828) Caryocolum saginella (Zeller, 1868) Caryocolum schleichi (Christoph, 1872) Caryocolum tischeriella (Zeller, 1839) Caryocolum vicinella (Douglas, 1851) Catatinagma trivittellum Rebel, 1903 Caulastrocecis pudicellus (Mann, 1861) Chionodes distinctella (Zeller, 1839) Chionodes electella (Zeller, 1839) Chionodes fumatella (Douglas, 1850) Chrysoesthia drurella (Fabricius, 1775) Chrysoesthia sexguttella (Thunberg, 1794) Cosmardia moritzella (Treitschke, 1835) Crossobela trinotella (Herrich-Schäffer, 1856) Deltophora maculata (Staudinger, 1879) Dichomeris acuminatus (Staudinger, 1876) Dichomeris alacella (Zeller, 1839) Dichomeris lamprostoma (Zeller, 1847) Dichomeris limbipunctellus (Staudinger, 1859) Dichomeris marginella (Fabricius, 1781) Dichomeris ustalella (Fabricius, 1794) Dirhinosia arnoldiella (Rebel, 1905) Ephysteris deserticolella (Staudinger, 1871) Ephysteris diminutella (Zeller, 1847) Ephysteris iberica Povolny, 1977 Ephysteris olympica Povolny, 1968 Ephysteris promptella (Staudinger, 1859) Epidola barcinonella Milliere, 1867 Epidola stigma Staudinger, 1859 Eulamprotes graecatella Šumpich & Skyva, 2012 Eulamprotes helotella (Staudinger, 1859) Eulamprotes nigromaculella (Milliere, 1872) Eulamprotes wilkella (Linnaeus, 1758) Exoteleia dodecella (Linnaeus, 1758) Filatima spurcella (Duponchel, 1843) Gelechia dujardini Huemer, 1991 Gelechia mediterranea Huemer, 1991 Gelechia nigra (Haworth, 1828) Gelechia sabinellus (Zeller, 1839) Gelechia scotinella Herrich-Schäffer, 1854 Gelechia senticetella (Staudinger, 1859) Gelechia sororculella (Hübner, 1817) Gnorimoschema soffneri Riedl, 1965 Harpagidia magnetella (Staudinger, 1871) Helcystogramma lutatella (Herrich-Schäffer, 1854) Helcystogramma rufescens (Haworth, 1828) Helcystogramma triannulella (Herrich-Schäffer, 1854) Isophrictis anthemidella (Wocke, 1871) Isophrictis kefersteiniellus (Zeller, 1850) Isophrictis lineatellus (Zeller, 1850) Isophrictis striatella (Denis & Schiffermüller, 1775) Istrianis femoralis (Staudinger, 1876) Istrianis myricariella (Frey, 1870) Klimeschiopsis kiningerella (Duponchel, 1843) Lutilabria lutilabrella (Mann, 1857) Megacraspedus binotella (Duponchel, 1843) Megacraspedus cerussatellus Rebel, 1930 Megacraspedus incertellus Rebel, 1930 Megacraspedus separatellus (Fischer von Röslerstamm, 1843) Mesophleps corsicella Herrich-Schäffer, 1856 Mesophleps ochracella (Turati, 1926) Mesophleps oxycedrella (Milliere, 1871) Mesophleps silacella (Hübner, 1796) Metzneria aestivella (Zeller, 1839) Metzneria agraphella (Ragonot, 1895) Metzneria aprilella (Herrich-Schäffer, 1854) Metzneria campicolella (Mann, 1857) Metzneria castiliella (Moschler, 1866) Metzneria diffusella Englert, 1974 Metzneria intestinella (Mann, 1864) Metzneria lappella (Linnaeus, 1758) Metzneria metzneriella (Stainton, 1851) Metzneria neuropterella (Zeller, 1839) Metzneria 
paucipunctella (Zeller, 1839)
Metzneria riadella Englert, 1974
Metzneria tenuiella (Mann, 1864)
Microlechia chretieni Turati, 1924
Microlechia rhamnifoliae (Amsel & Hering, 1931)
Mirificarma aflavella (Amsel, 1935)
Mirificarma cytisella (Treitschke, 1833)
Mirificarma eburnella (Denis & Schiffermüller, 1775)
Mirificarma flavella (Duponchel, 1844)
Mirificarma maculatella (Hübner, 1796)
Mirificarma minimella Huemer & Karsholt, 2001
Mirificarma mulinella (Zeller, 1839)
Mirificarma rhodoptera (Mann, 1866)
Monochroa cytisella (Curtis, 1837)
Monochroa rumicetella (O. Hofmann, 1868)
Monochroa tenebrella (Hübner, 1817)
Neofriseria peliella (Treitschke, 1835)
Neotelphusa cisti (Stainton, 1869)
Neotelphusa sequax (Haworth, 1828)
Nothris congressariella (Bruand, 1858)
Nothris magna Nel & Peslier, 2007
Nothris verbascella (Denis & Schiffermüller, 1775)
Ochrodia subdiminutella (Stainton, 1867)
Ornativalva heluanensis (Debski, 1913)
Ornativalva plutelliformis (Staudinger, 1859)
Palumbina guerinii (Stainton, 1858)
Parastenolechia nigrinotella (Zeller, 1847)
Pectinophora gossypiella (Saunders, 1844)
Pexicopia malvella (Hübner, 1805)
Phthorimaea operculella (Zeller, 1873)
Platyedra subcinerea (Haworth, 1828)
Pogochaetia solitaria Staudinger, 1879
Prolita sexpunctella (Fabricius, 1794)
Prolita solutella (Zeller, 1839)
Pseudotelphusa istrella (Mann, 1866)
Pseudotelphusa scalella (Scopoli, 1763)
Psoricoptera gibbosella (Zeller, 1839)
Ptocheuusa inopella (Zeller, 1839)
Ptocheuusa paupella (Zeller, 1847)
Pyncostola bohemiella (Nickerl, 1864)
Recurvaria nanella (Denis & Schiffermüller, 1775)
Schneidereria pistaciella Weber, 1957
Scrobipalpa acuminatella (Sircom, 1850)
Scrobipalpa artemisiella (Treitschke, 1833)
Scrobipalpa atriplicella (Fischer von Röslerstamm, 1841)
Scrobipalpa bigoti Povolný, 1973
Scrobipalpa brahmiella (Heyden, 1862)
Scrobipalpa bryophiloides Povolný, 1966
Scrobipalpa camphorosmella Nel, 1999
Scrobipalpa ergasima (Meyrick, 1916)
Scrobipalpa gecho (Walsingham, 1911)
Scrobipalpa hendrikseni Huemer & Karsholt, 2010
Scrobipalpa instabilella (Douglas, 1846)
Scrobipalpa kasyi Povolný, 1968
Scrobipalpa obsoletella (Fischer von Röslerstamm, 1841)
Scrobipalpa ocellatella (Boyd, 1858)
Scrobipalpa perinii (Klimesch, 1951)
Scrobipalpa phagnalella (Constant, 1895)
Scrobipalpa proclivella (Fuchs, 1886)
Scrobipalpa salinella (Zeller, 1847)
Scrobipalpa samadensis (Pfaffenzeller, 1870)
Scrobipalpa selectella (Caradja, 1920)
Scrobipalpa spergulariella (Chrétien, 1910)
Scrobipalpa vasconiella (Rössler, 1877)
Scrobipalpa vicaria (Meyrick, 1921)
Scrobipalpula psilella (Herrich-Schäffer, 1854)
Scrobipalpula seniorum Povolný, 2000
Scrobipalpula tussilaginis (Stainton, 1867)
Sitotroga cerealella (Olivier, 1789)
Sitotroga psacasta Meyrick, 1908
Sophronia chilonella (Treitschke, 1833)
Sophronia finitimella Rebel, 1905
Sophronia humerella (Denis & Schiffermüller, 1775)
Sophronia sicariellus (Zeller, 1839)
Stenolechia gemmella (Linnaeus, 1758)
Stenolechiodes macrolepiellus Huemer & Karsholt, 1999
Stenolechiodes pseudogemmellus Elsner, 1996
Stomopteryx basalis (Staudinger, 1876)
Stomopteryx detersella (Zeller, 1847)
Stomopteryx hungaricella Gozmány, 1957
Stomopteryx remissella (Zeller, 1847)
Streyella anguinella (Herrich-Schäffer, 1861)
Syncopacma cinctella (Clerck, 1759)
Syncopacma patruella (Mann, 1857)
Syncopacma polychromella (Rebel, 1902)
Syncopacma sangiella (Stainton, 1863)
Syncopacma suecicella (Wolff, 1958)
Teleiodes albiluculella Huemer & Karsholt, 2001
Teleiodes luculella (Hübner, 1813)
Teleiodes vulgella (Denis & Schiffermüller, 1775)
Teleiopsis bagriotella (Duponchel, 1840)
Teleiopsis diffinis (Haworth, 1828)
Teleiopsis terebinthinella (Herrich-Schäffer, 1856)
Telphusa cistiflorella (Constant, 1890)
Thiotricha majorella (Rebel, 1910)
Tuta absoluta (Meyrick, 1917)
Xenolechia aethiops (Humphreys & Westwood, 1845)
Xenolechia lindae Huemer & Karsholt, 1999
Xenolechia pseudovulgella Huemer & Karsholt, 1999

Geometridae
Abraxas grossulariata (Linnaeus, 1758)
Acanthovalva inconspicuaria (Hübner, 1819)
Acasis viretata (Hübner, 1799)
Agriopis bajaria (Denis & Schiffermüller, 1775)
Alcis jubata (Thunberg, 1788)
Alcis repandata (Linnaeus, 1758)
Aleucis orientalis (Staudinger, 1892)
Alsophila aescularia (Denis & Schiffermüller, 1775)
Amorphogynia necessaria (Zeller, 1849)
Angerona prunaria (Linnaeus, 1758)
Apeira syringaria (Linnaeus, 1758)
Aplasta ononaria (Fuessly, 1783)
Aplocera annexata (Freyer, 1830)
Aplocera columbata (Metzner, 1845)
Aplocera cretica (Reisser, 1974)
Aplocera efformata (Guenée, 1858)
Aplocera plagiata (Linnaeus, 1758)
Aplocera praeformata (Hübner, 1826)
Aplocera simpliciata (Treitschke, 1835)
Apocheima hispidaria (Denis & Schiffermüller, 1775)
Apochima flabellaria (Heeger, 1838)
Ascotis selenaria (Denis & Schiffermüller, 1775)
Asovia maeoticaria (Alphéraky, 1876)
Aspitates ochrearia (Rossi, 1794)
Asthena albulata (Hufnagel, 1767)
Biston betularia (Linnaeus, 1758)
Bupalus piniaria (Linnaeus, 1758)
Cabera pusaria (Linnaeus, 1758)
Campaea honoraria (Denis & Schiffermüller, 1775)
Campaea margaritaria (Linnaeus, 1761)
Camptogramma bilineata (Linnaeus, 1758)
Camptogramma grisescens (Staudinger, 1892)
Casilda antophilaria (Hübner, 1813)
Cataclysme riguata (Hübner, 1813)
Catarhoe basochesiata (Duponchel, 1831)
Catarhoe hortulanaria (Staudinger, 1879)
Catarhoe permixtaria (Herrich-Schäffer, 1856)
Catarhoe putridaria (Herrich-Schäffer, 1852)
Celonoptera mirificaria Lederer, 1862
Chariaspilates formosaria (Eversmann, 1837)
Charissa certhiatus (Rebel & Zerny, 1931)
Charissa obscurata (Denis & Schiffermüller, 1775)
Charissa mutilata (Staudinger, 1879)
Charissa pullata (Denis & Schiffermüller, 1775)
Charissa dubitaria (Staudinger, 1892)
Charissa mucidaria (Hübner, 1799)
Charissa variegata (Duponchel, 1830)
Charissa ambiguata (Duponchel, 1830)
Charissa onustaria (Herrich-Schäffer, 1852)
Charissa zeitunaria (Staudinger, 1901)
Charissa intermedia (Wehrli, 1917)
Charissa supinaria (Mann, 1854)
Charissa glaucinaria (Hübner, 1799)
Chemerina caliginearia (Rambur, 1833)
Chesias rufata (Fabricius, 1775)
Chiasmia aestimaria (Hübner, 1809)
Chiasmia clathrata (Linnaeus, 1758)
Chiasmia syriacaria (Staudinger, 1871)
Chlorissa cloraria (Hübner, 1813)
Chlorissa viridata (Linnaeus, 1758)
Chloroclysta siterata (Hufnagel, 1767)
Chloroclystis v-ata (Haworth, 1809)
Cidaria fulvata (Forster, 1771)
Cleorodes lichenaria (Hufnagel, 1767)
Cleta filacearia (Herrich-Schäffer, 1847)
Coenotephria ablutaria (Boisduval, 1840)
Colostygia aptata (Hübner, 1813)
Colostygia aqueata (Hübner, 1813)
Colostygia fitzi (Schawerda, 1914)
Colostygia olivata (Denis & Schiffermüller, 1775)
Colostygia wolfschlaegerae (Pinker, 1953)
Colotois pennaria (Linnaeus, 1761)
Comibaena bajularia (Denis & Schiffermüller, 1775)
Cosmorhoe ocellata (Linnaeus, 1758)
Costaconvexa polygrammata (Borkhausen, 1794)
Crocallis elinguaria (Linnaeus, 1758)
Crocallis helenaria Ruckdeschel, 2006
Crocallis tusciaria (Borkhausen, 1793)
Cyclophora linearia (Hübner, 1799)
Cyclophora porata (Linnaeus, 1767)
Cyclophora punctaria (Linnaeus, 1758)
Cyclophora suppunctaria (Zeller, 1847)
Cyclophora albiocellaria (Hübner, 1789)
Cyclophora annularia (Fabricius, 1775)
Cyclophora ariadne Reisser, 1939
Cyclophora puppillaria (Hübner, 1799)
Cyclophora quercimontaria (Bastelberger, 1897)
Cyclophora ruficiliaria (Herrich-Schäffer, 1855)
Dasycorsa modesta (Staudinger, 1879)
Deileptenia ribeata (Clerck, 1759)
Docirava dervenaria (von Mentzer, 1981)
Docirava mundulata (Guenée, 1858)
Dyscia conspersaria (Denis & Schiffermüller, 1775)
Dyscia crassipunctaria (Rebel, 1916)
Dyscia innocentaria (Christoph, 1885)
Dyscia raunaria (Freyer, 1852)
Dysstroma truncata (Hufnagel, 1767)
Eilicrinia cordiaria (Hübner, 1790)
Eilicrinia trinotata (Metzner, 1845)
Ematurga atomaria (Linnaeus, 1758)
Ennomos alniaria (Linnaeus, 1758)
Ennomos duercki Reisser, 1958
Ennomos quercaria (Hübner, 1813)
Ennomos quercinaria (Hufnagel, 1767)
Entephria cyanata (Hübner, 1809)
Entephria flavicinctata (Hübner, 1813)
Epione repandaria (Hufnagel, 1767)
Epirrhoe alternata (Müller, 1764)
Epirrhoe galiata (Denis & Schiffermüller, 1775)
Epirrhoe molluginata (Hübner, 1813)
Epirrhoe rivata (Hübner, 1813)
Epirrita dilutata (Denis & Schiffermüller, 1775)
Epirrita terminassianae Vardikian, 1974
Eucrostes indigenata (de Villers, 1789)
Eulithis peloponnesiaca (Rebel, 1902)
Eulithis populata (Linnaeus, 1758)
Eulithis prunata (Linnaeus, 1758)
Eumannia oppositaria (Mann, 1864)
Eumannia psyloritaria (Reisser, 1958)
Eumera regina Staudinger, 1892
Euphyia biangulata (Haworth, 1809)
Euphyia frustata (Treitschke, 1828)
Euphyia unangulata (Haworth, 1809)
Eupithecia abietaria (Goeze, 1781)
Eupithecia absinthiata (Clerck, 1759)
Eupithecia addictata Dietze, 1908
Eupithecia alliaria Staudinger, 1870
Eupithecia antalica Mironov, 2001
Eupithecia biornata Christoph, 1867
Eupithecia breviculata (Donzel, 1837)
Eupithecia carpophagata Staudinger, 1871
Eupithecia centaureata (Denis & Schiffermüller, 1775)
Eupithecia cerussaria (Lederer, 1855)
Eupithecia cretaceata (Packard, 1874)
Eupithecia cuculliaria (Rebel, 1901)
Eupithecia denotata (Hübner, 1813)
Eupithecia distinctaria Herrich-Schäffer, 1848
Eupithecia dodoneata Guenée, 1858
Eupithecia druentiata Dietze, 1902
Eupithecia ericeata (Rambur, 1833)
Eupithecia extraversaria Herrich-Schäffer, 1852
Eupithecia extremata (Fabricius, 1787)
Eupithecia fuscicostata Christoph, 1887
Eupithecia gemellata Herrich-Schäffer, 1861
Eupithecia graphata (Treitschke, 1828)
Eupithecia gratiosata Herrich-Schäffer, 1861
Eupithecia gueneata Millière, 1862
Eupithecia haworthiata Doubleday, 1856
Eupithecia icterata (de Villers, 1789)
Eupithecia impurata (Hübner, 1813)
Eupithecia innotata (Hufnagel, 1767)
Eupithecia insigniata (Hübner, 1790)
Eupithecia intricata (Zetterstedt, 1839)
Eupithecia irriguata (Hübner, 1813)
Eupithecia laquaearia Herrich-Schäffer, 1848
Eupithecia lentiscata Mabille, 1869
Eupithecia limbata Staudinger, 1879
Eupithecia linariata (Denis & Schiffermüller, 1775)
Eupithecia millefoliata Rössler, 1866
Eupithecia mystica Dietze, 1910
Eupithecia ochridata Schütze & Pinker, 1968
Eupithecia oxycedrata (Rambur, 1833)
Eupithecia pauxillaria Boisduval, 1840
Eupithecia phoeniceata (Rambur, 1834)
Eupithecia pimpinellata (Hübner, 1813)
Eupithecia plumbeolata (Haworth, 1809)
Eupithecia pulchellata Stephens, 1831
Eupithecia pusillata (Denis & Schiffermüller, 1775)
Eupithecia pyreneata Mabille, 1871
Eupithecia quercetica Prout, 1938
Eupithecia reisserata Pinker, 1976
Eupithecia riparia Herrich-Schäffer, 1851
Eupithecia satyrata (Hübner, 1813)
Eupithecia scalptata Christoph, 1885
Eupithecia schiefereri Bohatsch, 1893
Eupithecia scopariata (Rambur, 1833)
Eupithecia semigraphata Bruand, 1850
Eupithecia silenicolata Mabille, 1867
Eupithecia simpliciata (Haworth, 1809)
Eupithecia spissilineata (Metzner, 1846)
Eupithecia subfuscata (Haworth, 1809)
Eupithecia succenturiata (Linnaeus, 1758)
Eupithecia tantillaria Boisduval, 1840
Eupithecia thurnerata Schütze, 1958
Eupithecia ultimaria Boisduval, 1840
Eupithecia unedonata Mabille, 1868
Eupithecia venosata (Fabricius, 1787)
Eupithecia virgaureata Doubleday, 1861
Fagivorina arenaria (Hufnagel, 1767)
Gagitodes sagittata (Fabricius, 1787)
Gandaritis pyraliata (Denis & Schiffermüller, 1775)
Gnopharmia stevenaria (Boisduval, 1840)
Gnophos sartata Treitschke, 1827
Gnophos furvata (Denis & Schiffermüller, 1775)
Gnophos obfuscata (Denis & Schiffermüller, 1775)
Gnophos dumetata Treitschke, 1827
Gnophos zacharia Staudinger, 1879
Gymnoscelis rufifasciata (Haworth, 1809)
Gypsochroa renitidata (Hübner, 1817)
Heliomata glarearia (Denis & Schiffermüller, 1775)
Hemistola chrysoprasaria (Esper, 1795)
Hemithea aestivaria (Hübner, 1789)
Horisme corticata (Treitschke, 1835)
Horisme radicaria (de La Harpe, 1855)
Horisme tersata (Denis & Schiffermüller, 1775)
Horisme vitalbata (Denis & Schiffermüller, 1775)
Hylaea fasciaria (Linnaeus, 1758)
Hypomecis punctinalis (Scopoli, 1763)
Hypomecis roboraria (Denis & Schiffermüller, 1775)
Idaea albitorquata (Püngeler, 1909)
Idaea aureolaria (Denis & Schiffermüller, 1775)
Idaea aversata (Linnaeus, 1758)
Idaea biselata (Hufnagel, 1767)
Idaea camparia (Herrich-Schäffer, 1852)
Idaea circuitaria (Hübner, 1819)
Idaea consanguinaria (Lederer, 1853)
Idaea consolidata (Lederer, 1853)
Idaea degeneraria (Hübner, 1799)
Idaea determinata (Staudinger, 1876)
Idaea deversaria (Herrich-Schäffer, 1847)
Idaea dilutaria (Hübner, 1799)
Idaea dimidiata (Hufnagel, 1767)
Idaea distinctaria (Boisduval, 1840)
Idaea elongaria (Rambur, 1833)
Idaea emarginata (Linnaeus, 1758)
Idaea filicata (Hübner, 1799)
Idaea fuscovenosa (Goeze, 1781)
Idaea humiliata (Hufnagel, 1767)
Idaea infirmaria (Rambur, 1833)
Idaea inquinata (Scopoli, 1763)
Idaea intermedia (Staudinger, 1879)
Idaea laevigata (Scopoli, 1763)
Idaea leipnitzi Hausmann, 2004
Idaea longaria (Herrich-Schäffer, 1852)
Idaea metohiensis (Rebel, 1900)
Idaea moniliata (Denis & Schiffermüller, 1775)
Idaea obsoletaria (Rambur, 1833)
Idaea ochrata (Scopoli, 1763)
Idaea ossiculata (Lederer, 1870)
Idaea ostrinaria (Hübner, 1813)
Idaea palaestinensis (Sterneck, 1933)
Idaea pallidata (Denis & Schiffermüller, 1775)
Idaea politaria (Hübner, 1799)
Idaea rubraria (Staudinger, 1901)
Idaea rufaria (Hübner, 1799)
Idaea rusticata (Denis & Schiffermüller, 1775)
Idaea seriata (Schrank, 1802)
Idaea sericeata (Hübner, 1813)
Idaea straminata (Borkhausen, 1794)
Idaea subsericeata (Haworth, 1809)
Idaea tineata (Thierry-Mieg, 1911)
Idaea trigeminata (Haworth, 1809)
Idaea troglodytaria (Heydenreich, 1851)
Isturgia arenacearia (Denis & Schiffermüller, 1775)
Isturgia berytaria (Staudinger, 1892)
Jodis lactearia (Linnaeus, 1758)
Larentia clavaria (Haworth, 1809)
Larentia malvata (Rambur, 1833)
Ligdia adustata (Denis & Schiffermüller, 1775)
Lithostege farinata (Hufnagel, 1767)
Lithostege palaestinensis Amsel, 1935
Lomaspilis bithynica Wehrli, 1954
Lycia graecarius (Staudinger, 1861)
Lycia hirtaria (Clerck, 1759)
Lythria purpuraria (Linnaeus, 1758)
Macaria artesiaria (Denis & Schiffermüller, 1775)
Macaria liturata (Clerck, 1759)
Macaria notata (Linnaeus, 1758)
Macaria signaria (Hübner, 1809)
Macaria wauaria (Linnaeus, 1758)
Mattia adlata (Staudinger, 1895)
Melanthia procellata (Denis & Schiffermüller, 1775)
Menophra abruptaria (Thunberg, 1792)
Menophra berenicidaria (Turati, 1924)
Menophra japygiaria (O. Costa, 1849)
Microloxia herbaria (Hübner, 1813)
Minoa murinata (Scopoli, 1763)
Myinodes shohami Hausmann, 1994
Nebula achromaria (de La Harpe, 1853)
Nebula nebulata (Treitschke, 1828)
Nebula schneideraria (Lederer, 1855)
Nebula senectaria (Herrich-Schäffer, 1852)
Nychiodes amygdalaria (Herrich-Schäffer, 1848)
Nychiodes dalmatina Wagner, 1909
Nychiodes waltheri Wagner, 1919
Nychiodes obscuraria (de Villers, 1789)
Nycterosea obstipata (Fabricius, 1794)
Odezia atrata (Linnaeus, 1758)
Odontopera graecarius (A. Bang-Haas, 1910)
Opisthograptis luteolata (Linnaeus, 1758)
Ortaliella gruneraria (Staudinger, 1862)
Orthostixis cribraria (Hübner, 1799)
Oulobophora externaria (Herrich-Schäffer, 1848)
Oulobophora internata (Püngeler, 1888)
Ourapteryx sambucaria (Linnaeus, 1758)
Pachycnemia hippocastanaria (Hübner, 1799)
Pachycnemia tibiaria (Rambur, 1829)
Paraboarmia viertlii (Bohatsch, 1883)
Pareulype lasithiotica (Rebel, 1906)
Pasiphila debiliata (Hübner, 1817)
Pasiphila rectangulata (Linnaeus, 1758)
Pennithera ulicata (Rambur, 1834)
Perconia strigillaria (Hübner, 1787)
Peribatodes correptaria (Zeller, 1847)
Peribatodes ilicaria (Geyer, 1833)
Peribatodes rhomboidaria (Denis & Schiffermüller, 1775)
Peribatodes secundaria (Denis & Schiffermüller, 1775)
Peribatodes umbraria (Hübner, 1809)
Perizoma albulata (Denis & Schiffermüller, 1775)
Perizoma alchemillata (Linnaeus, 1758)
Perizoma bifaciata (Haworth, 1809)
Perizoma flavosparsata (Wagner, 1926)
Perizoma minorata (Treitschke, 1828)
Petrophora chlorosata (Scopoli, 1763)
Phaiogramma etruscaria (Zeller, 1849)
Phaiogramma faustinata (Millière, 1868)
Phigalia pilosaria (Denis & Schiffermüller, 1775)
Philereme transversata (Hufnagel, 1767)
Plagodis pulveraria (Linnaeus, 1758)
Plemyria rubiginata (Denis & Schiffermüller, 1775)
Problepsis ocellata (Frivaldszky, 1845)
Proteuchloris neriaria (Herrich-Schäffer, 1852)
Protorhoe corollaria (Herrich-Schäffer, 1848)
Protorhoe unicata (Guenée, 1858)
Pseudopanthera macularia (Linnaeus, 1758)
Pseudoterpna coronillaria (Hübner, 1817)
Pseudoterpna pruinata (Hufnagel, 1767)
Pungeleria capreolaria (Denis & Schiffermüller, 1775)
Rhodometra sacraria (Linnaeus, 1767)
Rhodostrophia calabra (Petagna, 1786)
Rhodostrophia cretacaria Rebel, 1916
Rhodostrophia discopunctata Amsel, 1935
Rhodostrophia tabidaria (Zeller, 1847)
Rhodostrophia vibicaria (Clerck, 1759)
Rhoptria asperaria (Hübner, 1817)
Rhoptria dolosaria (Herrich-Schäffer, 1848)
Schistostege decussata (Denis & Schiffermüller, 1775)
Scopula asellaria (Herrich-Schäffer, 1847)
Scopula beckeraria (Lederer, 1853)
Scopula confinaria (Herrich-Schäffer, 1847)
Scopula flaccidaria (Zeller, 1852)
Scopula imitaria (Hübner, 1799)
Scopula immutata (Linnaeus, 1758)
Scopula incanata (Linnaeus, 1758)
Scopula luridata (Zeller, 1847)
Scopula marginepunctata (Goeze, 1781)
Scopula mentzeri Hausmann, 1993
Scopula minorata (Boisduval, 1833)
Scopula decorata (Denis & Schiffermüller, 1775)
Scopula nigropunctata (Hufnagel, 1767)
Scopula ochraceata (Staudinger, 1901)
Scopula ornata (Scopoli, 1763)
Scopula rubiginata (Hufnagel, 1767)
Scopula submutata (Treitschke, 1828)
Scopula tessellaria (Boisduval, 1840)
Scopula turbulentaria (Staudinger, 1870)
Scopula vigilata (Sohn-Rethel, 1929)
Scotopteryx bipunctaria (Denis & Schiffermüller, 1775)
Scotopteryx chenopodiata (Linnaeus, 1758)
Scotopteryx coarctaria (Denis & Schiffermüller, 1775)
Scotopteryx ignorata Huemer & Hausmann, 1998
Scotopteryx luridata (Hufnagel, 1767)
Scotopteryx moeniata (Scopoli, 1763)
Scotopteryx olympia Rezbanyai-Reser, 2003
Scotopteryx vicinaria (Duponchel, 1830)
Selenia lunularia (Hübner, 1788)
Selidosema brunnearia (de Villers, 1789)
Selidosema plumaria (Denis & Schiffermüller, 1775)
Siona lineata (Scopoli, 1763)
Stamnodes depeculata (Lederer, 1870)
Stegania dilectaria (Hübner, 1790)
Synopsia sociaria (Hübner, 1799)
Tephronia sepiaria (Hufnagel, 1767)
Thalera fimbrialis (Scopoli, 1763)
Thera britannica (Turner, 1925)
Thera cognata (Thunberg, 1792)
Thera cupressata (Geyer, 1831)
Thera variata (Denis & Schiffermüller, 1775)
Thera vetustata (Denis & Schiffermüller, 1775)
Thetidia smaragdaria (Fabricius, 1787)
Timandra comae Schmidt, 1931
Triphosa dubitata (Linnaeus, 1758)
Triphosa sabaudiata (Duponchel, 1830)
Xanthorhoe biriviata (Borkhausen, 1794)
Xanthorhoe designata (Hufnagel, 1767)
Xanthorhoe disjunctaria (de La Harpe, 1860)
Xanthorhoe fluctuata (Linnaeus, 1758)
Xanthorhoe friedrichi Viidalepp & Skou, 2004
Xanthorhoe montanata (Denis & Schiffermüller, 1775)
Xanthorhoe oxybiata (Millière, 1872)
Xanthorhoe spadicearia (Denis & Schiffermüller, 1775)
Xenochlorodes olympiaria (Herrich-Schäffer, 1852)

Glyphipterigidae
Acrolepiopsis vesperella (Zeller, 1850)
Digitivalva eglanteriella (Mann, 1855)
Digitivalva granitella (Treitschke, 1833)
Digitivalva macedonica (Klimesch, 1956)
Digitivalva occidentella (Klimesch, 1956)
Digitivalva pulicariae (Klimesch, 1956)
Digitivalva seligeri Gaedike, 2011
Glyphipterix equitella (Scopoli, 1763)
Glyphipterix schoenicolella Boyd, 1859
Glyphipterix simpliciella (Stephens, 1834)
Glyphipterix thrasonella (Scopoli, 1763)
Orthotelia sparganella (Thunberg, 1788)

Gracillariidae
Acrocercops brongniardella (Fabricius, 1798)
Acrocercops tacita Triberti, 2001
Aspilapteryx inquinata Triberti, 1985
Aspilapteryx limosella (Duponchel, 1843)
Aspilapteryx tringipennella (Zeller, 1839)
Caloptilia alchimiella (Scopoli, 1763)
Caloptilia braccatella (Staudinger, 1870)
Caloptilia elongella (Linnaeus, 1761)
Caloptilia flava (Staudinger, 1871)
Caloptilia roscipennella (Hübner, 1796)
Calybites phasianipennella (Hübner, 1813)
Cameraria ohridella Deschka & Dimić, 1986
Cupedia cupediella (Herrich-Schäffer, 1855)
Dextellia dorsilineella (Amsel, 1935)
Dialectica scalariella (Zeller, 1850)
Dialectica soffneri (Gregor & Povolný, 1965)
Euspilapteryx auroguttella Stephens, 1835
Gracillaria syringella (Fabricius, 1794)
Metriochroa latifoliella (Millière, 1886)
Micrurapteryx kollariella (Zeller, 1839)
Parornix acuta Triberti, 1980
Parornix anguliferella (Zeller, 1847)
Parornix carpinella (Frey, 1863)
Parornix compsumpta Triberti, 1987
Parornix finitimella (Zeller, 1850)
Parornix fragilella Triberti, 1981
Parornix oculata Triberti, 1979
Parornix scoticella (Stainton, 1850)
Parornix torquillella (Zeller, 1850)
Phyllocnistis citrella Stainton, 1856
Phyllocnistis labyrinthella (Bjerkander, 1790)
Phyllocnistis unipunctella (Stephens, 1834)
Phyllocnistis valentinensis M. Hering, 1936
Phyllonorycter abrasella (Duponchel, 1843)
Phyllonorycter anceps Triberti, 2007
Phyllonorycter belotella (Staudinger, 1859)
Phyllonorycter blancardella (Fabricius, 1781)
Phyllonorycter brunnea Deschka, 1975
Phyllonorycter cephalariae (Lhomme, 1934)
Phyllonorycter cerasicolella (Herrich-Schäffer, 1855)
Phyllonorycter christenseni Derra, 1985
Phyllonorycter corylifoliella (Hübner, 1796)
Phyllonorycter cydoniella (Denis & Schiffermüller, 1775)
Phyllonorycter delitella (Duponchel, 1843)
Phyllonorycter esperella (Goeze, 1783)
Phyllonorycter fraxinella (Zeller, 1846)
Phyllonorycter gerfriedi A. & Z. Laštůvka, 2007
Phyllonorycter graecus A. & Z. Laštůvka, 2007
Phyllonorycter helianthemella (Herrich-Schäffer, 1861)
Phyllonorycter ilicifoliella (Duponchel, 1843)
Phyllonorycter kusdasi Deschka, 1970
Phyllonorycter lapadiella (Krone, 1909)
Phyllonorycter lautella (Zeller, 1846)
Phyllonorycter leucographella (Zeller, 1850)
Phyllonorycter macedonica (Deschka, 1971)
Phyllonorycter maestingella (Müller, 1764)
Phyllonorycter messaniella (Zeller, 1846)
Phyllonorycter millierella (Staudinger, 1871)
Phyllonorycter muelleriella (Zeller, 1839)
Phyllonorycter obtusifoliella Deschka, 1974
Phyllonorycter olympica Deschka, 1983
Phyllonorycter parisiella (Wocke, 1848)
Phyllonorycter platani (Staudinger, 1870)
Phyllonorycter populifoliella (Treitschke, 1833)
Phyllonorycter quercifoliella (Zeller, 1839)
Phyllonorycter roboris (Zeller, 1839)
Phyllonorycter scitulella (Duponchel, 1843)
Phyllonorycter spinicolella (Zeller, 1846)
Phyllonorycter suberifoliella (Zeller, 1850)
Phyllonorycter sublautella (Stainton, 1869)
Phyllonorycter trifasciella (Haworth, 1828)
Phyllonorycter triflorella (Peyerimhoff, 1872)
Phyllonorycter trojana Deschka, 1982
Phyllonorycter ulicicolella (Stainton, 1851)
Povolnya leucapennella (Stephens, 1835)
Spulerina simploniella (Fischer von Röslerstamm, 1840)

Heliozelidae
Antispila treitschkiella (Fischer von Röslerstamm, 1843)
Holocacista rivillei (Stainton, 1855)

Hepialidae
Pharmacis lupulina (Linnaeus, 1758)
Triodia adriaticus (Osthelder, 1931)
Triodia amasinus (Herrich-Schäffer, 1851)
Triodia sylvina (Linnaeus, 1761)

Heterogynidae
Heterogynis penella (Hübner, 1819)

Incurvariidae
Incurvaria masculella (Denis & Schiffermüller, 1775)
Incurvaria oehlmanniella (Hübner, 1796)

Lasiocampidae
Dendrolimus pini (Linnaeus, 1758)
Eriogaster catax (Linnaeus, 1758)
Eriogaster lanestris (Linnaeus, 1758)
Eriogaster rimicola (Denis & Schiffermüller, 1775)
Euthrix potatoria (Linnaeus, 1758)
Gastropacha quercifolia (Linnaeus, 1758)
Lasiocampa quercus (Linnaeus, 1758)
Lasiocampa grandis (Rogenhofer, 1891)
Lasiocampa trifolii (Denis & Schiffermüller, 1775)
Macrothylacia rubi (Linnaeus, 1758)
Malacosoma castrensis (Linnaeus, 1758)
Malacosoma neustria (Linnaeus, 1758)
Malacosoma franconica (Denis & Schiffermüller, 1775)
Odonestis pruni (Linnaeus, 1758)
Pachypasa otus (Drury, 1773)
Phyllodesma ilicifolia (Linnaeus, 1758)
Phyllodesma tremulifolia (Hübner, 1810)
Trichiura castiliana Spuler, 1908
Trichiura crataegi (Linnaeus, 1758)
Trichiura verenae Witt, 1981

Lecithoceridae
Ceuthomadarus viduellus Rebel, 1903
Eurodachtha flavissimella (Mann, 1862)
Lecithocera nigrana (Duponchel, 1836)
Odites kollarella (O. G. Costa, 1832)

Limacodidae
Apoda limacodes (Hufnagel, 1766)
Heterogenea asella (Denis & Schiffermüller, 1775)
Hoyosia cretica (Rebel, 1906)

Lyonetiidae
Leucoptera malifoliella (O. Costa, 1836)
Leucoptera nieukerkeni Mey, 1994
Leucoptera thessalica Mey, 1994
Lyonetia clerkella (Linnaeus, 1758)
Lyonetia prunifoliella (Hübner, 1796)

Micropterigidae
Micropterix aruncella (Scopoli, 1763)
Micropterix corcyrella Walsingham, 1919
Micropterix kardamylensis Rebel, 1903
Micropterix klimeschi Heath, 1973
Micropterix lakoniensis Heath, 1985
Micropterix myrtetella Zeller, 1850
Micropterix tunbergella (Fabricius, 1787)
Micropterix wockei Staudinger, 1870

Millieridae
Millieria dolosalis (Heydenreich, 1851)

Momphidae
Mompha miscella (Denis & Schiffermüller, 1775)
Mompha conturbatella (Hübner, 1819)
Mompha epilobiella (Denis & Schiffermüller, 1775)
Mompha meridionella Koster & Sinev, 2003
Mompha ochraceella (Curtis, 1839)
Mompha subbistrigella (Haworth, 1828)
Mompha raschkiella (Zeller, 1839)

Nepticulidae
Acalyptris lesbia van Nieukerken & Hull, 2007
Acalyptris limonii Z. & A. Laštůvka, 1998
Acalyptris loranthella (Klimesch, 1937)
Acalyptris maritima A. & Z. Laštůvka, 1997
Acalyptris pistaciae van Nieukerken, 2007
Acalyptris platani (Müller-Rutz, 1934)
Ectoedemia aegilopidella (Klimesch, 1978)
Ectoedemia agrimoniae (Frey, 1858)
Ectoedemia albifasciella (Heinemann, 1871)
Ectoedemia alnifoliae van Nieukerken, 1985
Ectoedemia angulifasciella (Stainton, 1849)
Ectoedemia arcuatella (Herrich-Schäffer, 1855)
Ectoedemia argyropeza (Zeller, 1839)
Ectoedemia caradjai (Groschke, 1944)
Ectoedemia cerris (Zimmermann, 1944)
Ectoedemia contorta van Nieukerken, 1985
Ectoedemia erythrogenella (de Joannis, 1908)
Ectoedemia gilvipennella (Klimesch, 1946)
Ectoedemia haraldi (Soffner, 1942)
Ectoedemia heringella (Mariani, 1939)
Ectoedemia heringi (Toll, 1934)
Ectoedemia klimeschi (Skala, 1933)
Ectoedemia mahalebella (Klimesch, 1936)
Ectoedemia preisseckeri (Klimesch, 1941)
Ectoedemia pseudoilicis Z. & A. Laštůvka, 1998
Ectoedemia quinquella (Bedell, 1848)
Ectoedemia rufifrontella (Caradja, 1920)
Ectoedemia spinosella (de Joannis, 1908)
Ectoedemia subbimaculella (Haworth, 1828)
Ectoedemia terebinthivora (Klimesch, 1975)
Ectoedemia decentella (Herrich-Schäffer, 1855)
Ectoedemia aegaeica Z. & A. Laštůvka & Johansson, 1998
Ectoedemia deschkai (Klimesch, 1978)
Ectoedemia empetrifolii A. & Z. Laštůvka, 2000
Ectoedemia eriki A. & Z. Laštůvka, 2000
Ectoedemia euphorbiella (Stainton, 1869)
Ectoedemia groschkei (Skala, 1943)
Ectoedemia septembrella (Stainton, 1849)
Ectoedemia amani Svensson, 1966
Ectoedemia atrifrontella (Stainton, 1851)
Ectoedemia liebwerdella Zimmermann, 1940
Ectoedemia longicaudella Klimesch, 1953
Ectoedemia monemvasiae van Nieukerken, 1985
Ectoedemia reichli Z. & A. Laštůvka, 1998
Parafomoria pseudocistivora van Nieukerken, 1983
Simplimorpha promissa (Staudinger, 1871)
Stigmella aceris (Frey, 1857)
Stigmella amygdali (Klimesch, 1978)
Stigmella atricapitella (Haworth, 1828)
Stigmella aurella (Fabricius, 1775)
Stigmella auromarginella (Richardson, 1890)
Stigmella azaroli (Klimesch, 1978)
Stigmella basiguttella (Heinemann, 1862)
Stigmella cocciferae van Nieukerken & Johansson, 2003
Stigmella dorsiguttella (Johansson, 1971)
Stigmella eberhardi (Johansson, 1971)
Stigmella fasciata van Nieukerken & Johansson, 2003
Stigmella filipendulae (Wocke, 1871)
Stigmella freyella (Heyden, 1858)
Stigmella hemargyrella (Kollar, 1832)
Stigmella hybnerella (Hübner, 1796)
Stigmella incognitella (Herrich-Schäffer, 1855)
Stigmella irregularis Puplesis, 1994
Stigmella johanssonella A. & Z. Laštůvka, 1997
Stigmella lemniscella (Zeller, 1839)
Stigmella macrolepidella (Klimesch, 1978)
Stigmella malella (Stainton, 1854)
Stigmella microtheriella (Stainton, 1854)
Stigmella minusculella (Herrich-Schäffer, 1855)
Stigmella muricatella (Klimesch, 1978)
Stigmella nivenburgensis (Preissecker, 1942)
Stigmella paliurella Gerasimov, 1937
Stigmella paradoxa (Frey, 1858)
Stigmella perpygmaeella (Doubleday, 1859)
Stigmella plagicolella (Stainton, 1854)
Stigmella prunetorum (Stainton, 1855)
Stigmella pyrellicola (Klimesch, 1978)
Stigmella rhamnophila (Amsel, 1934)
Stigmella roborella (Johansson, 1971)
Stigmella rolandi van Nieukerken, 1990
Stigmella ruficapitella (Haworth, 1828)
Stigmella samiatella (Zeller, 1839)
Stigmella sorbi (Stainton, 1861)
Stigmella speciosa (Frey, 1858)
Stigmella styracicolella (Klimesch, 1978)
Stigmella svenssoni (Johansson, 1971)
Stigmella szoecsiella (Borkowski, 1972)
Stigmella tityrella (Stainton, 1854)
Stigmella trimaculella (Haworth, 1828)
Stigmella trojana Z. & A. Laštůvka, 1998
Stigmella ulmiphaga (Preissecker, 1942)
Stigmella viscerella (Stainton, 1853)
Stigmella zangherii (Klimesch, 1951)
Trifurcula albiflorella Klimesch, 1978
Trifurcula bleonella (Chrétien, 1904)
Trifurcula headleyella (Stainton, 1854)
Trifurcula helladica Z. & A. Laštůvka, 2007
Trifurcula kalavritana Z. & A. Laštůvka, 1998
Trifurcula melanoptera van Nieukerken & Puplesis, 1991
Trifurcula saturejae (Parenti, 1963)
Trifurcula trilobella Klimesch, 1978
Trifurcula cryptella (Stainton, 1856)
Trifurcula eurema (Tutt, 1899)
Trifurcula manygoza van Nieukerken, A. & Z. Laštůvka, 2007
Trifurcula peloponnesica van Nieukerken, 2007
Trifurcula aurella Rebel, 1933
Trifurcula austriaca van Nieukerken, 1990
Trifurcula calycotomella A. & Z. Laštůvka, 1997
Trifurcula graeca Z. & A. Laštůvka, 1998
Trifurcula josefklimeschi van Nieukerken, 1990
Trifurcula orientella Klimesch, 1953
Trifurcula pallidella (Duponchel, 1843)
Trifurcula subnitidella (Duponchel, 1843)

Noctuidae
Abrostola agnorista Dufay, 1956
Abrostola asclepiadis (Denis & Schiffermüller, 1775)
Abrostola tripartita (Hufnagel, 1766)
Abrostola triplasia (Linnaeus, 1758)
Acontia lucida (Hufnagel, 1766)
Acontia trabealis (Scopoli, 1763)
Acontia melanura (Tauscher, 1809)
Acontiola lascivalis (Lederer, 1855)
Acontiola moldavicola (Herrich-Schäffer, 1851)
Acronicta aceris (Linnaeus, 1758)
Acronicta strigosa (Denis & Schiffermüller, 1775)
Acronicta cuspis (Hübner, 1813)
Acronicta psi (Linnaeus, 1758)
Acronicta tridens (Denis & Schiffermüller, 1775)
Acronicta auricoma (Denis & Schiffermüller, 1775)
Acronicta euphorbiae (Denis & Schiffermüller, 1775)
Acronicta orientalis (Mann, 1862)
Acronicta rumicis (Linnaeus, 1758)
Actebia fugax (Treitschke, 1825)
Actinotia polyodon (Clerck, 1759)
Actinotia radiosa (Esper, 1804)
Aedia funesta (Esper, 1786)
Aedia leucomelas (Linnaeus, 1758)
Aegle agatha (Staudinger, 1861)
Aegle kaekeritziana (Hübner, 1799)
Aegle pallida (Staudinger, 1892)
Aegle semicana (Esper, 1798)
Agrochola lychnidis (Denis & Schiffermüller, 1775)
Agrochola lactiflora Draudt, 1934
Agrochola deleta (Staudinger, 1882)
Agrochola gratiosa (Staudinger, 1882)
Agrochola helvola (Linnaeus, 1758)
Agrochola humilis (Denis & Schiffermüller, 1775)
Agrochola kindermannii (Fischer v. Röslerstamm, 1837)
Agrochola litura (Linnaeus, 1758)
Agrochola luteogrisea (Warren, 1911)
Agrochola nitida (Denis & Schiffermüller, 1775)
Agrochola osthelderi Boursin, 1951
Agrochola rupicapra (Staudinger, 1879)
Agrochola thurneri Boursin, 1953
Agrochola mansueta (Herrich-Schäffer, 1850)
Agrochola lota (Clerck, 1759)
Agrochola macilenta (Hübner, 1809)
Agrochola schreieri Hacker & Weigert, 1986
Agrochola laevis (Hübner, 1803)
Agrochola circellaris (Hufnagel, 1766)
Agrotis bigramma (Esper, 1790)
Agrotis catalaunensis (Millière, 1873)
Agrotis cinerea (Denis & Schiffermüller, 1775)
Agrotis clavis (Hufnagel, 1766)
Agrotis endogaea Boisduval, 1834
Agrotis exclamationis (Linnaeus, 1758)
Agrotis haifae Staudinger, 1897
Agrotis herzogi Rebel, 1911
Agrotis ipsilon (Hufnagel, 1766)
Agrotis puta (Hübner, 1803)
Agrotis segetum (Denis & Schiffermüller, 1775)
Agrotis spinifera (Hübner, 1808)
Agrotis trux (Hübner, 1824)
Agrotis vestigialis (Hufnagel, 1766)
Allophyes asiatica (Staudinger, 1892)
Allophyes cretica Pinker & Reisser, 1978
Allophyes oxyacanthae (Linnaeus, 1758)
Amephana dalmatica (Rebel, 1919)
Ammoconia caecimacula (Denis & Schiffermüller, 1775)
Ammoconia reisseri L. Ronkay & Varga, 1984
Ammoconia senex (Geyer, 1828)
Amphipoea fucosa (Freyer, 1830)
Amphipoea oculea (Linnaeus, 1761)
Amphipyra berbera Rungs, 1949
Amphipyra effusa Boisduval, 1828
Amphipyra livida (Denis & Schiffermüller, 1775)
Amphipyra micans Lederer, 1857
Amphipyra pyramidea (Linnaeus, 1758)
Amphipyra stix Herrich-Schäffer, 1850
Amphipyra tetra (Fabricius, 1787)
Amphipyra tragopoginis (Clerck, 1759)
Amphipyra cinnamomea (Goeze, 1781)
Anaplectoides prasina (Denis & Schiffermüller, 1775)
Anarta mendax (Staudinger, 1879)
Anarta odontites (Boisduval, 1829)
Anarta stigmosa (Christoph, 1887)
Anarta trifolii (Hufnagel, 1766)
Anthracia eriopoda (Herrich-Schäffer, 1851)
Antitype chi (Linnaeus, 1758)
Antitype jonis (Lederer, 1865)
Antitype suda (Geyer, 1832)
Apamea anceps (Denis & Schiffermüller, 1775)
Apamea aquila Donzel, 1837
Apamea baischi Hacker, 1989
Apamea crenata (Hufnagel, 1766)
Apamea epomidion (Haworth, 1809)
Apamea furva (Denis & Schiffermüller, 1775)
Apamea illyria Freyer, 1846
Apamea lateritia (Hufnagel, 1766)
Apamea lithoxylaea (Denis & Schiffermüller, 1775)
Apamea maillardi (Geyer, 1834)
Apamea michielii Varga, 1976
Apamea minoica (Fibiger, Ronkay, Schmidt & Zilli, 2005)
Apamea monoglypha (Hufnagel, 1766)
Apamea platinea (Treitschke, 1825)
Apamea remissa (Hübner, 1809)
Apamea sicula (Turati, 1909)
Apamea sordens (Hufnagel, 1766)
Apamea syriaca (Osthelder, 1933)
Apamea unanimis (Hübner, 1813)
Apamea zeta (Treitschke, 1825)
Apaustis rupicola (Denis & Schiffermüller, 1775)
Aporophyla australis (Boisduval, 1829)
Aporophyla canescens (Duponchel, 1826)
Aporophyla chioleuca (Herrich-Schäffer, 1850)
Aporophyla lutulenta (Denis & Schiffermüller, 1775)
Aporophyla nigra (Haworth, 1809)
Apterogenum ypsillon (Denis & Schiffermüller, 1775)
Archanara dissoluta (Treitschke, 1825)
Asteroscopus sphinx (Hufnagel, 1766)
Asteroscopus syriaca (Warren, 1910)
Atethmia ambusta (Denis & Schiffermüller, 1775)
Atethmia centrago (Haworth, 1809)
Athetis hospes (Freyer, 1831)
Atypha pulmonaris (Esper, 1790)
Auchmis detersa (Esper, 1787)
Autographa gamma (Linnaeus, 1758)
Autographa jota (Linnaeus, 1758)
Axylia putris (Linnaeus, 1761)
Behounekia freyeri (Frivaldszky, 1835)
Brachylomia viminalis (Fabricius, 1776)
Brithys crini (Fabricius, 1775)
Bryophila ereptricula Treitschke, 1825
Bryophila gea (Schawerda, 1934)
Bryophila raptricula (Denis & Schiffermüller, 1775)
Bryophila rectilinea (Warren, 1909)
Bryophila seladona Christoph, 1885
Bryophila tephrocharis (Boursin, 1953)
Bryophila domestica (Hufnagel, 1766)
Bryophila strobinoi (Dujardin, 1972)
Bryophila petrea Guenée, 1852
Bryophila maeonis Lederer, 1865
Calamia tridens (Hufnagel, 1766)
Calliergis ramosa (Esper, 1786)
Callopistria juventina (Stoll, 1782)
Callopistria latreillei (Duponchel, 1827)
Calophasia barthae Wagner, 1929
Calophasia lunula (Hufnagel, 1766)
Calophasia opalina (Esper, 1793)
Calophasia platyptera (Esper, 1788)
Caradrina syriaca Staudinger, 1892
Caradrina agrotina Staudinger, 1892
Caradrina morpheus (Hufnagel, 1766)
Caradrina draudti (Boursin, 1936)
Caradrina flava Oberthür, 1876
Caradrina gilva (Donzel, 1837)
Caradrina pertinax Staudinger, 1879
Caradrina zernyi (Boursin, 1939)
Caradrina clavipalpis Scopoli, 1763
Caradrina flavirena Guenée, 1852
Caradrina levantina Hacker, 2004
Caradrina minoica Hacker, 2004
Caradrina selini Boisduval, 1840
Caradrina suscianja (Mentzer, 1981)
Caradrina wullschlegeli Püngeler, 1903
Caradrina aspersa Rambur, 1834
Caradrina kadenii Freyer, 1836
Caradrina montana Bremer, 1861
Cardepia hartigi Parenzan, 1981
Cardepia sociabilis (de Graslin, 1850)
Ceramica pisi (Linnaeus, 1758)
Cerapteryx graminis (Linnaeus, 1758)
Cerastis rubricosa (Denis & Schiffermüller, 1775)
Charanyca trigrammica (Hufnagel, 1766)
Charanyca apfelbecki (Rebel, 1901)
Charanyca ferruginea (Esper, 1785)
Chersotis anatolica (Draudt, 1936)
Chersotis andereggii (Boisduval, 1832)
Chersotis capnistis (Lederer, 1872)
Chersotis cuprea (Denis & Schiffermüller, 1775)
Chersotis elegans (Eversmann, 1837)
Chersotis fimbriola (Esper, 1803)
Chersotis laeta (Rebel, 1904)
Chersotis larixia (Guenée, 1852)
Chersotis margaritacea (Villers, 1789)
Chersotis multangula (Hübner, 1803)
Chersotis obnubila (Corti, 1926)
Chersotis rectangula (Denis & Schiffermüller, 1775)
Chersotis zukowskyi (Draudt, 1936)
Chilodes maritima (Tauscher, 1806)
Chloantha hyperici (Denis & Schiffermüller, 1775)
Chrysodeixis chalcites (Esper, 1789)
Cleoceris scoriacea (Esper, 1789)
Cleonymia opposita (Lederer, 1870)
Colocasia coryli (Linnaeus, 1758)
Condica viscosa (Freyer, 1831)
Conisania renati (Oberthür, 1890)
Conisania luteago (Denis & Schiffermüller, 1775)
Conistra ligula (Esper, 1791)
Conistra rubiginosa (Scopoli, 1763)
Conistra vaccinii (Linnaeus, 1761)
Conistra veronicae (Hübner, 1813)
Conistra erythrocephala (Denis & Schiffermüller, 1775)
Conistra rubiginea (Denis & Schiffermüller, 1775)
Conistra ragusae (Failla-Tedaldi, 1890)
Conistra torrida (Lederer, 1857)
Cornutiplusia circumflexa (Linnaeus, 1767)
Cosmia trapezina (Linnaeus, 1758)
Cosmia diffinis (Linnaeus, 1767)
Cosmia pyralina (Denis & Schiffermüller, 1775)
Cosmia confinis Herrich-Schäffer, 1849
Cosmia affinis (Linnaeus, 1767)
Craniophora ligustri (Denis & Schiffermüller, 1775)
Cryphia amygdalina Boursin, 1963
Cryphia omalosi Svendsen & Fibiger, 1998
Cryphia receptricula (Hübner, 1803)
Cryphia algae (Fabricius, 1775)
Cryphia electra Fibiger, Steiner & Ronkay, 2009
Cryphia ochsi (Boursin, 1940)
Ctenoplusia accentifera (Lefebvre, 1827)
Cucullia celsiae Herrich-Schäffer, 1850
Cucullia calendulae Treitschke, 1835
Cucullia chamomillae (Denis & Schiffermüller, 1775)
Cucullia formosa Rogenhofer, 1860
Cucullia lactucae (Denis & Schiffermüller, 1775)
Cucullia santolinae Rambur, 1834
Cucullia santonici (Hübner, 1813)
Cucullia syrtana Mabille, 1888
Cucullia umbratica (Linnaeus, 1758)
Cucullia blattariae (Esper, 1790)
Cucullia lanceolata (Villers, 1789)
Cucullia lychnitis Rambur, 1833
Cucullia scrophulariae (Denis & Schiffermüller, 1775)
Cucullia verbasci (Linnaeus, 1758)
Dasypolia esseri Fibiger, 1993
Dasypolia templi (Thunberg, 1792)
Deltote bankiana (Fabricius, 1775)
Deltote pygarga (Hufnagel, 1766)
Denticucullus pygmina (Haworth, 1809)
Diachrysia chrysitis (Linnaeus, 1758)
Diachrysia chryson (Esper, 1789)
Diachrysia nadeja (Oberthür, 1880)
Diarsia mendica (Fabricius, 1775)
Dichagyris flammatra (Denis & Schiffermüller, 1775)
Dichagyris musiva (Hübner, 1803)
Dichagyris candelisequa (Denis & Schiffermüller, 1775)
Dichagyris celsicola (Bellier, 1859)
Dichagyris erubescens (Staudinger, 1892)
Dichagyris flavina (Herrich-Schäffer, 1852)
Dichagyris forcipula (Denis & Schiffermüller, 1775)
Dichagyris forficula (Eversmann, 1851)
Dichagyris gracilis (Wagner, 1929)
Dichagyris insula (Fibiger, 1997)
Dichagyris melanura (Kollar, 1846)
Dichagyris nigrescens (Höfner, 1888)
Dichagyris renigera (Hübner, 1808)
Dichagyris rhadamanthys (Reisser, 1958)
Dichagyris signifera (Denis & Schiffermüller, 1775)
Dichagyris soror (Fibiger, 1997)
Dichonia aeruginea (Hübner, 1808)
Dichonia convergens (Denis & Schiffermüller, 1775)
Dicycla oo (Linnaeus, 1758)
Diloba caeruleocephala (Linnaeus, 1758)
Dioszeghyana schmidti (Diószeghy, 1935)
Divaena haywardi (Tams, 1926)
Dryobota labecula (Esper, 1788)
Dryobotodes tenebrosa (Esper, 1789)
Dryobotodes carbonis Wagner, 1931
Dryobotodes eremita (Fabricius, 1775)
Dryobotodes monochroma (Esper, 1790)
Dryobotodes servadeii Parenzan, 1982
Dypterygia scabriuscula (Linnaeus, 1758)
Egira anatolica (M. Hering, 1933)
Egira conspicillaris (Linnaeus, 1758)
Egira tibori Hreblay, 1994
Elaphria venustula (Hübner, 1790)
Enterpia laudeti (Boisduval, 1840)
Epilecta linogrisea (Denis & Schiffermüller, 1775)
Epimecia ustula (Freyer, 1835)
Epipsilia cervantes (Reisser, 1935)
Epipsilia grisescens (Fabricius, 1794)
Episema glaucina (Esper, 1789)
Episema gozmanyi L. Ronkay & Hacker, 1985
Episema korsakovi (Christoph, 1885)
Episema tersa (Denis & Schiffermüller, 1775)
Eremobia ochroleuca (Denis & Schiffermüller, 1775)
Eremohadena chenopodiphaga (Rambur, 1832)
Eucarta amethystina (Hübner, 1803)
Euchalcia chlorocharis (Dufay, 1961)
Euchalcia emichi (Rogenhofer & Mann, 1873)
Euchalcia siderifera (Eversmann, 1846)
Eugnorisma depuncta (Linnaeus, 1761)
Eugnorisma pontica (Staudinger, 1892)
Euplexia lucipara (Linnaeus, 1758)
Eupsilia transversa (Hufnagel, 1766)
Euxoa penelope Fibiger, 1997
Euxoa aquilina (Denis & Schiffermüller, 1775)
Euxoa conspicua (Hübner, 1824)
Euxoa cos (Hübner, 1824)
Euxoa decora (Denis & Schiffermüller, 1775)
Euxoa distinguenda (Lederer, 1857)
Euxoa eruta (Hübner, 1817)
Euxoa glabella Wagner, 1930
Euxoa hastifera (Donzel, 1847)
Euxoa malickyi Varga, 1990
Euxoa montivaga Fibiger, 1997
Euxoa nigricans (Linnaeus, 1761)
Euxoa obelisca (Denis & Schiffermüller, 1775)
Euxoa pareruta Fibiger, Gyulai, Zilli, Yela & Ronkay, 2010
Euxoa segnilis (Duponchel, 1837)
Euxoa temera (Hübner, 1808)
Euxoa vitta (Esper, 1789)
Euxoa derrae Hacker, 1985
Evisa schawerdae Reisser, 1930
Globia sparganii (Esper, 1790)
Gortyna flavago (Denis & Schiffermüller, 1775)
Gortyna moesiaca Herrich-Schäffer, 1849
Gortyna xanthenes Germar, 1842
Griposia aprilina (Linnaeus, 1758)
Griposia pinkeri Kobes, 1973
Griposia wegneri Kobes & Fibiger, 2003
Hada plebeja (Linnaeus, 1761)
Hadena perplexa (Denis & Schiffermüller, 1775)
Hadena silenes (Hübner, 1822)
Hadena syriaca (Osthelder, 1933)
Hadena adriana (Schawerda, 1921)
Hadena albimacula (Borkhausen, 1792)
Hadena caesia (Denis & Schiffermüller, 1775)
Hadena capsincola (Denis & Schiffermüller, 1775)
Hadena clara (Staudinger, 1901)
Hadena compta (Denis & Schiffermüller, 1775)
Hadena confusa (Hufnagel, 1766)
Hadena drenowskii (Rebel, 1930)
Hadena filograna (Esper, 1788)
Hadena gueneei (Staudinger, 1901)
Hadena luteocincta (Rambur, 1834)
Hadena magnolii (Boisduval, 1829)
Hadena persimilis Hacker, 1996
Hadena vulcanica (Turati, 1907)
Hadena wehrlii (Draudt, 1934)
Hadena pumila (Staudinger, 1879)
Hadena tephroleuca (Boisduval, 1833)
Haemerosia renalis (Hübner, 1813)
Haemerosia vassilininei A. Bang-Haas, 1912
Hecatera bicolorata (Hufnagel, 1766)
Hecatera cappa (Hübner, 1809)
Hecatera dysodea (Denis & Schiffermüller, 1775)
Helicoverpa armigera (Hübner, 1808)
Heliothis adaucta Butler, 1878
Heliothis incarnata Freyer, 1838
Heliothis maritima Graslin, 1855
Heliothis nubigera Herrich-Schäffer, 1851
Heliothis peltigera (Denis & Schiffermüller, 1775)
Heliothis viriplaca (Hufnagel, 1766)
Helivictoria victorina (Sodoffsky, 1849)
Helotropha leucostigma (Hübner, 1808)
Heterophysa dumetorum (Geyer, 1834)
Hoplodrina ambigua (Denis & Schiffermüller, 1775)
Hoplodrina blanda (Denis & Schiffermüller, 1775)
Hoplodrina octogenaria (Goeze, 1781)
Hoplodrina respersa (Denis & Schiffermüller, 1775)
Hoplodrina superstes (Ochsenheimer, 1816)
Janthinea friwaldskii (Duponchel, 1835)
Jodia croceago (Denis & Schiffermüller, 1775)
Lacanobia contigua (Denis & Schiffermüller, 1775)
Lacanobia suasa (Denis & Schiffermüller, 1775)
Lacanobia blenna (Hübner, 1824)
Lacanobia oleracea (Linnaeus, 1758)
Lacanobia splendens (Hübner, 1808)
Lacanobia w-latinum (Hufnagel, 1766)
Lamprosticta culta (Denis & Schiffermüller, 1775)
Lasionycta proxima (Hübner, 1809)
Lenisa geminipuncta (Haworth, 1809)
Leucania loreyi (Duponchel, 1827)
Leucania comma (Linnaeus, 1761)
Leucania herrichi Herrich-Schäffer, 1849
Leucania obsoleta (Hübner, 1803)
Leucania palaestinae Staudinger, 1897
Leucania punctosa (Treitschke, 1825)
Leucania putrescens (Hübner, 1824)
Leucania zeae (Duponchel, 1827)
Leucochlaena muscosa (Staudinger, 1892)
Lithophane ledereri (Staudinger, 1892)
Lithophane merckii (Rambur, 1832)
Lithophane ornitopus (Hufnagel, 1766)
Lithophane semibrunnea (Haworth, 1809)
Lithophane socia (Hufnagel, 1766)
Lithophane lapidea (Hübner, 1808)
Litoligia literosa (Haworth, 1809)
Lophoterges hoerhammeri (Wagner, 1931)
Luperina dumerilii (Duponchel, 1826)
Luperina rubella (Duponchel, 1835)
Lycophotia porphyrea (Denis & Schiffermüller, 1775)
Macdunnoughia confusa (Stephens, 1850)
Mamestra brassicae (Linnaeus, 1758)
Maraschia grisescens Osthelder, 1933
Megalodes eximia (Freyer, 1845)
Meganephria bimaculosa (Linnaeus, 1767)
Mesapamea secalella Remm, 1983
Mesapamea secalis (Linnaeus, 1758)
Mesogona acetosellae (Denis & Schiffermüller, 1775)
Mesogona oxalina (Hübner, 1803)
Mesoligia furuncula (Denis & Schiffermüller, 1775)
Mniotype adusta (Esper, 1790)
Mniotype satura (Denis & Schiffermüller, 1775)
Mniotype solieri (Boisduval, 1829)
Mormo maura (Linnaeus, 1758)
Mythimna riparia (Rambur, 1829)
Mythimna albipuncta (Denis & Schiffermüller, 1775)
Mythimna congrua (Hübner, 1817)
Mythimna ferrago (Fabricius, 1787)
Mythimna l-album (Linnaeus, 1767)
Mythimna umbrigera (Saalmüller, 1891)
Mythimna languida (Walker, 1858)
Mythimna conigera (Denis & Schiffermüller, 1775)
Mythimna impura (Hübner, 1808)
Mythimna pallens (Linnaeus, 1758)
Mythimna straminea (Treitschke, 1825)
Mythimna turca (Linnaeus, 1761)
Mythimna vitellina (Hübner, 1808)
Mythimna prominens (Walker, 1856)
Mythimna unipuncta (Haworth, 1809)
Mythimna alopecuri (Boisduval, 1840)
Mythimna andereggii (Boisduval, 1840)
Mythimna sicula (Treitschke, 1835)
Naenia typica (Linnaeus, 1758)
Noctua comes Hübner, 1813
Noctua fimbriata (Schreber, 1759)
Noctua interjecta Hübner, 1803
Noctua interposita (Hübner, 1790)
Noctua janthina Denis & Schiffermüller, 1775
Noctua orbona (Hufnagel, 1766)
Noctua pronuba (Linnaeus, 1758)
Noctua tertia Mentzer & al., 1991
Noctua tirrenica Biebinger, Speidel & Hanigk, 1983
Nonagria typhae (Thunberg, 1784)
Nyctobrya amasina Draudt, 1931
Ochropleura leucogaster (Freyer, 1831)
Ochropleura plecta (Linnaeus, 1761)
Oligia latruncula (Denis & Schiffermüller, 1775)
Oligia strigilis (Linnaeus, 1758)
Olivenebula subsericata (Herrich-Schäffer, 1861)
Omphalophana anatolica (Lederer, 1857)
Omphalophana antirrhinii (Hübner, 1803)
Opigena polygona (Denis & Schiffermüller, 1775)
Oria musculosa (Hübner, 1808)
Orthosia cerasi (Fabricius, 1775)
Orthosia cruda (Denis & Schiffermüller, 1775)
Orthosia dalmatica (Wagner, 1909)
Orthosia miniosa (Denis & Schiffermüller, 1775)
Orthosia incerta (Hufnagel, 1766)
Orthosia gothica (Linnaeus, 1758)
Oxytripia orbiculosa (Esper, 1799)
Pachetra sagittigera (Hufnagel, 1766)
Pamparama acuta (Freyer, 1838)
Panchrysia v-argenteum (Esper, 1798)
Panemeria tenebrata (Scopoli, 1763)
Panemeria tenebromorpha Rákosy, Hentscholek & Huber, 1996
Panolis flammea (Denis & Schiffermüller, 1775)
Panthea coenobita (Esper, 1785)
Paranataelia whitei (Rebel, 1906)
Peridroma saucia (Hübner, 1808)
Perigrapha i-cinctum (Denis & Schiffermüller, 1775)
Perigrapha rorida Frivaldszky, 1835
Perigrapha sellingi Fibiger, Hacker & Moberg, 1996
Periphanes delphinii (Linnaeus, 1758)
Philareta treitschkei (Frivaldszky, 1835)
Phlogophora meticulosa (Linnaeus, 1758)
Phlogophora scita (Hübner, 1790)
Photedes fluxa (Hübner, 1809)
Photedes morrisii (Dale, 1837)
Phyllophila obliterata (Rambur, 1833)
Plusia festucae (Linnaeus, 1758)
Polia bombycina (Hufnagel, 1766)
Polia serratilinea Ochsenheimer, 1816
Polymixis bischoffii (Herrich-Schäffer, 1850)
Polymixis culoti (Schawerda, 1921)
Polymixis leuconota (Frivaldszky, 1841)
Polymixis manisadijani (Staudinger, 1881)
Polymixis polymita (Linnaeus, 1761)
Polymixis rufocincta (Geyer, 1828)
Polymixis serpentina (Treitschke, 1825)
Polyphaenis sericata (Esper, 1787)
Praestilbia armeniaca Staudinger, 1892
Protoschinia scutosa (Denis & Schiffermüller, 1775)
Pseudozarba bipartita (Herrich-Schäffer, 1850)
Pyrrhia purpura (Hübner, 1817)
Pyrrhia umbra (Hufnagel, 1766)
Pyrrhia victorina (Sodoffsky, 1849)
Rhizedra lutosa (Hübner, 1803)
Rhyacia arenacea (Hampson, 1907)
Rhyacia helvetina (Boisduval, 1833)
Rhyacia lucipeta (Denis & Schiffermüller, 1775)
Rhyacia nyctymerides (O. Bang-Haas, 1922)
Rhyacia simulans (Hufnagel, 1766)
Rileyiana fovea (Treitschke, 1825)
Schinia cognata (Freyer, 1833)
Scotochrosta pulla (Denis & Schiffermüller, 1775)
Sesamia cretica Lederer, 1857
Sesamia nonagrioides Lefebvre, 1827
Sideridis implexa (Hübner, 1809)
Sideridis reticulata (Goeze, 1781)
Sideridis lampra (Schawerda, 1913)
Simyra albovenosa (Goeze, 1781)
Simyra dentinosa Freyer, 1838
Simyra nervosa (Denis & Schiffermüller, 1775)
Spaelotis ravida (Denis & Schiffermüller, 1775)
Spaelotis senna (Freyer, 1829)
Spodoptera cilium Guenée, 1852
Spodoptera exigua (Hübner, 1808)
Spodoptera littoralis (Boisduval, 1833)
Standfussiana lucernea (Linnaeus, 1758)
Standfussiana nictymera (Boisduval, 1834)
Standfussiana sturanyi (Rebel, 1906)
Stilbina olympica Dierl & Povolný, 1970
Subacronicta megacephala (Denis & Schiffermüller, 1775)
Teinoptera lunaki (Boursin, 1940)
Teinoptera oliva (Staudinger, 1895)
Teinoptera olivina (Herrich-Schäffer, 1852)
Thalpophila matura (Hufnagel, 1766)
Tholera cespitis (Denis & Schiffermüller, 1775)
Tholera decimalis (Poda, 1761)
Thysanoplusia circumscripta (Freyer, 1831)
Thysanoplusia daubei (Boisduval, 1840)
Thysanoplusia orichalcea (Fabricius, 1775)
Tiliacea aurago (Denis & Schiffermüller, 1775)
Tiliacea citrago (Linnaeus, 1758)
Tiliacea cypreago (Hampson, 1906)
Tiliacea sulphurago (Denis & Schiffermüller, 1775)
Trachea atriplicis (Linnaeus, 1758)
Trichoplusia ni (Hübner, 1803)
Trigonophora flammea (Esper, 1785)
Tyta luctuosa (Denis & Schiffermüller, 1775)
Ulochlaena hirta (Hübner, 1813)
Valeria oleagina (Denis & Schiffermüller, 1775)
Xanthia gilvago (Denis & Schiffermüller, 1775)
Xanthia icteritia (Hufnagel, 1766)
Xanthia castanea Osthelder, 1933
Xanthia togata (Esper, 1788)
Xanthodes albago (Fabricius, 1794)
Xestia ashworthii (Doubleday, 1855)
Xestia c-nigrum (Linnaeus, 1758)
Xestia triangulum (Hufnagel, 1766)
Xestia baja (Denis & Schiffermüller, 1775)
Xestia castanea (Esper, 1798)
Xestia cohaesa (Herrich-Schäffer, 1849)
Xestia ochreago (Hübner, 1809)
Xestia palaestinensis (Kalchberg, 1897)
Xestia stigmatica (Hübner, 1813)
Xestia xanthographa (Denis & Schiffermüller, 1775)
Xylena exsoleta (Linnaeus, 1758)
Xylena lunifera Warren, 1910
Xylena vetusta (Hübner, 1813)

Nolidae
Bena bicolorana (Fuessly, 1775)
Earias clorana (Linnaeus, 1761)
Earias insulana (Boisduval, 1833)
Earias vernana (Fabricius, 1787)
Garella nilotica (Rogenhofer, 1882)
Meganola albula (Denis & Schiffermüller, 1775)
Meganola gigantula (Staudinger, 1879)
Meganola impura (Mann, 1862)
Meganola kolbi (Daniel, 1935)
Meganola togatulalis (Hübner, 1796)
Nola aerugula (Hübner, 1793)
Nola chlamitulalis (Hübner, 1813)
Nola confusalis (Herrich-Schäffer, 1847)
Nola cucullatella (Linnaeus, 1758)
Nola harouni (Wiltshire, 1951)
Nola squalida Staudinger, 1871
Nola subchlamydula Staudinger, 1871
Nycteola asiatica (Krulikovsky, 1904)
Nycteola columbana (Turner, 1925)
Nycteola revayana (Scopoli, 1772)
Nycteola siculana (Fuchs, 1899)
Pseudoips prasinana (Linnaeus, 1758)

Notodontidae
Cerura vinula (Linnaeus, 1758)
Clostera anachoreta (Denis & Schiffermüller, 1775)
Clostera anastomosis (Linnaeus, 1758)
Clostera curtula (Linnaeus, 1758)
Clostera pigra (Hufnagel, 1766)
Dicranura ulmi (Denis & Schiffermüller, 1775)
Drymonia dodonaea (Denis & Schiffermüller, 1775)
Drymonia querna (Denis & Schiffermüller, 1775)
Drymonia ruficornis (Hufnagel, 1766)
Drymonia velitaris (Hufnagel, 1766)
Furcula bifida (Brahm, 1787)
Furcula furcula (Clerck, 1759)
Harpyia milhauseri (Fabricius, 1775)
Notodonta torva (Hübner, 1803)
Notodonta tritophus (Denis & Schiffermüller, 1775)
Notodonta ziczac (Linnaeus, 1758)
Paradrymonia vittata (Staudinger, 1892)
Peridea korbi (Rebel, 1918)
Phalera bucephala (Linnaeus, 1758)
Phalera bucephaloides (Ochsenheimer, 1810)
Pterostoma palpina (Clerck, 1759)
Ptilodon capucina (Linnaeus, 1758)
Rhegmatophila alpina (Bellier, 1881)
Spatalia argentina (Denis & Schiffermüller, 1775)
Stauropus fagi (Linnaeus, 1758)
Thaumetopoea pityocampa (Denis & Schiffermüller, 1775)
Thaumetopoea processionea (Linnaeus, 1758)
Thaumetopoea solitaria (Freyer, 1838)

Oecophoridae
Batia lambdella (Donovan, 1793)
Batia lunaris (Haworth, 1828)
Batia lutosella Jäckh, 1972
Batia samosella Sutter, 2003
Borkhausenia minutella (Linnaeus, 1758)
Crossotocera wagnerella Zerny, 1930
Dasycera imitatrix Zeller, 1847
Dasycera krueperella Staudinger, 1870
Dasycera oliviella (Fabricius, 1794)
Decantha borkhausenii (Zeller, 1839)
Denisia augustella (Hübner, 1796)
Denisia rhaetica (Frey, 1856)
Endrosis sarcitrella (Linnaeus, 1758)
Epicallima formosella (Denis & Schiffermüller, 1775)
Epicallima icterinella (Mann, 1867)
Esperia sulphurella (Fabricius, 1775)
Fabiola pokornyi (Nickerl, 1864)
Harpella forficella (Scopoli, 1763)
Holoscolia huebneri Koçak, 1980
Holoscolia majorella Rebel, 1902
Oecophora bractella (Linnaeus, 1758)
Pleurota marginella (Denis & Schiffermüller, 1775)
Pleurota arduella Rebel, 1906
Pleurota aristella (Linnaeus, 1767)
Pleurota bicostella (Clerck, 1759)
Pleurota chalepensis Rebel, 1917
Pleurota contristatella Mann, 1867
Pleurota ericella (Duponchel, 1839)
Pleurota filigerella Mann, 1867
Pleurota metricella (Zeller, 1847)
Pleurota nitens Staudinger, 1870
Pleurota planella (Staudinger, 1859)
Pleurota protasella Staudinger, 1883
Pleurota pungitiella Herrich-Schäffer, 1854
Pleurota pyropella (Denis & Schiffermüller, 1775)
Pleurota tristatella Staudinger, 1870
Pleurota vittalba Staudinger, 1871
Pleurota creticella Rebel, 1916
Schiffermuelleria schaefferella (Linnaeus, 1758)

Opostegidae
Opostega salaciella (Treitschke, 1833)
Opostega spatulella Herrich-Schäffer, 1855
Opostegoides menthinella (Mann, 1855)
Pseudopostega crepusculella (Zeller, 1839)

Peleopodidae
Carcina quercana (Fabricius, 1775)

Plutellidae
Eidophasia messingiella (Fischer von Röslerstamm, 1840)
Eidophasia syenitella Herrich-Schäffer, 1854
Plutella xylostella (Linnaeus, 1758)
Rhigognostis annulatella (Curtis, 1832)
Rhigognostis wolfschlaegeri (Rebel, 1940)

Praydidae
Prays citri (Millière, 1873)
Prays oleae (Bernard, 1788)

Prodoxidae
Lampronia rupella (Denis & Schiffermüller, 1775)

Psychidae
Acanthopsyche ecksteini (Lederer, 1855)
Anaproutia reticulatella (Bruand, 1853)
Apterona helicinella (Herrich-Schäffer, 1846)
Apterona helicoidella (Vallot, 1827)
Bijugis bombycella (Denis & Schiffermüller, 1775)
Bijugis pectinella (Denis & Schiffermüller, 1775)
Canephora hirsuta (Poda, 1761)
Dahlica achajensis (Sieder, 1966)
Dahlica pseudoachajensis (Stengel, 1990)
Dahlica thessaliensis Weidlich, 2008
Dahlica triquetrella (Hübner, 1813)
Eochorica balcanica (Rebel, 1919)
Epichnopterix plumella (Denis & Schiffermüller, 1775)
Epichnopterix sieboldi (Reutti, 1853)
Eumasia parietariella (Heydenreich, 1851)
Heliopsychidea graecella (Millière, 1866)
Loebelia crassicornis (Staudinger, 1870)
Luffia lapidella (Goeze, 1783)
Megalophanes viciella (Denis & Schiffermüller, 1775)
Montanima predotae Sieder, 1949
Narycia astrella (Herrich-Schäffer, 1851)
Oiketicoides febretta (Boyer de Fonscolombe, 1835)
Oiketicoides lutea (Staudinger, 1870)
Pachythelia villosella (Ochsenheimer, 1810)
Peloponnesia culminella Sieder, 1961
Peloponnesia glaphyrella (Rebel, 1906)
Peloponnesia haettenschwileri Hauser, 1996
Penestoglossa dardoinella (Millière, 1863)
Phalacropterix praecellens (Staudinger, 1870)
Pseudobankesia arahova Stengel, 1990
Pseudobankesia darwini Stengel, 1990
Pseudobankesia hauseriella Henderickx, 1998
Psyche casta (Pallas, 1767)
Psyche crassiorella Bruand, 1851
Ptilocephala albida (Esper, 1786)
Reisseronia magna Hättenschwiler, 1982
Reisseronia malickyi Hauser, 1996
Reisseronia nigrociliella (Rebel, 1934)
Reisseronia pusilella (Rebel, 1941)
Stichobasis helicinoides (Heylaerts, 1879)
Typhonia christenseni Hättenschwiler, 1990
Typhonia ciliaris (Ochsenheimer, 1810)

Pterolonchidae
Pterolonche albescens Zeller, 1847
Pterolonche inspersa Staudinger, 1859

Pterophoridae
Adaina microdactyla (Hübner, 1813)
Agdistis adactyla (Hübner, 1819)
Agdistis bennetii (Curtis, 1833)
Agdistis bigoti Arenberger, 1976
Agdistis cypriota Arenberger, 1983
Agdistis frankeniae (Zeller, 1847)
Agdistis hartigi Arenberger, 1973
Agdistis heydeni (Zeller, 1852)
Agdistis hulli Gielis, 1998
Agdistis meridionalis (Zeller, 1847)
Agdistis paralia (Zeller, 1847)
Agdistis satanas Millière, 1875
Agdistis tamaricis (Zeller, 1847)
Amblyptilia acanthadactyla (Hübner, 1813)
Calyciphora albodactylus (Fabricius, 1794)
Calyciphora homoiodactyla (Kasy, 1960)
Calyciphora nephelodactyla (Eversmann, 1844)
Capperia celeusi (Frey, 1886)
Capperia fusca (O. Hofmann, 1898)
Capperia hellenica Adamczewski, 1951
Capperia maratonica Adamczewski, 1951
Capperia marginellus (Zeller, 1847)
Capperia polonica Adamczewski, 1951
Capperia trichodactyla (Denis & Schiffermüller, 1775)
Capperia washbourni Adamczewski, 1951
Cnaemidophorus rhododactyla (Denis & Schiffermüller, 1775)
Crombrugghia distans (Zeller, 1847)
Crombrugghia laetus (Zeller, 1847)
Crombrugghia tristis (Zeller, 1841)
Emmelina monodactyla (Linnaeus, 1758)
Gillmeria pallidactyla (Haworth, 1811)
Hellinsia carphodactyla (Hübner, 1813)
Hellinsia distinctus (Herrich-Schäffer, 1855)
Hellinsia inulae (Zeller, 1852)
Hellinsia pectodactylus (Staudinger, 1859)
Hellinsia tephradactyla (Hübner, 1813)
Merrifieldia baliodactylus (Zeller, 1841)
Merrifieldia leucodactyla (Denis & Schiffermüller, 1775)
Merrifieldia malacodactylus (Zeller, 1847)
Merrifieldia tridactyla (Linnaeus, 1758)
Oidaematophorus lithodactyla (Treitschke, 1833)
Oxyptilus ericetorum (Stainton, 1851)
Oxyptilus parvidactyla (Haworth, 1811)
Paracapperia anatolicus (Caradja, 1920)
Platyptilia farfarellus Zeller, 1867
Platyptilia gonodactyla (Denis & Schiffermüller, 1775)
Platyptilia tesseradactyla (Linnaeus, 1761)
Procapperia linariae (Chrétien, 1922)
Pselnophorus heterodactyla (Müller, 1764)
Pterophorus ischnodactyla (Treitschke, 1835)
Pterophorus pentadactyla (Linnaeus, 1758)
Puerphorus olbiadactylus (Millière, 1859)
Stangeia siceliota (Zeller, 1847)
Stenoptilia aridus (Zeller, 1847)
Stenoptilia bipunctidactyla (Scopoli, 1763)
Stenoptilia coprodactylus (Stainton, 1851)
Stenoptilia elkefi Arenberger, 1984
Stenoptilia lucasi Arenberger, 1990
Stenoptilia parnasia Arenberger, 1986
Stenoptilia pterodactyla (Linnaeus, 1761)
Stenoptilia stigmatodactylus (Zeller, 1852)
Stenoptilia zophodactylus (Duponchel, 1840)
Stenoptilodes taprobanes (Felder & Rogenhofer, 1875)
Wheeleria ivae (Kasy, 1960)
Wheeleria lyrae (Arenberger, 1983)
Wheeleria obsoletus (Zeller, 1841)
Wheeleria phlomidis (Staudinger, 1871)
Wheeleria spilodactylus (Curtis, 1827)

Pyralidae
Acrobasis advenella (Zincken, 1818)
Acrobasis bithynella Zeller, 1848
Acrobasis centunculella (Mann, 1859)
Acrobasis consociella (Hübner, 1813)
Acrobasis dulcella (Zeller, 1848)
Acrobasis glaucella Staudinger, 1859
Acrobasis legatea (Haworth, 1811)
Acrobasis marmorea (Haworth, 1811)
Acrobasis obliqua (Zeller, 1847)
Acrobasis obtusella (Hübner, 1796)
Acrobasis repandana (Fabricius, 1798)
Acrobasis sodalella Zeller, 1848
Acrobasis suavella (Zincken, 1818)
Acrobasis tumidana (Denis & Schiffermüller, 1775)
Aglossa asiatica Erschoff, 1872
Aglossa caprealis (Hübner, 1809)
Aglossa pinguinalis (Linnaeus, 1758)
Aglossa signicostalis Staudinger, 1871
Alophia combustella (Herrich-Schäffer, 1855)
Ancylodes pallens Ragonot, 1887
Ancylosis cinnamomella (Duponchel, 1836)
Ancylosis convexella (Lederer, 1855)
Ancylosis hellenica (Staudinger, 1871)
Ancylosis oblitella (Zeller, 1848)
Ancylosis pallida (Staudinger, 1870)
Ancylosis roscidella (Eversmann, 1844)
Ancylosis sareptalla (Herrich-Schäffer, 1861)
Aphomia sociella (Linnaeus, 1758)
Aphomia unicolor (Staudinger, 1880)
Aphomia zelleri de Joannis, 1932
Apomyelois ceratoniae (Zeller, 1839)
Apomyelois cognata (Staudinger, 1871)
Asalebria florella (Mann, 1862)
Bostra obsoletalis (Mann, 1884)
Bradyrrhoa cantenerella (Duponchel, 1837)
Bradyrrhoa confiniella Zeller, 1848
Bradyrrhoa gilveolella (Treitschke, 1832)
Cadra abstersella (Zeller, 1847)
Cadra calidella (Guenée, 1845)
Cadra cautella (Walker, 1863)
Cadra delattinella Roesler, 1965
Cadra figulilella (Gregson, 1871)
Cadra furcatella (Herrich-Schäffer, 1849)
Catastia marginea (Denis & Schiffermüller, 1775)
Corcyra cephalonica (Stainton, 1866)
Cryptoblabes gnidiella (Millière, 1867)
Delplanqueia dilutella (Denis & Schiffermüller, 1775)
Denticera divisella (Duponchel, 1842)
Dioryctria abietella (Denis & Schiffermüller, 1775)
Dioryctria mendacella (Staudinger, 1859)
Dioryctria pineae (Staudinger, 1859)
Dioryctria resiniphila Segerer & Pröse, 1997
Dioryctria sylvestrella (Ratzeburg, 1840)
Eccopisa effractella Zeller, 1848
Elegia fallax (Staudinger, 1881)
Elegia similella (Zincken, 1818)
Ematheudes punctella (Treitschke, 1833)
Endotricha flammealis (Denis & Schiffermüller, 1775)
Ephestia cypriusella (Roesler, 1965)
Ephestia disparella Hampson, 1901
Ephestia elutella (Hübner, 1796)
Ephestia kuehniella Zeller, 1879
Ephestia unicolorella Staudinger, 1881
Ephestia welseriella (Zeller, 1848)
Epidauria strigosa (Staudinger, 1879)
Epidauria transversariella (Zeller, 1848)
Epischnia adultella Zeller, 1848
Epischnia cretaciella Mann, 1869
Epischnia illotella Zeller, 1839
Epischnia leucoloma Herrich-Schäffer, 1849
Epischnia prodromella (Hübner, 1799)
Etiella zinckenella (Treitschke, 1832)
Eurhodope cirrigerella (Zincken, 1818)
Eurhodope incompta (Zeller, 1847)
Eurhodope rosella (Scopoli, 1763)
Euzophera bigella (Zeller, 1848)
Euzophera cinerosella (Zeller, 1839)
Euzophera formosella (Rebel, 1910)
Euzophera fuliginosella (Heinemann, 1865)
Euzophera lunulella (O. Costa, 1836)
Euzophera nessebarella Soffner, 1962
Euzophera osseatella (Treitschke, 1832)
Euzophera pinguis (Haworth, 1811)
Euzophera pulchella Ragonot, 1887
Euzophera umbrosella (Staudinger, 1879)
Euzopherodes lutisignella (Mann, 1869)
Euzopherodes vapidella (Mann, 1857)
Faveria dionysia (Zeller, 1846)
Galleria mellonella (Linnaeus, 1758)
Gymnancyla canella (Denis & Schiffermüller, 1775)
Gymnancyla hornigii (Lederer, 1852)
Homoeosoma nebulella (Denis & Schiffermüller, 1775)
Homoeosoma nimbella (Duponchel, 1837)
Homoeosoma sinuella (Fabricius, 1794)
Hypochalcia ahenella (Denis & Schiffermüller, 1775)
Hypochalcia lignella (Hübner, 1796)
Hypotia corticalis (Denis & Schiffermüller, 1775)
Hypsopygia costalis (Fabricius, 1775)
Hypsopygia fulvocilialis (Duponchel, 1834)
Hypsopygia glaucinalis (Linnaeus, 1758)
Hypsopygia incarnatalis (Zeller, 1847)
Hypsopygia rubidalis (Denis & Schiffermüller, 1775)
Hypsotropa limbella Zeller, 1848
Insalebria serraticornella (Zeller, 1839)
Isauria dilucidella (Duponchel, 1836)
Keradere lepidella (Ragonot, 1887)
Keradere tengstroemiella (Erschoff, 1874)
Khorassania compositella (Treitschke, 1835)
Klimeschiola philetella (Rebel, 1916)
Lamoria anella (Denis & Schiffermüller, 1775)
Lamoria ruficostella Ragonot, 1888
Loryma egregialis (Herrich-Schäffer, 1838)
Matilella fusca (Haworth, 1811)
Megasis rippertella (Zeller, 1839)
Metallosticha argyrogrammos (Zeller, 1847)
Metallostichodes nigrocyanella (Constant, 1865)
Michaeliodes friesei Roesler, 1969
Moitrelia obductella (Zeller, 1839)
Myelois circumvoluta (Fourcroy, 1785)
Myelois pluripunctella Ragonot, 1887
Neurotomia coenulentella (Zeller, 1846)
Nyctegretis lineana (Scopoli, 1786)
Nyctegretis triangulella Ragonot, 1901
Oncocera semirubella (Scopoli, 1763)
Oxybia transversella (Duponchel, 1836)
Pempelia alpigenella (Duponchel, 1836)
Pempelia amoenella (Zeller, 1848)
Pempelia johannella (Caradja, 1916)
Pempelia palumbella (Denis & Schiffermüller, 1775)
Pempeliella ornatella (Denis & Schiffermüller, 1775)
Pempeliella sororculella (Ragonot, 1887)
Pempeliella sororiella Zeller, 1839
Phycita coronatella (Guenée, 1845)
Phycita diaphana (Staudinger, 1870)
Phycita meliella (Mann, 1864)
Phycita metzneri (Zeller, 1846)
Phycita pedisignella Ragonot, 1887
Phycita poteriella (Zeller, 1846)
Phycita roborella (Denis & Schiffermüller, 1775)
Phycitodes albatella (Ragonot, 1887)
Phycitodes binaevella (Hübner, 1813)
Phycitodes inquinatella (Ragonot, 1887)
Phycitodes lacteella (Rothschild, 1915)
Phycitodes saxicola (Vaughan, 1870)
Plodia interpunctella (Hübner, 1813)
Polyocha venosa (Zeller, 1847)
Psorosa dahliella (Treitschke, 1832)
Pterothrixidia rufella (Duponchel, 1836)
Pyralis farinalis (Linnaeus, 1758)
Pyralis regalis Denis & Schiffermüller, 1775
Raphimetopus ablutella (Zeller, 1839)
Sciota hostilis (Stephens, 1834)
Sciota imperialella (Ragonot, 1887)
Selagia argyrella (Denis & Schiffermüller, 1775)
Selagia spadicella (Hübner, 1796)
Selagia subochrella (Herrich-Schäffer, 1849)
Seleucia pectinella (Chrétien, 1911)
Seleucia semirosella Ragonot, 1887
Stemmatophora brunnealis (Treitschke, 1829)
Stemmatophora combustalis (Fischer v. Röslerstamm, 1842)
Stemmatophora honestalis (Treitschke, 1829)
Synaphe moldavica (Esper, 1794)
Synaphe punctalis (Fabricius, 1775)
Synoria antiquella (Herrich-Schäffer, 1855)
Trachonitis cristella (Denis & Schiffermüller, 1775)
Tretopteryx pertusalis (Geyer, 1832)
Zophodia grossulariella (Hübner, 1809)

Saturniidae
Aglia tau (Linnaeus, 1758)
Saturnia pavoniella (Scopoli, 1763)
Saturnia spini (Denis & Schiffermüller, 1775)
Saturnia caecigena Kupido, 1825
Saturnia pyri (Denis & Schiffermüller, 1775)

Scythrididae
Enolmis desidella (Lederer, 1855)
Episcythris triangulella (Ragonot, 1874)
Scythris aerariella (Herrich-Schäffer, 1855)
Scythris albidella (Stainton, 1867)
Scythris albostriata Hannemann, 1961
Scythris ambustella Bengtsson, 1997
Scythris anomaloptera (Staudinger, 1880)
Scythris apicistrigella (Staudinger, 1870)
Scythris braschiella (O. Hofmann, 1897)
Scythris clavella (Zeller, 1855)
Scythris confluens (Staudinger, 1870)
Scythris crassiuscula (Herrich-Schäffer, 1855)
Scythris crypta Hannemann, 1961
Scythris cuspidella (Denis & Schiffermüller, 1775)
Scythris cycladeae Jäckh, 1978
Scythris eberhardi Bengtsson, 1997
Scythris fallacella (Schläger, 1847)
Scythris flavilaterella (Fuchs, 1886)
Scythris fuscoaenea (Haworth, 1828)
Scythris gravatella (Zeller, 1847)
Scythris hungaricella Rebel, 1917
Scythris inclusella Lederer, 1855
Scythris inertella (Zeller, 1855)
Scythris jaeckhi Bengtsson, 1989
Scythris lafauryi Passerin d'Entrèves, 1986
Scythris laminella (Denis & Schiffermüller, 1775)
Scythris limbella (Fabricius, 1775)
Scythris mus Walsingham, 1898
Scythris obscurella (Scopoli, 1763)
Scythris parnassiae Bengtsson, 1997
Scythris pascuella (Zeller, 1855)
Scythris paullella (Herrich-Schäffer, 1855)
Scythris platypyga (Staudinger, 1880)
Scythris pudorinella (Möschler, 1866)
Scythris punctivittella (O. Costa, 1836)
Scythris scopolella (Linnaeus, 1767)
Scythris seliniella (Zeller, 1839)
Scythris siccella (Zeller, 1839)
Scythris similis Hannemann, 1961
Scythris skulei Bengtsson, 1997
Scythris subaerariella (Stainton, 1867)
Scythris subschleichiella Hannemann, 1961
Scythris tabescentella (Staudinger, 1880)
Scythris tabidella (Herrich-Schäffer, 1855)
Scythris taygeticola Scholz, 1997
Scythris tenuivittella (Stainton, 1867)
Scythris tergestinella (Zeller, 1855)
Scythris tributella (Zeller, 1847)
Scythris vittella (O. Costa, 1834)

Sesiidae
Bembecia albanensis (Rebel, 1918)
Bembecia blanka Špatenka, 2001
Bembecia fokidensis Toševski, 1991
Bembecia ichneumoniformis (Denis & Schiffermüller, 1775)
Bembecia lomatiaeformis (Lederer, 1853)
Bembecia megillaeformis (Hübner, 1813)
Bembecia pavicevici Toševski, 1989
Bembecia priesneri Kallies, Petersen & Riefenstahl, 1998
Bembecia puella Z. Laštůvka, 1989
Bembecia sanguinolenta (Lederer, 1853)
Bembecia scopigera (Scopoli, 1763)
Bembecia uroceriformis (Treitschke, 1834)
Chamaesphecia aerifrons (Zeller, 1847)
Chamaesphecia albiventris (Lederer, 1853)
Chamaesphecia alysoniformis (Herrich-Schäffer, 1846)
Chamaesphecia anatolica Schwingenschuss, 1938
Chamaesphecia annellata (Zeller, 1847)
Chamaesphecia astatiformis (Herrich-Schäffer, 1846)
Chamaesphecia bibioniformis (Esper, 1800)
Chamaesphecia chalciformis (Esper, 1804)
Chamaesphecia doleriformis (Herrich-Schäffer, 1846)
Chamaesphecia dumonti Le Cerf, 1922
Chamaesphecia empiformis (Esper, 1783)
Chamaesphecia gorbunovi Špatenka, 1992
Chamaesphecia masariformis (Ochsenheimer, 1808)
Chamaesphecia minoica Bartsch & Pühringer, 2005
Chamaesphecia nigrifrons (Le Cerf, 1911)
Chamaesphecia proximata (Staudinger, 1891)
Chamaesphecia schmidtiiformis (Freyer, 1836)
Chamaesphecia tenthrediniformis (Denis & Schiffermüller, 1775)
Chamaesphecia thracica Z. Laštůvka, 1983
Osminia fenusaeformis (Herrich-Schäffer, 1852)
Paranthrene insolitus Le Cerf, 1914
Paranthrene tabaniformis (Rottemburg, 1775)
Pennisetia bohemica Kralíček & Povolný, 1974
Pennisetia hylaeiformis (Laspeyres, 1801)
Pyropteron affinis (Staudinger, 1856)
Pyropteron leucomelaena (Zeller, 1847)
Pyropteron minianiformis (Freyer, 1843)
Pyropteron muscaeformis (Esper, 1783)
Pyropteron triannuliformis (Freyer, 1843)
Pyropteron umbrifera (Staudinger, 1870)
Sesia apiformis (Clerck, 1759)
Sesia pimplaeformis Oberthür, 1872
Synanthedon andrenaeformis (Laspeyres, 1801)
Synanthedon cephiformis (Ochsenheimer, 1808)
Synanthedon conopiformis (Esper, 1782)
Synanthedon culiciformis (Linnaeus, 1758)
Synanthedon formicaeformis (Esper, 1783)
Synanthedon geranii Kallies, 1997
Synanthedon loranthi (Kralíček, 1966)
Synanthedon mesiaeformis (Herrich-Schäffer, 1846)
Synanthedon myopaeformis (Borkhausen, 1789)
Synanthedon rubiana Kallies, Petersen & Riefenstahl, 1998
Synanthedon spuleri (Fuchs, 1908)
Synanthedon stomoxiformis (Hübner, 1790)
Synanthedon tipuliformis (Clerck, 1759)
Synanthedon vespiformis (Linnaeus, 1761)
Tinthia brosiformis (Hübner, 1813)
Tinthia hoplisiformis (Mann, 1864)
Tinthia myrmosaeformis (Herrich-Schäffer, 1846)
Tinthia tineiformis (Esper, 1789)

Sphingidae
Acherontia atropos (Linnaeus, 1758)
Agrius convolvuli (Linnaeus, 1758)
Daphnis nerii (Linnaeus, 1758)
Deilephila elpenor (Linnaeus, 1758)
Deilephila porcellus (Linnaeus, 1758)
Dolbina elegans A.
Bang-Haas, 1912 Hemaris croatica (Esper, 1800) Hemaris fuciformis (Linnaeus, 1758) Hemaris tityus (Linnaeus, 1758) Hippotion celerio (Linnaeus, 1758) Hyles cretica Eitschberger, Danner & Surholt, 1998 Hyles euphorbiae (Linnaeus, 1758) Hyles hippophaes (Esper, 1789) Hyles livornica (Esper, 1780) Hyles nicaea (de Prunner, 1798) Hyles vespertilio (Esper, 1780) Laothoe populi (Linnaeus, 1758) Macroglossum stellatarum (Linnaeus, 1758) Marumba quercus (Denis & Schiffermüller, 1775) Mimas tiliae (Linnaeus, 1758) Proserpinus proserpina (Pallas, 1772) Rethera komarovi (Christoph, 1885) Smerinthus ocellata (Linnaeus, 1758) Sphingoneopsis gorgoniades (Hübner, 1819) Sphinx ligustri Linnaeus, 1758 Sphinx pinastri Linnaeus, 1758 Theretra alecto (Linnaeus, 1758) Stathmopodidae Neomariania partinicensis (Rebel, 1937) Tortilia graeca Kasy, 1981 Thyrididae Thyris fenestrella (Scopoli, 1763) Tineidae Anomalotinea gardesanella (Hartig, 1950) Anomalotinea liguriella (Milliere, 1879) Archinemapogon yildizae Kocak, 1981 Ateliotum hungaricellum Zeller, 1839 Ateliotum petrinella (Herrich-Schäffer, 1854) Ateliotum syriaca (Caradja, 1920) Cephimallota angusticostella (Zeller, 1839) Ceratuncus danubiella (Mann, 1866) Crassicornella crassicornella (Zeller, 1847) Dryadaula hellenica (Gaedike, 1988) Edosa fuscoviolacella (Ragonot, 1895) Eudarcia armatum (Gaedike, 1985) Eudarcia glaseri (Petersen, 1967) Eudarcia montanum (Gaedike, 1985) Eudarcia sutteri Gaedike, 1997 Eudarcia verkerki Gaedike & Henderickx, 1999 Eudarcia hellenica Gaedike, 2007 Eudarcia lobata (Petersen & Gaedike, 1979) Eudarcia confusella (Heydenreich, 1851) Eudarcia fibigeri Gaedike, 1997 Eudarcia forsteri (Petersen, 1964) Eudarcia graecum (Gaedike, 1985) Eudarcia kasyi (Petersen, 1971) Eudarcia moreae (Petersen & Gaedike, 1983) Eudarcia holtzi (Rebel, 1902) Euplocamus anthracinalis (Scopoli, 1763) Euplocamus ophisus (Cramer, 1779) Gaedikeia kokkariensis Sutter, 1998 Hapsifera luridella Zeller, 1847 Infurcitinea albicomella (Stainton, 1851) Infurcitinea arenbergeri Gaedike, 1988 Infurcitinea finalis Gozmány, 1959 Infurcitinea graeca Gaedike, 1983 Infurcitinea hellenica Gaedike, 1997 Infurcitinea karsholti Gaedike, 1992 Infurcitinea lakoniae Gaedike, 1983 Infurcitinea litochorella Petersen, 1964 Infurcitinea nedae Gaedike, 1983 Infurcitinea nigropluviella (Walsingham, 1907) Infurcitinea ochridella Petersen, 1962 Infurcitinea olympica Petersen, 1958 Infurcitinea parnassiella Gaedike, 1987 Infurcitinea reisseri Petersen, 1968 Infurcitinea rumelicella (Rebel, 1903) Infurcitinea tauridella Petersen, 1968 Infurcitinea taurus Gaedike, 1988 Infurcitinea tribertii Gaedike, 1983 Lichenotinea pustulatella (Zeller, 1852) Matratinea rufulicaput Sziraki & Szocs, 1990 Monopis crocicapitella (Clemens, 1859) Monopis imella (Hübner, 1813) Monopis laevigella (Denis & Schiffermüller, 1775) Monopis obviella (Denis & Schiffermüller, 1775) Monopis weaverella (Scott, 1858) Morophaga choragella (Denis & Schiffermüller, 1775) Morophaga morella (Duponchel, 1838) Myrmecozela parnassiella (Rebel, 1915) Myrmecozela stepicola Zagulajev, 1972 Nemapogon anatolica Gaedike, 1986 Nemapogon arenbergeri Gaedike, 1986 Nemapogon cloacella (Haworth, 1828) Nemapogon falstriella (Bang-Haas, 1881) Nemapogon granella (Linnaeus, 1758) Nemapogon gravosaellus Petersen, 1957 Nemapogon hungaricus Gozmány, 1960 Nemapogon inconditella (Lucas, 1956) Nemapogon orientalis Petersen, 1961 Nemapogon reisseri Petersen & Gaedike, 1983 Nemapogon ruricolella (Stainton, 1849) Nemapogon scholzi Sutter, 2000 
Nemapogon scutifera Gaedike, 2007 Nemapogon signatellus Petersen, 1957 Nemapogon variatella (Clemens, 1859) Neurothaumasia ankerella (Mann, 1867) Neurothaumasia macedonica Petersen, 1962 Niditinea fuscella (Linnaeus, 1758) Niditinea striolella (Matsumura, 1931) Novotinea klimeschi (Rebel, 1940) Oinophila v-flava (Haworth, 1828) Proterospastis merdella (Zeller, 1847) Reisserita relicinella (Herrich-Schäffer, 1853) Rhodobates unicolor (Staudinger, 1870) Scardia boletella (Fabricius, 1794) Stenoptinea cyaneimarmorella (Milliere, 1854) Tenaga nigripunctella (Haworth, 1828) Tenaga rhenania (Petersen, 1962) Tinea basifasciella Ragonot, 1895 Tinea columbariella Wocke, 1877 Tinea flavescentella Haworth, 1828 Tinea messalina Robinson, 1979 Tinea murariella Staudinger, 1859 Tinea pellionella Linnaeus, 1758 Tinea translucens Meyrick, 1917 Tinea trinotella Thunberg, 1794 Triaxomasia caprimulgella (Stainton, 1851) Triaxomera parasitella (Hübner, 1796) Trichophaga bipartitella (Ragonot, 1892) Trichophaga tapetzella (Linnaeus, 1758) Tischeriidae Coptotriche gaunacella (Duponchel, 1843) Coptotriche marginea (Haworth, 1828) Tischeria dodonaea Stainton, 1858 Tischeria ekebladella (Bjerkander, 1795) Tortricidae Acleris boscanoides Razowski, 1959 Acleris forsskaleana (Linnaeus, 1758) Acleris hastiana (Linnaeus, 1758) Acleris lipsiana (Denis & Schiffermüller, 1775) Acleris quercinana (Zeller, 1849) Acleris schalleriana (Linnaeus, 1761) Acleris variegana (Denis & Schiffermüller, 1775) Adoxophyes orana (Fischer v. Röslerstamm, 1834) Aethes bilbaensis (Rossler, 1877) Aethes flagellana (Duponchel, 1836) Aethes francillana (Fabricius, 1794) Aethes hartmanniana (Clerck, 1759) Aethes margarotana (Duponchel, 1836) Aethes mauritanica (Walsingham, 1898) Aethes nefandana (Kennel, 1899) Aethes sanguinana (Treitschke, 1830) Aethes tesserana (Denis & Schiffermüller, 1775) Aethes triangulana (Treitschke, 1835) Aethes williana (Brahm, 1791) Agapeta largana (Rebel, 1906) Agapeta zoegana (Linnaeus, 1767) Aleimma loeflingiana (Linnaeus, 1758) Ancylis achatana (Denis & Schiffermüller, 1775) Ancylis apicella (Denis & Schiffermüller, 1775) Ancylis comptana (Frolich, 1828) Ancylis selenana (Guenée, 1845) Ancylis unguicella (Linnaeus, 1758) Aphelia euxina (Djakonov, 1929) Aphelia ferugana (Hübner, 1793) Archips crataegana (Hübner, 1799) Archips podana (Scopoli, 1763) Archips rosana (Linnaeus, 1758) Archips xylosteana (Linnaeus, 1758) Argyrotaenia ljungiana (Thunberg, 1797) Avaria hyerana (Milliere, 1858) Bactra bactrana (Kennel, 1901) Bactra furfurana (Haworth, 1811) Bactra lancealana (Hübner, 1799) Bactra venosana (Zeller, 1847) Cacoecimorpha pronubana (Hübner, 1799) Capua vulgana (Frolich, 1828) Celypha lacunana (Denis & Schiffermüller, 1775) Celypha rurestrana (Duponchel, 1843) Celypha striana (Denis & Schiffermüller, 1775) Celypha woodiana (Barrett, 1882) Choristoneura hebenstreitella (Muller, 1764) Choristoneura murinana (Hübner, 1799) Clepsis consimilana (Hübner, 1817) Clepsis pallidana (Fabricius, 1776) Clepsis steineriana (Hübner, 1799) Cnephasia asseclana (Denis & Schiffermüller, 1775) Cnephasia communana (Herrich-Schäffer, 1851) Cnephasia cupressivorana (Staudinger, 1871) Cnephasia disforma Razowski, 1983 Cnephasia divisana Razowski, 1959 Cnephasia ecullyana Real, 1951 Cnephasia fragosana (Zeller, 1847) Cnephasia graecana Rebel, 1902 Cnephasia gueneeana (Duponchel, 1836) Cnephasia hellenica Obraztsov, 1956 Cnephasia heringi Razowski, 1958 Cnephasia longana (Haworth, 1811) Cnephasia parnassicola Razowski, 1958 Cnephasia 
pasiuana (Hübner, 1799) Cnephasia pumicana (Zeller, 1847) Cnephasia stephensiana (Doubleday, 1849) Cnephasia tofina Meyrick, 1922 Cnephasia abrasana (Duponchel, 1843) Cnephasia incertana (Treitschke, 1835) Cochylidia heydeniana (Herrich-Schäffer, 1851) Cochylidia subroseana (Haworth, 1811) Cochylimorpha meridiana (Staudinger, 1859) Cochylimorpha straminea (Haworth, 1811) Cochylis defessana (Mann, 1861) Cochylis epilinana Duponchel, 1842 Cochylis molliculana Zeller, 1847 Cochylis nana (Haworth, 1811) Cochylis pallidana Zeller, 1847 Cochylis posterana Zeller, 1847 Crocidosema plebejana Zeller, 1847 Cryptocochylis conjunctana (Mann, 1864) Cydia alienana (Caradja, 1916) Cydia amplana (Hübner, 1800) Cydia blackmoreana (Walsingham, 1903) Cydia conicolana (Heylaerts, 1874) Cydia corollana (Hübner, 1823) Cydia duplicana (Zetterstedt, 1839) Cydia fagiglandana (Zeller, 1841) Cydia honorana (Herrich-Schäffer, 1851) Cydia ilipulana (Walsingham, 1903) Cydia johanssoni Aarvik & Karsholt, 1993 Cydia plumbiferana (Staudinger, 1870) Cydia pomonella (Linnaeus, 1758) Cydia pyrivora (Danilevsky, 1947) Cydia semicinctana (Kennel, 1901) Cydia splendana (Hübner, 1799) Cydia succedana (Denis & Schiffermüller, 1775) Cydia trogodana Prose, 1988 Diceratura ostrinana (Guenée, 1845) Diceratura rhodograpta Djakonov, 1929 Dichelia histrionana (Frolich, 1828) Dichrorampha inconspicua (Danilevsky, 1948) Dichrorampha incursana (Herrich-Schäffer, 1851) Dichrorampha lasithicana Rebel, 1916 Dichrorampha montanana (Duponchel, 1843) Dichrorampha petiverella (Linnaeus, 1758) Dichrorampha plumbagana (Treitschke, 1830) Dichrorampha plumbana (Scopoli, 1763) Eana derivana (de La Harpe, 1858) Eana italica (Obraztsov, 1950) Eana penziana (Thunberg, 1791) Eana argentana (Clerck, 1759) Eana canescana (Guenée, 1845) Endothenia gentianaeana (Hübner, 1799) Endothenia oblongana (Haworth, 1811) Endothenia sororiana (Herrich-Schäffer, 1850) Epagoge grotiana (Fabricius, 1781) Epiblema costipunctana (Haworth, 1811) Epiblema cretana Osthelder, 1941 Epiblema foenella (Linnaeus, 1758) Epiblema gammana (Mann, 1866) Epiblema graphana (Treitschke, 1835) Epiblema hepaticana (Treitschke, 1835) Epiblema mendiculana (Treitschke, 1835) Epiblema scutulana (Denis & Schiffermüller, 1775) Epinotia brunnichana (Linnaeus, 1767) Epinotia dalmatana (Rebel, 1891) Epinotia festivana (Hübner, 1799) Epinotia fraternana (Haworth, 1811) Epinotia nigricana (Herrich-Schäffer, 1851) Epinotia nigristriana Budashkin & Zlatkov, 2011 Epinotia nisella (Clerck, 1759) Epinotia subsequana (Haworth, 1811) Epinotia tedella (Clerck, 1759) Epinotia thapsiana (Zeller, 1847) Eucosma albidulana (Herrich-Schäffer, 1851) Eucosma campoliliana (Denis & Schiffermüller, 1775) Eucosma cana (Haworth, 1811) Eucosma conformana (Mann, 1872) Eucosma conterminana (Guenée, 1845) Eucosma cumulana (Guenée, 1845) Eucosma lugubrana (Treitschke, 1830) Eucosma obumbratana (Lienig & Zeller, 1846) Eudemis porphyrana (Hübner, 1799) Eugnosta lathoniana (Hübner, 1800) Eupoecilia cebrana (Hübner, 1813) Falseuncaria ruficiliana (Haworth, 1811) Grapholita funebrana Treitschke, 1835 Grapholita janthinana (Duponchel, 1843) Grapholita molesta (Busck, 1916) Grapholita compositella (Fabricius, 1775) Grapholita coronillana Lienig & Zeller, 1846 Grapholita delineana Walker, 1863 Grapholita fissana (Frolich, 1828) Grapholita gemmiferana Treitschke, 1835 Grapholita jungiella (Clerck, 1759) Grapholita nebritana Treitschke, 1830 Grapholita orobana Treitschke, 1830 Gravitarmata margarotana (Heinemann, 1863) Gynnidomorpha 
permixtana (Denis & Schiffermüller, 1775) Gypsonoma aceriana (Duponchel, 1843) Gypsonoma dealbana (Frolich, 1828) Gypsonoma minutana (Hübner, 1799) Gypsonoma sociana (Haworth, 1811) Hedya nubiferana (Haworth, 1811) Hedya pruniana (Hübner, 1799) Hedya salicella (Linnaeus, 1758) Hysterophora maculosana (Haworth, 1811) Isotrias hybridana (Hübner, 1817) Isotrias rectifasciana (Haworth, 1811) Lathronympha christenseni Aarvik & Karsholt, 1993 Lathronympha strigana (Fabricius, 1775) Lobesia artemisiana (Zeller, 1847) Lobesia botrana (Denis & Schiffermüller, 1775) Lobesia confinitana (Staudinger, 1870) Neosphaleroptera nubilana (Hübner, 1799) Notocelia cynosbatella (Linnaeus, 1758) Notocelia incarnatana (Hübner, 1800) Notocelia roborana (Denis & Schiffermüller, 1775) Notocelia trimaculana (Haworth, 1811) Notocelia uddmanniana (Linnaeus, 1758) Olethreutes arcuella (Clerck, 1759) Oxypteron eremica (Walsingham, 1907) Pammene aurita Razowski, 1991 Pammene christophana (Moschler, 1862) Pammene fasciana (Linnaeus, 1761) Pammene gallicolana (Lienig & Zeller, 1846) Pandemis cerasana (Hübner, 1786) Pandemis heparana (Denis & Schiffermüller, 1775) Paramesia gnomana (Clerck, 1759) Pelochrista agrestana (Treitschke, 1830) Pelochrista caecimaculana (Hübner, 1799) Pelochrista duercki (Osthelder, 1941) Pelochrista fusculana (Zeller, 1847) Pelochrista medullana (Staudinger, 1879) Phalonidia albipalpana (Zeller, 1847) Phalonidia contractana (Zeller, 1847) Phalonidia manniana (Fischer v. Röslerstamm, 1839) Phiaris stibiana (Guenée, 1845) Phtheochroa annae Huemer, 1990 Phtheochroa duponchelana (Duponchel, 1843) Phtheochroa reisseri (Razowski, 1970) Phtheochroa sodaliana (Haworth, 1811) Prochlidonia amiantana (Hübner, 1799) Propiromorpha rhodophana (Herrich-Schäffer, 1851) Pseudargyrotoza conwagana (Fabricius, 1775) Pseudococcyx tessulatana (Staudinger, 1871) Ptycholoma lecheana (Linnaeus, 1758) Ptycholomoides aeriferana (Herrich-Schäffer, 1851) Rhyacionia buoliana (Denis & Schiffermüller, 1775) Selania capparidana (Zeller, 1847) Sparganothis pilleriana (Denis & Schiffermüller, 1775) Spilonota ocellana (Denis & Schiffermüller, 1775) Syndemis musculana (Hübner, 1799) Thiodia major (Rebel, 1903) Thiodia trochilana (Frolich, 1828) Tortrix viridana Linnaeus, 1758 Xerocnephasia rigana (Sodoffsky, 1829) Zeiraphera rufimitrana (Herrich-Schäffer, 1851) Yponomeutidae Cedestis gysseleniella Zeller, 1839 Cedestis subfasciella (Stephens, 1834) Paradoxus osyridellus Stainton, 1869 Paraswammerdamia albicapitella (Scharfenberg, 1805) Paraswammerdamia nebulella (Goeze, 1783) Scythropia crataegella (Linnaeus, 1767) Swammerdamia caesiella (Hübner, 1796) Swammerdamia compunctella Herrich-Schäffer, 1855 Yponomeuta cagnagella (Hübner, 1813) Yponomeuta evonymella (Linnaeus, 1758) Yponomeuta malinellus Zeller, 1838 Yponomeuta padella (Linnaeus, 1758) Yponomeuta plumbella (Denis & Schiffermüller, 1775) Yponomeuta rorrella (Hübner, 1796) Zelleria hepariella Stainton, 1849 Zelleria oleastrella (Milliere, 1864) Ypsolophidae Ochsenheimeria taurella (Denis & Schiffermüller, 1775) Ypsolopha albiramella (Mann, 1861) Ypsolopha dentella (Fabricius, 1775) Ypsolopha instabilella (Mann, 1866) Ypsolopha kristalleniae Rebel, 1916 Ypsolopha lucella (Fabricius, 1775) Ypsolopha manniella (Staudinger, 1880) Ypsolopha minotaurella (Rebel, 1916) Ypsolopha parenthesella (Linnaeus, 1761) Ypsolopha persicella (Fabricius, 1787) Ypsolopha sculpturella (Herrich-Schäffer, 1854) Ypsolopha sylvella (Linnaeus, 1767) Ypsolopha trichonella (Mann, 1861) Ypsolopha ustella 
(Clerck, 1759) Zygaenidae Adscita albanica (Naufock, 1926) Adscita capitalis (Staudinger, 1879) Adscita geryon (Hübner, 1813) Adscita obscura (Zeller, 1847) Adscita statices (Linnaeus, 1758) Adscita mannii (Lederer, 1853) Jordanita chloros (Hübner, 1813) Jordanita globulariae (Hübner, 1793) Jordanita graeca (Jordan, 1907) Jordanita subsolana (Staudinger, 1862) Jordanita budensis (Ad. & Au. Speyer, 1858) Jordanita notata (Zeller, 1847) Rhagades pruni (Denis & Schiffermüller, 1775) Rhagades amasina (Herrich-Schäffer, 1851) Theresimima ampellophaga (Bayle-Barelle, 1808) Zygaena carniolica (Scopoli, 1763) Zygaena sedi Fabricius, 1787 Zygaena brizae (Esper, 1800) Zygaena laeta (Hübner, 1790) Zygaena minos (Denis & Schiffermüller, 1775) Zygaena punctum Ochsenheimer, 1808 Zygaena purpuralis (Brunnich, 1763) Zygaena angelicae Ochsenheimer, 1808 Zygaena ephialtes (Linnaeus, 1767) Zygaena filipendulae (Linnaeus, 1758) Zygaena lonicerae (Scheven, 1777) Zygaena loti (Denis & Schiffermüller, 1775) Zygaena nevadensis Rambur, 1858 Zygaena viciae (Denis & Schiffermüller, 1775) References External links Fauna Europaea Greece Greece Greece Lepidoptera
5282989
https://en.wikipedia.org/wiki/Advanced%20Technologies%20Academy
Advanced Technologies Academy
Advanced Technologies Academy (A-TECH) is a magnet public high school in Las Vegas, Nevada, United States. It focuses on integrating technology with academics for students in grades 9-12. The magnet school program was founded in 1994 and is part of the Clark County School District. The first year included only 9th and 10th grade, adding a grade each year. The first graduating class was 1997, and the first graduating class with all four years of attendance was 1998. The magnet school focuses on computer and technology-related fields of study. As of 2021, A-TECH is ranked #1 in the state of Nevada and #152 nationally by U.S. News & World Report. Historical events Unlike traditional high schools, A-TECH has no team sports. Students wishing to play team sports participate at their zoned high school. Games of flag football and basketball had been held between A-TECH and Las Vegas Academy (another local magnet school with no sports teams) since the school's opening, though they were discontinued in 2008. Games of flag football and soccer are held annually between A-TECH and Northwest Career and Technical Academy, a magnet school that opened in 2008. The gymnasium building began construction during the 1998-1999 school year, and opened in 2000. Efforts to increase the student population at the school began in the early 2000s. Construction of the school's east wing (including additional classrooms, offices, and a lecture hall) and expansion of the existing cafeteria began in 2002, and were completed in time for the start of the 2003-2004 school year. The expansion increased enrollment from approximately 750 students to just over 1000. Fields of study A-TECH currently provides eight areas of study: Architectural Design: Students in Architectural Design are introduced to the principles of architectural drawing, design, and introductory civil engineering concepts using two- and three-dimensional drawing techniques, rendering, and animation to prepare for jobs in architecture and engineering. Areas of concentration include building codes, construction methods and materials, climate, energy efficiency, sustainability, green building concepts, presentation skills and portfolio development. Students test their skills through project-based learning activities and participation in local and national design contests. Upon successful completion of this program, students will have acquired entry-level skills for employment in this field. Computer Science: In Computer Science, students focus on programming in C++ and Java. The programming experience is enhanced by the use of IDEs such as Code::Blocks, BlueJ, and IntelliJ. It also incorporates the 21st Century Curriculum and prepares students to move forward in their chosen field, whether it is software development, game development, app development, or any other field. Students are further prepared through their development of workplace readiness and employability skills for career readiness. Students have opportunities to participate in internships, hackathons, student-led workshops, and the Hour of Code. Cybersecurity (Fall 2020): Cybersecurity focuses on ways to minimize the risks of cyber theft and terrorist attacks on e-commerce, global trade, and digital communication channels. Students learn to monitor, mitigate, and prevent online threats. Students participate in hands-on learning activities, simulations, and competitions designed to prepare them for a career in cybersecurity.
They leave with knowledge and skills in computer maintenance and repair, the cybersecurity life cycle, incident handling, and networking. Students are prepared to take the certification exams for CompTIA's A+ and Network+, gateway certifications for careers in IT and cybersecurity. Engineering: Engineering students engage in open-ended problem solving, learn and apply the engineering design process, and use the same technology and software used in industry. Students are immersed in design as they investigate topics such as 3D modeling, machine design and control, forces, structures, basic electronics and circuit design, manufacturing, and teamwork, preparing them for post-secondary education or careers. Graphic Design: Students focus on the professional areas of graphic design, computer art, and video. They develop skills in the areas of drawing, digital and visual communications, design critiquing, portfolio development, and presentations. Projects, design competitions, and internships allow students to apply their skills at professional levels. High School of Business: Students learn the principles and operations of business and management found in today's technologically advanced economy. The curriculum prepares students for customer relationships and the multiple forms of management associated with business. Economics, finance, operations, and professional development are emphasized throughout the program. The appropriate use of technology and industry-related equipment is an integral part of the program. Information Technologies: In Networking Technology, students develop the skills necessary to support microcomputers on various platforms and to administer network systems. Students are taught the fundamentals of Local Area Network design and the responsibilities of system administrators. Students prepare for the Cisco CCNA and CompTIA A+ certifications. Legal Studies: Legal Studies is in the process of being phased out of A-TECH, with only third- and fourth-year students still enrolled. 2023 graduates of the Legal Studies program will complete Advanced Studies, which provides students who have achieved all content standards in Criminal Justice with advanced study through investigation and in-depth research. Starting in fall 2022, A-TECH will offer Biomedical as a program of study to replace Legal Studies. Awards and recognition During the 2003-04 school year, Advanced Technologies Academy was recognized with the Blue Ribbon School Award of Excellence by the United States Department of Education, the highest award an American school can receive. A-TECH was named a School of Distinction—top Technology Excellence high school—by Intel in 2005. Advanced Technologies Academy was recognized with the Exemplary School Award from the Nevada Department of Education for the graduating classes of 2002, 2003, 2005, and 2010, and received the High Achieving School Award from the Nevada Department of Education for the graduating classes of 2000, 2001, 2004, 2006, and 2007. Magnet Schools of America recognized A-TECH as a School of Distinction in 2008. U.S. News & World Report selected A-TECH as a Silver Medal Winner of America's Best High Schools in 2008. A-TECH was recognized, for the second time, with the Blue Ribbon School Award of Excellence by the United States Department of Education on September 15, 2011. A-TECH was recognized, for the third time, with the Blue Ribbon School Award of Excellence by the United States Department of Education on September 26, 2019.
Notable visitors Since its opening in 1994, A-TECH has hosted several notable visitors. In 1996, Al Gore visited A-TECH to spotlight it as an example of how computer technology can enhance education. After receiving the Blue Ribbon School award, Laura Bush visited the school in 2004 and had a round-table discussion with many members of the staff and student body. The school has also been visited by Louis Castle, cofounder of Westwood Studios. In recognition of the school's nomination as one of the top five magnet schools in the United States, former Florida Governor Jeb Bush visited the school in 2014. Former President Bill Clinton spoke at the school on January 21, 2016 to bolster support in Nevada for his wife, Hillary Clinton, who was seeking the Democratic presidential nomination. Olympic gold medalist Connor Fields spoke to the school's public speaking class during the week of December 10–14, 2018. Notable faculty members Notable A-TECH faculty have included: Richard Knoeppel (Architectural Design) has received the following recognition: Heart of Education award recipient, 2019 Named 2019 Nevada Teacher of the Year Inducted into the National Teachers Hall of Fame in 2019 Mike Patterson (Mathematics), Milken Educator Award recipient in 2009 John Snyder (Computer Science) has received the following awards: Business Week Magazine National Award for Innovating Teaching in 1988 Named Nevada Teacher of the Year and Burger King State Teacher of the Year in 1990 Milken Educator Award recipient in 1992 Named Tandy Technology Scholar in 1991 Inducted into the Clark County Excellence in Education Hall of Fame in 1992 Named a Christa McAuliffe Fellow in 1994 and 1998 Presented with the Chasing Rainbows Award by Dolly Parton in 2003 Inducted into the National Teachers Hall of Fame in 2007 Valarie Young (World History), 2005 recipient of the Milken Educator Award References External links Advanced Technologies Academy Clark County School District Magnet schools in Nevada Educational institutions established in 1994 High schools in Las Vegas School buildings completed in 1994 1994 establishments in Nevada Public high schools in Nevada
22732540
https://en.wikipedia.org/wiki/Critical%20Test%20Results%20Management
Critical Test Results Management
Critical Test Results Management (CTRM), also known as Critical Test Results Reporting and Closed-Loop Reporting, is software that handles a medical test result that has come back as critical to a patient's health. CTRM software prevents the critical result from being lost in communication failures, improves patient safety, and documents the delivery of the results. History The Patient Safety and Quality Improvement Act of 2005 was passed into law in response to growing concerns about patient safety in the United States. The goal of the act was to improve patient safety by encouraging hospitals and their staff to voluntarily report events that adversely affected patients. The Joint Commission on Accreditation of Healthcare Organizations is a non-profit organization that gives accreditation to hospitals that meet the standards in The Joint Commission's National Patient Safety Goals. The Joint Commission Goal 2 states that "ineffective communication is the most frequently cited root cause for sentinel events," and requires that hospitals "implement a standardized approach to hand-off communications, including an opportunity to ask and respond to questions". Software has been developed to help hospitals achieve accreditation through The Joint Commission, while saving hospitals and other medical organizations from communication errors that could result in patient injury or death, and lawsuits against the caregiver. How It Works When a radiologist, pathologist, interpreting clinician, diagnostician, or emergency department or laboratory staff member flags a study as a critical finding, this critical information is sent immediately to a healthcare professional (a surgeon, physician, nurse, etc.) via secure SMS text, secure email, pager, or voice. This information can include images, reports, annotations, voice clips, handwritten notes, and other critical information. When the recipient opens the message, a receipt confirmation is automatically sent to the sender and recorded in the software, eliminating the need to check whether the message was received. Automated monitoring and escalation of undelivered findings ensure timely receipt of the message. Audit trails and automatic report generation give hospital administration a way to document and check that results reached an appropriate caregiver within the time specified by the individual hospital. Providers A small number of companies have developed products in critical test results management; some of these include HIT Application Solutions, IMCO Technologies, Insure Communication, Nuance Communications, peerVue Solutions, Radar Medical Systems, and Zen Medical Technologies. Currently only IMCO-STAT, an IMCO Technologies product, has been FDA cleared for Device Regulatory Class II (Special Controls). See also Patient safety Hospital Accreditation Joint Commission Health Informatics Management systems References http://www.linkedin.com/groups/CTRM-Critical-Test-Results-Management-3217442 Next Generation Fusion CTRM Software Medical software
41592
https://en.wikipedia.org/wiki/Provisioning%20%28telecommunications%29
Provisioning (telecommunications)
In telecommunication, provisioning involves the process of preparing and equipping a network to allow it to provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability. The concept of network provisioning or service mediation, used mostly in the telecommunication industry, refers to pushing a customer's services out to the network elements, the various pieces of equipment connected in that network communication system. Generally, in telephony provisioning, this is accomplished with network management database table mappings. It requires the existence of networking equipment and depends on network planning and design. In a modern signal infrastructure employing information technology (IT) at all levels, there is no possible distinction between telecommunications services and "higher level" infrastructure. Accordingly, provisioning configures any required systems, provides users with access to data and technology resources, and refers to all enterprise-level information-resource management involved. Organizationally, a CIO typically manages provisioning, necessarily involving human resources and IT departments cooperating to: Give users access to data repositories or grant authorization to systems, network applications and databases based on a unique user identity. Allocate for their use hardware resources, such as computers, mobile phones and pagers. At its core, the provisioning process monitors access rights and privileges to ensure the security of an enterprise's resources and user privacy. As a secondary responsibility, it ensures compliance and minimizes the vulnerability of systems to penetration and abuse. As a tertiary responsibility, it tries to reduce the amount of custom configuration using boot image control and other methods that radically reduce the number of different configurations involved. Discussion of provisioning often appears in the context of virtualization, orchestration, utility computing, cloud computing, and open-configuration concepts and projects. For instance, the OASIS Provisioning Services Technical Committee (PSTC) defines an XML-based framework for exchanging user, resource, and service-provisioning information - SPML (Service Provisioning Markup Language) for "managing the provisioning and allocation of identity information and system resources within and between organizations". Once provisioning has taken place, the process of SysOpping ensures the maintenance of services to the expected standards. Provisioning thus refers only to the setup or startup part of the service operation, and SysOpping to the ongoing support. Network provisioning Network provisioning is one type of provisioning. The services which are assigned to the customer in the customer relationship management (CRM) system have to be provisioned on the network element which enables the service and allows the customer to actually use it. The relation between a service configured in the CRM and a service on the network elements is not necessarily one-to-one; for example, services like Microsoft Media Server (mms://) can be enabled by more than one network element. During provisioning, the service mediation device translates the service and the corresponding parameters of the service to one or more services/parameters on the network elements involved, as in the sketch below.
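Such a translation is usually table-driven. The following is a minimal illustration only: every service name, element name, and parameter here is hypothetical, invented to show the one-to-many mapping between a CRM service and network-element settings, and nothing below reflects a specific vendor's interface.

# Hypothetical, table-driven provisioning logic: one CRM service may map to
# settings on several network elements. All names below are invented.

# Mapping: CRM service -> list of (network element, action, parameters)
PROVISIONING_TABLE = {
    "media_streaming": [
        ("edge-router-01", "enable_qos", {"profile": "streaming"}),
        ("media-server-01", "create_account", {"quota_gb": 10}),
    ],
    "voicemail": [
        ("voicemail-server-01", "create_mailbox", {"greeting": "default"}),
    ],
}

def provision(customer_id: str, service: str) -> list[str]:
    """Translate a CRM service into per-element provisioning commands."""
    commands = []
    for element, action, params in PROVISIONING_TABLE[service]:
        args = ",".join(f"{k}={v}" for k, v in sorted(params.items()))
        # A real mediation device would send this to the element over its
        # management protocol; here we only build the command string.
        commands.append(f"{element}: {action}(customer={customer_id},{args})")
    return commands

if __name__ == "__main__":
    for cmd in provision("C1001", "media_streaming"):
        print(cmd)

Note how the single "media_streaming" service fans out to two elements, mirroring the Microsoft Media Server example above.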
The algorithm used to translate a system service into network services is called provisioning logic. Electronic invoice feeds from carriers can be downloaded automatically into the core of the telecom expense management (TEM) software, which then audits each single line-item charge down to the User Support and Operations Center (USOC) level. The provisioning software captures each circuit number provided by the carriers; if billing occurs outside the contracted rate, an exception rule raises a flag and notifies the pre-established staff member to review the billing error. Server provisioning Server provisioning is a set of actions to prepare a server with appropriate systems, data and software, and make it ready for network operation. Typical tasks when provisioning a server are: select a server from a pool of available servers, load the appropriate software (operating system, device drivers, middleware, and applications), appropriately customize and configure the system and the software to create or change a boot image for this server, and then change its parameters, such as IP address and IP gateway, to find associated network and storage resources (sometimes separated as resource provisioning), and finally audit the system. Auditing the system ensures OVAL compliance to limit vulnerability, ensures policy compliance, and verifies that patches are installed. After these actions, the system is restarted and the new software loaded, which makes the system ready for operation. Typically an internet service provider (ISP) or Network Operations Center will perform these tasks to a well-defined set of parameters, for example, a boot image that the organization has approved and which uses software it is licensed to use. Many instances of such a boot image create a virtual dedicated host. There are many software products available to automate the provisioning of servers, services and end-user devices. Examples: BMC BladeLogic Server Automation, HP Server Automation, IBM Tivoli Provisioning Manager, Red Hat Kickstart, xCAT, HP Insight CMU, etc. Middleware and applications can be installed either when the operating system is installed or afterwards by using an application service automation tool. Academic work addresses further questions, such as when provisioning should be issued and how many servers are needed in multi-tier or multi-service applications. In cloud computing, servers may be provisioned via a web user interface or an application programming interface (API); a sketch of an API-driven request appears at the end of this section. One of the distinctive things about cloud computing is how rapidly and easily this can be done. Monitoring software can be used to trigger automatic provisioning when existing resources become too heavily stressed. In short, server provisioning configures servers based on resource requirements. The choice of hardware and software components (e.g. single/dual processor, RAM, HDD, RAID controller, number of LAN cards, applications, OS, etc.) depends on the functionality of the server, such as ISP, virtualization, NOS, or voice processing. Server redundancy depends on the availability of servers in the organization. Critical applications have less downtime when using cluster servers, RAID, or a mirroring system; most larger-scale centers use such measures in part to avoid this downtime. Additional resource provisioning may be done per service. Several software products exist for server provisioning, such as Cobbler or HP Intelligent Provisioning.
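The sketch below illustrates the API-driven provisioning mentioned above. The endpoint URL, token, and request fields are hypothetical placeholders, not the API of any particular cloud provider; real providers each define their own SDKs and request schemas.

import json
import urllib.request

# Hypothetical cloud API endpoint and token -- placeholders, not a real service.
API_URL = "https://cloud.example.com/v1/servers"
API_TOKEN = "example-token"

def provision_server(name: str, image: str, size: str) -> dict:
    """Request a new virtual server from a (hypothetical) cloud API."""
    body = json.dumps({"name": name, "image": image, "size": size}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The provider would return a description of the new server (ID,
        # IP address, etc.); startup may still take minutes, as noted below.
        return json.load(resp)

# Example call (would fail without a real endpoint):
# server = provision_server("web-01", "debian-12", "small")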
User provisioning User provisioning refers to the creation, maintenance and deactivation of user objects and user attributes, as they exist in one or more systems, directories or applications, in response to automated or interactive business processes. User provisioning software may include one or more of the following processes: change propagation, self-service workflow, consolidated user administration, delegated user administration, and federated change control. User objects may represent employees, contractors, vendors, partners, customers or other recipients of a service. Services may include electronic mail, inclusion in a published user directory, access to a database, access to a network or mainframe, etc. User provisioning is a type of identity management software, particularly useful within organizations where users may be represented by multiple objects on multiple systems and in multiple instances. Self-service provisioning for cloud computing services On-demand self-service is described by the National Institute of Standards and Technology (NIST) as an essential characteristic of cloud computing. The self-service nature of cloud computing lets end users obtain and remove cloud services, including applications, the infrastructure supporting the applications, and configuration, themselves without requiring the assistance of an IT staff member. The automatic self-servicing may target different application goals and constraints (e.g. deadlines and cost), as well as handle different application architectures (e.g., bags-of-tasks and workflows). Cloud users can obtain cloud services through a cloud service catalog or a self-service portal. Because business users can obtain and configure cloud services themselves, IT staff can be more productive and have more time to manage cloud infrastructures. One downside of cloud service provisioning is that it is not instantaneous. A cloud virtual machine (VM) can be acquired at any time by the user, but it may take up to several minutes for the acquired VM to be ready to use. The VM startup time depends on factors such as image size, VM type, data center location, and number of VMs. Cloud providers differ in VM startup performance. Mobile subscriber provisioning Mobile subscriber provisioning refers to the setting up of new services, such as GPRS, MMS and Instant Messaging, for an existing subscriber of a mobile phone network, and any gateways to standard Internet chat or mail services. The network operator typically sends these settings to the subscriber's handset using SMS text services or HTML, and less commonly WAP, depending on what the mobile operating systems can accept. A general example of provisioning is with data services. A mobile user who is using his or her device for voice calling may wish to switch to data services in order to read emails or browse the Internet. The mobile device's services are "provisioned" and thus the user is able to stay connected through push emails and other features of smartphone services. Device management systems can benefit end-users by incorporating plug-and-play data services, supporting whatever device the end-user is using. Such a platform can automatically detect devices in the network, sending them settings for immediate and continued usability. The process is fully automated, keeping a history of used devices and sending settings only to subscriber devices which were not previously set. One method of managing mobile updates is to filter IMEI/IMSI pairs, as in the sketch below.
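A minimal sketch of such IMEI/IMSI pair filtering follows, assuming a simple in-memory store; a real device management system would persist these pairs and push the settings over the air.

# Hypothetical IMEI/IMSI pair filter: send settings only when a subscriber
# (IMSI) appears with a handset (IMEI) that has not been provisioned before.

seen_pairs: dict[str, str] = {}  # IMSI -> last provisioned IMEI

def needs_provisioning(imsi: str, imei: str) -> bool:
    """True if this subscriber/handset pair has not been provisioned yet."""
    if seen_pairs.get(imsi) == imei:
        return False          # same SIM in same handset: nothing to do
    seen_pairs[imsi] = imei   # new handset (or new subscriber): record it
    return True               # ...and trigger an over-the-air settings push

# Example: the same pair is only provisioned once.
assert needs_provisioning("262011234567890", "490154203237518") is True
assert needs_provisioning("262011234567890", "490154203237518") is False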
Some operators report activity of 50 over-the-air settings update files per second. Mobile content provisioning This refers to delivering mobile content, such as mobile internet, to a mobile phone, agnostic of the features of said device. These may include operating system type and versions, Java version, browser version, screen form factors, audio capabilities, language settings and many other characteristics. As of April 2006, an estimated 5,000 permutations were relevant. Mobile content provisioning facilitates a common user experience, though delivered on widely different handsets. Mobile device provisioning Provisioning devices involves delivering configuration data and policy settings to mobile devices from a central point, using mobile device management system tools. Internet access provisioning To get a customer online, the client system must be configured. Depending on the connection technology (e.g., DSL, Cable, Fibre), the client system configuration may include: Modem configuration Network authentication Installing drivers Setting up Wireless LAN Securing the operating system (primarily for Windows) Configuring browser provider-specifics E-mail provisioning (create mailboxes and aliases) E-mail configuration in client systems Installing additional support software or add-on packages There are four approaches to provisioning internet access: Hand out manuals: Manuals are a great help for experienced users, but inexperienced users will need to call the support hotline several times until all internet services are accessible. Every unintended change in the configuration, by user mistake or due to a software error, results in additional calls. On-site setup by a technician: Sending a technician on-site is the most reliable approach from the provider's point of view, as the person ensures that the internet access is working before leaving the customer's premises. This advantage comes at high costs – either for the provider or the customer, depending on the business model. Furthermore, it is inconvenient for customers, as they have to wait for an installation appointment and may need to take a day off from work. To repair an internet connection, on-site or phone support will again be needed. Server-side remote setup: Server-side modem configuration uses a protocol called TR-069. It is widely established and reliable. At the current stage it can only be used for modem configuration. Protocol extensions are discussed, but not yet practically implemented, particularly because most client devices and applications do not support them yet. All other steps of the provisioning process are left to the user, typically causing many rather long calls to the support hotline. Installation CD: Also called a "client-side self-service installation" CD, it can cover the entire process from modem configuration to setting up client applications, including home networking devices. The software typically acts autonomously, i.e., it doesn't need an online connection and an expensive backend infrastructure. During such an installation process the software usually also installs diagnostic and self-repair applications that support customers in case of problems, avoiding costly hotline calls. Such client-side applications also open completely new possibilities for marketing, cross-selling and upselling. Such solutions come from highly specialised companies or directly from the provider's development department.
See also Dynamic provisioning environment References External links Network access Operating system technology
476767
https://en.wikipedia.org/wiki/Cron
Cron
The cron command-line utility, also known as a cron job, is a job scheduler on Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals. It typically automates system maintenance or administration—though its general-purpose nature makes it useful for things like downloading files from the Internet and downloading email at regular intervals. Cron is most suitable for scheduling repetitive tasks. Scheduling one-time tasks can be accomplished using the associated at utility. Overview The actions of cron are driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. Users can have their own individual crontab files and often there is a system-wide crontab file (usually in /etc or a subdirectory of /etc, e.g. /etc/cron.d) that only system administrators can edit. Each line of a crontab file represents a job, and looks like this:

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * * <command to execute>

The syntax of each line expects a cron expression made of five fields which represent the time to execute the command, followed by a shell command to execute. While normally the job is executed when the time/date specification fields all match the current time and date, there is one exception: if both "day of month" (field 3) and "day of week" (field 5) are restricted (not "*"), then one or both must match the current day; a sketch of this matching rule appears at the end of this section. For example, the following clears the Apache error log at one minute past midnight (00:01) every day, assuming that the default shell for the cron user is Bourne shell compliant:

1 0 * * * printf "" > /var/log/apache/error_log

This example runs a shell program called export_dump.sh at 23:45 (11:45 PM) every Saturday:

45 23 * * 6 /home/oracle/scripts/export_dump.sh

Note: On some systems it is also possible to specify */n to run for every n-th interval of time. Also, specifying multiple specific time intervals can be done with commas (e.g., 1,2,3). The line below outputs "hello world" to the command line every fifth minute of every first, second and third hour (i.e., 01:00, 01:05, 01:10, up until 03:55):

*/5 1,2,3 * * * echo hello world

The configuration file for a user can be edited by calling crontab -e regardless of where the actual implementation stores this file. Some cron implementations, such as the popular 4th BSD edition written by Paul Vixie and included in many Linux distributions, add a sixth field: an account username that runs the specified job (subject to user existence and permissions). This is allowed only in the system crontabs—not in others, which are each assigned to a single user to configure. The sixth field is alternatively sometimes used for year instead of an account username—the nncron daemon for Windows does this. The Amazon EventBridge implementation of cron does not use a 0-based day of week; instead, days of the week are 1–7 (SUN–SAT) rather than 0–6, and it supports additional expression features such as first-weekday and last-day-of-month.
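The following is a minimal sketch of the five-field matching rule referenced above, written in Python for illustration; it handles only numbers, ranges, lists, steps, and "*", ignoring month/day names, macros, and vendor extensions.

from datetime import datetime

def field_matches(spec: str, value: int, lo: int, hi: int) -> bool:
    """True if one crontab field (e.g. "*/5", "1,2,3", "9-17") matches value."""
    for part in spec.split(","):
        rng, _, step = part.partition("/")
        step = int(step) if step else 1
        if rng in ("*", ""):
            first, last = lo, hi
        elif "-" in rng:
            a, b = rng.split("-")
            first, last = int(a), int(b)
        else:
            first = last = int(rng)
        if first <= value <= last and (value - first) % step == 0:
            return True
    return False

def entry_matches(entry: str, now: datetime) -> bool:
    minute, hour, dom, month, dow = entry.split()[:5]
    base = (field_matches(minute, now.minute, 0, 59)
            and field_matches(hour, now.hour, 0, 23)
            and field_matches(month, now.month, 1, 12))
    day_of_week = (now.weekday() + 1) % 7   # convert Monday=0 to Sunday=0
    dom_ok = field_matches(dom, now.day, 1, 31)
    dow_ok = field_matches(dow, day_of_week, 0, 6)
    # The special case: if both day fields are restricted, either may match.
    if dom != "*" and dow != "*":
        return base and (dom_ok or dow_ok)
    return base and dom_ok and dow_ok

# Example: "1 0 * * *" matches 00:01 on any day.
assert entry_matches("1 0 * * *", datetime(2024, 1, 2, 0, 1))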
Nonstandard predefined scheduling definitions Some cron implementations support the following non-standard macros: @reboot configures a job to run once when the daemon is started. Since cron is typically never restarted, this typically corresponds to the machine being booted. This behavior is enforced in some variations of cron, such as that provided in Debian, so that simply restarting the daemon does not re-run @reboot jobs. @reboot can be useful if there is a need to start up a server or daemon under a particular user, and the user does not have access to configure init to start the program. Cron permissions These two files play an important role: /etc/cron.allow - If this file exists, it must contain the user's name for that user to be allowed to use cron jobs. /etc/cron.deny - If the cron.allow file does not exist but the /etc/cron.deny file does exist then, to use cron jobs, users must not be listed in the /etc/cron.deny file. Note that if neither of these files exists then, depending on site-dependent configuration parameters, either only the super user can use cron jobs, or all users can use cron jobs. Time zone handling Most cron implementations simply interpret crontab entries in the system time zone setting that the cron daemon runs under. This can be a source of dispute if a large multi-user machine has users in several time zones, especially if the system default time zone includes the potentially confusing DST. Thus, a cron implementation may as a special case recognize lines of the form "CRON_TZ=<time zone>" in user crontabs, interpreting subsequent crontab entries relative to that time zone. History Early versions The cron in Version 7 Unix was a system service (later called a daemon) invoked from /etc/rc when the operating system entered multi-user mode. Its algorithm was straightforward:

Read /usr/lib/crontab.
Determine if any commands must run at the current date and time, and if so, run them as the superuser, root.
Sleep for one minute.
Repeat from step 1.

This version of cron was basic and robust but it also consumed resources whether it found any work to do or not. In an experiment at Purdue University in the late 1970s to extend cron's service to all 100 users on a time-shared VAX, it was found to place too much load on the system. Multi-user capability The next version of cron, with the release of Unix System V, was created to extend the capabilities of cron to all users of a Unix system, not just the superuser. Though this may seem trivial today with most Unix and Unix-like systems having powerful processors and small numbers of users, at the time it required a new approach on a one-MIPS system having roughly 100 user accounts. In the August 1977 issue of the Communications of the ACM, W. R. Franta and Kurt Maly published an article titled "An efficient data structure for the simulation event set", describing an event queue data structure for discrete event-driven simulation systems that demonstrated "performance superior to that of commonly used simple linked list algorithms", good behavior given non-uniform time distributions, and worst case complexity O(√n), "n" being the number of events in the queue. A Purdue graduate student, Robert Brown, reviewing this article, recognized the parallel between cron and discrete event simulators, and created an implementation of the Franta–Maly event list manager (ELM) for experimentation.
Discrete event simulators run in virtual time, peeling events off the event queue as quickly as possible and advancing their notion of "now" to the scheduled time of the next event. Running the event simulator in "real time" instead of virtual time created a version of cron that spent most of its time sleeping, waiting for the scheduled time to execute the task at the head of the event list. The following school year brought new students into the graduate program at Purdue, including Keith Williamson, who joined the systems staff in the Computer Science department. As a "warm-up task" Brown asked him to flesh out the prototype cron into a production service, and this multi-user cron went into use at Purdue in late 1979. This version of cron wholly replaced the /etc/cron that was in use on the computer science department's VAX 11/780 running 32/V. The algorithm used by this cron is as follows (a sketch of the queue-driven loop appears below):

On start-up, look for a file named .crontab in the home directories of all account holders.
For each crontab file found, determine the next time in the future that each command must run.
Place those commands on the Franta–Maly event list with their corresponding time and their "five field" time specifier.
Enter main loop:
Examine the task entry at the head of the queue, compute how far in the future it must run.
Sleep for that period of time.
On awakening and after verifying the correct time, execute the task at the head of the queue (in background) with the privileges of the user who created it.
Determine the next time in the future to run this command and place it back on the event list at that time value.

Additionally, the daemon responds to SIGHUP signals to rescan modified crontab files and schedules special "wake up events" on the hour and half-hour to look for modified crontab files. Much detail is omitted here concerning the inaccuracies of computer time-of-day tracking, Unix alarm scheduling, explicit time-of-day changes, and process management, all of which account for the majority of the lines of code in this cron. This cron also captured the output of stdout and stderr and e-mailed any output to the crontab owner. The resources consumed by this cron scale only with the amount of work it is given and do not inherently increase over time, with the exception of periodically checking for changes. Williamson completed his studies and departed the University with a Master of Science in Computer Science, joined AT&T Bell Labs in Murray Hill, New Jersey, and took this cron with him. At Bell Labs, he and others incorporated the Unix at command into cron, moved the crontab files out of users' home directories (which were not host-specific) and into a common host-specific spool directory, and of necessity added the crontab command to allow users to copy their crontabs to that spool directory. This version of cron later appeared largely unchanged in Unix System V and in BSD and their derivatives, Solaris from Sun Microsystems, IRIX from Silicon Graphics, HP-UX from Hewlett-Packard, and AIX from IBM. Technically, the original license for these implementations should be with the Purdue Research Foundation, which funded the work, but this took place at a time when little concern was given to such matters. Modern versions With the advent of the GNU Project and Linux, new crons appeared. The most prevalent of these is the Vixie cron, originally coded by Paul Vixie in 1987. Version 3 of Vixie cron was released in late 1993. Version 4.1 was renamed to ISC Cron and was released in January 2004.
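The queue-driven main loop described above (sleep until the earliest event, run it, reschedule it) can be sketched with a modern priority queue. The following illustration uses Python's heapq in place of the Franta–Maly event list and stubs out the next-run-time computation, so it shows the structure of the loop rather than any historical source code.

import heapq
import subprocess
import time

def next_run_time(spec: str, after: float) -> float:
    """Stub: return the next epoch time 'spec' matches after 'after'.
    A real cron derives this from the five-field time specifier."""
    return after + 60.0  # placeholder: pretend every job runs once a minute

def run_event_loop(jobs: list[tuple[str, str]]) -> None:
    """jobs: list of (five-field spec, command) pairs."""
    queue: list[tuple[float, str, str]] = []
    now = time.time()
    for spec, command in jobs:
        heapq.heappush(queue, (next_run_time(spec, now), spec, command))
    while queue:
        when, spec, command = queue[0]
        delay = when - time.time()
        if delay > 0:
            time.sleep(delay)                   # sleep until the head event is due
        heapq.heappop(queue)
        subprocess.Popen(command, shell=True)   # run the job in the background
        # Reschedule the job at its next occurrence.
        heapq.heappush(queue, (next_run_time(spec, when), spec, command))

As in the Purdue cron, the daemon's work scales with the number of scheduled events, not with the passage of time.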
Version 3, with some minor bugfixes, is used in most distributions of Linux and the BSDs. In 2007, Red Hat forked vixie-cron 4.1 into the cronie project, and included anacron 2.3 in 2009. Other popular implementations include anacron and dcron. However, anacron is not an independent cron program; another cron job must call it. dcron was made by DragonFly BSD founder Matt Dillon, and its maintainership was taken over by Jim Pryor in 2010.

In 2003, Dale Mellor introduced mcron, a cron variant written in Guile which provides cross-compatibility with Vixie cron while also providing greater flexibility, as it allows arbitrary Scheme code to be used in scheduling calculations and job definitions. Since both the mcron daemon and the crontab files are usually written in Scheme (though mcron also accepts traditional Vixie crontabs), the cumulative state of a user's job queue is available to their job code, which may be scheduled to run only if the results of other jobs meet certain criteria. Mcron is deployed by default under the Guix package manager, which includes provisions (services) for the package manager to monadically emit mcron crontabs while both ensuring that packages needed for job execution are installed and that the corresponding crontabs correctly refer to them.

A webcron solution schedules recurring tasks to run on a regular basis wherever cron implementations are not available in a web hosting environment.

CRON expression

A CRON expression is a string comprising five or six fields separated by white space that represents a set of times, normally as a schedule to execute some routine. Comments begin with a comment mark #, and must be on a line by themselves. The month and weekday abbreviations are not case-sensitive. In the particular case of the system crontab file (/etc/crontab), a username field is inserted before the command; it is generally set to 'root'. In some uses of the CRON format there is also a seconds field at the beginning of the pattern. In that case, the CRON expression is a string comprising 6 or 7 fields.
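The conventional five-field layout can be summarized as follows (a sketch following common Vixie-cron conventions):

 # ┌───────────── minute (0–59)
 # │ ┌───────────── hour (0–23)
 # │ │ ┌───────────── day of the month (1–31)
 # │ │ │ ┌───────────── month (1–12)
 # │ │ │ │ ┌───────────── day of the week (0–6, Sunday to Saturday;
 # │ │ │ │ │               7 is also Sunday on some systems)
 # │ │ │ │ │
 # * * * * * <command to execute>

For example, the entry "45 23 * * 6 /home/alice/bin/backup.sh" would run the named script at 23:45 every Saturday (the path is, of course, illustrative).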
Comma

Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 5th field (day of week) means Mondays, Wednesdays and Fridays.

Dash ( - )

Dashes define ranges. For example, 2000–2010 indicates every year between 2000 and 2010, inclusive.

Percent ( % )

Percent signs (%) in the command, unless escaped with backslash (\), are changed into newline characters, and all data after the first % are sent to the command as standard input.

Non-standard characters

The following are non-standard characters and exist only in some cron implementations, such as the Quartz Java scheduler.

'L' stands for "last". When used in the day-of-week field, it allows specifying constructs such as "the last Friday" ("5L") of a given month. In the day-of-month field, it specifies the last day of the month.

The 'W' character is allowed for the day-of-month field. This character is used to specify the weekday (Monday–Friday) nearest the given day. As an example, if "15W" is specified as the value for the day-of-month field, the meaning is "the nearest weekday to the 15th of the month". So, if the 15th is a Saturday, the trigger fires on Friday the 14th. If the 15th is a Sunday, the trigger fires on Monday the 16th. If the 15th is a Tuesday, then it fires on Tuesday the 15th. However, if "1W" is specified as the value for day-of-month, and the 1st is a Saturday, the trigger fires on Monday the 3rd, as it does not 'jump' over the boundary of a month's days. The 'W' character can be specified only when the day-of-month is a single day, not a range or list of days.

Hash ( # )

'#' is allowed for the day-of-week field, and must be followed by a number between one and five. It allows specifying constructs such as "the second Friday" of a given month. For example, entering "5#3" in the day-of-week field corresponds to the third Friday of every month.

Question mark ( ? )

In some implementations, used instead of '*' for leaving either day-of-month or day-of-week blank. Other cron implementations substitute "?" with the start-up time of the cron daemon, so that "? ? * * *" would be updated to "25 8 * * *" if cron started up at 8:25 am, and would run at this time every day until restarted again.

Slash ( / )

In vixie-cron, slashes can be combined with ranges to specify step values. For example, "*/5" in the minutes field indicates every 5 minutes (see the note below about frequencies). It is shorthand for the more verbose POSIX form "0-59/5". POSIX does not define a use for slashes; its rationale (commenting on a BSD extension) notes that the definition is based on the System V format but does not exclude the possibility of extensions.

Note that frequencies in general cannot be expressed; only step values which evenly divide their range express accurate frequencies (for minutes and seconds, that's /2, /3, /4, /5, /6, /10, /12, /15, /20 and /30, because 60 is evenly divisible by those numbers; for hours, that's /2, /3, /4, /6, /8 and /12). All other possible "steps" and all other fields yield inconsistent "short" periods at the end of the time-unit before it "resets" to the next minute, second, or day; for example, entering */5 for the day field sometimes executes after 1, 2, or 3 days, depending on the month and leap year. This is because cron is stateless: it does not remember the time of the last execution, nor count the difference between it and now, as would be required for accurate frequency counting; instead, cron is a mere pattern-matcher.

'H' is used in the Jenkins continuous integration system to indicate that a "hashed" value is substituted. Thus instead of a fixed number such as "20 * * * *", which means at 20 minutes after the hour every hour, "H * * * *" indicates that the task is performed every hour at an unspecified but invariant time for each task. This allows spreading out tasks over time, rather than having all of them start at the same time and compete for resources.
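A few illustrative expressions using these extensions (Quartz uses a six- or seven-field, seconds-first format, and its day-of-week numbering runs 1–7 for Sunday through Saturday; the schedules shown are examples, not part of any standard):

 0 15 10 ? * 6L     (Quartz: 10:15 am on the last Friday of every month)
 0 15 10 15W * ?    (Quartz: 10:15 am on the weekday nearest the 15th of the month)
 0 15 10 ? * 6#3    (Quartz: 10:15 am on the third Friday of every month)
 H * * * *          (Jenkins: once an hour, at a per-job hashed minute)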
See also
at (command)
Launchd
List of Unix commands
Scheduling (computing)
systemd – incorporates a cron equivalent (called timers)
fcron
Windows Task Scheduler

Note

References

External links
GNU cron (mcron)
ISC Cron 4.1
cronie
ACM Digital library – Franta, Maly, "An efficient data structure for the simulation event set" (requires ACM pubs subscription)
Crontab syntax tutorial – Crontab syntax explained
UNIX / Linux cron tutorial – a quick tutorial for Unix-like operating systems, with sample shell scripts

Standard Unix programs
Unix SUS2008 utilities
Unix process- and task-management-related software
Wikipedia articles with ASCII art
Job scheduling

2336732
https://en.wikipedia.org/wiki/Installer%20VISE
Installer VISE
Installer VISE was an installer maker by MindVision Software that supported Mac OS 9, Mac OS X, and Windows. Steve Kiene, the founder of MindVision, had done software work in the area of data compression, producing the Application VISE executable compressor for the Mac platform and a disk compression software implementation; the latter was released as Stacker after being acquired by Stac Electronics. Installer VISE (first called Developer VISE) initially arose from some add-on software extensions that Kiene had developed for use with Apple's Installer software.

Originally created for the Mac only, Installer VISE was one of the most popular installer makers for the platform around 2006, owing to its visual interface, which made the software easy for nontechnical users, and to its extensive features. However, its popularity waned on Mac OS X. VISE X, an installer maker designed to produce installers specifically for installing Mac OS X software, was released by MindVision Software in 2003.

Quietly, without any press release, MindVision Software was acquired by Digital River in 2006. As of 2020, the company appears to have closed down, and its website points to MyCommerce, an e-commerce solution developed by Digital River.

See also
List of installation software

References

External links

Installation software
Classic Mac OS software
Utilities for macOS
Utilities for Windows
2397236
https://en.wikipedia.org/wiki/Linux%20adoption
Linux adoption
Linux adoption is the adoption of Linux computer operating systems (OS) by households, nonprofit organizations, businesses, and governments.

Many factors have resulted in the expanded use of Linux systems by traditional desktop users as well as operators of server systems, including the desire to minimize software costs, increase network security, and support open-source philosophical principles. In recent years several governments, at various levels, have enacted policies shifting state-owned computers to Linux from proprietary software regimes.

In August 2010, Jeffrey Hammond, principal analyst at Forrester Research, declared, "Linux has crossed the chasm to mainstream adoption," a statement attested by the large number of enterprises that had transitioned to Linux during the late-2000s recession. In a company survey completed in the third quarter of 2009, 48% of surveyed companies reported using an open-source operating system. The Linux Foundation regularly releases publications regarding the Linux kernel, Linux OS distributions, and related themes. One such publication, "Linux Adoption Trends: A Survey of Enterprise End Users," is freely available upon registration.

Traditionally, the term Linux adoption refers to adoption of a Linux OS made for "desktop" computers, the original intended use (or adoption on servers, which run essentially the same form of OS). Adoption of that form on personal computers remains relatively low, while adoption of the Android operating system is very high. Discussions of Linux adoption often overlook Android and other Linux-kernel-based systems, such as Chrome OS, which share almost nothing with desktop Linux beyond the kernel, not even the name "Linux"; Android is in fact the most popular variant, and indeed the most popular operating system in the world.

Linux adopters

Outside of traditional web services, Linux powers many of the biggest Internet properties (e.g., Google, Amazon, Facebook, eBay, Twitter and Yahoo!).

Hardware platforms with graphical user interface

Linux is used on desktop computers, servers and supercomputers, as well as a wide range of devices.

Desktop and nettop computers and laptops

Measuring desktop adoption

Because Linux desktop distributions are not usually distributed by retail sale, there are no sales numbers that indicate the number of users. One downloaded file may be used to create many CDs, and each CD may be used to install the operating system on multiple computers; on the other hand, the file might be used only for a test and the installation erased soon after. Due to these factors, estimates of current Linux desktop adoption often rely on webpage hits by computers identifying themselves as running Linux. The use of these statistics has been criticized as unreliable and as underestimating Linux use.

Using webpage hits as a measure, until 2008, Linux accounted for only about 1% of desktop market share, while Microsoft Windows operating systems held more than 90%. This might have been because Linux was not seen at that time as a direct replacement for Windows. W3Counter has estimated "Linux" web browser market share at 4.63%, with "Android" versions 6, 5 and 4 combined (Android being based on the Linux kernel) estimated at 33.77%. In September 2014 Pornhub released usage statistics of its website and reported 1.7% Linux users. The Unity game engine gathers user statistics and showed 0.4% Linux users in March 2016. Similarly, the Steam client tracks usage and reported around 1% Linux users in May 2015.
In April 2009, Aaron Seigo of KDE indicated that most web-page counter methods produce Linux adoption numbers that are far too low, given the system's extensive penetration into non-North American markets, especially China. He stated that the North American-based web-measurement methods produce high Windows numbers and ignore the widespread use of Linux in other parts of the world. Estimating true worldwide desktop adoption and accounting for the Windows-distorted environment in the US and Canada, he indicated that at least 8% of the world's desktops run Linux distributions, possibly as high as 10–12%, and that the numbers are rising quickly. Other commentators have echoed this belief, noting that competitors expend a lot of effort to discredit Linux, which is incongruent with a tiny market share.

In May 2009, Preston Gralla, contributing editor to Computerworld.com, reacting to the Net Applications web hit numbers showing that Linux use was over 1%, said that "Linux will never become an important desktop or notebook operating system". He reasoned that the recent upsurge in Linux desktop use was due to Linux netbooks, a trend he saw as already diminishing and which would be further eroded when Windows 7 became available (and indeed, Linux netbooks did fall by the wayside, though whether they were solely responsible for the upsurge in Linux usage is open to question). He concluded: "As a desktop operating system, Linux isn't important enough to think about. For servers, it's top-notch, but you likely won't use it on your desktop – even though it did finally manage to crack the 1% barrier after 18 years".

In 2009, Microsoft then-CEO Steve Ballmer indicated that Linux had a greater desktop market share than Mac, stating that in recent years Linux had "certainly increased its share somewhat". Just under a third of all Dell netbooks sold in 2009 had Linux installed. Caitlyn Martin, researching retail market numbers in the summer of 2010, also concluded that the traditional numbers cited for Linux desktop adoption were far too low.

Reasons for adoption

Reasons to change from other operating systems to Linux include better system stability, better malware protection, low or no cost, the fact that most distributions come complete with application software and hardware drivers, simplified updates for all installed software, free software licensing, availability of application repositories, and access to the source code. Linux desktop distributions also offer multiple desktop workspaces, greater customization, free and unlimited support through forums, and an operating system that doesn't slow down over time.

Environmental reasons are also cited: Linux operating systems usually do not come in boxes and other retail packaging but are downloaded via the Internet, and the lower system requirements mean that older hardware can be kept in use instead of being recycled or discarded. Linux distributions also get security vulnerabilities patched much more quickly than non-free operating systems, and improvements in Linux have been occurring at a faster rate than those in Windows. A report in The Economist in December 2007 made a similar observation, and further investments have been made to improve desktop Linux usability since that report.

Indian bulk computer purchaser the Electronics Corporation of Tamil Nadu (ELCOT) started recommending only Linux in June 2008.
Following testing, it stated: "ELCOT has been using SUSE Linux and Ubuntu Linux operating systems on desktop and laptop computers numbering over 2,000 during the past two years and found them far superior as compared to other operating systems, notably the Microsoft Windows Operating System."

In many developing nations, such as China, where widespread software piracy means that Microsoft Windows can easily be obtained for free, Linux distributions are gaining a high level of adoption. In these countries there is essentially no cost barrier to obtaining proprietary operating systems, so users are adopting Linux on its merits rather than on price. In January 2001, Microsoft then-chairman Bill Gates explained the attraction of adopting Linux in an internal memo that was released in the Comes vs Microsoft case.

Barriers to adoption

The greatest barrier to Linux desktop adoption is probably that few desktop PCs come with it from the factory. A.Y. Siu asserted in 2006 that most people use Windows simply because most PCs come with Windows pre-installed; they didn't choose it. Linux has much lower market penetration because in most cases users have to install it themselves, a task that is beyond the capabilities of many PC users: "Most users won't even use Windows restore CDs, let alone install Windows from scratch. Why would they install an unfamiliar operating system on their computers?" TechRepublic writer Jack Wallen expanded on this barrier in August 2008.

Linus Torvalds stated, in his June 2012 interaction with students at Aalto University, that although Linux was originally conceived as a desktop system, the desktop has been the only market where it has not flourished. He suggested that the key reason keeping Linux from a substantial presence in the desktop market is that the average desktop user does not want to install an operating system, so getting manufacturers to sell computers with Linux pre-installed would be the missing piece in fulfilling the vision of Linux on the desktop. He added that Chromebooks, by shipping with the Linux-based Chrome OS, could provide the key turning point in such a transition, much as Android allowed Linux to spread in the mobile space. In September 2012, GNOME developer Michael Meeks also indicated that the main reason for the lack of adoption of Linux desktops is the lack of manufacturers shipping computers with it pre-installed, supporting Siu's arguments from six years earlier. Meeks also indicated that users won't embrace desktop Linux until there is a wider range of applications, and developers won't create that wider range of applications until there are more users: a classic Catch-22 situation.

In an openSUSE survey conducted in 2007, 69.5% of respondents said they dual-booted a Microsoft Windows operating system in addition to a Linux operating system. In early 2007, Bill Whyman, an analyst at Precursor Advisors, noted that "there still isn't a compelling alternative to the Microsoft infrastructure on the desktop." Application support, the quality of peripheral support, and end-user support were at one time seen as the biggest obstacles to desktop Linux adoption; according to a 2006 survey by The Linux Foundation, these factors were seen as a "major obstacle" for 56%, 49%, and 33% of respondents respectively at that time.
Application support

The November 2006 Desktop Linux Client Survey identified the foremost barrier for deploying Linux desktops: users were accustomed to Windows applications which had not been ported to Linux and which they "just can't live without". These included Microsoft Office, Adobe Photoshop, Autodesk AutoCAD, Microsoft Project, Visio and Intuit QuickBooks. This creates a chicken-or-egg situation, where developers make programs for Windows due to its market share, and consumers use Windows due to the availability of those programs. In a DesktopLinux.com survey conducted in 2007, 72% of respondents said they used ways of running Windows applications on Linux.

51% of respondents to the 2006 Linux Foundation survey believed that cross-distribution Linux desktop standards should be the top priority for the Linux desktop community, highlighting the fact that the fragmented Linux market was preventing application vendors from developing, distributing and supporting the operating system. In May 2008, Gartner predicted that "version control and incompatibilities will continue to plague open-source OSs and associated middleware" in the 2013 timeframe.

By 2008, the design of Linux applications and the porting of Windows and Apple applications had progressed to the point where it was difficult to find an application that did not have an equivalent for Linux providing adequate or better capabilities. An example of application progress can be seen by comparing the main productivity suite for Linux, OpenOffice.org, to Microsoft Office; with the release of OpenOffice.org 3.0 in October 2008, Ars Technica assessed the two.

Peripheral support

In the past, the availability and quality of open source device drivers were issues for Linux desktops. Particular areas lacking drivers included printers as well as wireless and audio cards. For example, in early 2007, Dell did not sell certain hardware and software with Ubuntu 7.04 computers, including printers, projectors, Bluetooth keyboards and mice, TV tuners and remote controls, desktop modems and Blu-ray drives, due to incompatibilities at that time, as well as legal issues. By 2008, most Linux hardware support and driver issues had been adequately addressed; in September 2008, Jack Wallen offered a similar assessment.

End-user support

Some critics have stated that, compared to Windows, Linux is lacking in end-user support. Linux has traditionally been seen as requiring much more technical expertise; Dell's website described open source software as requiring intermediate or advanced knowledge to use. In September 2007, the founder of the Ubuntu project, Mark Shuttleworth, commented that "it would be reasonable to say that this is not ready for the mass market." In October 2004, the Chief Technical Officer of Adeptiva Linux, Stephan February, noted that Linux was a very technical software product, and that few people outside the technical community were able to support consumers. Windows users are able to rely on friends and family for help, but Linux users generally use discussion boards, which can be uncomfortable for consumers. In 2005, Dominic Humphries summarized the difference in user tech support. More recently, critics have found that the Linux user support model, using community-based forum support, has greatly improved.
In 2008, Jack Wallen restated this assessment, and Manu Cornet similarly addressed the question of user support.

Other factors

Linux's credibility has also been under attack at times, a point addressed by Ron Miller of LinuxPlanet. There is continuing debate about the total cost of ownership (TCO) of Linux, with Gartner warning in 2005 that the costs of migration may exceed the cost benefits of Linux. Gartner reiterated the warning in 2008, predicting that "by 2013, a majority of Linux deployments will have no real software total cost of ownership (TCO) advantage over other operating systems." However, in the Comes v. Microsoft lawsuit, Plaintiff's exhibit 2817 revealed that Microsoft had successfully lobbied Gartner in 1998 to change its TCO model in Microsoft's favour. Organizations that have moved to Linux have disagreed with these warnings. Sterling Ball, CEO of Ernie Ball, the world's leading maker of premium guitar strings and a 2003 Linux adopter, said of total cost of ownership arguments: "I think that's propaganda...What about the cost of dealing with a virus? We don't have 'em...There's no doubt that what I'm doing is cheaper to operate. The analyst guys can say whatever they want."

In the SCO–Linux controversies, the SCO Group had alleged that UNIX source code donated by IBM was illegally incorporated into Linux. The threat that SCO might be able to legally assert ownership of Linux initially caused some potential Linux adopters to delay that move. The court cases bankrupted SCO in 2007 after it lost its four-year battle over the ownership of the UNIX copyrights. SCO's case had hinged on showing that Linux included intellectual property that had been misappropriated from UNIX, but the case failed when the court found that Novell, and not SCO, was the rightful owner of the copyrights. During the legal process, it was revealed that SCO's claims about Linux were fraudulent and that SCO's internal source code audits had shown no evidence of infringement.

A rival operating system vendor, Green Hills Software, has called the open source paradigm of Linux "fundamentally insecure". The US Army does not agree that Linux is a security problem; Brigadier General Nick Justice, the Deputy Program Officer for the Army's Program Executive Office, Command, Control and Communications Tactical (PEO C3T), said as much in April 2007.

Netbooks

In 2008, Gartner analysts predicted that mobile devices like netbooks running Linux could potentially break the dominance of Microsoft's Windows as operating system provider, as the netbook concept focuses on OS-agnostic applications built as web applications and browsing. Until 2008, the netbook market was dominated by Linux-powered devices; this changed in 2009 after Windows XP became available as an option. One of the reasons given was that many customers returned Linux-based netbooks because they were still expecting a Windows-like environment, despite the netbook vision of a web-surfing and web-application device.

Web thin clients

In 2011, Google introduced the Chromebook, a web thin client running the Linux-based Chrome OS, with the ability to use web applications and to remote-desktop into other computers running Windows, Mac OS X, a traditional Linux distribution or Chrome OS, using Chrome Remote Desktop. In 2012, Google and Samsung introduced the first version of the Chromebox, a small-form-factor desktop equivalent to the Chromebook. By 2013, Chromebooks had captured 20–25% of the sub-$300 US laptop market.
Mobile devices

Note: the term "mobile devices" in the computing context refers to cellphones and tablets; it does not include regular laptops, despite the fact that they have always been designed to be mobile.

Android, which is based on Linux and is open source, is the most popular mobile platform. During the second quarter of 2013, 79.3% of smartphones sold worldwide were running Android. Android tablets are also available.

Discontinued Linux-based mobile operating systems

Firefox OS was another open source Linux-based mobile operating system, which has since been discontinued. Nokia previously produced some phones running a variant of Linux (e.g. the Nokia N900), but in 2013 Nokia's handset division was bought by Microsoft.

Other embedded systems with graphical user interface

Smartphones are gradually replacing these kinds of embedded devices, but they still exist. Examples are portable media players; some of their OEM firmware is Linux-based, and a community-driven, fully free and open-source project is Rockbox. In-vehicle infotainment (IVI) hardware usually involves some kind of display, either built into the dashboard or as additional displays. The GENIVI Alliance works on a Linux-based open platform to run the IVI. Such a system may have an interface to some values delivered by the engine control unit, but is nevertheless a completely separate system. There will be a special variant of Tizen for IVI, differing from the Tizen for smartphones in several regards.

Hardware platforms without graphical user interface

Embedded systems without graphical user interface

Linux is often used in various single- or multi-purpose computer appliances and embedded systems. Customer-premises equipment is a group of devices that are embedded and have no graphical user interface in the common sense. Some are remotely operated via Secure Shell or via some web-based user interface running on lightweight web server software. Most of the OEM firmware is based on the Linux kernel and other free and open-source software, e.g. Das U-Boot and BusyBox. There are also community-driven projects, e.g. OpenWrt. Smaller-scale embedded network-attached storage devices are also mostly Linux-driven.

Servers

Linux became popular in the Internet server market particularly due to the LAMP software bundle. In September 2008, Steve Ballmer (then Microsoft CEO) claimed that 60% of servers run Linux and 40% run Windows Server. According to IDC's report covering Q2 2013, Linux was up to 23.2% of worldwide server revenue, although, being based on revenue, this figure is affected by the price disparity between Linux and non-Linux servers. In May 2014, W3Techs estimated that 67.5% of the top 10 million websites (according to Alexa) run some form of Unix, and that Linux is used by at least 57.2% of all those websites which use Unix.

Web servers

Linux-based solution stacks come with all the general advantages and benefits of free and open-source software. Some of the more commonly known examples are:

LAMP
MEAN stack

According to Netcraft, nginx had the highest market share.

LDAP servers

There are various freely available implementations of LDAP servers. Additionally, Univention Corporate Server, an integrated management system based on Debian, supports the functions provided by Microsoft Active Directory for the administration of computers running Microsoft Windows.

Routers

Free routing software available for Linux includes BIRD, B.A.T.M.A.N., Quagga and XORP.
Whether on customer-premises equipment, personal computer hardware or server hardware, the mainline Linux kernel, or an adapted and highly optimized Linux kernel, is capable of routing at rates limited by the hardware bus throughput.

Supercomputers

Linux is the most popular operating system among supercomputers due to the general advantages and benefits of free and open-source software: superior performance, flexibility, speed and lower costs. In November 2008, Linux held an 87.8 percent share of the world's top 500 supercomputers. As of November 2021, all of the world's top 500 supercomputers ran Linux.

In January 2010, Weiwu Hu, chief architect of the Loongson family of CPUs at the Institute of Computing Technology, which is part of the Chinese Academy of Sciences, confirmed that the new Dawning 6000 supercomputer would use Chinese-made Loongson processors and would run Linux as its operating system. The most recent supercomputer the organization had built, the Dawning 5000a, which first ran in 2008, used AMD chips and ran Windows HPC Server 2008.

Advocacy

Many organizations advocate for Linux adoption. The foremost of these is the Linux Foundation, which hosts and sponsors the key kernel developers, manages the Linux trademark, manages the Open Source Developer Travel Fund, provides legal aid to open source developers and companies through the Linux Legal Defense Fund, sponsors kernel.org, and also hosts the Patent Commons Project. The International Free and Open Source Software Foundation (iFOSSF) is a nonprofit organization based in Michigan, USA, dedicated to accelerating and promoting the adoption of FOSS worldwide through research and civil society partnership networks. The Open Invention Network was formed to protect vendors and customers from patent royalty fees while using OSS.

Other advocates for Linux include:

IBM, through its Linux marketing strategy
Linux User Groups
Asian Open Source Centre (AsiaOSC)
The Brazilian government, under president Luiz Inácio Lula da Silva
Software Livre Brasil, a Brazilian organization promoting Linux adoption in schools, public departments, commerce, industry and personal desktops
FOSS: Free and Open Source Software Foundations of India and China

History

Gartner claimed that Linux-powered personal computers accounted for 4% of unit sales in 2008. However, it is common for users to install Linux in addition to (as a dual boot arrangement) or in place of a factory-installed Microsoft Windows operating system.

Timeline

1983 (September): GNU project announced publicly.
1991 (September): First version of the Linux kernel released to the Internet.
mid-1990s: Linux runs on cluster computers at NASA and elsewhere.
late 1990s: Dell, IBM and Hewlett-Packard offer commercial support for Linux on their hardware; Red Hat and VA Linux have initial public offerings.
1999: EmperorLinux started shipping specially configured laptops running modified Linux distributions to ensure usability.
2001 (second quarter): Linux server unit shipments recorded a 15% annual growth rate.
2004: Linux shipped on approximately 50% of worldwide server blade units, and 20% of all rack-optimized servers.
2005: System76, a Linux-only computer OEM, starts selling Ubuntu pre-installed on laptops and desktops.
2007
Dell announced it would ship select models with Ubuntu Linux pre-installed.
ZaReason is founded as a Linux-only hardware OEM.
Lenovo announced it would ship select models with SUSE Linux Enterprise Desktop pre-installed.
HP announced that it would begin shipping computers preinstalled with Red Hat Enterprise Linux in Australia.
ASUS launched the Linux-based ASUS Eee PC.
2008
Dell announced it would begin shipping Ubuntu-based computers to Canada and Latin America, and began shipping systems with Ubuntu pre-installed in China.
Acer launched the Linux-based Acer Aspire One.
In June 2008, the Electronics Corporation of Tamil Nadu (ELCOT), a bulk computer buyer for students in the Indian state of Tamil Nadu, decided to switch entirely to supplying Linux after Microsoft attempted to use its monopoly position to sell the organization Windows bundled with Microsoft Office. ELCOT declined the offer, stating "Any such bundling could result in serious exploitation of the consumer."
In August 2008, IBM cited market disillusionment with Microsoft Vista in announcing a new partnership arrangement with Red Hat, Novell and Canonical to offer "Microsoft-free" personal computers with IBM application software, including Lotus Notes and Lotus Symphony.
2009
In January 2009, the New York Times stated: "More than 10 million people are estimated to run Ubuntu today".
In mid-2009, Asus, as part of its It's Better with Windows campaign, stopped offering Linux, for which it received strong criticism. The company claimed that competition from other netbook makers drove it to offer only Windows XP. Writing in May 2010, ComputerWorld columnist Steven J. Vaughan-Nichols said: "I'm sure that the real reason is Microsoft has pressured Asus into abandoning Linux. On ASUS' site, you'll now see the slogan 'ASUS recommends Windows 7' proudly shown. Never mind that, while Windows 7 is a good operating system, Windows 7 is awful on netbooks."
In May 2009, Fedora developer Jef Spaleta estimated, on the basis of IP addresses of update downloads and statistics from the voluntary user hardware registration service Smolt, that there were 16 million Fedora systems in use. No effort was made to estimate how much the Fedora installed base overlaps with other Linux distributions (enthusiasts installing many distributions on the same system).
In June 2009, ZDNet reported: "Worldwide, there are 13 million active Ubuntu users with use growing faster than any other distribution."
2010
In April 2010, Chris Kenyon, vice president for OEM at Canonical Ltd., estimated that there were 12 million Ubuntu users.
In June 2010, Quebec Superior Court Judge Denis Jacques ruled that the provincial government broke the law when it spent Cdn$720,000, starting in the fall of 2006, to migrate 800 government workstations to Microsoft Windows Vista and Office 2007 without carrying out a "serious and documented search" for alternatives. A search for alternatives was legally required for any expenditure over Cdn$25,000. The court case was brought by Savoir Faire Linux, a small Montreal-based company that had hoped to bid Linux software to replace the government's aging Windows XP. The judge dismissed the government's contention that Microsoft software was chosen because employees were already familiar with Windows and that switching to a different operating system would have cost more.
In October 2010, a statistics company stated that Android, Google's version of Linux for smartphones (and tablets), had become the most popular operating system among new buyers.
2012
In November 2012, Top500.org's November 2012 list showed all of the top 10 supercomputers running a distribution of Linux as their operating system.
2013
In February 2013, Dice and the Linux Foundation released a survey showing Linux skills in high demand among employers.
Valve announced its Linux-based SteamOS for video game consoles.
Supercomputers, Japan's bullet trains, traffic control, Toyota IVI, the NYSE, CERN, FAA air traffic control, nuclear submarines and top websites all use Linux.
In December 2013, the city of Munich announced that it had successfully migrated 12,000 of its 15,000 computers to LiMux Linux and that the savings in 2013 alone were about 10 million euros.
2014
In September 2014, the Italian city of Turin, the capital of Piedmont, decided to switch to Linux.
In October 2014, the city of Gummersbach announced that its IT infrastructure was now based on 300 thin clients and 6 servers running SUSE Linux.
In June 2014, France's National Gendarmerie completed the migration of 65,000 computers to its Linux distribution "GendBuntu".
In November 2014, Purism was founded as an OEM Linux manufacturer.
2017
In November 2017, all 500 of the world's top supercomputers ran Linux.
2018
In April 2018, Microsoft announced Azure Sphere, a Linux-based operating system for Internet of Things applications.
In May 2018, pre-orders began for Atari VCS, a gaming console powered by the Linux kernel.
2019
In May 2019, Microsoft announced Windows Subsystem for Linux 2, which relies on a pre-installed Linux kernel built by Microsoft. This marked the first time that a Linux kernel shipped with a Microsoft operating system.
In May 2019, South Korea announced that it was looking to migrate its major government systems to Linux, due to the pending end of support for Windows 7.
2021
In January 2021, the government of the Argentine province of Misiones announced that it had developed a distribution based on the Devuan operating system, specially designed for government offices.
In February 2021, Linux was first used on Mars, when NASA's Perseverance rover landed on 18 February.

See also

References

External links
O/S market share monthly estimations, based on internet traffic
Operating System Market Share Worldwide | StatCounter Global Stats
LinuxWorld: What's Driving Global Linux Adoption?
OSDL Desktop Linux Client Survey
Canadian Provincial Medical Association To Use Open Source Platform For EMR Project
IDC: Latin America Linux Migration Trends 2005
OSDL Claims Linux Making Major Gains in Global Retail Sector
Linux Advocacy mini-HOWTO
Measuring total cost of ownership
Gartner: Open source will quietly take over
IDC: Linux-Related Spending Could Top $49B by 2011
Red Hat – Open Source Activity Map

Linux
Linux-based devices
Operating system advocacy
Technological change
667970
https://en.wikipedia.org/wiki/Client%20%28computing%29
Client (computing)
In computing, a client is a piece of computer hardware or software that accesses a service made available by a server as part of the client–server model of computer networks. The server is often (but not always) on another computer system, in which case the client accesses the service by way of a network. A client is thus a computer or a program that, as part of its operation, relies on sending requests to another program or computer in order to access a service.

For example, web browsers are clients that connect to web servers and retrieve web pages for display. Email clients retrieve email from mail servers. Online chat uses a variety of clients, which vary depending on the chat protocol being used. Multiplayer video games or online video games may run as a client on each computer. The term "client" may also be applied to computers or devices that run the client software, or to users that use the client software.

A client is part of the client–server model, which is still in wide use today. Clients and servers may be computer programs run on the same machine that connect via inter-process communication techniques. Combined with Internet sockets, programs may connect to a service operating on a possibly remote system through the Internet protocol suite. Servers wait for potential clients to initiate connections that they may accept.
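As a sketch of this request and response pattern, the following Python fragment connects to a hypothetical echo service; the host name and port are placeholders, not a real deployment:

 import socket

 # Open a TCP connection to a (hypothetical) echo server.
 HOST, PORT = "server.example.com", 7

 with socket.create_connection((HOST, PORT), timeout=5) as sock:
     sock.sendall(b"hello, server\n")   # the client initiates the exchange
     reply = sock.recv(1024)            # and reads the server's response
     print(reply.decode("utf-8", errors="replace"))

The server side is symmetric: it binds to a port, listens, and accepts incoming connections, as described above.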
The term was first applied to devices that were not capable of running their own stand-alone programs, but could interact with remote computers via a network. These computer terminals were clients of the time-sharing mainframe computer.

Types

In one classification, client computers and devices are either thick clients, thin clients, or diskless nodes.

Thick

A thick client, also known as a rich client or fat client, is a client that performs the bulk of any data processing operations itself, and does not necessarily rely on the server. The personal computer is a common example of a fat client, because of its relatively large set of features and capabilities and its light reliance upon a server. For example, a computer running an art program (such as Krita or SketchUp) that ultimately shares the result of its work on a network is a thick client. A computer that runs almost entirely as a standalone machine, except to send or receive files via a network, is commonly called a workstation.

Thin

A thin client is a minimal sort of client. Thin clients use the resources of the host computer. A thin client generally only presents processed data provided by an application server, which performs the bulk of any required data processing. A device using a web application (such as Office Web Apps) is a thin client.

Diskless node

A diskless node is a mixture of the above two client models. Similar to a fat client, it processes locally, but relies on the server for storing persistent data. This approach offers features from both the fat client (multimedia support, high performance) and the thin client (high manageability, flexibility). A device running an online version of the video game Diablo III is an example of a diskless node.

References

Peer-to-peer computing

44373565
https://en.wikipedia.org/wiki/Life%20Is%20Strange%20%28video%20game%29
Life Is Strange (video game)
Life Is Strange is an episodic graphic adventure video game developed by Dontnod Entertainment and published by Square Enix's European subsidiary for Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, and Xbox One. The first installment of the Life Is Strange series, the game was released in five episodes periodically throughout 2015. It was ported to OS X and Linux in 2016, and to iOS and Android in 2017–2018.

The plot focuses on Max Caulfield, an 18-year-old photography student who discovers that she has the ability to rewind time at any moment, leading her every choice to enact the butterfly effect. The player's actions adjust the narrative as it unfolds, and reshape it once the player is allowed to travel back in time. Fetch quests and environmental changes represent the game's forms of puzzle solving, in addition to branching choices for conversation.

Development of Life Is Strange began in April 2013. It was conceived with an episodic format in mind, for reasons both financial and creative. The developers conducted field research on the setting by traveling to the Pacific Northwest, and subverted known archetypes to create the characters. Player feedback influenced the adjustments made to the episodes. Story and character arcs serve as the central point of the game.

Upon release, Life Is Strange received generally favorable reviews commending the character development, the rewind game mechanic, and its tackling of taboo subjects. Common criticisms included the slang that was used, poor lip-syncing, and tonal inconsistencies in the story. The game received over 75 Game of the Year awards and listings. It has sold over three million copies as of May 2017. A prequel, Life Is Strange: Before the Storm, was released in August 2017, and a sequel, Life Is Strange 2, in September 2018. An additional installment in the series, Life Is Strange: True Colors, was released in September 2021. A remastered version of the game was released as part of the Life Is Strange Remastered Collection on 1 February 2022.

Gameplay

Life Is Strange is a graphic adventure played from a third-person view. The mechanic of rewinding time allows the player to redo almost any action that has been taken. The player can examine and interact with objects, which enables puzzle solving in the form of fetch quests and making changes to the environment. Items collected before time travelling are kept in the inventory afterwards. The player can explore various locations in the fictional setting of Arcadia Bay and communicate with non-playable characters. Dialogue exchanges can be rewound, while branching options are used for conversation. Once an event is reset, details learned earlier remain available and can be used in the future. In some instances, choices in dialogue alter and affect the story through short- or long-term consequences; for each of the choices, something good in the short term could turn out worse later.

Plot

Life Is Strange is set in the fictional town of Arcadia Bay, Oregon, in October 2013, and is told from the perspective of Maxine "Max" Caulfield (Hannah Telle), a twelfth-grade student attending Blackwell Academy. During photography class with her teacher Mark Jefferson (Derek Phillips), Max experiences a vision of a lighthouse being destroyed by a swelling tornado. Leaving for the restroom to regain her composure, she witnesses classmate Nathan Prescott (Nik Shriner) kill a girl in a fit of rage.
In a single, sudden effort, she develops the ability to rewind time and rescues the girl, revealed to be her childhood friend Chloe Price (Ashly Burch). The two reunite and go for a walk at the lighthouse, where Max reveals to Chloe her capacity to travel back in time. It is established that the vision was in fact a premonition of a future event: a storm approaching the town.

The next day, Max observes fellow student Kate Marsh (Dayeanne Hutton) being bullied over a viral video depicting her kissing several students at a party. When Max meets Chloe at the diner where her mother Joyce (Cissy Jones) works, they decide to experiment with Max's power at Chloe's secret scrapyard hideout. However, the strain causes Max to have a nosebleed and faint. Chloe takes her back to Blackwell, but class is halted when everyone is called out to the courtyard: Kate commits suicide by jumping off the roof of the girls' dorm. Max manages to rewind, and time stops unexpectedly as she reaches Kate, giving Max the opportunity to convince her to come down. Max ultimately resolves to uncover what happened to Kate and to Chloe's missing friend Rachel Amber.

Max and Chloe break into the principal's office that night to investigate, and enter the pool for a swim before evading the school's security guards, who may, depending on player choices, include Chloe's stepfather, David Madsen (Don McManus), the head of security at Blackwell. The pair flee back to Chloe's place. Later that morning they sneak into the motorhome of Frank Bowers (Daniel Bonjour), a drug dealer and friend of Rachel, and learn that Rachel was in a relationship with Frank and lied to Chloe about it, causing Chloe to storm off feeling betrayed. Max returns to her dormitory and examines a childhood photo of her and Chloe, but is suddenly transported to the day the picture was taken. Max prevents Chloe's father William (Joe Ochman) from dying in a traffic collision, which inadvertently creates an alternative reality in which William is alive but Chloe has been paralysed from the neck down as a result of a collision in her own car. Max uses the photo to undo her decision and return to the present day, restoring Chloe's health.

Continuing their investigation, Max and Chloe obtain clues leading them to an abandoned barn owned by the influential Prescott family. They discover a hidden bunker containing pictures of Kate and Rachel tied up and intoxicated, with Rachel apparently buried at Chloe's secret hideout. They hurry back to the scrapyard and find Rachel's grave, much to Chloe's despair. Max follows Chloe to the school party to confront Nathan, believing he will target fellow student Victoria Chase (Dani Knights). They receive a text from Nathan threatening to destroy the evidence, prompting them to return to the scrapyard. Suddenly, the two are ambushed by Jefferson, who anaesthetises Max and then kills Chloe with a gunshot to the head.

Max is kidnapped and held captive in the "Dark Room", a place where Jefferson has been drugging and photographing young girls to capture their innocence. Jefferson also reveals that he had taken Nathan on as a personal student but killed him before abducting Max, because Nathan had given Rachel an overdose while trying to mimic Jefferson's work; he intends to do the same to Max once he has the photos he wants. Max escapes into a photograph and emerges back at the beginning, in Jefferson's class. She alerts David, getting Jefferson (and Nathan) arrested.
Max is given the opportunity to go to San Francisco and have one of her photos displayed in an art gallery. She calls Chloe from the event, realising that, for all her effort, the storm has reached Arcadia Bay. Max travels back to the time at which she took the gallery photo, which eventually leads her to travel through alternative realities as they devolve into a dreamscape nightmare. Max and Chloe finally return to the lighthouse and confront the possibility that Max brought the storm into existence by saving Chloe from being shot by Nathan earlier in the week. Max must make a choice: sacrifice Chloe's life to save Arcadia Bay, or sacrifice Arcadia Bay to spare Chloe. If Max rewinds time to undo Chloe's survival, she reluctantly allows Nathan to shoot Chloe, leading to his and Jefferson's arrest; Chloe's death is mourned, and the storm never appears. If Max maintains the timeline to stay with Chloe, the storm finally ceases, and the pair depart from the now devastated Arcadia Bay.

Development

Life Is Strange was Dontnod Entertainment's second title with a female protagonist. A developer diary published before release said that most prospective publishers were unwilling to publish the game unless it had a male protagonist; most publishers had raised the same objection to Dontnod's first project, Remember Me, and Dontnod CEO Oskar Guilbert had also challenged the idea at the start. Square Enix was the only publisher with no intention of changing this.

Development of Life Is Strange began in April 2013 with a team of 15, and more people were added when the collaboration with Square Enix began. The episodes were originally aimed to release about six weeks apart. Dontnod co-founder Jean-Maxime Moris was originally the game's creative director. Dontnod told Square Enix about Life Is Strange only after the publisher had turned down a pitch for a larger game. Before signing with Square Enix, Life Is Strange was imagined as a full-length video game that Dontnod would self-publish; however, the publisher surmised that it would be more successful as an episodic title. The game was originally codenamed What If, but the name was not used because of the film of the same name.

Life Is Strange was born from the rewind-mechanic idea, which the developer had already experimented with in its previous game, Remember Me. The lead character Max was created with the ability to rewind time to complement this mechanism. The episodic format was decided upon by the studio for creative reasons, financial restrictions and marketing purposes, allowing it to tell the story at its preferred slow pace. The Pacific Northwest was picked as the setting for the purpose of conveying a nostalgic and autumnal feel. The development team visited the region, took photographs, looked at local newspapers and used Google Street View to make sure the environment was accurately portrayed.

It was decided early on that most of the budget be spent on the writing and voice actors. The original story was written in French by Jean-Luc Cano, and converted into a game script by the co-directors and design team; it was subsequently handed over to Christian Divine and Cano to be fine-tuned in English. Story and character development were highlighted over point-and-click puzzles, making choice and consequence integral to how the narrative unfolds. Hannah Telle auditioned for Max Caulfield in July 2014 and was offered the part; Ashly Burch auditioned for both Max and her eventual role, Chloe Price.
The recording sessions were done in Los Angeles, California, with the French developer brought in via Skype. Although the game differs significantly from Remember Me, it addresses similar themes of memory and identity; Life Is Strange was described as an analogue look at human identity, in contrast to Remember Me's digital view of the same theme. Running on an improved version of Unreal Engine 3, it makes use of the tools and special effects, such as lighting and depth of field, engineered for Remember Me, as well as subsequent advances. Visual effects like post-processes, double exposure and overlapping screen-space particles were used as an artistic approach to be displayed while the lead character rewinds time. The textures seen in the game were entirely hand-painted, adapted to achieve what art director Michel Koch called "impressionistic rendering".

Elements were adjusted based on player feedback, with influences like The Walking Dead, Gone Home and Heavy Rain in mind. Additional sources of inspiration include the visual novel Danganronpa, in terms of balancing gameplay and story, and the novel The Catcher in the Rye, whose protagonist Holden Caulfield shares a surname with Max, the game's lead. The characters were created using known archetypes, at first to establish an entry point for the player, and then to subvert them. In the interest of realism, the supernatural elements were designed as a metaphor for the characters' inner conflict, and experts were consulted to tackle the subject of teen suicide.

The score was composed by Jonathan Morali of the band Syd Matters. Inspired by modern indie folk music, the soundtrack was intended to inform the mood. The music contains a blend of licensed tracks and composed pieces. Featured artists include José González, Mogwai, Breton, Amanda Palmer, Brian Viglione, Bright Eyes, Message to Bears, Local Natives, Syd Matters, Sparklehorse, Angus & Julia Stone, alt-J, Mud Flow and Foals.

Release

Square Enix and Dontnod announced Life Is Strange on 11 August 2014. The episodes were released digitally on PC via Steam, on PlayStation 3 and PlayStation 4 via PlayStation Network, and on Xbox 360 and Xbox One via Xbox Live, between 30 January 2015 and 20 October 2015. Two season pass options were available at reduced prices, one with episodes 1–5 and one with episodes 2–5. A demo of the first 20 minutes was released simultaneously with episode 1 for consoles, and later for PC. In November 2014, the publisher said it was interested in releasing physical copies of the game, but that at that time it was "100 per cent focused on the digital release". One year later, a retail edition was set to be released for PC, PS4 and Xbox One in North America on 19 January 2016 and in Europe on 22 January 2016; the limited edition included an artbook, the soundtrack, the score, and a director's commentary. The director's commentary was also released as free DLC. A Japanese dubbed version was released for Microsoft Windows, PlayStation 3 and PlayStation 4 on 3 March 2016.

Feral Interactive ported Life Is Strange to OS X, released on 16 June 2016, and Linux, released on 21 July 2016. That same day, the first episode was made indefinitely available for free on Linux, Windows, OS X, PS3, PS4, Xbox 360 and Xbox One. Life Is Strange was included in PlayStation Plus (for America and PAL regions) for the month of June 2017. It was released for iOS between 14 December 2017 and 29 March 2018, and launched on Android on 18 July 2018, both ported by Black Wing Foundation.
Reception

Life Is Strange received generally favorable reviews, with a Metacritic score of 85/100 on PlayStation 4 and Xbox One. While some reviewers criticised the game's lip-syncing and use of dated slang, they lauded the character development and time travel component, suggesting that there should be more games like it. Eurogamer said it was "one of the best interactive story games of this generation" and Hardcore Gamer called it the sleeper hit of 2015. Life Is Strange received over 75 Game of the Year awards and listings. In April 2017, Xbox One UK ranked it first in its list of low-priced Xbox One games. Game director Yoko Taro listed it as one of his favourite PlayStation 4 games.

Kevin VanOrd of GameSpot said Episode 1: Chrysalis was "an involving slice of life that works because its situations eloquently capture a peculiar early-college state of mind", while Game Informer's Kimberley Wallace said the game's tackling of "subjects that are usually taboo for video games" was impressive. Destructoid's Brett Makedonski said the episode's strongest characteristic was exploration, both "self- and worldly". Mitch Dyer of IGN said the story was ultimately obstructed by its "laughable" script and "worse performances". In response to Episode 2: Out of Time, Polygon's Megan Farokhmanesh also said that the emphasis on self-exploration had considerable impact on the enjoyment of the game. Other critics said the ending was an "emotional high point" and that it brought meaning to the choices from both the first and second episodes. Mike Williams said in USgamer that the pacing of Episode 2: Out of Time was "slower and less exciting" than that of episode one. PopMatters' Eric Swain described the episode as generally sincere but containing moments that strained credibility.

Adnan Riaz of Hardcore Gamer said Episode 3: Chaos Theory was a dramatic improvement that presented a "thrilling, poignant, fascinating and ... enticing" narrative, whose outcomes from past decisions also added a sense of realism. Peter Paras of Game Revolution complimented the character beats, particularly the development of Chloe Price, who he said "really comes into her own as [a] fully-formed character". Though GameSpot's Alexa Ray Corriea said that the fetch quests interfered with its emotional quality, the episode built up to a "killer cliffhanger", according to Farokhmanesh.

GameZone's Matt Liebl said Episode 4: Dark Room was "easily the most emotional episode" and that the mystery of Rachel Amber had done a "tremendous job in keeping us hooked". Tom Hoggins of The Telegraph described the developer's venture into subjects like social division, online bullying, parental conflict and suicide as "bold". Critics said there were tonal problems, caused by the game's "cheap ways" of progressing the plot, such as character inconsistency and superfluous shock value; they were more favourable towards the episode's puzzles and relationships. They said the final episode, Polarized, provided a "fitting conclusion" to the coming-of-age story of Max Caulfield and that the relationship between the two leads was carried out successfully. One stealth sequence was described as "tedious" and "out of place", while other aspects of the same course of events were favoured. Reviewers were divided on the ending.

Sales

The first episode was ranked fifth among the best-selling PlayStation 3 and PlayStation 4 video games of February 2015.
Life Is Strange reached one million sales in July 2015, having accumulated over 1.2 million unique players worldwide; the attach rate between the complete season and the season pass proved to be "extremely strong", Square Enix divulged. The retail edition reached seventh place in the top-ten UK game sales chart for the week ending 23 January 2016. Life Is Strange was one of the top 100 best-selling games on Steam in 2016. As of May 2017, more than three million copies had been sold.

Awards

Legacy and impact

After Life Is Strange achieved financial and commercial success, Dontnod Entertainment became more prominent in the video game industry; publishers pursued the studio for the first time, whereas it previously had to pursue publishers itself. CEO Oskar Guilbert said that the game saved his company financially after the mediocre sales of Remember Me. The Washington Post, in its review, noted the game as passing the Steven Spielberg test for video games as an art form. Fans speculated and made theories about the plot, and some predicted part of a possible ending. In 2016, Square Enix sponsored its own "Everyday Heroes" photography contest, inspired by the game, offering a scholarship for the winning entry. Square Enix also coordinated with the Parent Advocacy Coalition for Educational Rights (PACER) to support an anti-bullying initiative based on themes within the game and donated a total of $25,000.

In July 2016, Legendary Digital Studios and Square Enix announced that they would be adapting Life Is Strange as a digital series. At the time of the announcement, they were meeting with potential writers for the series adaptation, which would be set in Arcadia Bay. In 2017, dj2 Entertainment sold the rights to the series to the streaming service Hulu. Life Is Strange: Before the Storm, a prequel developed by Deck Nine, launched on 31 August 2017. A free spin-off called The Awesome Adventures of Captain Spirit was announced and released in June 2018. Life Is Strange 2 was released on 27 September 2018, featuring a new location and cast of characters. A comic book series of the same name, set after the "Sacrifice Arcadia Bay" ending of the game, was released by Titan Comics beginning in November 2018. The comic is written by Emma Vieceli, with interior and cover art by Claudia Leonardi and colours by Andrea Izzo. Square Enix also partnered with Titan Comics to produce Life Is Strange: Welcome to Blackwell Academy, a tie-in book about Blackwell Academy and the town of Arcadia Bay, written by Matt Forbeck.

Remastered versions of Life Is Strange and Before the Storm were announced on 18 March 2021 as part of Life Is Strange: Remastered Collection. The remaster includes previously released content with updated visuals and gameplay puzzles, improved character animation, engine and lighting upgrades, and full facial motion capture. It is scheduled for release on 1 February 2022 on PlayStation 4, Xbox One, Microsoft Windows and Google Stadia, and at a later date on Nintendo Switch.
Notes References External links Official website 2015 video games Adventure games Android (operating system) games Bullying in fiction Cameras in fiction Coming-of-age fiction Video games about dreams Episodic video games Feral Interactive games High school-themed video games Interactive movie video games IOS games LGBT-related video games Linux games MacOS games Nintendo Switch games Fiction about murder Video games about the paranormal PlayStation 3 games PlayStation 4 games PlayStation Network games Video games about psychic powers Science fiction video games Single-player video games Square Enix games Suicide in fiction Video games about time travel Unreal Engine games Video games about mental health Video games adapted into television shows Video games developed in France Video games featuring female protagonists Video games featuring parallel universes PlayStation Plus games Video games scored by Jonathan Morali Video games set in 2013 Video games set in Oregon Video games with alternate endings Video games with time manipulation Windows games Xbox 360 games Xbox 360 Live Arcade games Xbox One games
4634262
https://en.wikipedia.org/wiki/Exec%20%28system%20call%29
Exec (system call)
In computing, exec is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable. This act is also referred to as an overlay. It is especially important in Unix-like systems, although it exists elsewhere. As no new process is created, the process identifier (PID) does not change, but the machine code, data, heap, and stack of the process are replaced by those of the new program. The exec call is available for many programming languages, including compilable languages and some scripting languages. In OS command interpreters, the exec built-in command replaces the shell process with the specified program.

Nomenclature

Interfaces to exec and its implementations vary. Depending on the programming language it may be accessible via one or more functions, and depending on the operating system it may be represented with one or more actual system calls. For this reason exec is sometimes described as a collection of functions. Standard names of such functions in C are execl, execle, execlp, execv, execve, and execvp (see below), but not "exec" itself. The Linux kernel has one corresponding system call named "execve", whereas all aforementioned functions are user-space wrappers around it. Higher-level languages usually provide one call named exec.

Unix, POSIX, and other multitasking systems

C language prototypes

The POSIX standard declares the exec functions in the unistd.h header file, in the C language. The same functions are declared in process.h for DOS (see below), OS/2, and Microsoft Windows.

int execl(char const *path, char const *arg0, ...);
int execle(char const *path, char const *arg0, ..., char const *envp[]);
int execlp(char const *file, char const *arg0, ...);
int execv(char const *path, char const *argv[]);
int execve(char const *path, char const *argv[], char const *envp[]);
int execvp(char const *file, char const *argv[]);
int fexecve(int fd, char *const argv[], char *const envp[]);

Some implementations provide these functions named with a leading underscore (e.g. _execl).

The base of each is exec (execute), followed by one or more letters:

e – An array of pointers to environment variables is explicitly passed to the new process image.
l – Command-line arguments are passed individually (a list) to the function.
p – Uses the PATH environment variable to find the file named in the file argument to be executed.
v – Command-line arguments are passed to the function as an array (vector) of pointers.

path

The path argument specifies the path name of the file to execute as the new process image. Arguments beginning at arg0 are pointers to arguments to be passed to the new process image. The argv value is an array of pointers to arguments.

arg0

The first argument arg0 should be the name of the executable file. Usually it is the same value as the path argument. Some programs may incorrectly rely on this argument providing the location of the executable, but there is no guarantee of this, nor is it standardized across platforms.

envp

The argument envp is an array of pointers to environment settings. The exec calls whose names end with an e alter the environment for the new process image by passing a list of environment settings through the envp argument. This argument is an array of character pointers; each element (except for the final element) points to a null-terminated string defining an environment variable. Each null-terminated string has the form:

name=value

where name is the environment variable name, and value is the value of that variable.
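As an illustration of the calling convention, the following minimal sketch (assuming a POSIX environment; the program path, arguments and environment entries are arbitrary examples) builds NULL-terminated argv and envp arrays by hand and overlays a forked child process with execve:

/* Minimal fork–exec sketch: overlay a child process with /bin/ls.
   The path, arguments and environment shown here are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *child_argv[] = { "ls", "-l", "/tmp", NULL };               /* argv[0] is conventionally the program name */
    char *child_envp[] = { "PATH=/bin:/usr/bin", "LC_ALL=C", NULL }; /* each entry has the form name=value */

    pid_t pid = fork();                              /* create a child process to be overlaid */
    if (pid == 0) {                                  /* child */
        execve("/bin/ls", child_argv, child_envp);   /* on success this never returns; the PID is unchanged */
        perror("execve");                            /* reached only if the overlay failed */
        _exit(EXIT_FAILURE);
    } else if (pid > 0) {                            /* parent */
        int status;
        waitpid(pid, &status, 0);                    /* collect the child's exit status */
        return EXIT_SUCCESS;
    }
    perror("fork");
    return EXIT_FAILURE;
}

Note that both arrays end with a NULL pointer, which is how the functions detect the end of the argument and environment lists.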
The final element of the envp array must be null. In the execl, execlp, execv, and execvp calls, the new process image inherits the current environment variables.

Effects

A file descriptor that is open when an exec call is made remains open in the new process image, unless it was set close-on-exec with fcntl (FD_CLOEXEC) or opened with O_CLOEXEC (the latter was introduced in POSIX.1-2001). This aspect is used to specify the standard streams (stdin, stdout and stderr) of the new program.

A successful overlay destroys the previous memory address space of the process, and all its memory areas that were not shared are reclaimed by the operating system. Consequently, all its data that were not passed to the new program, or otherwise saved, become lost.

Return value

A successful exec replaces the current process image, so it cannot return anything to the program that made the call. Processes do have an exit status, but that value is collected by the parent process. If an exec function does return to the calling program, an error has occurred: the return value is −1, and errno is set to indicate the error.

DOS operating systems

DOS is not a multitasking operating system, but replacing the previous executable image there has great merit due to harsh primary memory limitations and the lack of virtual memory. The same API is used for overlaying programs in DOS, and it has effects similar to those on POSIX systems.

MS-DOS exec functions always load the new program into memory as if the "maximum allocation" in the program's executable file header were set to the default value 0xFFFF. The EXEHDR utility can be used to change the maximum allocation field of a program. However, if this is done and the program is invoked with one of the exec functions, the program might behave differently from a program invoked directly from the operating-system command line or with one of the spawn functions (see below).

Command interpreters

Many Unix shells also offer a builtin exec command that replaces the shell process with the specified program. Wrapper scripts often use this command to run a program (either directly or through an interpreter or virtual machine) after setting environment variables or other configuration. By using exec, the resources used by the shell program do not need to stay in use after the program is started. The exec command can also perform a redirection. In some shells it is even possible to use the exec command for redirection only, without performing an actual overlay.

Alternatives

The traditional Unix system does not have the functionality to create a new process running a new executable program in one step, which explains the importance of exec for Unix programming. Other systems may use spawn as the main tool for running executables. Its result is equivalent to the fork–exec sequence of Unix-like systems. POSIX supports the posix_spawn routines as an optional extension that is usually implemented using vfork.

Other systems

OS/360 and successors include a system call XCTL (transfer control) that performs a similar function to exec.

See also

Chain loading, overlaying in system programming
exit (system call), terminate a process
fork (system call), make a new process (but with the same executable)
clone(), the way to create new threads
PATH (variable), related to the semantics of the file argument

References

External links

Process (computing) POSIX Process.h Unix SUS2008 utilities System calls
8852318
https://en.wikipedia.org/wiki/Betsson
Betsson
Betsson AB is a Swedish company that offers a number of online gambling products, such as casino, poker, bingo, sports betting and scratch cards, through more than 20 online gaming brands, including Betsson, Betsafe and NordicBet. Betsson AB is listed on the Nasdaq Stockholm Large Cap List.

Corporate history

Betsson AB can trace its roots back to 1963 and the foundation of AB Restaurang Rouletter by Bill Lindwall and Rolf Lundström, later renamed Cherryföretagen AB (Cherry), which provided slot machines to restaurants in Sweden. Cherry acquired a minority share of Net Entertainment in 1998, a company co-founded with Investment AB Kinnevik and tasked with developing online gaming solutions. The rest of the shares in Net Entertainment held by Kinnevik were acquired by Cherry in 2000, making Kinnevik the largest shareholder of Cherry in the process. In 2003, after the return of Pontus Lindwall (son of Bill Lindwall) as the CEO of Cherry, the company bought into Betsson (founded by Henrik Bergquist, Anders Holmgren and Fredrik Sidfalk), which had a gaming licence in England at that time, and later acquired a licence in Malta. In 2006, Cherryföretagen changed its name to Betsson, and the traditional gaming operations in Cherry's business sector were merged into a new group, Cherryföretagen, which later shortened its name back to Cherry and launched its own online sector based in Malta.

Ulrik Bengtsson stepped up as CEO of Betsson in March 2016, taking over from Pontus Lindwall, who stayed on as chairman of the board. Bengtsson stepped down as CEO on 9 April 2017 and was succeeded by Pontus Lindwall; as a consequence, Patric Svensk was appointed chairman of the board after Lindwall.

In May 2021, Betsson announced plans to launch its brand in Greece after obtaining two licences from the Hellenic Gaming Commission. The same month, Betsson partnered with Norwegian Toppserien women's football club Avaldsnes IL. In 2021, it was announced that Betsson would launch in Mexico through a partnership with local company Big Bola Casinos. Also in 2021, Betsson agreed a deal to become a regional sponsor of the 2021 Copa América; as part of the deal, Betsson acquired branding rights. That year, Betsson expanded its operations in Eastern Europe with the launch of the new brand Europebet and a new office in Minsk, Belarus; under the brand, Betsson offers casino, sportsbook and poker services. In 2021, Betsson also announced that CEO Pontus Lindwall was to step down. The company began searching for a new CEO after Lindwall accomplished the task of getting the betting company back on track; Lindwall stays on as CEO until a new CEO is hired.

Products

Betsson AB owns and operates a number of websites via its subsidiaries in Malta. In November 2017, Betsson signed a deal with Scout Gaming to integrate its daily fantasy sports platform across all its brands, including Betsafe, its UK-facing online bookmaker. The integration was expected to be completed by 2018.

Mobile apps

Betsafe released its free mobile app on the App Store on 26 May 2016; it has since received 25 updates. Later in 2016, Betsafe added poker and virtual sports to the app. Subsequent updates have also added new games, a recently-played widget, a personalised casino lobby and a navigation tab bar. For Android, Betsafe provides an app developed by BML Group Ltd, compatible with a wide range of Android devices, including those from Samsung, Xiaomi, Motorola, Huawei, Vivo, LG, Sony, and HTC.
Business acquisitions

In 2011 the company acquired all of the shares in the Betsafe Group for SEK 292 million. The aim of the acquisition was to increase Betsson's market presence and enable continued growth. The acquisition grew Betsson's number of customers to approximately 419,000, surpassing Unibet in the number of active players. That same year Betsson signed an agreement with a Chinese state-owned company regarding the development of jointly owned gaming operations. In March 2017, Betsson completed its £26 million acquisition of NetPlay TV's assets, adding Jackpot247, SuperCasino and Vernons to its European gaming multi-brand portfolio. In February 2020, Betsson acquired Gaming Innovation Group's B2C assets. In August 2021, Betsson acquired Inkabet; Betsson subsidiary SW Nordic Limited brokered the $25 million deal, and as part of it Betsson agreed to a $4 million performance incentive if Inkabet outperforms EBIT targets. Also in August 2021, Betsson acquired 28% of Canadian start-up Slapshot Media Inc. for a purchase price of $2.4 million.

Awards

Betsson received the Best Customer Service Award at the 10th Annual eGaming Review (EGR) Awards in London, finishing ahead of 800 other participants; it was the fourth consecutive year that the Betsson Group won the customer service award. In 2022 Betsson won two awards at the International Gaming Awards 2021 event: Safer Gambling Operator of the Year and Mobile Operator of the Year.

Data breach

In January 2020 Jackpot247 sent emails to all of its customers advising them that it had suffered a data breach, with the following message: "We regret to inform you that Jackpot247 has suffered a security incident and some of your personal data has been revealed to an unauthorized person. We took various mitigating measures and the unauthorised person is no longer able to access your data. Rest assured that our investigations show that your credit card, payment information, password and copies of any documents sent to Jackpot247 have not been accessed and remain secure. After conducting detailed investigations into the incident, we can confirm that the unauthorised person has been able to access your username and name, email address, telephone number, residential address, date of registration and some internal activity classifications that are not of relevance to the unauthorized person. It is our duty to report this data breach to you and inform you what data has been compromised." Users were advised to reset their passwords and to be wary of phishing emails.

References

External links

Betsson AB Corporate information

Online gambling companies of Malta Online poker companies Gambling companies established in 2000 Internet properties established in 2000 Online gambling companies of Sweden Companies based in Stockholm Companies listed on Nasdaq Stockholm 2000 establishments in Sweden
27511982
https://en.wikipedia.org/wiki/Lego%20Minifigures%20%28theme%29
Lego Minifigures (theme)
Minifigures is a 2010 Lego theme based on a set of collectible Lego minifigures. Each figure is an original character with new clothing and facial designs, and most contain previously unseen accessories. Each series usually contained 16 different minifigures; however, some series contain as few as 9 minifigures, while others contain up to 22. Since 2021, the number of different minifigures per series has been set at 12.

Details

The series consists of a number of individually themed collectible Lego minifigures based on movies, sports, history, and popular culture. The figures are sold individually in sealed, unmarked packets, giving customers a random chance at obtaining any particular figurine. While considered a novel approach by some, it has raised controversy among enthusiasts and collectors, as it increases the difficulty of obtaining a complete collection. Many retailers make no guarantees regarding the contents of a particular packet.

Despite attempts to obfuscate the contents of these packets, the bags of Series 1 and 2 have a second figurine-specific bar code on the rear of the packet, next to the EAN/UPC product bar code (which is unique to each series). This has allowed customers to identify individual figures within the packet, significantly decreasing the amount of money and effort required to obtain a complete collection, and eliminating the possibility of unintentionally receiving duplicates. There are also apps for both iPhone and Android devices that utilize these bar codes. Lego eliminated the figurine-specific bar codes on all Series 3 and 4 packets and replaced them with a braille-like system of dots embossed in the lower seal of the bag; in theory, this allows customers to continue identifying the figure enclosed within. Later series do not have any markings to indicate their contents.

On average a new series has been released every four months. Release dates sometimes vary between countries.

Comparison

Comparison of the series released so far.

Sets

The numbering of the figures in each set below is in accordance with the visual guide sheet which is included within the individual packets for each series.

Series 1

Series 1 (set number: 8683) was first released on 5 March 2010 in the United Kingdom and on 4 June 2010 in the United States. The figures in this series are:

Series 2

Series 2 (set number: 8684) was first released on 2 September 2010 in both the United Kingdom and the United States. The figures in this series are:

Series 3

Series 3 (set number: 8803) was first released on 14 January 2011 in both the United Kingdom and the United States. The figures in this series are:

Series 4

Series 4 (set number: 8804) was first released on 1 April 2011 worldwide. The figures in this series are:

Series 5

Series 5 (set number: 8805) was first released on 22 August 2011 in the United Kingdom and the United States. The figures in this series are:

Series 6

Series 6 (set number: 8827) was released in the UK in December 2011 and on 9 January 2012 in the US. The figures in this series are:

Series 7

Series 7 (set number: 8831) was released on April 1, 2012. The figures in this series are:

Team GB Olympic Series

To commemorate the London Olympic Games, an exclusive series of minifigures was released only in the United Kingdom to mark the opening of the Olympics. This series (set number: 8909) was released on July 1, 2012. It consists of 9 figures instead of the usual 16.
The figures in this series are:

Series 8

Series 8 (set number: 8833) was released on August 20, 2012. The figures in this series are:

Series 9

Series 9 (set number: 71000) was released in late October 2012 in the UK and on January 1, 2013, in the US. The figures in this series are:

Series 10

Series 10 (set number: 71001) was released in the UK on 6 February 2013 and in the US on 1 May 2013. This series introduces 17 figures, one being a limited Gold Figure, of which only 5,000 were produced. With reportedly 40,000 boxes of 60 packets distributed worldwide, the probability of finding this figure is approximately 1 in 480, or 0.2%. The figures in this series are:

Series 11

Series 11 (set number: 71002) was released in the UK on 30 June 2013 and in the US on 1 September 2013. The figures in this series are:

The Lego Movie Series

The Lego Movie Series (set number: 71004) was released on January 1, 2014, and includes characters from The Lego Movie. The figures in this series are:

The Lego Simpsons Series 1

The Lego Simpsons Series 1 (set number: 71005) was released on May 1, 2014, and includes characters from The Simpsons. The figures in this series are:

Series 12

Series 12 (set number: 71007) was released on October 1, 2014. The figures in this series are:

Series 13

Series 13 (set number: 71008) was released on January 1, 2015, in the UK and around Christmas 2014 in the US. The figures in this series are:

The Lego Simpsons Series 2

The Lego Simpsons Series 2 (set number: 71009) was released in the UK and the US on April 26, 2015, and includes characters from The Simpsons. The figures in this series are:

Series 14

Series 14 (set number: 71010) was released on September 1, 2015, in the UK and in mid-August in the US. This series has a Halloween theme. Several of the character bios make references to the discontinued Monster Fighters theme (more specifically, the Monster Realm in which it took place). The figures in this series are:

Series 15

Series 15 (set number: 71011) was released in the UK on 5 December 2015 and in the US on 1 February 2016. The figures in this series are:

The Lego Disney Series 1

The Lego Disney Series 1 (set number: 71012) was released on 1 May 2016, and includes characters from various Disney films, shows, and musicals. It consists of 18 figures instead of the usual 16. The figures in this series are:

DFB Series

The DFB German Football Team Series (set number: 71014) was released on May 14, 2016, in Germany, Austria and Switzerland, as well as in LEGO stores across Europe. The figures in this series are:

Series 16

Series 16 (set number: 71013) was released on 1 September 2016 worldwide. The figures in this series are:

The Lego Batman Movie Series 1

The Lego Batman Movie Series 1 (set number: 71017) was released on 1 January 2017, and includes characters from The Lego Batman Movie. It consists of 20 figures instead of the usual 16. The figures in this series are:

Series 17

Series 17 (set number: 71018) was released on 1 May 2017 worldwide. The figures in this series are:

The Lego Ninjago Movie Series

The Lego Ninjago Movie Series (set number: 71019) was released around the world on 1 August 2017, and includes characters from The Lego Ninjago Movie. It consists of 20 figures instead of the usual 16. The figures in this series are:

The Lego Batman Movie Series 2

The Lego Batman Movie Series 2 (set number: 71020) was released around the world on 1 January 2018, and includes characters from The Lego Batman Movie. It consists of 20 figures instead of the usual 16.
The figures in this series are:

Series 18

Series 18 (set number: 71021) was released on 1 April 2018 worldwide. This series is party-themed to celebrate the 40th anniversary of the Lego minifigure. It consists of 17 figures instead of the usual 16. The figures in this series are:

The Lego Harry Potter and Fantastic Beasts Series 1

The Lego Harry Potter and Fantastic Beasts Series 1 (set number: 71022) was released on 1 August 2018 worldwide, and includes characters from the Harry Potter and Fantastic Beasts franchises. It consists of 22 figures instead of the usual 16. The figures in this series are:

The Lego Movie 2: The Second Part Series

The Lego Movie 2: The Second Part Series (set number: 71023) was released on 1 February 2019, and includes characters from The Lego Movie 2: The Second Part. It consists of 20 figures instead of the usual 16. The figures in this series are:

The Lego Disney Series 2

The Lego Disney Series 2 (set number: 71024) was released on 1 May 2019, and includes characters from various Disney films, shows, and musicals. It consists of 18 figures instead of the usual 16. The figures in this series are:

Series 19

Series 19 (set number: 71025) was released on 1 September 2019 worldwide. The figures in this series are:

The Lego DC Super Heroes Series

The Lego DC Super Heroes Series (set number: 71026) was released on 1 January 2020 worldwide, and includes characters from the DC Comics Super Heroes. The figures in this series are:

Series 20

Series 20 (set number: 71027) was released on 19 April 2020 worldwide. The figures in this series are:

The Lego Harry Potter Series 2

The Lego Harry Potter Series 2 (set number: 71028) was released on 1 September 2020 worldwide, and includes characters from the Harry Potter franchise. The figures in this series are:

Series 21

Series 21 (set number: 71029) was released on 1 January 2021 worldwide. It consists of 12 figures instead of the usual 16. The figures in this series are:

The Lego Looney Tunes Series

The Lego Looney Tunes Series (set number: 71030) is a minifigure collectible series that was released on 26 April 2021 worldwide, and includes characters from the Looney Tunes franchise. Like Series 21, it consists of 12 figures instead of the former usual 16. The figures in this series are:

Marvel Collectible Minifigure Series

The Marvel Collectible Minifigure Series (set number: 71031) is a minifigure collectible series that was released worldwide in September 2021, and includes characters from the Marvel Cinematic Universe Phase Four series WandaVision, The Falcon and the Winter Soldier, Loki and What If...?. The series consists of 12 figures instead of the former usual 16.

Series 22

Series 22 (set number: 71032) was released on 1 January 2022 worldwide. It consists of 12 figures instead of the former usual 16. The figures in this series are:

Online games

LEGO Minifigures Online

On 29 August 2013, Funcom officially announced a massively multiplayer online game based on the Minifigures theme, in which players can travel to several worlds and fight enemies, with dungeons based on each setting. The game uses traditional click-to-move mechanics, allowing younger users to jump into the action, while advanced users can trigger special abilities using the number pad. The game was free-to-play; minifigures could be unlocked by purchasing a physical figure and entering its code, and could also be obtained in-game.
It was released in late 2014 for iOS, Android, and PC, as either a download client or in-browser on the LEGO website. Funcom announced that LEGO Minifigures Online would be closing on 30 September 2016. Starting 6 June 2016, new players were unable to join the game and the in-game chat was disabled. Existing players were still able to play up until 30 September 2016.

Awards and nominations

In 2022, the Lego Marvel Series Collectible Minifigures (set number: 71031) was awarded "Toy of the Year" and also "Collectible of the Year" by the Toy Association.

References

External links

LEGO Minifigures Official Webpage

Minifigures (Theme), Lego Products introduced in 2010
1822472
https://en.wikipedia.org/wiki/The%20Old%20Man%20and%20the%20%22C%22%20Student
The Old Man and the "C" Student
"The Old Man and the 'C' Student" is the twentieth episode of the tenth season of the American animated television series The Simpsons. It first aired on the Fox network in the United States on April 25, 1999. In the episode, after offending the Olympic committee during their visit to Springfield Elementary, the school's students are committed to 20 hours of community service. Bart, along with his sister Lisa, is put in charge of Springfield's retirement home, where Bart notices the doldrums that the old people go through every day. Meanwhile, Bart and Lisa's father Homer tries to sell springs. "The Old Man and the 'C' Student" was directed by Mark Kirkland and was the first episode Julie Thacker wrote for The Simpsons. While Bart's storyline was pitched by Thacker, the B-story, involving Homer, was conceived by Thacker's husband Mike Scully, who also was an executive producer and the showrunner for the episode. Jack Lalanne guest-starred as himself in the episode. On its original broadcast, "The Old Man and the 'C' Student" was seen by approximately 6.9 million viewers. Following the release of The Simpsons: The Complete Tenth Season, the episode received mostly positive reviews from critics. Plot When Lisa writes a letter to the International Olympic Committee, they decide that Springfield will be home to the next Olympics. To honor the Olympics, there is a contest for the games' mascot. Homer creates a mascot for the Olympic Games named Springy, the Springfield Spring, which becomes the mascot (beating Patty and Selma’s mascot named Ciggy, a discus thrower made entirely of cigarettes and ashtrays) and everyone in Springfield prepares for the games. When the IOC inspects the town, things go well until Bart does a stand-up comedy routine that insults foreign nations, which only Principal Skinner, Homer, and the kids find funny. In response, the IOC refuses to let Springfield host the Olympics (they award it to Shelbyville, who presumably and chronologically lost it to Sydney), and Superintendent Chalmers blames Skinner for putting Bart on stage with his racy jokes. In order to avoid losing his job, Skinner makes every one of the school's students do 20 hours of community service. After sending Milhouse to collect medical waste on the beach and leaving Martin to start a basketball program between inter-city gangs, Skinner has Bart assigned to work at the Springfield Retirement Castle, where Lisa also works voluntarily. Bart is dismayed at how little the seniors are allowed to do. Meanwhile, Homer gets 1,000 springs he intended to sell as Olympic mascots. He uses various get-rich-quick schemes to sell off the mascots, but fails miserably and gets abused due to Springfield's hatred of Bart's comedy routine and everyone including Marge being annoyed by the springs. Ultimately, he is forced to flush the springs down the toilet. At the time Lisa leads the seniors in "imagination time", but when she departs, Bart makes the seniors escape to get a taste of freedom. Bart takes the seniors on a trip on the town and on a boat ride, and Lisa is initially shocked to see these things happen, but nevertheless, she is quite impressed by what Bart does for the seniors. The seniors have fun until their boat crashes into Mr. Burns' schooner. The boat begins to sink and the seniors turn on Bart, but Grampa defends him, saying Bart gave them the best fun they have had in twenty years. 
However, the springs that Homer flushed down the toilet save them, causing the boat to bounce back up to the surface long enough for the Coast Guard to rescue everyone. Bart finishes his community service, but decides to keep helping the seniors enjoy themselves and to spend more time with Grampa.

Production

"The Old Man and the 'C' Student" was directed by Mark Kirkland and was the first episode Julie Thacker wrote for The Simpsons. It was first broadcast on the Fox network in the United States on April 25, 1999. The episode's plot was based on a "disastrous" school program in which students had to participate in community service in order to be allowed to advance to the next grade. Thacker, whose oldest daughter was a student at the school, was signed up to do community service at an old folks' home in the town where they lived. It became the inspiration for the episode's A-story, while the B-story, which involved Homer selling springs, was conceived by Thacker's husband Mike Scully, an executive producer and the showrunner for the episode.

In a scene in the episode, Lenny gets one of Homer's springs stuck in his eye. Lenny's eye injuries have since become a running gag, and "The Old Man and the 'C' Student" "started the trend", according to Thacker. The "clunky, Up With People-type" dance that the students perform for the Olympic jury was partly demonstrated during the animatic by Simpsons writer George Meyer. When Meyer later watched the episode, he found out, to his "horror", that he had been given a choreographer credit at the end of the episode.

The episode features American fitness expert Jack LaLanne as himself. In the DVD commentary for the episode, Scully stated that LaLanne was "very funny" and that he "gave a great performance". LaLanne's lines were recorded separately from those of the series' main cast members.

Cultural references

The episode title is a reference to the 1952 novel and 1958 film The Old Man and the Sea. At the beginning of the episode, a sign reading "International Olympic Committee" can be seen; the logo below the text parodies the logo of the real International Olympic Committee. Because they did not want to "upset" the committee, the Simpsons staff slightly altered the logo by changing the colors and not making the rings interlock. In one scene, the old people can be seen watching an edited and over-dubbed version of the 1939 film Gone With the Wind. The nurse who works in the old folks' home is based on Nurse Ratched from the 1975 American drama film One Flew Over the Cuckoo's Nest. The film is referenced again in the scene where Bart takes the old folks on a boat trip, and in a scene where a Native American chief in the old folks' home throws a dishwasher through a window and jumps out, mirroring the film's final scene. The character then returns and hands Lisa a pamphlet that reads "Prop 217", a reference to Proposition 217, a proposition that allowed Native Americans to operate casinos in certain states. It is also a reference to the day Scully and Thacker met, February 17. The scene in which Smithers is drawing a portrait of Mr. Burns is a reference to the 1997 drama film Titanic. The scene where the old people celebrate their escape from the home is a reference to a sequence from The Beatles' 1964 film A Hard Day's Night. Both are set to the group's song "Can't Buy Me Love", although in the episode the song is a cover performed by NRBQ.
During the end credits, an album cover reading "A Bart Day's Night" is shown, a reference to The Beatles' album A Hard Day's Night, the film's soundtrack. "Can't Buy Me Love" also plays over the end credits.

Reception and legacy

In its original American broadcast on April 25, 1999, "The Old Man and the 'C' Student" received a 6.9 rating, according to Nielsen Media Research, translating to approximately 6.9 million viewers. The episode finished in 41st place in the ratings for the week of April 19–25, 1999.

On August 7, 2007, the episode was released as part of The Simpsons - The Complete Tenth Season DVD box set. Matt Groening, Mike Scully, George Meyer, Julie Thacker, Ron Hauge, Nancy Cartwright and Mark Kirkland participated in the DVD's audio commentary for the episode. Following its home video release, "The Old Man and the 'C' Student" received mostly positive reviews from critics.

Aaron Roxby of Collider gave it a positive review, calling it one of the season's best episodes. He wrote, "The Simpsons has always been great about addressing/mocking the way that our culture treats the elderly." He added that Lenny's eye injury gave the episode "extra points". Warren Martyn and Adrian Wood of I Can't Believe It's a Bigger and Better Updated Unofficial Simpsons Guide described the episode as "A marvellous feel-good story" and "Very sweet, very endearing." They added that the "stereotyped Olympic Committee debate" at the beginning of the episode is "marvellous", and concluded by describing the episode as "terrific". Colin Jacobson of DVD Movie Guide was positive as well, writing "I gotta admit I like Springy, the Olympic mascot, and the spring-related aspects of the show entertain." He added that the story involving Bart "offer[s] more than a few good moments," and concluded by writing "Though the episode never quite excels, it's pretty solid." James Plath of DVD Town called it an "okay" episode. Jake McNeill of Digital Entertainment News described the episode as "not-so-good," adding that "by this point, this show has expended just about every old folks joke there is." However, he also wrote that "'I want some taquitos' never grows old."

The episode gained renewed attention in 2017, when Paris was chosen to host the 2024 Summer Olympics at the 131st IOC Session: in the episode's opening scene, the French representative of the International Olympic Committee says, "Ah, but Paris would make a tres bon site for the next Olympic Games", leading viewers to remark that the episode had seemingly predicted the event.

References

External links

The Simpsons (season 10) episodes 1999 American television episodes
4723174
https://en.wikipedia.org/wiki/RadioTux
RadioTux
RadioTux is a German internet radio show. Its topics mostly revolve around free and open source software and free operating systems like *BSD and Linux, as well as sociopolitical issues. It was founded in 2001. More than 100 shows have been produced, along with many interviews with well-known people such as Mark Shuttleworth, Miguel de Icaza, Hans Reiser, Jon "Maddog" Hall and Richard Stallman. Since 2005, several podcasts have also been available, one of which is the interview feed in English.

Everybody can participate in RadioTux, and all topics are welcome: there are no limits. The project's site is based on a wiki, which means that anyone can use a web browser to enhance the contents of the pages by editing them. This makes it easy for everyone to participate. Internal communication takes place over a mailing list, which is open for anyone to follow.

Radio on demand

Shows are produced on a monthly basis. Several volunteers are involved in this process; the shows include reports, essays, interviews and free music. Depending on the material, a show runs 30 to 60 minutes. Users interested only in the information can download a stripped-down version without music. The shows are available in MP3 as well as in the free Ogg/Vorbis format. Up to now (July 2008) more than 80 shows have been produced. Some of them have been broadcast live at the Berlin radio station. On the website, everyone may leave suggestions for upcoming shows.

Podcasts

When RadioTux started, podcasts were not yet widely known, so it can be considered one of the first podcasts, and one that still exists today. Single articles and interviews are made available as a podcast, so current news is instantly available. Thanks to categorisation, users can subscribe to the newsfeeds of topics they are interested in and load them directly onto an MP3 player. This compatibility with current mobile playback devices is the reason why the podcast uses the MP3 format.

Live

RadioTux often participates in Linux and similar events and reports live. Live broadcasts as comprehensive programming have been transmitted from LinuxTag, the Linux World Conference & Expo and the Chemnitzer Linux-Tage. At live events RadioTux is supported by the free radio station Kanal Ratte, which provides studio equipment, streaming servers and airtime. Several shows are presented by Kanal Ratte staff and transmitted live into their programme, which is available via FM, cable and livestream. At the Linux World Conference & Expo 2006 in Cologne, RadioTux was for the first time a media partner of an event. Since November 2006 the weekly show RadioTux@HoRadS has been presented at the Hochschulradio Stuttgart on FM and livestream. The usual topics of Linux and free software are discussed there with studio guests, while articles from the RadioTux archive (podcasts, interviews) provide appropriate background information.

External links

: podcast episodes

References

Internet radio in Germany Technology podcasts German podcasts Audio podcasts
3171468
https://en.wikipedia.org/wiki/Linux%20Phone%20Standards%20Forum
Linux Phone Standards Forum
The Linux Phone Standards Forum (LiPS Forum) was a consortium of companies formed to create standards for the use of Linux on mobile devices. The main goal of the LiPS Forum was to create application programming interfaces (APIs) that would allow developers to build applications that inter-operate across Linux handsets made by all manufacturers. Founding members included ARM Ltd, Cellon, Esmertec, France Telecom, Telecom Italia, FSM Labs, Huawei, Jaluna, MIZI Research, MontaVista Software, Open-Plug and PalmSource (in March 2007, PalmSource changed its name to that of its parent company, Access Inc). Later members included Texas Instruments, Trolltech ASA, and Movial Oy. British Telecom joined the LiPS Forum in September 2007. In September 2007, the LiPS Forum announced that it was going to align its efforts with those of the Open Mobile Alliance. In June 2008, the LiPS Forum announced that it would join with the LiMo Foundation and thereby cease to exist as a separate organization.

See also

Mobile Linux Initiative

References

External links

https://web.archive.org/web/20081220113941/http://lipsforum.org/

Linux organizations Mobile phone standards Mobile Linux
16982989
https://en.wikipedia.org/wiki/Business%20models%20for%20open-source%20software
Business models for open-source software
Companies whose business centers on the development of open-source software employ a variety of business models to solve the challenge of how to make money providing software that is by definition licensed free of charge. Each of these business strategies rests on the premise that users of open-source technologies are willing to purchase additional software features under proprietary licenses, or purchase other services or elements of value that complement the open-source software that is core to the business. This additional value can include, but is not limited to, enterprise-grade features and up-time guarantees (often via a service-level agreement) to satisfy business or compliance requirements, performance and efficiency gains through features not yet available in the open-source version, legal protection (e.g., indemnification from copyright or patent infringement), or professional support, training or consulting of the kind typical of proprietary software applications.

Historically, these business models started in the late 1990s and early 2000s as "dual-licensing" models, for example MySQL, and have matured over time to include many variations, as described in the sections below. Pure dual-licensing models have become less common as a more nuanced business approach to open-source software businesses has developed. Many of these variations are referred to as an "open core" model, in which companies develop both open-source software elements and other elements of value for a combined product.

A variety of open-source compatible business approaches have gained prominence in recent years, as illustrated and tracked by the Commercial Open Source Software Index (COSSI), a list of commercial open source companies that have reached at least US$100 million in revenue. Notable examples include open core (sometimes referred to as dual licensing or multi-licensing), software as a service (not charging for the software but for the tooling and platform to consume the software as a service, often via subscription), freemium, donation-based funding, crowdfunding, and crowdsourcing.

There are several different types of business models for making a profit using open-source software (OSS) or for funding its creation and ongoing development and maintenance. Below is a list of existing and legal commercial business approaches in the context of open-source software and open-source licenses. The acceptance of these approaches varies; some are recommended (like open core and selling services), others are accepted, while still others are considered controversial or even unethical by the open-source community. The underlying objective of these business models is to harness the size and international scope of the open-source community (typically more than an order of magnitude larger than what would be achieved with closed-source software equivalents) for a sustainable commercial venture. The vast majority of commercial open-source companies experience a conversion ratio (as measured by the percentage of downloaders who buy something) well below 1%, so low-cost and highly scalable marketing and sales functions are key to these firms' profitability.

Not selling code

Professional services

Open-source software can also be commercialized by selling services, such as training, technical support, or consulting, rather than the software itself.
Another possibility is offering open-source software in source code form only, while providing executable binaries to paying customers only, thereby offering the commercial service of compiling and packaging the software. Providing goods like physical installation media (e.g., DVDs) can also be a commercial service. Open-source companies that use this business model successfully include, for instance, Red Hat, IBM, SUSE, Hortonworks (for Apache Hadoop), Chef, and Percona (for open-source database software).

Branded merchandise

Some open-source organizations, such as the Mozilla Foundation and the Wikimedia Foundation, sell branded merchandise articles like t-shirts and coffee mugs. This can also be seen as an additional service provided to the user community.

Software as a service

Selling subscriptions for online accounts and server access to customers is one way of adding value to open-source software. Another way is combining desktop software with a service, called software plus services. Most open core companies that use this approach also provide the software in a fashion suitable for on-premises, do-it-yourself deployment. To some customers, however, there is significant value in a "plug and play" hosted product. Open-source businesses that use this model often cater to small and medium enterprises which do not have the technology resources to run the software. Providing cloud computing services or software as a service (SaaS) without the release of the open-source software is not an open-source deployment.

The FSF called the server-side use case without release of the source code the "ASP loophole in the GPLv2" and therefore encourages the use of the Affero General Public License, which plugged this hole in 2002. In 2007 the FSF contemplated including the special provision of AGPLv1 in GPLv3 but ultimately decided to keep the licenses separate.

Voluntary donations

Independent developers have experimented with funding open-source software development directly through user donations, e.g. with the Illumination Software Creator in 2012. Since 2011, SourceForge has allowed users to donate to hosted projects that opted to accept donations, which is enabled via PayPal.

Larger donation campaigns also exist. In 2004 the Mozilla Foundation carried out a fundraising campaign to support the launch of the Firefox 1.0 web browser. It placed a two-page ad in the December 16 edition of The New York Times listing the names of the thousands who had donated.

In May 2019, GitHub, a Git-based software repository hosting, management and collaboration platform owned by Microsoft, launched a Sponsors program that allows people who support certain open-source projects hosted on GitHub to donate money to developers who contribute to and maintain the project.

Crowdsourcing

Crowdsourcing is a type of participative online activity in which an individual, an institution, a nonprofit organization, or a company proposes to a group of individuals of varying knowledge, heterogeneity, and number the voluntary undertaking of a task via a flexible open call. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate by bringing their work, money, knowledge and/or experience, always entails mutual benefit.
The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage what the user has brought to the venture, whose form will depend on the type of activity undertaken. A caveat in pursuing a crowdsourcing strategy is that a substantial market model or incentive must be established, and care has to be taken that the effort does not devolve into an open-source anarchy of adware and spyware clones, with many broken solutions started by people who just wanted to try it out and gave up early, and only a few winners. Popular examples of crowdsourcing are Linux, Google Android, the Pirate Party movement, and Wikipedia.

Selling users

Partnership with funding organizations

Other financial situations include partnerships with other companies. Governments, universities, companies, and non-governmental organizations may develop internally or hire a contractor for custom in-house modifications, then release that code under an open-source license. Some organizations support the development of open-source software with grants or stipends, like Google's Summer of Code initiative, founded in 2005.

Advertising-supported software

In order to commercialize FOSS (free and open-source software), many companies (including Google, Mozilla, and Canonical) have moved towards an economic model of advertising-supported software. For instance, the open-source application Adblock Plus gets paid by Google for letting whitelisted Acceptable Ads bypass the browser's ad filtering. As another example, SourceForge, an open-source project service provider, has a revenue model based on advertising banner sales on its website. SourceForge reported quarterly takings of $6.5 million in 2006 and $23 million in 2009.

Pre-selling code

Bounty-driven development

The users of a particular software artifact may come together and pool money into an open-source bounty for the implementation of a desired feature or functionality. Offering bounties as funding has existed for some time. For instance, Bountysource is a web platform which has offered this funding model for open-source software since 2003. Another bounty source is companies or foundations that set up bounty programs for implemented features or bugfixes in open-source software relevant to them. For instance, Mozilla has been paying and funding freelance open-source programmers for security bug hunting and fixing since 2004.

Pre-order/crowdfunding/reverse-bounty model

A newer funding opportunity for open-source software projects is crowdfunding, which shares similarities with the pre-order or praenumeration business model, as well as the reverse bounty model, and is typically organized over web platforms like Kickstarter, Indiegogo, or Bountysource (see also comparison of crowdfunding services). One example is the successfully funded Indiegogo campaign in 2013 by Australian programmer Timothy Arceri, who offered to implement an OpenGL 4.3 extension for the Mesa library in two weeks for $2,500. Arceri delivered the OpenGL extension code, which was promptly merged upstream, and he later continued his efforts on Mesa with successive crowdfunding campaigns. Later, he found work as an employee in this domain with Collabora and, in 2017, with Valve. Another example is the June 2013 crowdfunding on Kickstarter of the open-source video game Cataclysm: Dark Days Ahead, which raised enough to pay a full-time developer for 3.5 months.
Patreon funding has also become an effective option, as the service gives the option to pay out each month to creators, many of whom intend to develop free and open-source software.

Selling intellectual property

Dual-licensing or open core

In a dual-licensing model, the vendor develops software and offers it under an open-source license but also under separate proprietary license terms. The proprietary version can be licensed to finance the continued development of the free open-source version. Customers may prefer a no-cost and open-source edition for testing, evaluation, proof-of-concept development, and small-scale deployment. If the customer wishes to deploy the software at scale, or in proprietary distributed products, the customer then negotiates for a commercial license to an enterprise edition. Further, customers will learn of open-source software in a company's portfolio and offerings but generate business in other proprietary products and solutions, including commercial technical support contracts and services. A popular example is Oracle's MySQL database, which is dual-licensed under a commercial proprietary license and also under the GPLv2. Another example is the Sleepycat License. Flask developer Armin Ronacher stated that the AGPLv3 was a "terrible success" as a "vehicle for dual commercial licensing" and noted that MongoDB, RethinkDB, OpenERP, SugarCRM as well as WURFL utilize the license for this purpose.

Dual-license products are generally sold as a "community version" and an "enterprise version". In a pure dual-licensing model, as was common before 2010, these versions are identical but available under a choice of licensing terms. Added proprietary software may help customers analyze data, or more efficiently deploy the software on their infrastructure or platform; examples are given under "Selling optional proprietary extensions" below.

Selling certificates and use of trademark

Another financing approach was pioneered by Moodle, an open-source learning management system and community platform. Its business model revolves around a network of commercial partners who are certified and therefore authorised to use the Moodle name and logo, and who in turn provide a proportion of revenue to the Moodle Trust, which funds core development.

Re-licensing under a proprietary license

If a software product uses only its own software and open-source software under a permissive free-software licence, a company can re-license the resulting software product under a proprietary license and sell the product without the source code or software freedoms. For instance, Apple Inc. is an avid user of this approach: source code and software from open-source projects, such as the BSD Unix operating system kernel (under the BSD license), were used in Apple's Mac PCs, which were sold as proprietary products.

Selling proprietary additives

Selling optional proprietary extensions

Some companies sell proprietary but optional extensions, modules, plugins or add-ons to an open-source software product. This approach is a variant of the freemium business model.
The proprietary software may be intended to let customers get more value out of their data, infrastructure, or platform, e.g., to operate their infrastructure or platform more effectively and efficiently, manage it better, or secure it better. Examples include the IBM proprietary Linux software, where IBM contributes to the Linux open-source ecosystem but builds and delivers (to IBM's paying customers) database software, middleware, and other software that runs on top of the open-source core. Other examples of proprietary products built on open-source software include Red Hat Enterprise Linux and Cloudera's Apache Hadoop-based software. Some companies appear to re-invest a portion of their financial profits from the sale of proprietary software back into the open-source infrastructure.

The approach can be problematic with many open-source licenses ("not license-conformant") if not carried out with sufficient care. For instance, mixing proprietary code and open-source licensed code in statically linked libraries, or compiling all source code together in a software product, might violate open-source licenses, while keeping them separated by interfaces and dynamic-link libraries would be license-conformant.

Selling required proprietary parts of a software product

A variant of the approach above is keeping the required data content (for instance a video game's audio, graphics, and other art assets) of a software product proprietary while making the software's source code open-source. While this approach is completely legitimate and compatible with most open-source licenses, customers have to buy the content to have a complete and working software product. Restrictive licenses can then be applied to the content, which prevents the redistribution or re-selling of the complete software product. Examples of open-source developed software with proprietary content are the Kot-in-Action Creative Artel video game Steel Storm, whose engine is GPLv2-licensed while the artwork is CC BY-NC-SA 3.0 licensed, and Frogatto & Friends, with its own open-source engine and commercialization via the copyrighted game assets for iPhone, BlackBerry and macOS. Other examples are Arx Fatalis (by Arkane Studios) and Catacomb 3-D (by Flat Rock Software), whose source code was opened to the public some time after release, while copyrighted assets and binaries are still sold on gog.com as digital distribution. Doing so conforms with the position of the FSF and Richard Stallman, who has stated that for art or entertainment the software freedoms are not required or important.

The similar product bundling of an open-source software product with hardware that prevents users from running modified versions of the software is called tivoization and is legal under most open-source licenses except GPLv3, which explicitly prohibits this use case.

Selling proprietary update systems

Another variant of the approach above, mainly used for data-intensive, data-centric software programs, is keeping all versions of the software under a free and open-source software license, but refraining from providing update scripts from version n to version n+1. Users can still deploy and run the open-source software. However, any update to the next version requires either exporting the data, reinstalling the new version and then reimporting the data into the new version, or subscribing to the proprietary update system, or studying the two versions and recreating the scripts from scratch. This practice does not conform with the free software principles as espoused by the FSF.
Richard Stallman condemns this practice and names it "diachronically trapped software". Selling without proprietary license All of the above methods follow the traditional approach to selling software, where software is licensed for installation and execution on user- or customer-supplied infrastructure. In the classic software product business, revenues typically originate from selling software upgrades to the customer. However, some vendors sell exactly the same programs or add-ons without any proprietary licensing. For example, applications like Ardour, Radium, or Fritzing are completely free software under the GPL, but there is a fee to obtain the official binaries, often bundled with technical support or the privilege of drawing the developers' attention to desired new functionality. This practice does conform with the free software principles as espoused by the FSF. Other Obfuscation of source code An approach to allow commercialization under some open-source licenses while still protecting crucial business secrets, intellectual property and technical know-how is obfuscation of source code. This approach was used in several cases, for instance by Nvidia in their open-source graphic card device drivers. This practice lets a vendor present an open-source-friendly image without bearing the accompanying inconveniences. There has been debate in the free-software/open-source community on whether it is illegal to skirt copyleft software licenses by releasing source code in obfuscated form, such as in cases in which the author is less willing to make the source code available. The general consensus was that while unethical, it was not considered a violation. The Free Software Foundation is against this practice. The GNU General Public License since version 2 has defined "source code" as "the preferred form of the work for making modifications to it." This is intended to prevent the release of obfuscated source code. Delayed open-sourcing Some companies provide the latest version available only to paying customers. A vendor forks a non-copyleft software project, then adds closed-source additions to it and sells the resulting software. After a fixed time period the patches are released back upstream under the same license as the rest of the codebase. This business model is called version lagging or time delaying. For instance, in 2016 the MariaDB Corporation created the source-available Business Source License (BSL) for business-compatible "delayed open-sourcing"; code under it automatically relicenses to the FOSS GPL after three years. This approach guarantees licensees that they have source code access (e.g. for code audits), are not locked into a closed platform, and do not suffer from planned obsolescence, while for the software developer a time-limited exclusive commercialization is possible. Version 1.1 followed in 2017, revised with feedback from, among others, Bruce Perens. However, this approach works only with a vendor's own software or permissively licensed code parts, as there is no copyleft FOSS license available which allows the time-delayed opening of the source code after distributing or selling of a software product. Open sourcing on end-of-life An extreme variant of "delayed open-sourcing" is a business practice popularized by id Software and 3D Realms, which released several software products under a free software license after a long period of proprietary commercialization, once the return on investment had been achieved. 
The motivation of companies following this practice of releasing the source code when software reaches its commercial end-of-life is to prevent their software from becoming unsupported abandonware or even being lost to digital obsolescence. This gives the user communities the chance to continue development and support of the software product themselves as an open-source software project. Many examples from the video game domain are in the list of commercial video games with later released source code. Popular non-game software examples are the Netscape Communicator, which was open-sourced in 1998, and Sun Microsystems's office suite, StarOffice, which was released in October 2000 at its commercial end of life. Both releases made foundational contributions to now prominent open-source projects, namely Mozilla Firefox and OpenOffice.org/LibreOffice. Funding Unlike proprietary off-the-shelf software that comes with restrictive licenses, open-source software is distributed freely, through the web and in physical media. Because creators cannot require each user to pay a license fee to fund development this way, a number of alternative development funding models have emerged. An example of those funding models is when bespoke software is developed as a consulting project for one or more customers who request it. These customers pay developers to have this software developed according to their own needs, and they can also closely direct the developers' work. If both parties agree, the resulting software could then be publicly released with an open-source license in order to allow subsequent adoption by other parties. That agreement could reduce the costs paid by the clients, while the original developers (or independent consultants) can then charge for training, installation, technical support, or further customization if and when additional interested customers choose to use it after the initial release. There also exist stipends to support the development of open source software, such as Google's Summer of Code and Outreachy. Another approach to funding is to provide the software freely, but sell licenses to proprietary add-ons such as data libraries. For instance, an open-source CAD program may require parts libraries which are sold on a subscription or flat-fee basis. Open-source software can also promote the sale of specialized hardware that it interoperates with, some example cases being the Asterisk telephony software developed by PC-telephony hardware manufacturer Digium and the Robot Operating System (ROS) robotics platform by Willow Garage and Stanford AI Labs. Many open source software projects have begun as research projects within universities, as personal projects of students or professors, or as tools to aid scientific research. The influence of universities and research institutions on open-source shows in the number of projects named after their host institutions, such as BSD Unix, CMU Common Lisp, or the NCSA HTTPd which evolved into Apache. Companies may employ developers to work on open-source projects that are useful to the company's infrastructure: in this case, it is developed not as a product to be sold but as a sort of shared public utility. A local bug-fix or solution to a software problem, written by a developer either at a company's request or to make their own job easier, can be released as an open-source contribution without costing the company anything. 
A larger project such as the Linux kernel may have contributors from dozens of companies which use and depend upon it, as well as hobbyist and research developers. A new funding approach for open-source projects is crowdfunding, organized over web platforms like Kickstarter, Indiegogo, or Bountysource. Challenges In general, open-source software can be sold and used commercially. Also, commercial open-source applications have been a part of the software industry for some time. While commercialization or funding of open-source software projects is possible, it is considered challenging. Since several open-source licenses stipulate that authors of derivative works must distribute them under an open-source (copyleft) license, ISVs and VARs have to develop new legal and technical mechanisms to foster their commercial goals, as many traditional mechanisms are no longer directly applicable. Traditional business wisdom suggests that a company's methods, assets, and intellectual properties should remain concealed from market competitors (trade secret) as long as possible, to maximize the profitable commercialization time of a new product. Open-source software development minimizes the effectiveness of this tactic; development of the product is usually performed in view of the public, allowing competing projects or clones to incorporate new features or improvements as soon as the public code repository is updated, as permitted by most open-source licenses. Also, in the computer hardware domain, a hardware producer who provides free and open software drivers reveals knowledge of hardware implementation details to competitors, who might use this knowledge to catch up. Therefore, there is considerable debate about whether vendors can make a sustainable business from an open-source strategy. In terms of a traditional software company, this is probably the wrong question to ask. Looking at the landscape of open source applications, many of the larger ones are sponsored (and largely written) by system companies such as IBM who may not have an objective of software license revenues. Other software companies, such as Oracle and Google, have sponsored or delivered significant open-source code bases. These firms' motivation tends to be more strategic, in the sense that they are trying to change the rules of a marketplace and reduce the influence of vendors such as Microsoft. Smaller vendors doing open-source work may be less concerned with immediate revenue growth than with developing a large and loyal community, which may be the basis of a corporate valuation at merger time. FOSS and economy According to Yochai Benkler, the Berkman Professor for Entrepreneurial Legal Studies at Harvard Law School, free software is the most visible part of a new economy of commons-based peer production of information, knowledge, and culture. As examples, he cites a variety of FOSS projects, including both free software and open source. This new economy is already under development. In order to commercialize FOSS, many companies, Google being the most successful, are moving towards an economic model of advertising-supported software. In such a model, the only way to increase revenue is to make the advertising more valuable. Facebook has recently come under fire for using novel user tracking methods to accomplish this. This new economy is not without alternatives. Apple's App Stores have proven very popular with both users and developers. 
The Free Software Foundation considers Apple's App Stores to be incompatible with its GPL and complained that Apple was infringing on the GPL with its iTunes terms of use. Rather than change those terms to comply with the GPL, Apple removed the GPL-licensed products from its App Stores. The authors of VLC, one of the GPL-licensed programs at the center of those complaints, recently began the process of switching from the GPL to the LGPL and MPL. Examples Much of the Internet runs on open-source software tools and utilities such as Linux, Apache, MySQL, and PHP, known as the LAMP stack for web servers. Using open source appeals to software developers for three main reasons: low or no cost, access to source code they can tailor themselves, and a shared community that ensures a generally robust code base, with quick fixes for new issues. Despite doing much business in proprietary software, some companies like Oracle Corporation and IBM participated in developing free and open-source software to counter monopolies and take a portion of market share for themselves. See Commercial open-source applications for the list of current commercial open-source offerings. Netscape's actions were an example of this, and thus Mozilla Firefox has become more popular, taking market share from Internet Explorer. Active Agenda is offered for free, but requires all extensions to be shared back with the world community. The project sells a "Non-Reciprocal Private License" to anyone interested in keeping module extensions private. Adobe Systems offers Flex for free, while selling the Flash Builder IDE. Apple Inc. offers Darwin for free, while selling Mac OS X. Asterisk, digital electronics hardware controlled by open-source software. Codeweavers sells CrossOver commercially, deriving it from the free Wine project they also back. Canonical Ltd. offers Ubuntu for free, while selling commercial technical support contracts. Cloudera's Apache Hadoop-based software. Francisco Burzi offers PHP-Nuke for free, but the latest version is offered commercially. IBM proprietary Linux software, where IBM delivers database software, middleware and other software. Ingres is offered for free, but services and support are offered as a subscription. The Ingres Icebreaker Appliance is also offered as a commercial database appliance. id Software releases their legacy game engines under the GPL, while retaining proprietary ownership of their latest incarnation. Mozilla Foundation has a partnership with Google and other companies which provides revenue for inclusion of search engines in Mozilla Firefox. MySQL is offered for free, but the enterprise version includes support and additional features. SUSE offers openSUSE for free through the openSUSE Project, while selling SUSE Linux Enterprise (SLE). OpenSearchServer offers its community edition on SourceForge and an enterprise edition with professional services to enterprises with a paid license. Oracle's VirtualBox is free and open to anyone, but the VirtualBox extension pack can only be used for free at home, thus requiring payment from business users. OWASP Foundation is a professional community of open-source developers focused on raising the visibility of software security. Red Hat sells support subscriptions for Red Hat Enterprise Linux (RHEL), which is an enterprise distribution periodically forked from the community-developed Fedora. Sourcefire offers Snort for free, while selling Sourcefire 3D. 
Sun Microsystems (acquired by Oracle in 2010) once offered OpenOffice.org for free, while selling StarOffice. Untangle provides its Lite Package for free, while selling its Standard and Premium Packages by subscription. Zend Technologies offers Zend Server CE and Laminas for free, but sells Zend Server with support and additional features. See also Free software business model Open Source Development Labs Commercial use of copyleft works Open business Open innovation Crowdsourcing Software monetization References Further reading Business models Free software Free software culture and documents Software industry Economics of intellectual property
13208
https://en.wikipedia.org/wiki/Hera
Hera
Hera (; ; in Ionic and Homeric Greek) is the goddess of women, marriage, family and childbirth in ancient Greek religion and mythology, one of the twelve Olympians and the sister and wife of Zeus. She is the daughter of the Titans Cronus and Rhea. Hera rules over Mount Olympus as queen of the gods. A matronly figure, Hera served as both the patroness and protectress of married women, presiding over weddings and blessing marital unions. One of Hera's defining characteristics is her jealous and vengeful nature against Zeus' numerous lovers and illegitimate offspring, as well as the mortals who cross her. Hera is commonly seen with the animals she considers sacred, including the cow, lion and the peacock. Portrayed as majestic and solemn, often enthroned, and crowned with the polos (a high cylindrical crown worn by several of the Great Goddesses), Hera may hold a pomegranate in her hand, emblem of fertile blood and death and a substitute for the narcotic capsule of the opium poppy. Her Roman counterpart is Juno. Etymology The name of Hera has several possible and mutually exclusive etymologies; one possibility is to connect it with Greek ὥρα hōra, season, and to interpret it as "ripe for marriage"; according to Plato it derives from ἐρατή eratē, "beloved", as Zeus is said to have married her for love. According to Plutarch, Hera was an allegorical name and an anagram of aēr (ἀήρ, "air"). So begins the section on Hera in Walter Burkert's Greek Religion. In a note, he records other scholars' arguments "for the meaning Mistress as a feminine to Heros, Master." John Chadwick, a decipherer of Linear B, remarks "her name may be connected with hērōs, ἥρως, 'hero', but that is no help since it too is etymologically obscure." A. J. van Windekens offers "young cow, heifer", which is consonant with Hera's common epithet βοῶπις (boōpis, "cow-eyed"). R. S. P. Beekes has suggested a Pre-Greek origin. Her name is attested in Mycenaean Greek written in the Linear B syllabic script as e-ra, appearing on tablets found in Pylos and Thebes, as well as in the Cypriot dialect in the dative e-ra-i. Cult Hera may have been the first deity to whom the Greeks dedicated an enclosed roofed temple sanctuary, at Samos about 800 BCE. It was replaced later by the Heraion of Samos, one of the largest of all Greek temples (altars were in front of the temples under the open sky). There were many temples built on this site, so the evidence is somewhat confusing and archaeological dates are uncertain. The temple created by the Rhoecus sculptors and architects was destroyed between 570–560 BCE. This was replaced by the Polycratean temple of 540–530 BCE. In one of these temples we see a forest of 155 columns. There is also no evidence of tiles on this temple, suggesting either that the temple was never finished or that it was open to the sky. Earlier sanctuaries, whose dedication to Hera is less certain, were of the Mycenaean type called "house sanctuaries". Samos excavations have revealed votive offerings, many of them late 8th and 7th centuries BCE, which show that Hera at Samos was not merely a local Greek goddess of the Aegean: the museum there contains figures of gods and suppliants and other votive offerings from Armenia, Babylon, Iran, Assyria, Egypt, testimony to the reputation which this sanctuary of Hera enjoyed and to the large influx of pilgrims. 
Compared to this mighty goddess, who also possessed the earliest temple at Olympia and two of the great fifth and sixth century temples of Paestum, the termagant of Homer and the myths is an "almost... comic figure", according to Burkert. Though the greatest and earliest free-standing temple to Hera was the Heraion of Samos, in the Greek mainland Hera was especially worshipped as "Argive Hera" (Hera Argeia) at her sanctuary that stood between the former Mycenaean city-states of Argos and Mycenae, where the festivals in her honor called Heraia were celebrated. "The three cities I love best," the ox-eyed Queen of Heaven declares in the Iliad, book iv, "are Argos, Sparta and Mycenae of the broad streets." There were also temples to Hera in Olympia, Corinth, Tiryns, Perachora and the sacred island of Delos. In Magna Graecia, two Doric temples to Hera were constructed at Paestum, about 550 BCE and about 450 BCE. One of them, long called the Temple of Poseidon, was identified in the 1950s as a second temple there of Hera. In Boeotia, at Plataea, the festival of the Great Daedala, sacred to Hera, was celebrated on a sixty-year cycle. Hera's importance in the early archaic period is attested by the large building projects undertaken in her honor. The temples of Hera in the two main centers of her cult, the Heraion of Samos and the Heraion of Argos in the Argolis, were the very earliest monumental Greek temples constructed, in the 8th century BCE. Importance According to Walter Burkert, both Hera and Demeter have many characteristic attributes of Pre-Greek Great Goddesses. In the same vein, British scholar Charles Francis Keary suggests that Hera had some sort of "Earth Goddess" worship in ancient times, connected to her possible origin as a Pelasgian goddess (as mentioned by Herodotus). According to Homeric Hymn III to Delian Apollo, Hera detained Eileithyia to prevent Leto from going into labor with Artemis and Apollo, since the father was Zeus. The other goddesses present at the birthing on Delos sent Iris to bring her. As she stepped upon the island, the divine birth began. In the myth of the birth of Heracles, it is Hera herself who sits at the door, delaying the birth of Heracles until her protégé, Eurystheus, had been born first. The Homeric Hymn to Pythian Apollo makes the monster Typhaon the offspring of archaic Hera in her Minoan form, produced out of herself, like a monstrous version of Hephaestus, and whelped in a cave in Cilicia. She gave the creature to Python to raise. In the Temple of Hera, Olympia, Hera's seated cult figure was older than the warrior figure of Zeus that accompanied it. Homer expressed her relationship with Zeus delicately in the Iliad, in which she declares to Zeus, "I am Cronus' eldest daughter, and am honourable not on this ground only, but also because I am your wife, and you are king of the gods." Matriarchy There has been considerable scholarship, reaching back to Johann Jakob Bachofen in the mid-nineteenth century, about the possibility that Hera, whose early importance in Greek religion is firmly established, was originally the goddess of a matriarchal people, presumably inhabiting Greece before the Hellenes. In this view, her activity as goddess of marriage established the patriarchal bond of her own subordination: her resistance to the conquests of Zeus is rendered as Hera's "jealousy", the main theme of literary anecdotes that undercut her ancient cult. 
However, it remains a controversial claim that an ancient matriarchy or a cultural focus on a monotheistic Great Goddess existed among the ancient Greeks or elsewhere. The claim is generally rejected by modern scholars as insufficiently evidenced. Youth Hera was best known as the matron goddess, Hera Teleia, but she presided over weddings as well. In myth and cult, fragmentary references and archaic practices remain of the sacred marriage of Hera and Zeus. At Plataea, there was a sculpture of Hera seated as a bride by Callimachus, as well as the matronly standing Hera. Hera was also worshipped as a virgin: there was a tradition in Stymphalia in Arcadia that there had been a triple shrine to Hera the Girl (Παις [Pais]), the Adult Woman (Τελεια [Teleia]), and the Separated (Χήρη [Chḗrē] 'Widowed' or 'Divorced'). In the region around Argos, the temple of Hera in Hermione near Argos was to Hera the Virgin. At the spring of Kanathos, close to Nauplia, Hera renewed her virginity annually, in rites that were not to be spoken of (arrheton). This triple aspect has also been read in lunar terms, with Hebe, Hera, and Hecate corresponding to the new moon, full moon, and old moon in that order, otherwise personified as the Virgin of Spring, the Mother of Summer, and the destroying Crone of Autumn. Emblems In Hellenistic imagery, Hera's chariot was pulled by peacocks, birds not known to Greeks before the conquests of Alexander. Alexander's tutor, Aristotle, refers to it as "the Persian bird." The peacock motif was revived in the Renaissance iconography that unified Hera and Juno, and on which European painters focused. A bird that had been associated with Hera on an archaic level, where most of the Aegean goddesses were associated with "their" bird, was the cuckoo, which appears in mythic fragments concerning the first wooing of a virginal Hera by Zeus. Her archaic association was primarily with cattle, as a Cow Goddess, who was especially venerated in "cattle-rich" Euboea. On Cyprus, very early archaeological sites contain bull skulls that have been adapted for use as masks (see Bull (mythology)). Her familiar Homeric epithet Boôpis is always translated "cow-eyed". In this respect, Hera bears some resemblance to the Ancient Egyptian deity Hathor, a maternal goddess associated with cattle. Scholar of Greek mythology Walter Burkert writes in Greek Religion, "Nevertheless, there are memories of an earlier aniconic representation, as a pillar in Argos and as a plank in Samos." Epithets Hera bore several epithets in the mythological tradition, including: Ἀλέξανδρος (Alexandros) 'Protector of Men' (among the Sicyonians) Αἰγοφάγος (Aigophágos) 'Goat-Eater' (among the Lacedaemonians) Ἀκραῖα (Akráia) '(She) of the Heights' Ἀμμωνία (Ammonia) Ἄνθεια (Antheia), meaning flowery Ἀργεία (Argéia) '(She) of Argos' Βασίλεια (Basíleia) 'Queen' Βουναία (Bounáia) '(She) of the Mound' (in Corinth) Βοῶπις (Boṓpis) 'Cow-Eyed' or 'Cow-Faced' Λευκώλενος (Leukṓlenos) 'White-Armed' Παῖς (Pais) 'Child' (in her role as virgin) Παρθένος (Parthénos) 'Virgin' Τελεία (Teléia) (as goddess of marriage) Χήρη (Chḗrē) 'Widowed' Τελχινία (Telchinia): Diodorus Siculus writes that she was worshipped by the Ialysians and the Cameirans (both on the island of Rhodes). She was so named because, according to legend, the Telchines (Τελχῖνες) were the first inhabitants of the island and also the first to create statues of the gods. Mythology Birth Hera is the daughter of the youngest Titan Cronus and his wife, and sister, Rhea. 
Cronus was fated to be overthrown by one of his children; to prevent this, he swallowed all of his newborn children whole until Rhea tricked him into swallowing a stone instead of her youngest child, Zeus. Zeus was raised in secret and, once grown, tricked his father into regurgitating his siblings, including Hera. Zeus then led the revolt against the Titans, banished them, and divided the dominion over the world with his brothers Poseidon and Hades. Marriage with Zeus Hera is the goddess of marriage and childbirth rather more than of motherhood, and much of her mythology revolves around her marriage with her brother Zeus. She is charmed by him and she seduces him; he cheats on her and has many children by other goddesses and mortal women; she is intensely jealous and vindictive towards his children and their mothers; he is threatening and violent to her. In the Iliad, Zeus implies their marriage was some sort of elopement, as they first lay together in secret from their parents. Pausanias records a tale of how they came to be married, in which Zeus transformed into a cuckoo bird to woo Hera. She caught the bird and kept it as her pet; this is why the cuckoo is seated on her sceptre. According to a scholion on Theocritus' Idylls, when Hera was heading toward Mount Thornax alone, Zeus created a terrible storm and transformed himself into a cuckoo bird who flew down and sat on her lap. When Hera saw him, she covered him with her cloak. Zeus then transformed back and took hold of her; because she was refusing to sleep with him on account of their mother, he promised to marry her. In one account Hera refused to marry Zeus and hid in a cave to avoid him; an earthborn man named Achilles convinced her to give him a chance, and thus the two had their first sexual intercourse. According to Callimachus, their wedding feast lasted three thousand years. The Apples of the Hesperides that Heracles was tasked by Eurystheus to retrieve were a wedding gift from Gaia to the couple. Heracles Hera is the stepmother and enemy of Heracles. The name Heracles means "Glory of Hera". In Homer's Iliad, when Alcmene was about to give birth to Heracles, Zeus announced to all the gods that on that day a child by Zeus himself would be born and rule all those around him. Hera, after requesting Zeus to swear an oath to that effect, descended from Olympus to Argos and made the wife of Sthenelus (son of Perseus) give birth to Eurystheus after only seven months, while at the same time preventing Alcmene from delivering Heracles. This resulted in the fulfilment of Zeus's oath in that it was Eurystheus, rather than Heracles, to whom it applied. In Pausanias' recounting, Hera sent witches (as they were called by the Thebans) to hinder Alcmene's delivery of Heracles. The witches were successful in preventing the birth until Historis, daughter of Tiresias, thought of a trick to deceive the witches. Like Galanthis, Historis announced that Alcmene had delivered her child; having been deceived, the witches went away, allowing Alcmene to give birth. Hera's wrath against Zeus' son continues, and while Heracles is still an infant, Hera sends two serpents to kill him as he lies in his cot. Heracles throttles the snakes with his bare hands and is found by his nurse playing with their limp bodies as if they were a child's toy. One account of the origin of the Milky Way is that Zeus had tricked Hera into nursing the infant Heracles: discovering who he was, she pulled him from her breast, and a spurt of her milk formed the smear across the sky that can be seen to this day. 
Unlike any Greek depictions, the Etruscans instead pictured a full-grown bearded Heracles at Hera's breast: this may refer to his adoption by her when he became an Immortal. He had previously wounded her severely in the breast. When Heracles reached adulthood, Hera drove him mad, which led him to murder his family, and this later led to him undertaking his famous labours. Hera assigned Heracles to labour for King Eurystheus at Mycenae. She attempted to make almost all of Heracles' twelve labours more difficult. When he fought the Lernaean Hydra, she sent a crab to bite at his feet in the hopes of distracting him. Later Hera stirred up the Amazons against him when he was on one of his quests. When Heracles took the cattle of Geryon, he shot Hera in the right breast with a triple-barbed arrow: the wound was incurable and left her in constant pain, as Dione tells Aphrodite in the Iliad, Book V. Afterwards, Hera sent a gadfly to bite the cattle, irritate them and scatter them. Hera then sent a flood which raised the water level of a river so much that Heracles could not ford the river with the cattle. He piled stones into the river to make the water shallower. When he finally reached the court of Eurystheus, the cattle were sacrificed to Hera. Eurystheus also wanted to sacrifice the Cretan Bull to Hera. She refused the sacrifice because it reflected glory on Heracles. The bull was released and wandered to Marathon, becoming known as the Marathonian Bull. Some myths state that in the end, Heracles befriended Hera by saving her from Porphyrion, a giant who tried to rape her during the Gigantomachy, and that she even gave her daughter Hebe to him as his bride. Whatever myth-making served to account for an archaic representation of Heracles as "Hera's man", it was thought suitable for the builders of the Heraion at Paestum to depict the exploits of Heracles in bas-reliefs. Leto and the Twins: Apollo and Artemis When Hera discovered that Leto was pregnant and that Zeus was the father, she convinced the nature spirits to prevent Leto from giving birth on terra firma, the mainland, any island at sea, or any place under the sun. Poseidon took pity on Leto and guided her to the floating island of Delos, which was neither mainland nor a real island, where Leto was able to give birth to her children. Afterwards, Zeus secured Delos to the bottom of the ocean. The island later became sacred to Apollo. Alternatively, Hera kidnapped Eileithyia, the goddess of childbirth, to prevent Leto from going into labor. The other gods bribed Hera with a beautiful necklace nobody could resist and she finally gave in. Either way, Artemis was born first and then assisted with the birth of Apollo. Some versions say Artemis helped her mother give birth to Apollo for nine days. Another variation states that Artemis was born one day before Apollo, on the island of Ortygia, and that she helped Leto cross the sea to Delos the next day to give birth to Apollo. Later, Tityos attempted to rape Leto at the behest of Hera. He was slain by Artemis and Apollo. This account of the birth of Apollo and Artemis is contradicted by Hesiod in the Theogony, as the twins are born prior to Zeus' marriage to Hera. Io and Argus The myth of Io has many forms and embellishments. Generally, Io was a priestess of Hera at the Heraion of Argos. Zeus lusted after her and either Hera turned Io into a heifer to hide her from Zeus, or Zeus did so to hide her from Hera but was discovered. 
Hera had Io tethered to an olive-tree and set Argus Panoptes to watch over her, but Zeus sent Hermes to kill him. Infuriated, Hera then sent a gadfly (compare oestrus) to pursue and constantly sting Io, who fled into Asia and eventually reached Egypt. There Zeus restored her to human form and she gave birth to his son Epaphus. Judgment of Paris A prophecy stated that a son of the sea-nymph Thetis, with whom Zeus fell in love after gazing upon her in the oceans off the Greek coast, would become greater than his father. Possibly for this reason, Thetis was betrothed to an elderly human king, Peleus son of Aeacus, either upon Zeus' orders, or because she wished to please Hera, who had raised her. All the gods and goddesses as well as various mortals were invited to the marriage of Peleus and Thetis (the eventual parents of Achilles) and brought many gifts. Only Eris, goddess of discord, was not invited and was stopped at the door by Hermes, on Zeus' order. She was annoyed at this, so she threw from the door a gift of her own: a golden apple inscribed with the word καλλίστῃ (kallistēi, "To the fairest"). Aphrodite, Hera, and Athena all claimed to be the fairest, and thus the rightful owner of the apple. The goddesses quarreled bitterly over it, and none of the other gods would venture an opinion favoring one, for fear of earning the enmity of the other two. They chose to place the matter before Zeus, who, not wanting to favor one of the goddesses, put the choice into the hands of Paris, a Trojan prince. After bathing in the spring of Mount Ida where Troy was situated, they appeared before Paris to have him choose. The goddesses undressed before him, either at his request or for the sake of winning. Still, Paris could not decide, as all three were ideally beautiful, so they resorted to bribes. Hera offered Paris political power and control of all of Asia, while Athena offered wisdom, fame, and glory in battle, and Aphrodite offered the most beautiful mortal woman in the world as a wife, and he accordingly chose Aphrodite. This woman was Helen, who was, unfortunately for Paris, already married to King Menelaus of Sparta. The other two goddesses were enraged by this, and through Helen's abduction by Paris they brought about the Trojan War. The Iliad Hera plays a substantial role in The Iliad, appearing in a number of books throughout the epic poem. She hates the Trojans because of Paris' decision that Aphrodite was the most beautiful goddess, and so supports the Greeks during the war. Throughout the epic Hera makes many attempts to thwart the Trojan army. In books 1 and 2, Hera declares that the Trojans must be destroyed. Hera persuades Athena to aid the Achaeans in battle and she agrees to assist with interfering on their behalf. In book 5, Hera and Athena plot to harm Ares, who had been seen by Diomedes in assisting the Trojans. Diomedes called for his soldiers to fall back slowly. Hera, Ares' mother, saw Ares' interference and asked Zeus, Ares' father, for permission to drive Ares away from the battlefield. Hera encouraged Diomedes to attack Ares, and he threw his spear at the god. Athena drove the spear into Ares' body, and he bellowed in pain and fled to Mount Olympus, forcing the Trojans to fall back. In book 8, Hera tries to persuade Poseidon to disobey Zeus and help the Achaean army. He refuses, saying he doesn't want to go against Zeus. Determined to intervene in the war, Hera and Athena head to the battlefield. 
However, seeing the two set out, Zeus sent Iris to intercept them and make them return to Mount Olympus or face grave consequences. After prolonged fighting, Hera sees Poseidon aiding the Greeks and giving them motivation to keep fighting. In book 14 Hera devises a plan to deceive Zeus. Zeus set a decree that the gods were not allowed to interfere in the mortal war. Hera is on the side of the Achaeans, so she plans a Deception of Zeus where she seduces him, with help from Aphrodite, and tricks him into a deep sleep, with the help of Hypnos, so that the gods could interfere without the fear of Zeus. In book 21, Hera continues her interference with the battle as she tells Hephaestus to prevent the river from harming Achilles. Hephaestus sets the battlefield ablaze, causing the river to plead with Hera, promising her he will not help the Trojans if Hephaestus stops his attack. Hephaestus stops his assault and Hera returns to the battlefield where the gods begin to fight amongst themselves. Minor stories Semele and Dionysus When Hera learned that Semele, daughter of Cadmus, King of Thebes, was pregnant by Zeus, she disguised herself as Semele's nurse and persuaded the princess to insist that Zeus show himself to her in his true form. When he was compelled to do so, having sworn by Styx, his thunder and lightning destroyed Semele. Zeus took Semele's unborn child, Dionysus, and completed its gestation sewn into his own thigh. In another version, Dionysus was originally the son of Zeus by either Demeter or Persephone. Hera sent the Titans to rip the baby apart, from which he was called Zagreus ("Torn in Pieces"). Zeus rescued the heart; or, the heart was saved, variously, by Athena, Rhea, or Demeter. Zeus used the heart to recreate Dionysus and implant him in the womb of Semele—hence Dionysus became known as "the twice-born". Certain versions imply that Zeus gave Semele the heart to eat to impregnate her. Hera tricked Semele into asking Zeus to reveal his true form, which killed her. Dionysus later managed to rescue his mother from the underworld and have her live on Mount Olympus. Lamia Lamia was a lovely queen of Libya, whom Zeus loved and slept with. Hera, in jealousy, caused Lamia to kill her own children; out of grief for her actions, Lamia was turned into a misshapen creature that would therefore snatch and murder other people's children. Gerana Gerana was a queen of the Pygmies who boasted she was more beautiful than Hera. The wrathful goddess turned her into a crane and proclaimed that her bird descendants should wage eternal war on the Pygmy folk. Cydippe Cydippe, a priestess of Hera, was on her way to a festival in the goddess' honor. The oxen which were to pull her cart were overdue, and her sons, Biton and Cleobis, pulled the cart the entire way (45 stadia, 8 kilometers). Cydippe was impressed with their devotion to her and to the goddess, and so asked Hera to give her children the best gift a god could give a person. Hera ordained that the brothers would die in their sleep. This honor bestowed upon the children was later used by Solon as a proof, while trying to convince Croesus, that it is impossible to judge a person's happiness until they have died a fruitful death after a joyous life. Tiresias Tiresias was a priest of Zeus, and as a young man he encountered two snakes mating and hit them with a stick. He was then transformed into a woman. As a woman, Tiresias became a priestess of Hera, married and had children, including Manto. 
After seven years as a woman, Tiresias again found mating snakes; depending on the myth, either she made sure to leave the snakes alone this time, or, according to Hyginus, trampled on them and became a man once more. As a result of his experiences, Zeus and Hera asked him to settle the question of which sex, male or female, experienced more pleasure during intercourse. Zeus claimed it was women; Hera claimed it was men. When Tiresias sided with Zeus, Hera struck him blind. Since Zeus could not undo what she had done, he gave him the gift of prophecy. An alternative and less commonly told story has it that Tiresias was blinded by Athena after he stumbled onto her bathing naked. His mother, Chariclo, begged her to undo her curse, but Athena could not; she gave him prophecy instead. Chelone At the marriage of Zeus and Hera, a nymph named Chelone was disrespectful or refused to attend the wedding. Zeus thus turned her into a tortoise. The Golden Fleece Hera hated Pelias because he had killed Sidero, his step-grandmother, in one of the goddess's temples. She later convinced Jason and Medea to kill Pelias. The Golden Fleece was the item that Jason needed to get his mother freed. Ixion When Zeus took pity on Ixion and brought him to Olympus and introduced him to the gods, instead of being grateful, Ixion grew lustful for Hera. Zeus found out about his intentions and made a cloud in the shape of Hera, who was later named Nephele, and tricked Ixion into coupling with it; from their union came Centaurus. So Ixion was expelled from Olympus, and Zeus ordered Hermes to bind Ixion to a winged fiery wheel that was always spinning. Therefore, Ixion was bound to a burning solar wheel for all eternity, at first spinning across the heavens, but in later myth transferred to Tartarus. Children Genealogy Art and events Barberini Hera - a Roman sculpture of Hera/Juno Hera Borghese - sculpture related to Hera Hera Farnese - sculpture of Hera's head Heraea Games - games dedicated to Hera, the first sanctioned (and recorded) women's athletic competition to be held in the stadium at Olympia. See also Auðumbla, a primeval cow in Norse mythology Parvati Footnotes Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Burkert, Walter, Greek Religion 1985. Burkert, Walter, The Orientalizing Revolution: Near Eastern Influence on Greek Culture in the Early Archaic Age, 1998. Farnell, Lewis Richard, The Cults of the Greek States I: Zeus, Hera, Athena, Oxford, 1896. Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Graves, Robert, The Greek Myths 1955. Use with caution. Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books. Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. 
in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Evelyn-White, Hugh, The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White. Homeric Hymns. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Pindar, Odes, Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library. Ovid, Metamorphoses. Translated by A. D. Melville; introduction and notes by E. J. Kenney. Oxford: Oxford University Press. 2008. . Hyginus, Gaius Julius, The Myths of Hyginus. Edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Nonnus, Dionysiaca; translated by Rouse, W H D, III Books XXXVI–XLVIII. Loeb Classical Library No. 346, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive. Kerenyi, Carl, The Gods of the Greeks 1951 (paperback 1980) Kerenyi, Karl, 1959. The Heroes of the Greeks Especially Heracles. Kirk, G. S., J. E. Raven, M. Schofield, The Presocratic Philosophers: A Critical History with a Selection of Texts, Cambridge University Press, Dec 29, 1983. . Ogden, Daniel (2013a), Drakon: Dragon Myth and Serpent Cult in the Greek and Roman Worlds, Oxford University Press, 2013. . Ogden, Daniel (2013b), Dragons, Serpents, and Slayers in the Classical and early Christian Worlds: A sourcebook, Oxford University Press. . Ruck, Carl A.P., and Danny Staples, The World of Classical Myth 1994 Seyffert, Oskar. Dictionary of Classical Antiquities 1894. (On-line text) Seznec, Jean, The Survival of the Pagan Gods : Mythological Tradition in Renaissance Humanism and Art, 1953 Slater, Philip E. The Glory of Hera : Greek Mythology and the Greek Family (Boston: Beacon Press) 1968 (Princeton University 1992 ) Concentrating on family structure in 5th-century Athens; some of the crude usage of myth and drama for psychological interpreting of "neuroses" is dated. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). "Gali'nthias" External links Theoi Project, Hera Hera in classical literature and Greek art The Heraion at Samos Marriage deities Queens in Greek mythology Women in Greek mythology Metamorphoses characters Deities in the Iliad Mythological rape victims Mythology of Heracles Divine women of Zeus Characters in the Odyssey Deities in the Aeneid Greek goddesses Queens of Heaven (antiquity)
31742
https://en.wikipedia.org/wiki/Unicode
Unicode
Unicode, formally the Unicode Standard, is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard, which is maintained by the Unicode Consortium, defines 144,697 characters covering 159 modern and historic scripts, as well as symbols, emoji, and non-visual control and formatting codes. The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with the other. The Unicode Standard, however, includes more than just the base code. Alongside the character encodings, the Consortium's official publication includes a wide variety of details about the scripts and how to display them: normalization rules, decomposition, collation, rendering, and bidirectional text display order for multilingual texts, and so on. The Standard also includes reference data files and visual charts to help developers and designers correctly implement the repertoire. Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including modern operating systems, XML, and most modern programming languages. Unicode can be implemented by different character encodings. The Unicode standard defines Unicode Transformation Formats (UTF): UTF-8, UTF-16, and UTF-32, and several other encodings. The most commonly used encodings are UTF-8, UTF-16, and the obsolete UCS-2 (a precursor of UTF-16 without full support for Unicode); GB18030, while not an official Unicode standard, is standardized in China and implements Unicode fully. UTF-8, the dominant encoding on the World Wide Web (used in over 95% of websites, and up to 100% for some languages) and on most Unix-like operating systems, uses one byte (8 bits) for the first 128 code points, and up to 4 bytes for other characters. The first 128 Unicode code points represent the ASCII characters, which means that any ASCII text is also a UTF-8 text. UCS-2 uses two bytes (16 bits) for each character but can only encode the first 65,536 code points, the so-called Basic Multilingual Plane (BMP). With 1,112,064 possible Unicode code points corresponding to characters (see below) on 17 planes, and with over 144,000 code points defined as of version 14.0, UCS-2 is only able to represent less than half of all encoded Unicode characters. Therefore, UCS-2 is obsolete, though still used in software. UTF-16 extends UCS-2, by using the same 16-bit encoding as UCS-2 for the Basic Multilingual Plane, and a 4-byte encoding for the other planes. As long as it contains no code points in the reserved range U+D800–U+DFFF, a UCS-2 text is valid UTF-16 text. UTF-32 (also referred to as UCS-4) uses four bytes to encode any given code point, but not necessarily any given user-perceived character (loosely speaking, a grapheme), since a user-perceived character may be represented by a grapheme cluster (a sequence of multiple code points). Like UCS-2, the number of bytes per code point is fixed, facilitating code point indexing; but unlike UCS-2, UTF-32 is able to encode all Unicode code points. However, because each code point uses four bytes, UTF-32 takes significantly more space than other encodings, and is not widely used. Although UTF-32 has a fixed size for each code point, it is also variable-length with respect to user-perceived characters. 
Examples include: the Devanagari kshi, which is encoded by 4 code points, and national flag emojis, which are composed of two code points. All combining character sequences are graphemes, but there are other sequences of code points that are as well, for example \r\n. Origin and development Unicode has the explicit aim of transcending the limitations of traditional character encodings, such as those defined by the ISO/IEC 8859 standard, which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional character encodings share a common problem in that they allow bilingual computer processing (usually using Latin characters and the local script), but not multilingual computer processing (computer processing of arbitrary scripts mixed with each other). Unicode, in intent, encodes the underlying characters—graphemes and grapheme-like units—rather than the variant glyphs (renderings) for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs (see Han unification). In text processing, Unicode takes the role of providing a unique code point—a number, not a glyph—for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering (size, shape, font, or style) to other software, such as a web browser or word processor. This simple aim becomes complicated, however, because of concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode. The first 256 code points were made identical to the content of ISO/IEC 8859-1 so as to make it trivial to convert existing western text. Many essentially identical characters were encoded multiple times at different code points to preserve distinctions used by legacy encodings and therefore allow conversion from those encodings to Unicode (and back) without losing any information. For example, the "fullwidth forms" section of code points encompasses a full duplicate of the Latin alphabet because Chinese, Japanese, and Korean (CJK) fonts contain two versions of these letters, "fullwidth" matching the width of the CJK characters, and normal width. For other examples, see duplicate characters in Unicode. Unicode Bulldog Award recipients include many names influential in the development of Unicode and include Tatsuo Kobayashi, Thomas Milo, Roozbeh Pournader, Ken Lunde, and Michael Everson. History Based on experiences with the Xerox Character Code Standard (XCCS) since 1980, the origins of Unicode date to 1987, when Joe Becker from Xerox with Lee Collins and Mark Davis from Apple started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Joe Becker published a draft proposal for an "international/multilingual text character encoding system in August 1988, tentatively called Unicode". He explained that "[t]he name 'Unicode' is intended to suggest a unique, unified, universal encoding". In this document, entitled Unicode 88, Becker outlined a 16-bit character model: Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose. 
His original 16-bit design was based on the assumption that only those scripts and characters in modern use would need to be encoded: Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2^14 = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicodes. In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of RLG, and Glenn Wright of Sun Microsystems, and in 1990, Michel Suignard and Asmus Freytag from Microsoft and Rick McGowan of NeXT joined the group. By the end of 1990, most of the work on mapping existing character encoding standards had been completed, and a final review draft of Unicode was ready. The Unicode Consortium was incorporated in California on 3 January 1991, and in October 1991, the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts (e.g., Egyptian hieroglyphs) and thousands of rarely used or obsolete characters that had not been anticipated as needing encoding. Among the characters not originally intended for Unicode are rarely used Kanji or Chinese characters, many of which are part of personal and place names, making them rarely used, but much more essential than envisioned in the original architecture of Unicode. The Microsoft TrueType specification version 1.0 from 1992 used the name 'Apple Unicode' instead of 'Unicode' for the Platform ID in the naming table. Unicode Consortium The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies with any interest in text-processing standards, including Adobe, Apple, Facebook, Google, IBM, Microsoft, Netflix, and SAP SE. Over the years several countries or government agencies have been members of the Unicode Consortium. Presently only the Ministry of Endowments and Religious Affairs (Oman) is a full member with voting rights. The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Scripts covered Unicode covers almost all scripts (writing systems) in current use today. As of 2021 a total of 159 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. 
Umamaheswaran) maintains the list of scripts that are candidates or potential candidates for encoding and their tentative code block assignments on the Unicode Roadmap page of the Unicode Consortium website. For some scripts on the Roadmap, such as Jurchen and Khitan small script, encoding proposals have been made and they are working their way through the approval process. For other scripts, such as Mayan (besides numbers) and Rongorongo, no proposal has yet been made, and they await agreement on character repertoire and other details from the user communities involved. Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely used Private Use Areas code assignments. There is also a Medieval Unicode Font Initiative focused on special Latin medieval characters. Some of these proposals have already been included in Unicode. Script Encoding Initiative The Script Encoding Initiative, a project run by Deborah Anderson at the University of California, Berkeley, was founded in 2002 with the goal of funding proposals for scripts not yet encoded in the standard. The project has become a major source of proposed additions to the standard in recent years. Versions The Unicode Consortium and the International Organization for Standardization (ISO) have together developed a shared repertoire following the initial publication of The Unicode Standard in 1991; Unicode and the ISO's Universal Coded Character Set (UCS) use identical character names and code points. However, the Unicode versions do differ from their ISO equivalents in two significant ways. While the UCS is a simple character map, Unicode specifies the rules, algorithms, and properties necessary to achieve interoperability between different platforms and languages. Thus, The Unicode Standard includes more information, covering—in depth—topics such as bitwise encoding, collation and rendering. It also provides a comprehensive catalog of character properties, including those needed for supporting bidirectional text, as well as visual charts and reference data sets to aid implementers. Previously, The Unicode Standard was sold as a print volume containing the complete core specification, standard annexes, and code charts. However, Unicode 5.0, published in 2006, was the last version printed this way. Starting with version 5.2, only the core specification, published as a print-on-demand paperback, may be purchased. The full text, on the other hand, is published as a free PDF on the Unicode website. A practical reason for this publication method highlights the second significant difference between the UCS and Unicode—the frequency with which updated versions are released and new characters added. The Unicode Standard has regularly released annual expanded versions, occasionally with more than one version released in a calendar year and with rare cases where the scheduled release had to be postponed. For instance, in April 2020, only a month after version 13.0 was published, the Unicode Consortium announced they had changed the intended release date for version 14.0, pushing it back six months from March 2021 to September 2021 due to the COVID-19 pandemic. Thus far, the following major and minor versions of the Unicode standard have been published. 
Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1") and are omitted in the table below. Architecture and terminology Codespace and Code Points The Unicode Standard defines a codespace, a set of numerical values ranging from 0 through 10FFFF₁₆, called code points and denoted as U+0000 through U+10FFFF ("U+" followed by the code point value in hexadecimal, which is prepended with leading zeros to a minimum of four digits; e.g., U+00F7 for the division sign but U+13254 (not U+013254) for the Egyptian hieroglyph.). Of these 2¹⁶ + 2²⁰ defined code points, the code points from U+D800 through U+DFFF, which are used to encode surrogate pairs in UTF-16, are reserved by the Unicode Standard and may not be used to encode valid characters, resulting in a net total of 2¹⁶ − 2¹¹ + 2²⁰ = 1,112,064 assignable code points. Code planes and blocks The Unicode codespace is divided into seventeen planes, numbered 0 to 16; Plane 0 is the Basic Multilingual Plane (BMP), which contains the most commonly used characters. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in Planes 1 through 16 (supplementary planes) are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8. Within each plane, characters are allocated within named blocks of related characters. Although blocks are an arbitrary size, they are always a multiple of 16 code points and often a multiple of 128 code points. Characters required for a given script may be spread out over several different blocks. General Category property Each code point has a single General Category property. The major categories are denoted: Letter, Mark, Number, Punctuation, Symbol, Separator and Other. Within these categories, there are subdivisions. In most cases other properties must be used to sufficiently specify the characteristics of a code point; the full list of General Category values is given in the Unicode Character Database. Code points in the range U+D800–U+DBFF (1,024 code points) are known as high-surrogate code points, and code points in the range U+DC00–U+DFFF (1,024 code points) are known as low-surrogate code points. A high-surrogate code point followed by a low-surrogate code point forms a surrogate pair in UTF-16 to represent code points greater than U+FFFF. These code points otherwise cannot be used (this rule is often ignored in practice, especially when not using UTF-16). A small set of code points are guaranteed never to be used for encoding characters, although applications may make use of these code points internally if they wish. There are sixty-six of these noncharacters: U+FDD0–U+FDEF and any code point ending in the value FFFE or FFFF (i.e., U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ... U+10FFFE, U+10FFFF). The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in a text. Excluding surrogates and noncharacters leaves 1,111,998 code points available for use. Private-use code points are considered to be assigned characters, but they have no interpretation specified by the Unicode standard so any interchange of such characters requires an agreement between sender and receiver on their interpretation. There are three private-use areas in the Unicode codespace: Private Use Area: U+E000–U+F8FF (6,400 characters), Supplementary Private Use Area-A: U+F0000–U+FFFFD (65,534 characters), Supplementary Private Use Area-B: U+100000–U+10FFFD (65,534 characters).
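These ranges lend themselves to mechanical checking. The following short Python sketch (purely illustrative; the classify function is not part of any standard API) classifies a code point according to the rules above and verifies the arithmetic quoted in this section:

def classify(cp: int) -> str:
    """Classify a code point using the ranges described in this section."""
    if not 0 <= cp <= 0x10FFFF:
        raise ValueError("outside the Unicode codespace")
    if 0xD800 <= cp <= 0xDBFF:
        return "high-surrogate code point"
    if 0xDC00 <= cp <= 0xDFFF:
        return "low-surrogate code point"
    # Noncharacters: U+FDD0..U+FDEF plus the last two code points of each plane.
    if 0xFDD0 <= cp <= 0xFDEF or (cp & 0xFFFE) == 0xFFFE:
        return "noncharacter"
    if (0xE000 <= cp <= 0xF8FF or 0xF0000 <= cp <= 0xFFFFD
            or 0x100000 <= cp <= 0x10FFFD):
        return "private use"
    return "other (graphic, format, control, or reserved)"

# The arithmetic quoted above: 2**16 + 2**20 code points minus 2**11 surrogates...
assert 2**16 + 2**20 - 2**11 == 1_112_064
# ...and further excluding the 66 noncharacters leaves 1,111,998.
assert 1_112_064 - 66 == 1_111_998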
Graphic characters are characters defined by Unicode to have particular semantics, and either have a visible glyph shape or represent a visible space. As of Unicode 14.0 there are 144,532 graphic characters. Format characters are characters that do not have a visible appearance, but may have an effect on the appearance or behavior of neighboring characters. For example, U+200C ZERO WIDTH NON-JOINER and U+200D ZERO WIDTH JOINER may be used to change the default shaping behavior of adjacent characters (e.g., to inhibit ligatures or request ligature formation). There are 165 format characters in Unicode 14.0. Sixty-five code points (U+0000–U+001F and U+007F–U+009F) are reserved as control codes, and correspond to the C0 and C1 control codes defined in ISO/IEC 6429. U+0009 (Tab), U+000A (Line Feed), and U+000D (Carriage Return) are widely used in Unicode-encoded texts. In practice the C1 code points are often improperly translated (mojibake) as the legacy Windows-1252 characters used by some English and Western European texts. Graphic characters, format characters, control code characters, and private use characters are known collectively as assigned characters. Reserved code points are those code points which are available for use, but are not yet assigned. As of Unicode 14.0 there are 829,768 reserved code points. Abstract characters The set of graphic and format characters defined by Unicode does not correspond directly to the repertoire of abstract characters that is representable under Unicode. Unicode encodes characters by associating an abstract character with a particular code point. However, not all abstract characters are encoded as a single Unicode character, and some abstract characters may be represented in Unicode by a sequence of two or more characters. For example, a Latin small letter "i" with an ogonek, a dot above, and an acute accent, which is required in Lithuanian, is represented by the character sequence U+012F, U+0307, U+0301. Unicode maintains a list of uniquely named character sequences for abstract characters that are not directly encoded in Unicode. All graphic, format, and private use characters have a unique and immutable name by which they may be identified. This immutability has been guaranteed since Unicode version 2.0 by the Name Stability policy. In cases where the name is seriously defective and misleading, or has a serious typographical error, a formal alias may be defined, and applications are encouraged to use the formal alias in place of the official character name. For example, U+A015 YI SYLLABLE WU has the formal alias YI SYLLABLE ITERATION MARK, and U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET (the misspelling is part of the official, immutable name) has the formal alias PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRACKET. Ready-made versus composite characters Unicode includes a mechanism for modifying characters that greatly extends the supported glyph repertoire. This covers the use of combining diacritical marks that may be added after the base character by the user. Multiple combining diacritics may be simultaneously applied to the same character. Unicode also contains precomposed versions of most letter/diacritic combinations in normal use. These make conversion to and from legacy encodings simpler, and allow applications to use Unicode as an internal text format without having to implement combining characters. For example, é can be represented in Unicode as U+0065 (LATIN SMALL LETTER E) followed by U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). Thus, in many cases, users have multiple ways of encoding the same character. To deal with this, Unicode provides the mechanism of canonical equivalence. An example of this arises with Hangul, the Korean alphabet.
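Canonical equivalence can be checked programmatically. Here is a minimal sketch using Python's standard unicodedata module; the characters are the é example just discussed, plus a Hangul syllable to anticipate the next paragraph:

import unicodedata

# Two canonically equivalent spellings of "é": precomposed U+00E9, and
# decomposed U+0065 LATIN SMALL LETTER E + U+0301 COMBINING ACUTE ACCENT.
precomposed = "\u00E9"
decomposed = "e\u0301"
assert precomposed != decomposed  # different code point sequences...
assert unicodedata.normalize("NFC", decomposed) == precomposed  # ...same NFC form
assert unicodedata.normalize("NFD", precomposed) == decomposed  # ...same NFD form

# Hangul behaves the same way: the precomposed syllable U+AC00 decomposes
# into the jamo sequence U+1100, U+1161 under NFD.
assert unicodedata.normalize("NFD", "\uAC00") == "\u1100\u1161"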
Unicode provides a mechanism for composing Hangul syllables with their individual subcomponents, known as Hangul Jamo. However, it also provides 11,172 combinations of precomposed syllables made from the most common jamo. The CJK characters currently have codes only for their precomposed form. Still, most of those characters comprise simpler elements (called radicals), so in principle Unicode could have decomposed them as it did with Hangul. This would have greatly reduced the number of required code points, while allowing the display of virtually every conceivable character (which might do away with some of the problems caused by Han unification). A similar idea is used by some input methods, such as Cangjie and Wubi. However, attempts to do this for character encoding have stumbled over the fact that Chinese characters do not decompose as simply or as regularly as Hangul does. A set of radicals was provided in Unicode 3.0 (CJK radicals between U+2E80 and U+2EFF, KangXi radicals in U+2F00 to U+2FDF, and ideographic description characters from U+2FF0 to U+2FFB), but the Unicode standard (ch. 12.2 of Unicode 5.2) warns against using ideographic description sequences as an alternate representation for previously encoded characters. Ligatures Many scripts, including Arabic and Devanāgarī, have special orthographic rules that require certain combinations of letterforms to be combined into special ligature forms. The rules governing ligature formation can be quite complex, requiring special script-shaping technologies such as ACE (Arabic Calligraphic Engine by DecoType in the 1980s and used to generate all the Arabic examples in the printed editions of the Unicode Standard), which became the proof of concept for OpenType (by Adobe and Microsoft), Graphite (by SIL International), or AAT (by Apple). Instructions are also embedded in fonts to tell the operating system how to properly output different character sequences. A simple solution to the placement of combining marks or diacritics is assigning the marks a width of zero and placing the glyph itself to the left or right of the left sidebearing (depending on the direction of the script they are intended to be used with). A mark handled this way will appear over whatever character precedes it, but will not adjust its position relative to the width or height of the base glyph; it may be visually awkward and it may overlap some glyphs. Real stacking is impossible, but can be approximated in limited cases (for example, Thai top-combining vowels and tone marks can just be at different heights to start with). Generally this approach is only effective in monospaced fonts, but may be used as a fallback rendering method when more complex methods fail. Standardized subsets Several subsets of Unicode are standardized: Microsoft Windows since Windows NT 4.0 supports WGL-4 with 657 characters, which is considered to support all contemporary European languages using the Latin, Greek, or Cyrillic script. Other standardized subsets of Unicode include the Multilingual European Subsets: MES-1 (Latin scripts only, 335 characters), MES-2 (Latin, Greek and Cyrillic; 1,062 characters) and MES-3A & MES-3B (two larger subsets, not shown here). Note that MES-2 includes every character in MES-1 and WGL-4. Rendering software which cannot process a Unicode character appropriately often displays it as an open rectangle, or the Unicode "replacement character" (U+FFFD, �), to indicate the position of the unrecognized character.
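The replacement character also appears when a decoder encounters malformed input; a brief Python illustration (the byte string is an arbitrary example):

# 0x80 is a UTF-8 continuation byte with no lead byte, so it is malformed.
malformed = b"abc\x80def"
text = malformed.decode("utf-8", errors="replace")
print(text)  # abc�def (U+FFFD substituted for the bad byte)
assert "\uFFFD" in text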
Some systems have made attempts to provide more information about such characters. Apple's Last Resort font will display a substitute glyph indicating the Unicode range of the character, and the SIL International's Unicode Fallback font will display a box showing the hexadecimal scalar value of the character. Mapping and encodings Several mechanisms have been specified for storing a series of code points as a series of bytes. Unicode defines two mapping methods: the Unicode Transformation Format (UTF) encodings, and the Universal Coded Character Set (UCS) encodings. An encoding maps (possibly a subset of) the range of Unicode code points to sequences of values in some fixed-size range, termed code units. All UTF encodings map code points to a unique sequence of bytes. The numbers in the names of the encodings indicate the number of bits per code unit (for UTF encodings) or the number of bytes per code unit (for UCS encodings and UTF-1). UTF-8 and UTF-16 are the most commonly used encodings. UCS-2 is an obsolete subset of UTF-16; UCS-4 and UTF-32 are functionally equivalent. UTF encodings include: UTF-8, which uses one to four bytes for each code point and maximizes compatibility with ASCII; UTF-EBCDIC, similar to UTF-8 but designed for compatibility with EBCDIC (not part of The Unicode Standard); UTF-16, which uses one or two 16-bit code units per code point and cannot encode surrogates; and UTF-32, which uses one 32-bit code unit per code point. UTF-8 uses one to four bytes per code point and, being compact for Latin scripts and ASCII-compatible, provides the de facto standard encoding for interchange of Unicode text. It is used by FreeBSD and most recent Linux distributions as a direct replacement for legacy encodings in general text handling. The UCS-2 and UTF-16 encodings specify the Unicode Byte Order Mark (BOM) for use at the beginnings of text files, which may be used for byte ordering detection (or byte endianness detection). The BOM, code point U+FEFF, has the important property of unambiguity on byte reorder, regardless of the Unicode encoding used; U+FFFE (the result of byte-swapping U+FEFF) does not equate to a legal character, and U+FEFF in places other than the beginning of text conveys the zero-width non-break space (a character with no appearance and no effect other than preventing the formation of ligatures). The same character converted to UTF-8 becomes the byte sequence EF BB BF. The Unicode Standard allows that the BOM "can serve as signature for UTF-8 encoded text where the character set is unmarked". Some software developers have adopted it for other encodings, including UTF-8, in an attempt to distinguish UTF-8 from local 8-bit code pages. However, RFC 3629, the UTF-8 standard, recommends that byte order marks be forbidden in protocols using UTF-8, but discusses the cases where this may not be possible. In addition, the tight restrictions on possible byte patterns in UTF-8 (for instance there cannot be any lone bytes with the high bit set) mean that it should be possible to distinguish UTF-8 from other character encodings without relying on the BOM. In UTF-32 and UCS-4, one 32-bit code unit serves as a fairly direct representation of any character's code point (although the endianness, which varies across different platforms, affects how the code unit manifests as a byte sequence). In the other encodings, each code point may be represented by a variable number of code units.
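This variability, and the UTF-8 form of the BOM quoted above, can be observed directly; a short Python sketch (the characters are arbitrary examples, the last being an Egyptian hieroglyph from the supplementary planes):

# Code-unit counts for a few characters in the three major encoding forms.
for ch in ("A", "\u00E9", "\u20AC", "\U00013000"):  # U+0041, U+00E9, U+20AC, U+13000
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")  # big-endian variants add no BOM
    utf32 = ch.encode("utf-32-be")
    print(f"U+{ord(ch):04X}: {len(utf8)} UTF-8 byte(s), "
          f"{len(utf16) // 2} UTF-16 unit(s), {len(utf32) // 4} UTF-32 unit(s)")

# The BOM, U+FEFF, encoded in UTF-8 yields the EF BB BF sequence quoted above.
assert "\uFEFF".encode("utf-8") == b"\xef\xbb\xbf"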
UTF-32 is widely used as an internal representation of text in programs (as opposed to stored or transmitted text), since most Unix-like operating systems built with the GCC toolchain use a 32-bit wide character type (wchar_t), making UTF-32 the natural "wide character" encoding there. Some programming languages, such as Seed7, use UTF-32 as internal representation for strings and characters. Recent versions of the Python programming language (beginning with 2.2) may also be configured to use UTF-32 as the representation for Unicode strings, spreading the use of this encoding in high-level software. Punycode, another encoding form, enables the encoding of Unicode strings into the limited character set supported by the ASCII-based Domain Name System (DNS). The encoding is used as part of IDNA, which is a system enabling the use of Internationalized Domain Names in all scripts that are supported by Unicode. Earlier and now historical proposals include UTF-5 and UTF-6. GB18030 is another encoding form for Unicode, from the Standardization Administration of China. It is the official character set of the People's Republic of China (PRC). BOCU-1 and SCSU are Unicode compression schemes. The April Fools' Day RFC of 2005 specified two parody UTF encodings, UTF-9 and UTF-18. Adoption Operating systems Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, 2000, XP, Vista, 7, 8, 10, and 11), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, macOS, and KDE also use it for internal representation. Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode. UTF-8 (originally developed for Plan 9) has become the main storage encoding on most Unix-like operating systems (though others are also used by some libraries) because it is a relatively easy replacement for traditional extended ASCII character sets. UTF-8 is also the most common Unicode encoding used in HTML documents on the World Wide Web. Multilingual text-rendering engines which use Unicode include Uniscribe and DirectWrite for Microsoft Windows, ATSUI and Core Text for macOS, and Pango for GTK+ and the GNOME desktop. Input methods Because keyboard layouts cannot have simple key combinations for all characters, several operating systems provide alternative input methods that allow access to the entire repertoire. ISO/IEC 14755, which standardises methods for entering Unicode characters from their code points, specifies several methods. There is the Basic method, where a beginning sequence is followed by the hexadecimal representation of the code point and the ending sequence. There is also a screen-selection entry method specified, where the characters are listed in a table in a screen, such as with a character map program. Online tools for finding the code point for a known character include Unicode Lookup by Jonathan Hedley and Shapecatcher by Benjamin Milde. In Unicode Lookup, one enters a search key (e.g. "fractions"), and a list of corresponding characters with their code points is returned.
In Shapecatcher, based on Shape context, one draws the character in a box and a list of characters approximating the drawing, with their code points, is returned. Email MIME defines two different mechanisms for encoding non-ASCII characters in email, depending on whether the characters are in email headers (such as the "Subject:"), or in the text body of the message; in both cases, the original character set is identified as well as a transfer encoding. For email transmission of Unicode, the UTF-8 character set and the Base64 or the Quoted-printable transfer encoding are recommended, depending on whether much of the message consists of ASCII characters. The details of the two different mechanisms are specified in the MIME standards and generally are hidden from users of email software. The adoption of Unicode in email has been very slow. Some East Asian text is still encoded in encodings such as ISO-2022, and some devices, such as mobile phones, still cannot correctly handle Unicode data. Support has been improving, however. Many major free mail providers such as Yahoo, Google (Gmail), and Microsoft (Outlook.com) support it. Web All W3C recommendations have used Unicode as their document character set since HTML 4.0. Web browsers have supported Unicode, especially UTF-8, for many years. There used to be display problems resulting primarily from font-related issues; e.g., version 6 and older of Microsoft Internet Explorer did not render many code points unless explicitly told to use a font that contains them. Although syntax rules may affect the order in which characters are allowed to appear, XML (including XHTML) documents, by definition, comprise characters from most of the Unicode code points, with the exception of most of the C0 control codes, the permanently unassigned code points D800–DFFF, and FFFE or FFFF. HTML characters manifest either directly as bytes according to the document's encoding, if the encoding supports them, or users may write them as numeric character references based on the character's Unicode code point. For example, the references &#916;, &#1049;, &#1511;, &#1605;, &#3671;, &#12354;, &#21494;, &#33865;, and &#47568; (or the same numeric values expressed in hexadecimal, with &#x as the prefix) should display on all browsers as Δ, Й, ק ,م, ๗, あ, 叶, 葉, and 말. When specifying URIs, for example as URLs in HTTP requests, non-ASCII characters must be percent-encoded. Fonts Unicode is not in principle concerned with fonts per se, seeing them as implementation choices. Any given character may have many allographs, from the more common bold, italic and base letterforms to complex decorative styles. A font is "Unicode compliant" if the glyphs in the font can be accessed using code points defined in the Unicode standard. The standard does not specify a minimum number of characters that must be included in the font; some fonts have quite a small repertoire. Free and retail fonts based on Unicode are widely available, since TrueType and OpenType support Unicode. These font formats map Unicode code points to glyphs, but a TrueType font is restricted to 65,535 glyphs. Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols.
Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces. Newlines Unicode partially addresses the newline problem that occurs when trying to read a text file on different platforms. Unicode defines a large number of characters that conforming applications should recognize as line terminators. In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform-dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach every possible newline character is converted internally to a common newline (which one does not really matter since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding. Issues Philosophical and completeness criticisms Han unification (the identification of forms in the East Asian languages which one can treat as stylistic variations of the same historical character) has become one of the most controversial aspects of Unicode, despite the presence of a majority of experts from all three regions in the Ideographic Research Group (IRG), which advises the Consortium and ISO on additions to the repertoire and on Han unification. Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged. There have been several attempts to create alternative encodings that preserve the stylistic differences between Chinese, Japanese, and Korean characters in opposition to Unicode's policy of Han unification. An example of one is TRON (although it is not widely adopted in Japan, there are some users who need to handle historical Japanese text and favor it). Although the repertoire of fewer than 21,000 Han characters in the earliest version of Unicode was largely limited to characters in common modern usage, Unicode now includes more than 92,000 Han characters, and work is continuing to add thousands more historic and dialectal characters used in China, Japan, Korea, Taiwan, and Vietnam.
Modern font technology provides a means to address the practical issue of needing to depict a unified Han character in terms of a collection of alternative glyph representations, in the form of Unicode variation sequences. For example, the Advanced Typographic tables of OpenType permit one of a number of alternative glyph representations to be selected when performing the character to glyph mapping process. In this case, information can be provided within plain text to designate which alternate character form to select. If the appropriate glyphs for two characters in the same script differ only in their italic forms, Unicode has generally unified them, as with certain Cyrillic characters whose standard Russian and Serbian italic renderings differ; the differences must then be displayed through smart font technology or by manually changing fonts. Mapping to legacy character sets Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be converted to Unicode and then back and get back the same file, without employing context-dependent interpretation. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard in order to preserve interoperability between software using different versions of Unicode. Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '~' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E FULLWIDTH TILDE (in Microsoft Windows) or U+301C WAVE DASH (other vendors). Some Japanese computer programmers objected to Unicode because it requires them to separate the use of U+005C REVERSE SOLIDUS (backslash) and U+00A5 YEN SIGN, which was mapped to 0x5C in JIS X 0201, and a lot of legacy code exists with this usage. (This encoding also replaces tilde '~' 0x7E with macron '¯', now 0xAF.) The separation of these characters exists in ISO 8859-1, from long before Unicode. Indic scripts Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (aka conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations.
The same kind of issue arose for the Tibetan script in 2003 when the Standardization Administration of China proposed encoding 956 precomposed Tibetan syllables, but these were rejected for encoding by the relevant ISO committee (ISO/IEC JTC 1/SC 2). Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620, which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. E.g., the word แสดง ("perform") starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"); the vowel แ-, in spoken order, would come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส. Combining characters Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic languages, will often be placed incorrectly. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features. Anomalies The Unicode standard has imposed rules intended to guarantee stability. Depending on the strictness of a rule, a change can be prohibited or allowed. For example, a "name" given to a code point cannot and will not change. But a "script" property is more flexible, by Unicode's own rules. In version 2.0, Unicode changed many code point "names" from version 1. At the same moment, Unicode stated that from then on, an assigned name to a code point would never change anymore. This implies that when mistakes are published, these mistakes cannot be corrected, even if they are trivial (as happened in one instance with the spelling BRAKCET for BRACKET in a character name). In 2006 a list of anomalies in character names was first published, and, as of June 2021, there were 104 characters with identified issues, for example: U+2118 SCRIPT CAPITAL P: This is a small letter. The capital is U+1D4AB MATHEMATICAL SCRIPT CAPITAL P. U+034F COMBINING GRAPHEME JOINER: Does not join graphemes. U+A015 YI SYLLABLE WU: This is not a Yi syllable, but a Yi iteration mark. U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET: bracket is spelled incorrectly. Spelling errors are resolved by using Unicode alias names and abbreviations. Security issues Unicode has a large number of homoglyphs, many of which look very similar or identical to ASCII letters. Substitution of these can make an identifier or URL that looks correct, but directs to a different location than expected.
Mitigation requires disallowing these characters, displaying them differently, or requiring that they resolve to the same identifier; all of this is complicated by the huge and constantly changing set of characters. A security advisory was released in 2021 by two researchers, one from the University of Cambridge and the other affiliated with both Cambridge and the University of Edinburgh, in which they assert that the bidirectional (BiDi) control codes can be used to make large sections of code do something different from what they appear to do. See also Comparison of Unicode encodings Religious and political symbols in Unicode International Components for Unicode (ICU), now as ICU-TC a part of Unicode List of binary codes List of Unicode characters List of XML and HTML character entity references Open-source Unicode typefaces Standards related to Unicode Unicode symbols Universal Coded Character Set Lotus Multi-Byte Character Set (LMBCS), a parallel development with similar intentions Notes References Further reading The Unicode Standard, Version 3.0, The Unicode Consortium, Addison-Wesley Longman, Inc., April 2000. The Unicode Standard, Version 4.0, The Unicode Consortium, Addison-Wesley Professional, 27 August 2003. The Unicode Standard, Version 5.0, Fifth Edition, The Unicode Consortium, Addison-Wesley Professional, 27 October 2006. Julie D. Allen. The Unicode Standard, Version 6.0, The Unicode Consortium, Mountain View, 2011. The Complete Manual of Typography, James Felici, Adobe Press; 1st edition, 2002. Unicode: A Primer, Tony Graham, M&T books, 2000. Unicode Demystified: A Practical Programmer's Guide to the Encoding Standard, Richard Gillam, Addison-Wesley Professional; 1st edition, 2002. Unicode Explained, Jukka K. Korpela, O'Reilly; 1st edition, 2006. External links The Unicode Character Database, a text document listing the names, code points and properties of all Unicode characters Alan Wood's Unicode Resources contains lists of word processors with Unicode capability; fonts and characters are grouped by type; characters are presented in lists, not grids. The World's Writing Systems, all 294 known writing systems with their Unicode status (131 not yet encoded) Unicode BMP Fallback Font displays the Unicode 6.1 value of any character in a document, including in the Private Use Area, rather than the glyph itself. Character encoding Digital typography
32670973
https://en.wikipedia.org/wiki/House%20of%20Cards%20%28American%20TV%20series%29
House of Cards (American TV series)
House of Cards is an American political thriller streaming television series created by Beau Willimon. It is an adaptation of the 1990 BBC series of the same name and based on the 1989 novel of the same name by Michael Dobbs. The first 13-episode season was released on February 1, 2013, on the streaming service Netflix. House of Cards is the first TV series to have been produced by a studio for Netflix. House of Cards is set in Washington, D.C., and is the story of Frank Underwood (Kevin Spacey), an amoral politician and Democrat from South Carolina's 5th congressional district, and his equally ambitious wife Claire Underwood (Robin Wright). Frank, the House Majority Whip, is passed over for appointment as Secretary of State, so he initiates an elaborate plan to attain power, aided by Claire. The series deals with themes of ruthless pragmatism, manipulation, betrayal, and power. House of Cards received positive reviews and several award nominations, including 33 Primetime Emmy Award nominations, among them Outstanding Drama Series, Outstanding Lead Actor for Spacey, and Outstanding Lead Actress for Wright. It is the first original online-only streaming television series to receive major Emmy nominations. The show also earned eight Golden Globe Award nominations, with Wright winning for Best Actress – Television Series Drama in 2014 and Spacey winning for Best Actor – Television Series Drama in 2015. In 2017, following allegations of sexual misconduct, Netflix terminated its relationship with Spacey. The sixth (and final) season was produced and released in 2018 without his involvement. Plot Season 1 (2013) Frank Underwood, a power-hungry Democratic congressman from South Carolina and House majority whip, celebrates the 2012 election of President Garrett Walker, who had agreed to appoint him Secretary of State in exchange for his support. However, Underwood learns that the President wants him to promote his agenda in Congress and will not honor their agreement. Inwardly seething, Underwood presents himself as a helpful lieutenant to Walker. In reality, Underwood begins an elaborate plan behind the President's back. Frank's wife Claire runs an NGO, the Clean Water Initiative, which she uses to cultivate her own power; she seeks to expand its scope to the international stage, often using Frank's connections. Claire shares her husband's cold-hearted, ruthless pragmatism and lust for power, and they frequently scheme together to ensure the success of each other's ventures. They both work with Remy Danton, a corporate lobbyist and former Underwood staffer, to secure funds for their operations and expand their influence. Underwood begins a symbiotic, and ultimately sexual, relationship with Zoe Barnes, a young political reporter, secretly feeding her damaging stories about his political rivals to sway public opinion as needed. Meanwhile, he manipulates Peter Russo, a troubled alcoholic congressman from Pennsylvania, into helping him undermine Walker's pick for Secretary of State, Senator Michael Kern. Underwood eventually has Kern replaced with his own choice, Senator Catherine Durant. Underwood also uses Russo in a plot to end a teachers' strike and pass an education bill, which improves Underwood's standing with Walker. Because the new Vice President is the former Governor of Pennsylvania, a special election is to be held for his successor.
Underwood helps Russo get clean and props up his candidacy, but later uses sex worker Rachel Posner to break his sobriety and trigger his downfall shortly before the election. When Russo decides to come clean about his role in Underwood's schemes, Frank kills Russo and stages his death as a suicide. With the special election in chaos, Underwood convinces the Vice President to step down and run for his old position of governor – leaving the Vice Presidency open to Underwood, as was his plan all along. Underwood is introduced to Missouri billionaire Raymond Tusk, Walker's friend and advisor. Tusk reveals that he has been influencing Walker's decisions all along and convinced him to cancel the original agreement, and explains he will influence Walker to nominate Underwood for vice president if he does a favor benefiting Tusk's interests. Underwood counter-proposes to Tusk that they work together to fulfill both their interests, which Tusk accepts. Meanwhile, after ending their affair, Zoe begins piecing together clues about Underwood's various plots. The season ends when Underwood accepts the nomination for Vice President of the United States. Season 2 (2014) Zoe and two colleagues, Lucas Goodwin and Janine Skorsky, continue investigating Frank and ultimately locate Rachel. As a protective measure, Frank's aide Doug Stamper brings Rachel to a safe house while Frank lures Zoe to a Metro station and, unseen by witnesses or security cameras, pushes her in front of an oncoming train. Zoe's death compels Janine to abandon the investigation, but Lucas continues the search alone. He solicits the help of a hacker to retrieve Frank's text history. However, the hacker, Gavin Orsay, actually works for Doug and frames Lucas for cyberterrorism. Later, Gavin uses Rachel to extort Doug. Rachel, fearing potential harm and Doug's growing obsession with her, ambushes Doug and leaves him for dead, fleeing into hiding. After Frank begins his vice presidential duties, Claire becomes close with the First Lady and learns Walker's marriage is strained. Meanwhile, Frank aims to drive a wedge between Walker and Tusk. He meets Xander Feng, a Chinese businessman and ally of Tusk's, to engage in back-channel negotiations that Frank intentionally scuttles at the expense of Tusk's credibility. In the resulting trade war with China, Tusk opposes Walker's efforts to deal with the crisis and begins having a tribal casino funnel money into Republican PACs in retaliation. When Frank discovers that Feng is the source of the donations, he gets Feng to end his partnership with Tusk in exchange for a lucrative contract for a bridge over Long Island Sound. The Justice Department investigates the White House's ties to Feng and Tusk. Frank manipulates Walker into volunteering his travel records, which reveal his visits to a marriage counselor and raise questions about whether the donations were discussed. Wishing to avoid public disclosure of his personal issues, Walker has the White House Counsel coach the counselor, which the special prosecutor interprets as witness tampering. As the House Judiciary Committee begins drafting articles of impeachment, both Walker and Frank offer Tusk a presidential pardon in exchange for implicating the other. Tusk sides with Walker at first, leaving Frank no other option than to regain the president's trust as a friend. Walker then calls off his deal with Tusk, who testifies against him. With Walker forced to resign, Frank is sworn in as the 46th President of the United States.
Season 3 (2015) Six months into his presidency, Frank pushes for a controversial jobs program called America Works. Determined not to be a "placeholder" President, Underwood reverses his previous pledge and runs in the 2016 election, competing against Heather Dunbar in the Democratic primaries. Meanwhile, Claire is named the U.S. Ambassador to the United Nations and faces a crisis in the Jordan Valley, which pits Frank against Russian President Viktor Petrov. When Petrov has an American gay rights activist arrested in Russia, the Underwoods persuade him to secure a release. However, Petrov demands that the activist apologize on Russian television, leading the activist to kill himself while being visited by Claire. Later, after Russian troops are killed in the Jordan Valley, Petrov convinces Frank to remove Claire as Ambassador in exchange for a peaceful resolution. Claire resigns, citing a desire to be more active in Frank's campaign. When Frank refuses to reinstate Doug as his Chief of Staff, Doug appears to switch sides and begins working for Dunbar. Gavin helps Doug track down Rachel and delivers findings suggesting that she is dead, causing Doug to suffer a relapse. When Gavin reveals that Rachel is really alive, Doug brutalizes him into divulging her location. Doug finds Rachel living under a false identity in New Mexico, drives her into the desert, and eventually kills her. He returns to work as Frank's Chief of Staff after Remy resigns. Throughout the season, a writer named Thomas Yates is hired by Frank to write a biography for the purpose of promoting America Works. Yates, a fiction writer with a dark past of his own, decides to put a different spin on the book and writes less about Frank and more about his marriage with Claire. Yates reads Frank a prologue that Frank does not understand at first but agrees is a decent beginning. By the end of the season, Yates has the first chapter written and Frank, not liking the direction the book is taking, fires Yates. By the season finale, tensions between the Underwoods reach a point where Claire states her intent to leave Frank. Season 4 (2016) Claire relocates to Dallas and runs for Congress in her home district. The incumbent, Doris Jones, plans to retire and endorse her daughter Celia as her successor. Claire offers them federal funding for a key Planned Parenthood clinic in exchange for stepping down, but they refuse the offer. Frank wins back Claire's support by promising not to sabotage her campaign in Texas, but he later publicly endorses Celia in his State of the Union address. Frank and Claire travel to South Carolina for a primary, but a series of scandals causes Frank to narrowly lose the state to Dunbar. Frank discovers that Claire had been leaking information to Dunbar, and she threatens to continue unless he names her as his running mate. Frank refuses. Lucas Goodwin is released from prison, and seeks revenge against Frank for having him framed and Zoe Barnes killed. He explains his story to Dunbar, but she turns him away. Desperate, he attempts to assassinate Frank, severely wounding the president in the abdomen and killing bodyguard Edward Meechum, but not before Meechum fatally wounds Lucas. While Frank remains comatose, Donald Blythe is sworn in as Acting President of the United States. Blythe is indecisive during a military crisis involving Russia, and turns to Claire for guidance.
Claire goes against Frank's wishes by convincing Blythe to involve China and secure a meeting with Petrov, where she brokers an ambitious peace deal. Doug leaks information about Dunbar's secret meeting with Lucas and forces her to suspend her campaign. Frank recovers and resumes his position as President, agreeing to put Claire on the ticket for the upcoming election. Tom Hammerschmidt, Zoe and Lucas's former news editor, digs deeper into the latter's claims of Frank's misdeeds. He approaches Remy and, with his help, starts to piece together Frank's corruption. Tom also meets with Walker, convincing him to help by appealing to his anger for being forced to resign. Remy Danton and Jackie Sharp also decide to go on the record against Frank to lend credibility to the story. An American family is kidnapped in Tennessee by two supporters of a radical Islamist group called the Islamic Caliphate Organization (ICO), who agree to negotiate only with the ambitious Republican nominee, Governor Will Conway. Frank invites Conway to the White House to assist in the negotiations as a publicity stunt, and Conway helps buy critical time in locating the suspects. However, tensions between the Conways and Underwoods lead to the governor ending his role in the crisis. Frank and Claire allow the kidnappers to speak to the deposed leader of ICO, Yusuf al Ahmadi, after successfully obtaining the release of two of the hostages. Instead of defusing the situation as he agreed, al Ahmadi urges the kidnappers to kill the remaining hostage and broadcast the killing to the public. Meanwhile, Hammerschmidt publishes his story, which threatens to end Frank's campaign weeks before the election. Claire urges Frank to use a heavier hand in the situation, and they decide to fight back by creating chaos. Frank addresses the public declaring that the nation is at total war, ordering the full force of the military be used to combat global terrorism regardless of the cost. The season ends with Frank and Claire watching the live execution of the hostage together, and Claire breaking the fourth wall for the first time by looking into the camera along with Frank. Season 5 (2017) In the weeks before the 2016 election, Frank uses ICO as a pretext for enacting martial law in urban areas and consolidating polling places in key states. Arranged mainly through back channels with Democratic governors, this is officially done in the name of safety, but in practice disenfranchises rural Republican voters. To keep the strategy of fear going, Doug blackmails hacker Aidan Macallan into launching a massive cyberattack on the NSA, slowing down Internet traffic and wiping out hundreds of thousands of files. The Underwood Administration pins the attack on ICO. On Election Day, the result hinges on Pennsylvania, Tennessee, and Ohio. The early returns seem to favor Conway. Underwood's political machine stages a terrorist attack in Knoxville, Tennessee, which is pinned on a local suspected ICO sympathizer. With Pennsylvania secured by Conway and Ohio seeming to swing his way, Frank unofficially calls Conway directly to concede. However, this is merely a tactic to put Conway off guard, as the Underwoods contact Ohio's governor and convince him to close the polls early on the pretense of a terrorist threat. Ohio and Tennessee refuse to certify the election, and neither candidate reaches the requisite number of electoral votes. Nine weeks after the unresolved election, the Twelfth Amendment is invoked, with the vote being put up to members of Congress.
During a meeting with the Congressional Black Caucus, cracks begin to appear in Conway's facade as he loses his cool. In spite of this, Frank's own baggage and 12% approval rating allow him only a tie with Conway in the House, while Claire manages to secure the Senate vote, becoming Acting President of the United States. In light of the tie, Claire orders a special election for Ohio and Tennessee. Meanwhile, Jane Davis, a low-ranking Commerce Department official who has a wide-ranging network of connections and influence, begins working closely with the Underwoods. As a private citizen for the time being, Frank attends a meeting of powerful men at a secret society known as Elysian Fields, in an effort to secure their influence for votes in the upcoming special election. Meanwhile, Conway has a mental breakdown on his private plane, believing that the election was stolen from him. Eventually, this and other leaks from his campaign are slowly dripped to the media in a manner that seems unconnected to the Underwoods. Seeing that his candidate is losing, Conway's campaign manager, Mark Usher, switches sides to the Underwoods. The Underwood ticket wins both Ohio and Tennessee, and Frank is sworn in as President and Claire as Vice President. Meanwhile, Hammerschmidt continues to investigate Zoe's death, and is given information by an unknown leaker within the White House. Major document dumps are made available to Hammerschmidt, which, among other consequences, prompt an impeachment hearing against Frank. In response, the Underwoods set up extensive surveillance on all White House personnel. Eventually, the leaker makes a voice-modulated call to Hammerschmidt, implicating Doug in Zoe's murder. The Underwoods convince Doug to take the fall for killing Zoe, and the leaker is revealed to be Frank himself. The leaks are revealed to be part of Frank's master plan to resign the presidency to Claire, as he believes his thirst for power can be better satisfied in the private sector, working alongside his wife's presidency. Frank, concerned about Secretary Durant's intention to testify at the impeachment hearing, pushes her down a short flight of stairs upon accepting her resignation, hospitalizing her. Claire poisons Yates with an overdose of Gelsemium provided to her by Jane, concerned that he knows too much. Finally, contractors working for the Underwoods eliminate LeAnn by ramming her car off the road into a guard rail. Frank resigns as President, leaving Claire as the 47th President of the United States. The two await the proper moment for Claire to pardon him. This comes in the form of a military special operations unit finding and taking out the leader of ICO, which moves media focus away from Frank. Standing in the Oval Office, Claire appears to reconsider pardoning Frank, ignoring multiple concerned calls from him regarding the matter. The season ends with Claire ignoring yet another call from Frank, then breaking the fourth wall to tell the viewers, "My turn." Season 6 (2018) One hundred days after Claire takes office, she faces increased scrutiny, particularly in light of her husband's death following his resignation. The brother-sister duo of Bill and Annette Shepherd seek to influence Claire. The Shepherds are connected with Mark Usher, whom Claire has made Vice President, and who is having an affair with Annette. Annette's son Duncan puts pressure on Claire through his media company, Gardner Analytics.
Claire and the Shepherds battle over deregulation measures, and Claire uses a chemical leak from one of the Shepherds' operations to embarrass the two. Doug, meanwhile, is in therapy following his confession to Zoe's murder, and Claire uses Assistant Director Green and his psychiatrist to monitor him. The Shepherds decide to influence Claire by other means, including through a Supreme Court justice they convince her to nominate. They and Seth Grayson also develop a mobile application which can secretly be used to monitor the user's activity, including their location. Secretary of State Durant also comes within the Shepherds' sphere of influence, and they persuade her to speak with prosecutors investigating the Underwoods as the Shepherds become increasingly distant from Claire. As Durant's testimony proves more threatening, Claire and Jane Davis plot to assassinate her, but she fakes her death and flees the country. Following Durant's fake death, Claire and President Petrov forge a deal on Syria. Claire then discovers that Durant is alive and living in France with Petrov's help. Following this, Claire disappears for three weeks, prompting questions about her ability to lead, and leading Usher to plan to use the 25th Amendment to remove her from office. Claire foils Usher's plan and fires her Cabinet, replacing it with an all-female Cabinet, much to Annette's surprise. Annette plans to use Claire's prior abortions against her, but Claire retaliates by revealing to Duncan he is not Annette's biological son. With her new Cabinet in place, Claire decides to undertake a new, progressive agenda. The Shepherds, meanwhile, continue to plot her downfall, enlisting the help of Brett Cole, an ambitious Congressman who seeks to become Speaker of the House. They also seek the help of Doug, but he initially refuses. Doug meets with Hammerschmidt, providing him information on Frank's actions. Determined to strike back against her enemies, Claire frames Usher for Yates's murder, claiming he colluded with Russia to do so. She also has Hammerschmidt, Davis, and Durant killed. Claire then reveals to Doug that she is pregnant with Frank's child, who will become his heir even though Frank secretly left his assets to Doug. Four months after the murders, Claire reverts to her maiden name and continues her progressive agenda. Annette, now estranged from Bill, plots with Usher, no longer Vice President, to assassinate Claire. She asks Doug to perpetrate the act, but he is reluctant, mainly desiring to protect Frank's legacy. Claire, through now-Speaker Cole, blackmails Justice Abruzzo into recusing himself in a case dealing with her power to launch nuclear weapons. Janine Skorsky and Doug continue working to expose the Underwoods' crimes, with Doug leaking contents of Frank's secret audio diary while Claire blames everything on Frank. Claire then uses the pretense of ICO obtaining a nuclear weapon to create a crisis, leading the Shepherds and Doug to accelerate their plans. After sending a copy of Frank's audio diary and the letter opener to Claire, Doug visits her in the Oval Office where he admits that he killed Frank because he was undermining his own legacy. Doug threatens and wounds Claire with the letter opener, but when he draws back, she grabs it and stabs him in the stomach. As he lies bleeding on the floor, she covers his mouth and suffocates him, completely unaware that, thanks to Doug, Janine Skorsky is going to expose her crimes. Cast and characters Kevin Spacey as Francis J.
"Frank" Underwood, a Democrat from South Carolina's 5th congressional district. He is House Majority Whip in season one, Vice President of the United States in season two, 46th President of the United States in seasons three to five, and the First Gentleman of the United States in season five. (seasons 1–5) Robin Wright as Claire Underwood, Frank's wife. She runs the Clean Water Initiative, a nongovernmental organization, in season one before giving it up to become Second Lady of the United States in season two. She then becomes United States Ambassador to the United Nations in season three and First Lady of the United States in seasons three to five. In season five, she is briefly acting President of the United States before becoming Vice President of the United States and finally, becomes the 47th President of the United States at the end of the season. Kate Mara as Zoe Barnes, a reporter for The Washington Herald (and later Slugline). She forms an intimate relationship with Frank Underwood, her political informant, who in turn uses her as a mouthpiece to leak stories to the press and irk his political rivals. (season 1; guest seasons 2 and 4) Corey Stoll as Peter Russo, a Democratic congressman from Pennsylvania's 1st congressional district and recovering addict. (season 1; guest season 4) Michael Kelly as Douglas "Doug" Stamper, Underwood's unwaveringly loyal White House Chief of Staff and confidant. He is temporarily replaced by Remy Danton as chief of staff after his injury for most of season three, but returns as his new chief of staff at the end of the season. Kristen Connolly as Christina Gallagher, a congressional staffer and personal assistant to President Walker, and lover to Peter Russo. (seasons 1–2) Sakina Jaffrey as Linda Vasquez, President Walker's White House Chief of Staff. (seasons 1–2; guest season 6) Sandrine Holt as Gillian Cole, the leader of a grass-roots organization called World Well that provides clean water to developing countries. (season 1; guest season 2) Constance Zimmer as Janine Skorsky, a reporter for The Washington Herald. (seasons 1–2, 6; guest season 4) Michel Gill as Garrett Walker, the 45th President of the United States, and former Governor of Colorado. He trusts Underwood as a close adviser and lieutenant, but remains blind to his machinations. (seasons 1–2, 4–5) Sebastian Arcelus as Lucas Goodwin, an editor at The Washington Herald and later Zoe's boyfriend. (seasons 1–2 and 4) Mahershala Ali as Remy Danton, a lawyer for Glendon Hill and lobbyist, who works for natural gas company SanCorp in season one and Raymond Tusk in season two. He worked in Underwood's congressional office as Communications Director prior to the series, and after severing ties with Tusk, and serves as Underwood's chief of staff for most of season three until quitting at the end of the season. (seasons 1–4) Boris McGiver as Tom Hammerschmidt, the editor-in-chief of The Washington Herald. He opens an investigation into the secret dealings of Frank and his inner circle in season four. (seasons 1–2, 4–6) Nathan Darrow as Edward Meechum, a member of the United States Capitol Police and the Underwoods' bodyguard and driver. (seasons 1–4) Reg E. Cathey as Freddy Hayes, the owner of Freddy's BBQ, and one of Underwood's few true friends and confidants. When Raymond Tusk exposes Freddy's criminal past, Freddy loses out on a franchise opportunity; and he eventually gets and leaves a job as a White House groundskeeper. 
(seasons 1–4) Rachel Brosnahan as Rachel Posner, a prostitute trying to make a better life for herself while under Stamper's control. (seasons 1–3) Molly Parker as Jacqueline "Jackie" Sharp, a Democratic congresswoman from California, who succeeded Frank as majority whip. She also briefly ran for the Democratic nomination for President in season three. (seasons 2–4) Gerald McRaney as Raymond Tusk, a billionaire businessman with a wide network of influence, although he prefers to live modestly. (seasons 1–2, 4–5) Jayne Atkinson as Catherine "Cathy" Durant, a Democratic Senator from Louisiana and Secretary of State. Jimmi Simpson as Gavin Orsay, a computer hacker turned reluctant FBI informant, who works secretly with Doug Stamper in exchange for help escaping the country. (seasons 2–3) Mozhan Marnò as Wall Street Telegraph reporter Ayla Sayyad. She is assigned to the White House and does freelance investigative reporting. (seasons 2–3) Elizabeth Marvel as Heather Dunbar, a lawyer and Solicitor General of the United States in the Walker administration. She runs against Underwood for the Democratic nomination. (seasons 2–4) Derek Cecil as Seth Grayson, a political operative who becomes press secretary for Vice President Underwood through blackmail. (seasons 2–6) Paul Sparks as Thomas Yates, a successful author whom Frank asks to write a book about the America Works jobs program. He stays on as a speechwriter and Claire's lover. (seasons 3–5) Kim Dickens as Kate Baldwin, the chief political reporter of the Wall Street Telegraph. She replaces Sayyad at the White House after Seth Grayson dismisses Sayyad for protocol violations. (seasons 3–5) Lars Mikkelsen as Viktor Petrov, the President of Russia. (seasons 3–6) Joel Kinnaman as Will Conway, the Republican Governor of New York and nominee for President of the United States running against Frank. (seasons 4–5) Neve Campbell as LeAnn Harvey, a Texas-based political consultant Claire hires to run her congressional campaign. She later becomes the campaign manager for the Underwoods for the 2016 election. (seasons 4–5) Dominique McElligott as Hannah Conway, the wife of New York Governor and Republican presidential nominee Will Conway. (seasons 4–5) Campbell Scott as Mark Usher, Conway's campaign manager. He later joins the Underwoods' inner circle as a "special advisor" and becomes the Vice President of the United States under Claire Underwood. (seasons 5–6) Patricia Clarkson as Jane Davis, Deputy Undersecretary of Commerce for international trade. She is very well connected and able to successfully negotiate back-channel dealings for the Underwoods. (seasons 5–6) Damian Young as Aidan Macallan, a data scientist and NSA contractor, who is friends with LeAnn Harvey. (seasons 4–5) Korey Jackson as Sean Jeffries, a young reporter at the Washington Herald working under Hammerschmidt. (season 5) James Martinez as Alex Romero, a Democratic congressman who leads the House Intelligence Committee's investigation into Frank's alleged abuse of power. (season 5) Diane Lane as Annette Shepherd, a former childhood classmate of Claire's who is the co-head and public face of Shepherd Unlimited, a leading industrial conglomerate that has worked for years to shape and influence U.S. policy. (season 6) Greg Kinnear as Bill Shepherd, Annette's behind-the-scenes billionaire brother and co-head of Shepherd Unlimited, who prefers to stay out of the limelight but is ruthless when it comes to playing politics to suit his business needs.
(season 6) Cody Fern as Duncan Shepherd, Annette's ambitious and devoted son who represents the next generation of DC power players. (season 6) Production Conception The series was one of the earliest shows to launch in the "streaming era." Independent studio Media Rights Capital (MRC), founded by Mordecai Wiczyk and Asif Satchu and producer of films such as Babel, purchased the rights to House of Cards with the intention of creating a series. While David Fincher was finishing production on his 2008 film The Curious Case of Benjamin Button, his agent showed him House of Cards, a BBC series starring Ian Richardson. Fincher was interested in producing a potential series with Eric Roth. Fincher said that he was interested in doing television because of its long-form nature, adding that working in film does not allow for complex characterizations the way that television allows. "I felt for the past ten years that the best writing that was happening for actors was happening in television. And so I had been looking to do something that was longer form," Fincher stated. MRC approached different networks about the series, including HBO, Showtime and AMC, but Netflix, hoping to launch its own original programming, outbid the other networks. Ted Sarandos, Netflix's chief content officer, looked at the data of Netflix users' streaming habits and concluded that there was an audience for Fincher and Spacey. "It looked incredibly promising," he said, "kind of the perfect storm of material and talent." In finding a writer to adapt the series, Fincher stated that they needed someone who could "faithfully translate parliamentary politics to Washington." Beau Willimon, who had served as an aide to Chuck Schumer, Howard Dean and Hillary Clinton, was hired and completed the pilot script in early 2011. Willimon saw the opportunity to create an entirely new series from the original and deepen its overall story. The project was first announced in March 2011, with Kevin Spacey attached to star and serve as an executive producer. Fincher was announced as director for the first two episodes, from scripts by Willimon. Netflix ordered 26 episodes to air over two seasons. Spacey called Netflix's model of publishing all episodes at once a "new perspective." He added that Netflix's commitment to two full seasons gave the series greater continuity. "We know exactly where we are going," he said. In a speech at the Edinburgh International Television Festival, he also noted that while other networks were interested in the show, they all wanted a pilot, whereas Netflix – relying solely on their statistics – ordered the series directly. In January 2016, it was announced that show creator, executive producer and showrunner Beau Willimon would depart following season 4. He was replaced by Frank Pugliese and Melissa James Gibson, both of whom had begun writing for the series in season 3. Casting Fincher stated that every main cast member was their first choice. At the first read-through, he said "I want everybody here to know that you represent our first choice — each actor here represents our first choice for these characters. So do not fuck this up." Spacey, whose last regular television role was in the series Wiseguy, which ran from 1987 until 1990, responded positively to the script. He then played Richard III at The Old Vic, which Fincher said was "great training." Spacey supported the decision to release all of the episodes at once, believing that this type of release pattern would be increasingly common with television shows.
He said, "When I ask my friends what they did with their weekend, they say, 'Oh, I stayed in and watched three seasons of Breaking Bad,' or, 'two seasons of Game of Thrones.'" He was officially cast on March 18, 2011. Robin Wright was approached by Fincher to star in the series when they worked together on The Girl with the Dragon Tattoo. She was cast as Claire Underwood in June 2011. Kate Mara was cast as Zoe Barnes in early February 2012. Mara's sister, Rooney, worked with Fincher on The Girl with the Dragon Tattoo, and when Kate Mara read the part of Zoe, she "fell in love with the character" and asked her sister to "put in a word for me with Fincher." The next month, she got a call for an audition. Filming Locations Principal photography for the first season began in January 2012 in Harford County, Maryland, on the Eastern Seaboard of the United States. Filming of exterior scenes in 2013 centered primarily in and around the city of Baltimore, Maryland, northeast of Washington, D.C. Among the numerous exteriors filmed in Baltimore, but set in Washington, D.C., are: Francis and Claire Underwood's residence, Zoe Barnes' apartment, Freddy's BBQ Rib Joint, the Clean Water Initiative building where Claire works, The Washington Herald offices, the Washington Opera House, the Secretary of State's building, Hotel Cotesworth, The Georgetown Hotel, Werner's Bar, Tio Pepe's, the DuPont Circle Bar, as well as scenes set in other locations, including Peter Russo's campaign rally in Pennsylvania and The Sentinel military academy's Francis J. Underwood Library and Waldron Hall in South Carolina. Most of the interior scenes in House of Cards are filmed in a large industrial warehouse located in Joppa, Maryland, also in Harford County, northeast of Baltimore. The warehouse is used for the filming of some of the most iconic scenes of the series, such as the full-scale reconstruction of most of the West Wing of the White House, including the Oval Office, the Congressional offices and corridors, the large 'Slugline' open-plan office interior, and domestic interiors such as the large townhouse rooms of the Underwood residence and a large loft apartment. Extensive filming for season 5 was also done at the Maryland Historical Society in Mount Vernon, Baltimore. The series uses green screen to augment the live action, inserting views of outdoor scenes in windows and broadcast images on TV monitors, often in post-production. The production designer, Steve Arnold, also describes in detail the use of a three-sided green screen to insert street scenes outside car windows, with synchronized LED screens above the car (and out of camera shot) that emit the appropriate light onto the actors and parts of the car, such as window frames: "All the driving in the show, anything inside the vehicle is done on stage, in a room that is a big three-sided green screen space. The car does not move, the actors are in the car, and the cameras are set up around them. We have very long strips of LED monitors hung above the car. We had a camera crew go to Washington, D.C. to drive around and shoot plates for what you see outside when you're driving. And that is fed into the LED screens above the car. So as the scene is progressing, the LED screens are synched up to emit interactive light to match the light conditions you see in the scenery you're driving past (that will be added in post).
All the reflections on the car windows, the window frames and door jambs is being shot while we're shooting the actors in the car. Then in post the green screens are replaced with the synced up driving plates, and it works really well. It gives you the sense of light passing over the actors' faces, matching the lighting that is in the image of the plate". In June 2014, filming of three episodes in the UN Security Council chamber was vetoed by Russia at the last minute. However, the show was able to film in other parts of the UN Building. In August 2014, the show filmed a "mock-motorcade scene" in Washington, D.C. In December 2014, the show filmed in Española, Santa Fe, and Las Vegas, New Mexico. Tax credits According to the Maryland Film Office, the state provided millions in tax credits to subsidize the production costs. For season 1, the company received a final tax credit of $11.6 million. Production costs were $63 million, more than 1,800 Maryland businesses were involved, and nearly 2,200 Marylanders were hired, with a $138 million economic impact. For season 2, the company reportedly expected a tax credit of about $15 million, as filming costs were more than $55 million. Nearly 2,000 Maryland businesses benefited from the production and more than 3,700 Marylanders were hired, with a $120 million estimated economic impact. For season 3, the company filed a letter of intent to film, and estimated costs and economic impact similar to season 2. Under the 2014 formula, "the show would qualify for up to $15 million in tax credits." Final season and firing of Spacey On October 11, 2017, The Baltimore Sun reported that House of Cards had been renewed for a sixth season and that filming would begin by the end of October 2017. On October 29, actor Anthony Rapp publicly stated that lead actor Spacey had made a sexual advance on him at a 1986 party when Rapp was 14. The following day, Netflix announced that the upcoming sixth season of House of Cards would be its last. Multiple sources stated that the decision to end the series was made prior to Rapp's accusation, but the announcement nevertheless caused suspicions for its timing. The following day, it was announced that production on the season would be temporarily suspended, according to an official joint statement from Netflix and MRC, "to give us time to review the current situation and to address any concerns of our cast and crew". On November 3, 2017, Netflix announced that they would no longer be associated with Spacey in any capacity whatsoever. On December 4, 2017, Ted Sarandos, Netflix's chief content officer, announced that production would restart in 2018 with Robin Wright in the lead, and revealed that the final season of the show would now consist of eight episodes. Spacey was removed from the cast and as executive producer. In 2019, the last remaining related criminal charges against him were dropped. On December 24, 2018, Spacey posted an unofficial short film titled Let Me Be Frank to his YouTube channel, in which, in-character as Francis "Frank" Underwood, he denied the allegations and stated that his character was not in fact dead. The video has been described in the media as "bizarre", "extraordinarily odd", "unsettling", and "alarming"; several actors — including Patricia Arquette, Ellen Barkin, and Rob Lowe — have criticized and ridiculed it on Twitter. As of September 2020, the video has over 12 million views, with 277,000 likes and 74,000 dislikes.
Spacey posted a follow-up short film to Let Me Be Frank, titled KTWK, to his YouTube channel on December 24, 2019. Release Broadcast In Australia, where Netflix was not available prior to 2015, the series was broadcast on Showcase, premiering on May 7, 2013. Australian subscription TV provider Foxtel, the owner of Showcase, offered the entire first season to Showcase subscribers via their On Demand feature on Foxtel set-top boxes connected to the internet, as well as through their Xbox 360, Internet TV, and mobile (Foxtel Go) services. Although the entire season was made available, it maintained its weekly timeslot on Showcase. Season two returned to Showcase on February 15, 2014. As with season one, the entire season was made available on demand to Showcase subscribers while also retaining a weekly timeslot. The series has also been made available to non-Foxtel subscribers through Apple's Apple TV service. Prior to Netflix's Australian launch on March 24, 2015, Netflix reclaimed the rights to House of Cards from Showcase, with season 3 premiering on Netflix at launch. In New Zealand, where Netflix was unavailable prior to 2015, season 1 premiered on TV3 in early 2014, followed immediately by season 2. Netflix launched in New Zealand on March 24, 2015, but unlike in Australia (where Netflix launched on the same day with House of Cards season 3 available), the series was initially unavailable there. In India, where Netflix was unavailable prior to January 2016, House of Cards premiered on February 20, 2014, on Zee Café. Seasons 1 and 2 were aired back-to-back. The channel aired all 13 episodes of season 3 on March 28 and 29, 2015. This marked the first time that an English-language general entertainment channel in India aired all episodes of the latest season of a series together. The move was intended to satisfy viewers' urge to binge-watch the season. Although Netflix launched in India in January 2016, House of Cards was not available on the service until March 4. All episodes of season 4 had their television premiere on Zee Café on March 12 and 13, 2016. House of Cards was acquired by Canadian superstation CHCH for broadcast beginning September 13, 2017, making the program available throughout Canada on cable and free-to-air in CHCH's broadcast region, which includes portions of the United States. However, the show was removed from the CHCH primetime schedule two months later, following the sexual assault allegations against Kevin Spacey. House of Cards began airing in the United Kingdom on September 19, 2018, on Virgin TV Ultra HD, a newly established UHD/4K entertainment channel. Home media Season 1 was released on DVD and Blu-ray Disc by Sony Pictures Home Entertainment in region 1 on June 11, 2013; season 2 was released on June 17, 2014; season 3 on July 7, 2015; season 4 on July 5, 2016; season 5 on October 3, 2017; and season 6 on March 5, 2019. Reception Critical response Season 1 The first season received positive reviews from critics. On Rotten Tomatoes, the first season holds a rating of 86%, based on 43 reviews, with an average rating of 8.21/10. The site's consensus reads, "Bolstered by strong performances — especially from Kevin Spacey — and surehanded direction, House of Cards is a slick, engrossing drama that may redefine how television is produced." On Metacritic, the first season has a score of 76 out of 100, based on 25 critics, indicating "generally favorable reviews".
USA Today critic Robert Bianco praised the series, particularly Spacey's and Wright's lead performances, stating "If you think network executives are nervous, imagine the actors who have to go up against that pair in the Emmys." Tom Gilatto of People Weekly lauded the first two episodes, calling them "cinematically rich, full of sleek, oily pools of darkness." In The Denver Post, critic Joanne Ostrow said the series is "[d]eeply cynical about human beings as well as politics and almost gleeful in its portrayal of limitless ambition". She added: "House of Cards is a wonderfully sour take on power and corruption." Writing in The New York Times, critic Alessandra Stanley noted that the writing in the series sometimes fails to match the high quality of its acting: "Unfortunately Mr. Spacey's lines don't always live up to the subtle power of his performance; the writing isn't Shakespeare, or even Aaron Sorkin, and at times, it turns strangely trite." Nevertheless, she lauded House of Cards as an entertainment that "revels in the familiar but always entertaining underbelly of government." Andrew Davies, the writer of the original British TV series, stated that Spacey's character lacks the "charm" of Ian Richardson's, while The Independent praised Spacey's portrayal as a more "menacing" character, "hiding his rage behind Southern charm and old-fashioned courtesy." Randy Shaw, writing for The Huffington Post, criticized House of Cards for glorifying "union bashing and entitlement slashing within a political landscape whose absence of activist groups or anyone remotely progressive resembles a Republican fantasy world". Critics such as Time television critic James Poniewozik and Hank Stuever of The Washington Post compared the series to Boss. Many critics have noted that, like the UK show and novel of the same name, it is heavily influenced by both Macbeth and Richard III. In addition, some critics find elements of Othello, such as Iago's bitter ire. Season 2 The second season received positive reviews from critics. On Rotten Tomatoes the season has a rating of 84%, based on 45 reviews, with an average rating of 8.03/10. The site's critical consensus reads, "House of Cards proves just as bingeworthy in its second season, with more of the strong performances, writing, and visual design that made the first season so addictive." On Metacritic the season has a score of 80 out of 100, based on 25 critics, indicating "generally favorable reviews". But as the season progressed, reviews became more mixed. Jen Chaney of Vulture wrote that the second season "felt kind of empty" and that "the closest it came to feeling emotionally rich was when it focused on Claire". At the end of the second season, Alan Sepinwall of HitFix wrote that the show is a "ridiculous political potboiler that takes itself too seriously"; he gave the overall season a C-. Season 3 The third season received mostly positive reviews, although many critics noted it felt repetitive. On Rotten Tomatoes, the season has a rating of 74%, based on 54 reviews, with an average rating of 6.97/10. The site's consensus reads, "Season three introduces intriguing new political and personal elements to Frank Underwood's character, even if it feels like more of the same for some." On Metacritic, the season has a score of 76 out of 100, based on 24 critics, indicating "generally favorable reviews".
Negative reviews came from The Daily Beast's Nick Gillespie, who accused the writers of "descending into prosaic moralism" in season 3 and asserted that it deviates from the show's original intent, and from Michael Wolff of USA Today, who plainly asserted that "the third season of House of Cards is no good ... not just no good, but incompetent, a shambles, lost". IndieWire named the season one of the most disappointing shows of 2015. Season 4 The fourth season received positive reviews from critics. On Rotten Tomatoes, the season has a rating of 86%, based on 36 reviews, with an average rating of 7.69/10. The site's critical consensus reads, "House of Cards retains its binge-worthiness by ratcheting up the drama, and deepening Robin Wright's role even further." On Metacritic, the season has a score of 76 out of 100, based on 17 critics, indicating "generally favorable reviews". Ben Travers of IndieWire had a positive response to season four, calling it an upgrade from what he perceived as a "messy and unsatisfying melodramatic" third season, writing that "House of Cards is aiming at authenticity, and–for what feels like the first time–consistently finding it." Emily Van DerWerff of Vox had a mixed review of season four, criticizing the repetitive and predictable nature of the series, writing: "There's no such mystery with House of Cards, where you know exactly what will happen as surely as you do on NCIS. Obstacles will present themselves, but Frank (the hammy Kevin Spacey) and Claire (the almost perfect Robin Wright) Underwood will overcome. What you see is what you get." The choice to have Frank and Claire run as running mates was highly criticized by some reviewers. Jonathan Holmes of Radio Times wrote that "there are limits to the stupidity viewers are willing to accept, and with season four [House of Cards] may have stepped over the line. Claire demanding her selection as Frank's running mate is stupid. Moronic. It turns a canny political operator into a ham-brained fish-eyed jar-opener." Spencer Kornhaber of The Atlantic wrote that "in moments like this it's good to remember that Cards really, fundamentally is a stupid TV show instead of a particularly cunning comment on political reality." Season 5 The fifth season received mixed to positive reviews from critics. On Rotten Tomatoes, the season has an approval rating of 71% based on 45 reviews, with an average rating of 7.2/10. The site's critical consensus reads, "House of Cards enjoys a confident return to form this season, though its outlandish edge is tempered slightly by the current political climate." On Metacritic, the season has a score of 60 out of 100, based on 11 critics, indicating "mixed or average reviews". After the fifth season received a Best Drama Series nomination at the 69th Primetime Emmy Awards, Brian Grubb of Uproxx wrote: House of Cards has not been very good for multiple seasons now, if it was ever that good. I can understand the original excitement about it. Kevin Spacey and Robin Wright were on television. And not even "television," really. They were on a big budget series that was made for and by a streaming service. David Fincher was involved and even directed a few episodes. This was a borderline revolutionary development. ... I don't see how anyone who watched it can think it deserves a place in the best six or seven dramas on television. Season 6 The sixth season received mixed reviews from critics, with many expressing disappointment over Kevin Spacey's absence.
On Rotten Tomatoes, the season has an approval rating of 67% based on 66 reviews, with an average rating of 6.11/10. The website's critical consensus reads, "House of Cards folds slightly under the weight of its labyrinthian ending – thankfully Robin Wright's commanding performance is strong enough to keep it standing strong." On Metacritic, the season has a score of 62 out of 100, based on 23 critics, indicating "generally favorable reviews". In the U.S., average viewership for the first week of season 6 dropped from 1.9 million to 1.5 million compared to the previous season. The final season also received negative reviews, including some related to the absence of Kevin Spacey. Accolades For its first season, House of Cards received nine nominations for the 65th Primetime Emmy Awards in 2013, becoming the first original online-only streaming television series to receive major nominations. Among House of Cards' nine nominations, "Chapter 1" received four nominations for the 65th Primetime Emmy Awards and 65th Primetime Creative Arts Emmy Awards, becoming the first webisode (online-only episode) of a television series to receive a major Primetime Emmy Award nomination: Outstanding Directing for a Drama Series for David Fincher. The episode also received several Creative Arts Emmy Award nominations, including Outstanding Cinematography for a Single-Camera Series, Outstanding Single-Camera Picture Editing for a Drama Series, and Outstanding Music Composition for a Series (Original Dramatic). Although the Primetime Emmy Award for Outstanding Lead Actor in a Drama Series is not a category that formally recognizes an episode, Spacey submitted "Chapter 1" for consideration to earn his nomination. At the Primetime Creative Arts Emmy Award presentation, "Chapter 1" and Eigil Bryld earned the Primetime Emmy Award for Outstanding Cinematography for a Single-Camera Series, making "Chapter 1" the first Emmy-awarded webisode. At the Primetime Emmy Awards ceremony, Fincher won Outstanding Directing for a Drama Series for directing the pilot episode "Chapter 1", in addition to the pair of Creative Arts Emmy Awards, making "Chapter 1" the first Primetime Emmy-awarded webisode. None of the Emmy awards were considered to be in major categories. For the 71st Golden Globe Awards, House of Cards received four nominations. Among those nominations was Wright for the Golden Globe Award for Best Actress – Television Series Drama for her portrayal of Claire Underwood, which she won. In so doing, she became the first actress to win a Golden Globe Award for an online-only streaming television series. For its second season, House of Cards received 13 Primetime Emmy Award nominations, including Outstanding Drama Series, Kevin Spacey for Outstanding Lead Actor in a Drama Series, Robin Wright for Outstanding Lead Actress in a Drama Series, Kate Mara for Outstanding Guest Actress in a Drama Series, and Reg E. Cathey for Outstanding Guest Actor in a Drama Series. At the 72nd Golden Globe Awards, the series was nominated for Best Drama Series and Wright was nominated for Best Drama Actress, while Spacey won for Best Drama Actor.
Notes References External links House of Cards at Emmys.com 2010s American LGBT-related drama television series 2010s American political television series 2013 American television series debuts 2018 American television series endings American political drama television series American television series based on British television series English-language television shows English-language Netflix original programming Peabody Award-winning television programs Primetime Emmy Award-winning television series Salary controversies in television Serial drama television series Television shows based on British novels Television series by Media Rights Capital Television series by Sony Pictures Television Television shows set in Washington, D.C.
99482
https://en.wikipedia.org/wiki/Deathmatch
Deathmatch
Deathmatch, also known as free-for-all, is a gameplay mode integrated into many shooter games, including first-person shooter (FPS) and real-time strategy (RTS) video games, where the goal is to kill (or "frag") the other players' characters as many times as possible. The deathmatch may end on a frag limit or a time limit, and the winner is the player that accumulated the greatest number of frags. Deathmatch is an evolution of the competitive multiplayer modes found in genres such as fighting and racing games, carried over into other genres. Description In a typical first-person shooter (FPS) deathmatch session, players connect individual computers together via a computer network in a peer-to-peer model or a client–server model, either locally or over the Internet. Each individual computer generates the first-person view that the computer character sees in the virtual world; hence the player sees through the eyes of the computer character. Players are able to control their characters and interact with the virtual world by using various controller systems. When using a PC, a typical control system is a mouse and keyboard combined. For example, the movement of the mouse could control the player's viewpoint from the character, and the mouse buttons may be used for weapon trigger control. Certain keys on the keyboard would control movement around the virtual scenery and also often provide additional functions. Games consoles, however, use handheld 'control pads' which normally have a number of buttons and joysticks (or 'thumbsticks') which provide the same functions as the mouse and keyboard. Players often have the option to communicate with each other during the game by using microphones and speakers, headsets, or by 'instant chat' messages if using a PC. Every computer or console in the game renders the virtual world and characters in real time, fast enough that the number of frames per second makes the visual simulation seem like standard full-motion video or better. Manufacturers of games consoles use different hardware in their products, which means that the quality and performance of the games vary. Deathmatches have different rules and goals depending on the game, but a typical FPS deathmatch session pits every player against every other player. The game begins with each player being spawned (starting) at random locations—picked from a fixed predefined set. Being spawned entails having the score, health, armor and equipment reset to default values, which usually means 0 score, full (100%) health, no armor, and a basic firearm and a melee weapon. After a session has commenced, arbitrary players may join and leave the game on an ad hoc basis. In this context a player is a human-operated character in the game or a character operated by computer software AI—a bot (see Reaper bot for example). Both human- and computer-operated characters have the same basic visual appearance, but in most modern games are able to select a skin, which is an arbitrary graphics model that operates on the same set of movements as the base model. A human player's character and a computer bot's character feature the same set of physical properties, initial health, initial armor, weapon capabilities, the same available character maneuvers and speed—i.e. they are equally matched except for the actual controlling part. For a novice player the difference (i.e.
experience, not taking into account the actual skill) between a human opponent and a computer-controlled opponent may be near nil; however, for a skilled player the lack of human intelligence is usually easily noticed in most bot implementations, regardless of the actual skill of the bot—a lack that can be at least somewhat compensated for by, e.g., extreme (superhuman) accuracy and aim. However, some systems deliberately inform the player, when inspecting the score list, which player(s) are bots and which are human (e.g. OpenArena). If the player is aware of the nature of the opponent, it will affect the cognitive process of the player regardless of the player's skill. All normal maps will contain various power-ups: i.e. extra health, armor, ammunition and other (more powerful than default) weapons. Once collected by a player, the power-up will respawn after a defined time at the same location; the time for an item to respawn depends upon the game mode and the type of the item. In some deathmatch modes power-ups will not respawn at all. Certain power-ups are especially powerful, which can often lead to the game revolving around controlling power-ups—i.e., ceteris paribus, the player who controls the most powerful power-ups (namely, collects the items most often) is the one with the best potential for making the best score. The goal for each player is killing the other players by any means possible, which counts as a frag, either by direct assault or by manipulating the map; the latter counts as a frag in some games but not in others. In either case, to attain the highest score, this process should be repeated as many times as possible, with each iteration performed as quickly as possible. The session may have a time limit, a frag limit, or no limit at all. If there is a limit, then the player with the most frags will eventually win when the session ends. The health variable determines whether a player is wounded; however, in most games being wounded does not entail reduced mobility or functionality, and in most games a player will not bleed to death. A player will die when the health value reaches 0 or less; if the value is reduced to a very low negative value, the result may be gibbing, depending upon the game. In most games, when a player dies (i.e. is fragged), the player will lose all equipment gained, the screen will continue to display the visible (still animated) scene that the player normally sees, and the score list (the frags) is usually displayed. The display does not go black when the player dies. Usually the player can choose to instantly respawn or remain dead. The armor variable affects the health variable by reducing the damage taken: conceptually, the higher the armor value, the smaller the share of the actual damage that is deducted from health, with the obvious differences between various implementations. Some games may account for the location of the injury when the damage is calculated, while many—especially older implementations—do not. In most games, no amount of armor causes any reduced mobility—i.e. it is never experienced as a weight issue by the player. Newtonian physics is often only somewhat accurately simulated; common in many games is the ability of the player to modify the player's own vector to some degree while airborne, e.g. by retarding a forward airborne flight by moving backwards, or even jumping around a corner. Other notable concepts derived from the physics of FPS game engines include
bunny-hopping, strafe-jumping and rocket-jumping—in all of which the player exploits the particular characteristics of the physics engine in question to obtain a high speed and/or height, or other attribute(s). For example, with rocket-jumping the player will jump and fire a rocket at the floor area immediately under their own feet, which will cause the player to jump higher compared to a regular jump as a result of the rocket blast (at the obvious expense of the health variable being somewhat reduced by the self-inflicted injury). The types of techniques available, and how they may be performed by the player, depend on the physics implementation and are as such game dependent. The lost equipment (usually not including the armor) of a dead player can usually be picked up by any player (even the fragged player, respawned) who gets to it first. Modern implementations allow new players to join after the game has started; the maximum number of players that can join is arbitrary for each game, map and rule set, and can be selected by the server. Some maps are suitable for small numbers of players, some are suitable for larger numbers. If the session has a frag or time limit, a new session will start briefly after the current session has concluded; during the respite the players are allowed to observe the score list and chat, and will usually see an animated pseudo-overview display of the map as background for the score list. Some games have a system to allow each player to announce that they are ready to begin the new session; some do not. The new session might be on a different map—based on a map list kept on the server—or it might always be on the same map if there is no such rotating map list. Common in many games is some form of broadcast and private message system; the broadcast message system announces public events, e.g. when a player dies it will often be announced who died and how—if fragged, then often by what weapon; the same system will also often announce if a player joins or leaves the game, and may announce how many frags are left in total and other important messages, including errors or warnings from the game; instant text messages from other players are also displayed with this system. The private message system, in contrast, only prints messages for individual players, e.g. if player A picks up a weapon, player A will get a message to confirm that the weapon was picked up. Most modern deathmatch games feature a high level of graphic violence; a normal modern implementation will contain high-quality human characters being killed, and moderate amounts of blood, screams of pain and death, and exploding bodies with associated gibs are common. Some games feature a way to disable and/or reduce the level of gore. However, the setting of the game is usually that of a fictional world; the player may resurrect in the form of the mentioned respawning, and the characters will usually have superhuman abilities, e.g. being able to tolerate numerous point-blank hits from a machine gun directly to the head without any armor, jumping extreme, inhuman distances and falling extreme distances, to mention a few things. These factors together may make the game feel less real to the player, as it contains highly unreal and unrealistic elements.
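To make the bookkeeping concrete, here is a minimal sketch of the spawn, damage and armor logic described above. The Player class, the 2/3 armor-absorption ratio and the -40 gib threshold are illustrative assumptions; real games each use their own constants and damage rules.

# Minimal sketch of deathmatch health/armor bookkeeping.
# The absorption ratio and gib threshold are illustrative assumptions;
# individual games use their own constants and damage rules.

GIB_THRESHOLD = -40        # a very low negative health may result in gibbing
ARMOR_ABSORPTION = 2 / 3   # share of incoming damage soaked up by armor

class Player:
    def __init__(self, name):
        self.name = name
        self.frags = 0
        self.respawn()

    def respawn(self):
        # Typical spawn defaults: full health, no armor, basic weapons.
        self.health = 100
        self.armor = 0

    def take_damage(self, damage):
        # Armor reduces the health loss and is consumed in the process.
        absorbed = min(self.armor, int(damage * ARMOR_ABSORPTION))
        self.armor -= absorbed
        self.health -= damage - absorbed
        if self.health <= 0:
            return "gibbed" if self.health <= GIB_THRESHOLD else "fragged"
        return "alive"

p = Player("novice")
p.armor = 50
print(p.take_damage(30), p.health, p.armor)  # alive 90 30
print(p.take_damage(200))                    # gibbed (health falls far below 0)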
The description depicts a typical deathmatch based on major titles such as Quake, Doom, Unreal Tournament and others; the purpose served is to give a basic idea of the concept. However, given the many variations that exist and the manner in which options and rules may be manipulated, literally everything mentioned could be varied to a greater or lesser extent in other games. History The origin of the term deathmatch in the context of video games is disputed, especially as it is not well-defined; by some accounts, the term was coined by game designer John Romero while he and lead programmer John Carmack were developing the LAN multiplayer mode for the video game Doom. World Heroes 2, also developed and released in the early 1990s, is another early use of the term. However, the latter's usage was different, as it referred to the players' environment (arenas which housed dangerous hazards) rather than to the game mode itself. Both of these claims are controversial, as the term's common definition as used by gamers (to describe a video game match in which players kill each other over and over, respawning after each time they die) predates both titles by over a decade. Romero commented on the birth of the FPS deathmatch: "Sure, it was fun to shoot monsters, but ultimately these were soulless creatures controlled by a computer. Now gamers could play against spontaneous human beings—opponents who could think and strategize and scream. 'We can kill each other!' 'If we can get this done, this is going to be the fucking coolest game that the planet Earth has ever fucking seen in its entire history!'" According to Romero, the deathmatch concept was inspired by fighting games. At id Software, the team frequently played Street Fighter II, Fatal Fury and Art of Fighting during breaks, while developing elaborate rules involving trash-talk and smashing furniture or tech. Romero stated that "what we were doing was something that invented deathmatch" and that "Japanese fighting games fueled the creative impulse to create deathmatch in our shooters." Games that had such gameplay features beforehand did not use the term, which later gained mainstream popularity with the Quake and Unreal Tournament series of games. MIDI Maze, a multiplayer first-person shooter for the Atari ST released in 1987, has also been suggested as the first example of deathmatch, before the term was used. Sega's 1988 third-person shooter arcade game Last Survivor featured eight-player deathmatch. Some games give a different name to these types of matches, while still using the same underlying concept. For example, in Perfect Dark, the name "Combat Simulator" is used, and in Halo, deathmatch is known as "Slayer". An early example of a deathmatch mode in a first-person shooter was Taito's 1992 video game Gun Buster. It allowed two-player cooperative gameplay for the mission mode, and featured an early deathmatch mode, where either two players could compete against each other or up to four players could compete in a team deathmatch, consisting of two teams with two players each competing against each other. Background It has been suggested that in 1983, Drew Major and Kyle Powell played what was probably the world's first deathmatch with Snipes, a text-mode game that was later credited with being the inspiration behind Novell NetWare, although multiplayer games spread across multiple screens predate that title by at least 9 years in the form of Spasim and Maze War. Early evidence of the term's application to graphical video games exists.
On August 6, 1982, Intellivision game developers Russ Haft and Steve Montero challenged each other to a game of Bi-Planes, a 1981 Intellivision release in which multiple players control fighter planes with the primary purpose of repeatedly killing each other until a limit is reached. Once killed, a player would be respawned in a fixed location, enjoying a short period of protection from attacks. The contest was referred to, at that time, as a deathmatch. Variations In a Team Deathmatch, the players are organized into two or more teams, with each team having its own frag count. Friendly fire may or may not cause damage, depending on the game and the rules used — if it does, players that kill a teammate (called a team kill) usually decrease their own score and the team's score by one point; in certain games, they may also themselves be killed as punishment, and/or may be removed from the game for repeat offenses. The team with the highest frag count at the end wins. In a last man standing deathmatch (or a battle royale game), players start with a certain number of lives (or just one, in the case of battle royale games), and lose these as they die. Players who run out of lives are eliminated for the rest of the match, and the winner is the last and only player with at least one life. See the "Last Man Standing" section below for more insight into the fundamental changes involved. Any multiplayer game in which the goal for each player is to kill every other player as many times as possible can be considered a form of deathmatch. In real-time strategy games, deathmatch can refer to a game mode where all players begin their empires with large amounts of resources. This saves them the time of accumulation and lets hostilities commence much faster and with greater force. Destroying all the enemies is the only way to win, while in other modes some other victory conditions may be used (king of the hill, building a wonder...). History, fundamental changes Doom The first-person shooter version of deathmatch, originating in Doom by id Software, had a set of unmodifiable rules concerning weapons, equipment and scoring, known as "Deathmatch 1.0": Items do not respawn, e.g. health, armor, ammunition; weapons, however, remain available to any arbitrary player except the player who acquired them — i.e. the weapon does not in fact disappear as items do when picked up. The player who acquires the weapon can only collect it anew after respawning (this sometimes leads to a lack of ammunition if a player survives long enough, eventually leading to one's death due to being unable to fight back). Suicide (such as falling into lava, causing an explosion too close to the player, or getting crushed by a crushing ceiling) does not entail negative score points. Within months, these rules were modified into "Deathmatch 2.0" rules (included in the Doom v1.2 patch). These rules were optional; the administrator of the game could decide on using DM 1.0 or DM 2.0 rules. The changes were: Picking up an object removes it from the map. Objects re-appear 30 seconds after being picked up and can be picked up by anyone; bonus objects which provide significant advantages (invisibility power-up etc.) re-appear after a much longer delay, and some of them may not reappear at all. Suicide counts as −1 frag. Notable power-ups featured in most subsequent games include the soul sphere.
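The difference between the two rule sets can be expressed as a small scoring-and-respawn policy. The sketch below is an illustrative model of the rules just listed (not id Software's actual code); the 30-second respawn delay and the -1 suicide penalty are taken from the description above, while the names and structure are assumptions.

# Illustrative model of Doom's "Deathmatch 1.0" vs "2.0" rules
# (not id Software's code; names and structure are assumptions).

class DeathmatchRules:
    ITEM_RESPAWN_DELAY = 30.0   # seconds, under DM 2.0

    def __init__(self, version=2):
        self.version = version  # 1 -> "Deathmatch 1.0", 2 -> "Deathmatch 2.0"
        self.scores = {}        # player name -> frag count

    def on_death(self, victim, killer):
        if killer is None or killer == victim:
            # Suicide: no penalty under DM 1.0, -1 frag under DM 2.0.
            if self.version >= 2:
                self.scores[victim] = self.scores.get(victim, 0) - 1
        else:
            self.scores[killer] = self.scores.get(killer, 0) + 1

    def on_item_pickup(self, item, now):
        item.available = False
        if self.version == 1:
            item.respawn_at = None  # items never reappear under DM 1.0
        else:
            item.respawn_at = now + self.ITEM_RESPAWN_DELAY

rules = DeathmatchRules(version=2)
rules.on_death("alice", "bob")      # bob scores a frag
rules.on_death("bob", "bob")        # bob's own rocket too close: -1 frag
print(rules.scores)                 # {'bob': 0}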
Although the name and/or graphics may differ in other games, the concept and function of the power-up remain the same. Corridor 7: Alien Invasion CD version The CD version of Corridor 7: Alien Invasion, released by Capstone Software in 1994, was: The first FPS to include multiple character classes. The first FPS to include DM-specific maps. Rise of the Triad Rise of the Triad was first released as shareware in 1994 by Apogee Software, Ltd. and offered an expansive multiplayer mode that pioneered a variety of deathmatch features. It introduced the Capture the Flag mode to the first-person-shooter genre as Capture the Triad. It was the first FPS to have an in-game scoreboard. It was the first FPS to deliver its level of multiplayer customization, through a plethora of options affecting aspects of the level played, such as gravity or weapon persistence. It was the first FPS to have voice macros and the ability to talk to players via microphone. It introduced a unique point system that awards different numbers of points for different kills (for instance, a missile kill is worth a point more than a bullet kill). Hexen: Beyond Heretic Hexen: Beyond Heretic, released by Raven Software in 1995, was the first to feature multiple character classes with their own weapons; some items also functioned differently based on the class using them. Quake Quake, released in 1996 by id Software, was the first FPS deathmatch game to feature in-game joining. Quake was the first FPS deathmatch game to feature AI-operated deathmatch players (bots), although not as a feature of the released product, but rather in the form of community-created content. Quake popularized rocket-jumping. Notable power-ups featured in most subsequent games include the quad damage. Although the name and/or graphics may differ in other games, the concept and function of the power-up remain the same. Unreal With the game Unreal (1998, by Epic), the rules were enhanced with some widely accepted improvements: spawn protection (usually 2–4 seconds), which is a period of invulnerability after a player (re)enters combat (such as after being killed and respawning); spawn protection is automatically terminated when the player uses a weapon (including non-attack usage, such as zooming the sniper rifle). Spawn protection prevents "easy frags" — killing a player who has just spawned and is slightly disoriented and almost unarmed. "suicide-cause tracking" – if a player dies by "suicide" that was caused by some other player's action, such as being knocked off a cliff or having a crusher or gas chamber triggered on them, the player that caused such a death is credited with the kill, and the killed player does not lose a frag (it is not counted as a suicide). This concept increases the entertainment potential of the game (as it gives players options to be "cunning"), but it at the same time adds complexity, which may be the reason why Epic's main competitor, id Software, did not implement this concept in Quake III Arena (just as they did not implement spawn protection). Unreal Tournament "combat achievements tracking" – Unreal Tournament (1999, by Epic) added statistics tracking. The range of statistics being tracked is very wide, such as: precision of fire with each weapon (percentage of hits to fired ammunition), kills with each weapon, deaths caused by a particular weapon, and deaths while holding a particular weapon.
headshots (lethal hits to combatants' heads with sniper rifles and some other powerful weapons) killing sprees: killing 5, 10, 15, 20 or 25 combatants without dying is called a killing spree, each greater kill count being considered more valuable and having a unique title (respectively: Killing Spree, Rampage, Dominating, Unstoppable, Godlike). The game tracked how many times the player achieved each of these titles. consecutive kills: when a player kills a combatant within 5 seconds after a previous kill, a consecutive kill occurs. The timer starts ticking anew, allowing a third kill, a fourth kill, etc. (see the sketch after this section). Alternatively, killing several enemies with a mega weapon (such as the Redeemer, which resembles a nuclear rocket) also counts as a consecutive kill. The titles of these kills are: Double Kill (2), Multi kill (3), Ultra kill (4), Megakill (5), MONSTERKILL (6; 5 in the original Unreal Tournament). For comparison, id Software's Quake III Arena tracks double kills, but a third kill soon after results in another double kill award. Quake III Arena This game's approach to combat achievements tracking is different from Unreal Tournament's. In deathmatch, the player might be rewarded with awards for the following tricks: "perfect!" – winning a round of deathmatch without getting killed "impressive!" – hitting with two consecutive shots, or hitting two enemies with one shot, from the railgun (a powerful, long-range hitscan weapon with a slow rate of fire) "humiliation!" – killing an opponent with the melee razor-like gauntlet (the killed player hears the announcement too, but the fact of being humiliated is not tracked for him). "accuracy" – having a hits-to-shots ratio of over 50%. Last Man Standing The Last Man Standing (LMS) version of deathmatch is fundamentally different from standard deathmatch. In deathmatch, it does not matter how many times the player dies, only how many times the player kills. In LMS, it is the exact opposite — the important task is "not to die". Because of this, two activities that are not specifically addressed in deathmatch have to be controlled in LMS. "Camping", which is a recognized expression for staying in one location (usually somewhat protected or with only one access route) and eventually using long-range weapons, such as a sniper rifle, from that location. In standard deathmatch, campers usually accumulate fewer frags than players who actively search for enemies, because close-range combat usually generates frags faster than sniping from afar. In LMS, however, camping increases the average lifespan. Unreal Tournament 2003 addresses this unfairness by indicating players who are camping and providing other players with navigation to campers. "Staying dead" – after dying, player representations lie on the ground (where applicable) and are shown the results of the game in progress. They have to perform some action, usually clicking the "Fire" key or button, to respawn and reenter combat. This principle prevents players who might have been forced by real-world situations (be it a sudden cough or a doorbell) to leave the computer from dying over and over. In standard deathmatch, a player who stays dead is not a problem, as the goal is to score the most frags, not die the least times. In LMS, however, a player that was allowed to stay dead after being killed for the first time might wait through most of the fight and respawn when there is only one opponent remaining. Because of this, Unreal Tournament 2003 automatically respawns a player immediately after being killed.
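The kill-announcement logic from the Unreal Tournament entry above can be sketched in a few lines. The 5-second window and the two title tables follow the values quoted in the text; the class and method names are hypothetical assumptions.

# Hypothetical tracker for Unreal Tournament-style kill announcements.
# The 5-second multi-kill window and the title tables mirror the values
# described above; everything else is an illustrative assumption.

MULTI_KILL_WINDOW = 5.0  # seconds between kills to chain a "consecutive kill"
MULTI_KILL_TITLES = {2: "Double Kill", 3: "Multi Kill", 4: "Ultra Kill",
                     5: "Mega Kill", 6: "MONSTER KILL"}
SPREE_TITLES = {5: "Killing Spree", 10: "Rampage", 15: "Dominating",
                20: "Unstoppable", 25: "Godlike"}

class KillTracker:
    def __init__(self):
        self.spree = 0             # kills since last death
        self.chain = 1             # kills inside the multi-kill window
        self.last_kill_time = None

    def on_kill(self, now):
        announcements = []
        if self.last_kill_time is not None and now - self.last_kill_time <= MULTI_KILL_WINDOW:
            self.chain += 1        # the timer starts ticking anew
        else:
            self.chain = 1
        self.last_kill_time = now
        self.spree += 1
        if self.chain in MULTI_KILL_TITLES:
            announcements.append(MULTI_KILL_TITLES[self.chain])
        if self.spree in SPREE_TITLES:
            announcements.append(SPREE_TITLES[self.spree])
        return announcements

    def on_death(self):
        self.spree = 0             # dying ends the killing spree
        self.chain = 1
        self.last_kill_time = None

t = KillTracker()
print(t.on_kill(10.0))  # []
print(t.on_kill(13.0))  # ['Double Kill'] -- second kill within 5 seconds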
See also Player versus environment Player versus player References Video game gameplay Competitive video game terminology Video game terminology Articles containing video clips Death games in fiction
27911657
https://en.wikipedia.org/wiki/Style%20Jukebox
Style Jukebox
Style Jukebox was a hi-fi, high-resolution audio cloud music streaming and storage player for the Windows, iOS, Android and Windows Phone platforms. A Web Player was also available for Mac, Windows and Linux. Style Jukebox allowed users to upload their personal music collection from their computer to Style Jukebox servers and listen to it from another compatible device (Android, iOS, Windows, Mac, and Windows Phone) by streaming or downloading songs for offline playback. Basic accounts had 10 GB of storage. Pro accounts could have up to 2 TB of music. As of July 2016, Style Jukebox had more than 250,000 registered users. On December 1, 2017, Style Jukebox discontinued its service with a very small post on its home page, and no further details were released. Features Native music applications for Windows, iOS, Android and Windows Phone. Automatic import from Dropbox, OneDrive and Google Drive, available on Web, Windows, iOS and Windows Phone. A dedicated native upload application is available for Macs. Store up to 2 TB of music as a Pro user. Stream music playlists over cellular data/Wi-Fi. Reduce data-plan usage by downloading songs to the phone (selective download). Support for the most popular music file formats: MP3, AAC, WMA, OGG, M4A and lossless FLAC, AIFF, APE, WAV and Apple Lossless (ALAC). File sizes up to 1 GB. Audio quality up to 24-bit/192 kHz/7.1 surround. Browse music by Songs, Albums, Artists, Genres and Composers. Edit song title, album title, artist name and genre from Style Jukebox for Windows or the Style Jukebox Web Player. Support for lossless audio playback on Chromecast (Google Cast). Technology Style Jukebox consisted of cloud-based services for user management, music storage and programmatic interfaces (APIs), and clients for music streaming and storage on desktop and mobile operating systems. Upload was available on the desktop client only; Style Jukebox enabled users to drop music files and folders in the music player to be automatically uploaded and synced with Style Jukebox cloud-based services and made available to any other of the user's computers and devices that also had the Style Jukebox client installed. Style Jukebox cloud-based services automatically transcoded formats to match a device's supported formats. For example, a 320 kbit/s WMA was transcoded to a 320 kbit/s MP3 on iOS devices, and lossless FLAC was transcoded to lossless WMA on Windows Phone devices. References Cloud applications File hosting Online backup services File hosting for Windows Windows media players Digital audio Tag editors Android (operating system) software Freeware IOS software Music streaming services Products introduced in 2012 Mobile software distribution platforms Mobile software Streaming software IPod software Jukebox-style media players
2880574
https://en.wikipedia.org/wiki/Real-Time%20Messaging%20Protocol
Real-Time Messaging Protocol
Real-Time Messaging Protocol (RTMP) is a communication protocol for streaming audio, video, and data over the Internet. Originally developed as a proprietary protocol by Macromedia for streaming between Flash Player and a server, Adobe (which acquired Macromedia) has released an incomplete version of the specification of the protocol for public use. The RTMP protocol has multiple variations: RTMP proper, the "plain" protocol which works on top of Transmission Control Protocol (TCP) and uses port number 1935 by default. RTMPS, which is RTMP over a Transport Layer Security (TLS/SSL) connection. RTMPE, which is RTMP encrypted using Adobe's own security mechanism. While the details of the implementation are proprietary, the mechanism uses industry standard cryptographic primitives. RTMPT, which is encapsulated within HTTP requests to traverse firewalls. RTMPT is frequently found utilizing cleartext requests on TCP ports 80 and 443 to bypass most corporate traffic filtering. The encapsulated session may carry plain RTMP, RTMPS, or RTMPE packets within. RTMFP, which is RTMP over User Datagram Protocol (UDP) instead of TCP, replacing RTMP Chunk Stream. The Secure Real-Time Media Flow Protocol suite has been developed by Adobe Systems and enables end‐users to connect and communicate directly with each other (P2P). While the primary motivation for RTMP was to be a protocol for playing Flash video, it is also used in some other applications, such as the Adobe LiveCycle Data Services ES. Basic operation RTMP is a TCP-based protocol which maintains persistent connections and allows low-latency communication. To deliver streams smoothly and transmit as much information as possible, it splits streams into fragments, and their size is negotiated dynamically between the client and server. Sometimes, it is kept unchanged; the default fragment sizes are 64 bytes for audio data, and 128 bytes for video data and most other data types. Fragments from different streams may then be interleaved, and multiplexed over a single connection. With longer data chunks, the protocol thus carries only a one-byte header per fragment, so incurring very little overhead. However, in practice, individual fragments are not typically interleaved. Instead, the interleaving and multiplexing is done at the packet level, with RTMP packets across several different active channels being interleaved in such a way as to ensure that each channel meets its bandwidth, latency, and other quality-of-service requirements. Packets interleaved in this fashion are treated as indivisible, and are not interleaved on the fragment level. The RTMP defines several virtual channels on which packets may be sent and received, and which operate independently of each other. For example, there is a channel for handling RPC requests and responses, a channel for video stream data, a channel for audio stream data, a channel for out-of-band control messages (fragment size negotiation, etc.), and so on. During a typical RTMP session, several channels may be active simultaneously at any given time. When RTMP data is encoded, a packet header is generated. The packet header specifies, amongst other matters, the ID of the channel on which it is to be sent, a timestamp of when it was generated (if necessary), and the size of the packet's payload. This header is then followed by the actual payload content of the packet, which is fragmented according to the currently agreed-upon fragment size before it is sent over the connection. 
The packet header itself is never fragmented, and its size does not count towards the data in the packet's first fragment. In other words, only the actual packet payload (the media data) is subject to fragmentation.
At a higher level, RTMP encapsulates MP3 or AAC audio and FLV1 video multimedia streams, and can make remote procedure calls (RPCs) using the Action Message Format. Any required RPC services are invoked asynchronously, using a single client/server request/response model, so that real-time communication is not required.
Encryption
RTMP sessions may be encrypted using either of two methods:
Using industry-standard TLS/SSL mechanisms. The underlying RTMP session is simply wrapped inside a normal TLS/SSL session.
Using RTMPE, which wraps the RTMP session in a lighter-weight encryption layer.
HTTP tunneling
In RTMP Tunneled (RTMPT), RTMP data is encapsulated and exchanged via HTTP, and messages from the client (the media player, in this case) are addressed to port 80 (the default for HTTP) on the server. While the messages in RTMPT are larger than the equivalent non-tunneled RTMP messages due to HTTP headers, RTMPT may facilitate the use of RTMP in scenarios where non-tunneled RTMP would otherwise not be possible, such as when the client is behind a firewall that blocks non-HTTP and non-HTTPS outbound traffic. The protocol works by sending commands through the POST URL and AMF messages through the POST body. An example is POST /open/1 HTTP/1.1 for a connection to be opened.
Specification document and patent license
Adobe has released a specification for version 1.0 of the protocol, dated 21 December 2012. The web landing page leading to that specification notes that "To benefit customers who want to protect their content, the open RTMP specification does not include Adobe's unique secure RTMP measures". A document accompanying the Adobe specification grants a "non-exclusive, royalty-free, nontransferable, non-sublicensable, personal, worldwide" patent license to all implementations of the protocol, with two restrictions: one forbids use for intercepting streaming data ("any technology that intercepts streaming video, audio and/or data content for storage in any device or medium"), and another prohibits circumvention of "technological measures for the protection of audio, video and/or data content, including any of Adobe's secure RTMP measures".
Patents and related litigation
Stefan Richter, author of some books on Flash, noted in 2008 that while Adobe is vague as to which patents apply to RTMP, at least one US patent appears to be among them. In 2011, Adobe sued Wowza Media Systems claiming, among other things, infringement of its RTMP patents. In 2015, Adobe and Wowza announced that the lawsuits had been settled and dismissed with prejudice.
Packet structure
Packets are sent over a TCP connection, which is established first between client and server. They contain a header and a body which, in the case of connection and control commands, is encoded using the Action Message Format (AMF). The header is split into the Basic Header and the Chunk Message Header. The Basic Header is the only constant part of the packet and is usually composed of a single composite byte, where the two most significant bits are the Chunk Type (fmt in the specification) and the rest form the Stream ID.
Depending on the value of the former, some fields of the Message Header can be omitted and their values derived from previous packets, while depending on the value of the latter, the Basic Header can be extended by one or two extra bytes (giving up to three bytes in total). If the value of the six least significant bits of the Basic Header (BH) is 0, the BH is two bytes long and represents Stream IDs from 64 to 319 (64+255); if the value is 1, the BH is three bytes long (with the last two bytes encoded as a 16-bit little-endian value) and represents Stream IDs from 64 to 65599 (64+65535); if the value is 2, the BH is one byte long and is reserved for low-level protocol control messages and commands.
The Chunk Message Header contains metadata such as the message size (measured in bytes), the Timestamp Delta and the Message Type. This last value is a single byte and defines whether the packet is an audio, video, command or "low-level" RTMP packet such as an RTMP Ping.
An example is shown below, as captured when a Flash client executes the following code:
var stream:NetStream = new NetStream(connectionObject);
This will generate the following Chunk:
The packet starts with a Basic Header of a single byte (0x03) where the two most significant bits (b00000011) define a chunk header type of 0 while the rest (b00000011) define a Chunk Stream ID of 3. The four possible values of the header type and their significance are:
b00 = 12 byte header (full header).
b01 = 8 bytes - like type b00, not including message ID (last 4 bytes).
b10 = 4 bytes - Basic Header and timestamp (3 bytes) are included.
b11 = 1 byte - only the Basic Header is included.
The last type (b11) is always used in the case of aggregate messages where, in the example above, the second message will start with an id of 0xC3 (b11000011), meaning that all Message Header fields should be derived from the message with a stream Id of 3 (which would be the message right above it). The six least significant bits that form the Stream ID can take values between 3 and 63. Some values have special meaning: 1 stands for the extended ID format, in which case two bytes follow, while 2 is for low-level messages such as Ping and Set Client Bandwidth.
The next bytes of the RTMP Header (including the values in the example packet above) are decoded as follows:
byte #1 (0x03) = Chunk Header Type.
byte #2-4 (0x000b68) = Timestamp delta.
byte #5-7 (0x000019) = Packet Length - in this case, 0x000019 = 25 bytes.
byte #8 (0x14) = Message Type ID - 0x14 (20) defines an AMF0-encoded command message.
byte #9-12 (0x00000000) = Message Stream ID, in little-endian order.
The Message Type ID byte defines whether the packet contains audio/video data, a remote object or a command. Some possible values are:
0x01 = Set Packet Size Message.
0x02 = Abort.
0x03 = Acknowledge.
0x04 = Control Message.
0x05 = Server Bandwidth.
0x06 = Client Bandwidth.
0x07 = Virtual Control.
0x08 = Audio Packet.
0x09 = Video Packet.
0x0F = Data Extended.
0x10 = Container Extended.
0x11 = Command Extended (an AMF3-type command).
0x12 = Data (Invoke (onMetaData info is sent as such)).
0x13 = Container.
0x14 = Command (an AMF0-type command).
0x15 = UDP.
0x16 = Aggregate.
0x17 = Present.
Following the header, 0x02 denotes a string of size 0x000C and values 0x63 0x72 ... 0x6D (the "createStream" command). Following that is a 0x00 (number) which is the transaction id of value 2.0.
The last byte is 0x05 (null), which means there are no arguments.
Invoke Message Structure (0x14, 0x11)
Some of the message types shown above, such as Ping and Set Client/Server Bandwidth, are considered low-level RTMP protocol messages which do not use the AMF encoding format. Command messages, on the other hand, whether AMF0 (Message Type of 0x14) or AMF3 (0x11), use the AMF format and have the general form shown below:
(String) <Command Name>
(Number) <Transaction Id>
(Mixed) <Argument> ex. Null, String, Object: {key1:value1, key2:value2 ... }
The transaction id is used for commands that can have a reply. The argument can be either a string, as in the example above, or one or more objects, each composed of a set of key/value pairs where the keys are always encoded as strings while the values can be any AMF data type, including complex types like arrays.
Control Message Structure (0x04)
Control messages are not AMF encoded. They start with a stream Id of 0x02, which implies a full (type 0) header, and have a message type of 0x04. The header is followed by six bytes, which are interpreted as follows:
#0-1 - Control Type.
#2-3 - Second Parameter (this has meaning in specific Control Types).
#4-5 - Third Parameter (same).
The first two bytes of the message body define the Control (Ping) Type, which can take the values listed below.
Type 0 - Clear Stream: sent when the connection is established; carries no further data.
Type 1 - Clear the Buffer.
Type 2 - Stream Dry.
Type 3 - The client's buffer time. The third parameter holds the value in milliseconds.
Type 4 - Reset a stream.
Type 6 - Ping the client from the server. The second parameter is the current time.
Type 7 - Pong reply from the client. The second parameter is the time when the client received the Ping.
Type 8 - UDP Request.
Type 9 - UDP Response.
Type 10 - Bandwidth Limit.
Type 11 - Bandwidth.
Type 12 - Throttle Bandwidth.
Type 13 - Stream Created.
Type 14 - Stream Deleted.
Type 15 - Set Read Access.
Type 16 - Set Write Access.
Type 17 - Stream Meta Request.
Type 18 - Stream Meta Response.
Type 19 - Get Segment Boundary.
Type 20 - Set Segment Boundary.
Type 21 - On Disconnect.
Type 22 - Set Critical Link.
Type 23 - Disconnect.
Type 24 - Hash Update.
Type 25 - Hash Timeout.
Type 26 - Hash Request.
Type 27 - Hash Response.
Type 28 - Check Bandwidth.
Type 29 - Set Audio Sample Access.
Type 30 - Set Video Sample Access.
Type 31 - Throttle Begin.
Type 32 - Throttle End.
Type 33 - DRM Notify.
Type 34 - RTMFP Sync.
Type 35 - Query IHello.
Type 36 - Forward IHello.
Type 37 - Redirect IHello.
Type 38 - Notify EOF.
Type 39 - Proxy Continue.
Type 40 - Proxy Remove Upstream.
Type 41 - RTMFP Set Keepalives.
Type 46 - Segment Not Found.
Pong is the name for a reply to a Ping, with the values used as seen above.
ServerBw/ClientBw Message Structure (0x05, 0x06)
These messages relate to the client upstream and server downstream bit rate. The body is composed of four bytes holding the bandwidth value, with a possible extension of one byte which sets the Limit Type. The Limit Type can take one of three values: hard, soft or dynamic (either soft or hard).
Set Chunk Size (0x01)
The new chunk size is carried in the four bytes of the body. A default value of 128 bytes exists, and the message is sent only when a change is wanted.
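To make the byte layout described above concrete, the following is a minimal Python sketch, written solely from the field descriptions in this section; it is an illustration, not a complete chunk-stream reader. It decodes the three Basic Header forms and then the type-0 Chunk Message Header fields of the example "createStream" packet.
import struct

def parse_basic_header(data):
    # Returns (chunk_type, chunk_stream_id, header_size_in_bytes).
    first = data[0]
    fmt = first >> 6           # two most significant bits: header type 0-3
    csid = first & 0x3F        # six least significant bits
    if csid == 0:              # two-byte form: Stream IDs 64..319
        return fmt, 64 + data[1], 2
    if csid == 1:              # three-byte form: 64..65599, little-endian
        return fmt, 64 + struct.unpack('<H', data[1:3])[0], 3
    return fmt, csid, 1        # one-byte form: IDs 2..63 (2 is low-level)

# The example chunk from the text: type 0, stream id 3, "createStream".
example = bytes([0x03, 0x00, 0x0B, 0x68, 0x00, 0x00, 0x19, 0x14,
                 0x00, 0x00, 0x00, 0x00])
fmt, csid, n = parse_basic_header(example)
timestamp_delta = int.from_bytes(example[n:n + 3], 'big')    # 0x000b68
length = int.from_bytes(example[n + 3:n + 6], 'big')         # 25 bytes
msg_type = example[n + 6]                                    # 0x14 = AMF0 command
msg_stream_id = int.from_bytes(example[n + 7:n + 11], 'little')
print(fmt, csid, timestamp_delta, length, hex(msg_type), msg_stream_id)
Running the sketch on the example packet prints header type 0, chunk stream id 3, timestamp delta 0x000b68, length 25 and message type 0x14, matching the byte-by-byte decoding given above.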
Protocol Handshake
After the TCP connection is established, an RTMP connection is negotiated by performing a handshake through the exchange of three packets from each side (also referred to as Chunks in the official documentation). These are referred to in the official spec as C0-2 for the client-sent packets and S0-2 for the server-sent packets respectively, and are not to be confused with RTMP packets, which can be exchanged only after the handshake is complete. These packets have a structure of their own, and C1 contains a field setting the "epoch" timestamp; but since this can be set to zero, as is done in third-party implementations, the packet can be simplified. The client initialises the connection by sending the C0 packet with a constant value of 0x03 representing the current protocol version. It follows straight away with C1, without waiting for S0 to be received first. C1 contains 1536 bytes, with the first four representing the epoch timestamp, the second four all being 0, and the rest being random (and which can be set to 0 in third-party implementations). C2 and S2 are an echo of S1 and C1 respectively, except with the second four bytes being the time the respective message was received (instead of 0). After C2 and S2 are received, the handshake is considered complete.
Connect
At this point, the client and server can negotiate a connection by exchanging AMF-encoded messages. These include key-value pairs which relate to variables that are needed for a connection to be established. An example message from the client is:
(Invoke) "connect"
(Transaction ID) 1.0
(Object1) { app: "sample", flashVer: "MAC 10,2,153,2", swfUrl: null, tcUrl: "rtmpt://127.0.0.1/sample", fpad: false, capabilities: 9947.75, audioCodecs: 3191, videoCodecs: 252, videoFunction: 1, pageUrl: null, objectEncoding: 3.0 }
The Flash Media Server and other implementations use the concept of an "app" to conceptually define a container for audio/video and other content, implemented as a folder on the server root which contains the media files to be streamed. The first variable contains the name of this app as "sample", which is the name provided by the Wowza Server for their testing. The flashVer string is the same as returned by the ActionScript getVersion() function. The audioCodecs and videoCodecs values are encoded as doubles and their meaning can be found in the original spec. The same is true for the videoFunction variable, which in this case is the self-explanatory SUPPORT_VID_CLIENT_SEEK constant. Of special interest is the objectEncoding value, which defines whether the rest of the communication will make use of the extended AMF3 format or not. As version 3 is the current default, the Flash client has to be told explicitly in ActionScript code to use AMF0 if that is requested. The server then replies with a ServerBW, a ClientBW and a SetPacketSize message sequence, finally followed by an Invoke such as the example message below.
(Invoke) "_result"
(transaction ID) 1.0
(Object1) { fmsVer: "FMS/3,5,5,2004", capabilities: 31.0, mode: 1.0 }
(Object2) { level: "status", code: "NetConnection.Connect.Success", description: "Connection succeeded", data: (array) { version: "3,5,5,2004" }, clientId: 1728724019, objectEncoding: 3.0 }
Some values above are serialised into properties of a generic ActionScript Object, which is then passed to the NetConnection event listener. The clientId establishes a number for the session to be started by the connection. Object encoding must match the value previously set.
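As an illustration of the handshake sequence described above, here is a minimal Python sketch. It follows the simplifications noted in the text: the epoch timestamp in C1 is zeroed, as third-party implementations commonly do, and S1 is echoed back unchanged as C2 rather than stamping the receipt time. The function names are invented for this example and are not part of any specification.
import os
import socket

def recv_exact(sock, n):
    # TCP recv may return short reads; loop until exactly n bytes arrive.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed during handshake')
        buf += chunk
    return buf

def rtmp_handshake(host, port=1935):
    sock = socket.create_connection((host, port))
    # C1: 1536 bytes - 4-byte epoch timestamp (zeroed here), 4 zero bytes,
    # then random filler.
    c1 = b'\x00' * 8 + os.urandom(1528)
    sock.sendall(b'\x03' + c1)           # C0 (version 3) followed by C1
    if recv_exact(sock, 1) != b'\x03':   # S0: server's protocol version
        raise ValueError('unexpected RTMP protocol version')
    s1 = recv_exact(sock, 1536)          # S1
    sock.sendall(s1)                     # C2: echo of S1 (receipt time omitted)
    recv_exact(sock, 1536)               # S2: echo of C1; handshake complete
    return sock
Once rtmp_handshake() returns, the socket is ready for the AMF-encoded "connect" exchange described in the Connect section.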
Play video
To start a video stream, the client sends a "createStream" invocation followed by a ping message, followed by a "play" invocation with the file name as argument. The server then replies with a series of "onStatus" commands followed by the video data as encapsulated within RTMP messages. After a connection is established, media is sent by encapsulating the content of FLV tags into RTMP messages of type 8 and 9 for audio and video, respectively.
HTTP tunneling (RTMPT)
This refers to the HTTP-tunneled version of the protocol. It communicates over port 80 and passes the AMF data inside HTTP POST requests and responses. The sequence for connection is as follows:
POST /fcs/ident2 HTTP/1.1
Content-Type: application/x-fcs\r\n
HTTP/1.0 404 Not Found
POST /open/1 HTTP/1.1
Content-Type: application/x-fcs\r\n
HTTP/1.1 200 OK
Content-Type: application/x-fcs\r\n
1728724019
The first request uses the /fcs/ident2 path, and the correct reply is a 404 Not Found error. The client then sends an /open/1 request, to which the server must reply with a 200 OK, appending a random number that will be used as the session identifier for the communication. In this example, 1728724019 is returned in the response body.
POST /idle/1728724019/0 HTTP/1.1
HTTP/1.1 200 OK
0x01
From then on, the /idle/<session id>/<sequence> request is a polling request, where the session id has been generated and returned by the server and the sequence is a number that increments by one for every request. The appropriate response is a 200 OK with an integer returned in the body, signifying the interval time. As described earlier, AMF data is sent through the POST body (a short sketch of this exchange appears after the FLVstreamer section below).
Software implementations
RTMP is implemented at these three stages:
Live video encoder
Live and on-demand media streaming server
Live and on-demand client
rtmpdump
The open-source RTMP client command-line tool rtmpdump is designed to play back or save to disk the full RTMP stream, including the RTMPE protocol Adobe uses for encryption. RTMPdump runs on Linux, Android, Solaris and most other Unix-derived operating systems, as well as Microsoft Windows. Originally supporting all versions of 32-bit Windows including Windows 98, from version 2.2 the software runs only on Windows XP and above (although earlier versions remain fully functional). Packages of the rtmpdump suite of software are available in the major open-source repositories (Linux distributions). These include the front-end apps "rtmpdump", "rtmpsrv" and "rtmpsuck."
Development of RTMPdump was restarted in October 2009, outside the United States, at the MPlayer site. The current version features greatly improved functionality, and has been rewritten to take advantage of the benefits of the C programming language. In particular, the main functionality was built into a library (librtmp) which can easily be used by other applications. The RTMPdump developers have also written librtmp support for MPlayer, FFmpeg, XBMC, cURL, VLC and a number of other open-source software projects. Use of librtmp provides these projects with full support of RTMP in all its variants without any additional development effort.
FLVstreamer
FLVstreamer is a fork of RTMPdump without the code that Adobe claims violates the DMCA in the USA. It was developed as a response to Adobe's attempt in 2008 to suppress RTMPdump. FLVstreamer is an RTMP client that will save a stream of audio or video content from any RTMP server to disk, if encryption (RTMPE) is not enabled on the stream.
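To tie together the RTMPT request sequence shown in the HTTP tunneling section above, here is a minimal Python sketch of the open/idle exchange. It is pieced together from the trace above and is not a complete RTMPT client: the helper names rtmpt_open and rtmpt_idle are invented for this example, the host is assumed to be a local test server, and a real server may also expect the initial /fcs/ident2 probe shown in the trace.
import http.client

FCS_HEADERS = {'Content-Type': 'application/x-fcs'}

def rtmpt_open(host, port=80):
    # POST /open/1 returns the session id in the response body.
    conn = http.client.HTTPConnection(host, port)
    conn.request('POST', '/open/1', b'', FCS_HEADERS)
    session_id = conn.getresponse().read().strip().decode()  # e.g. '1728724019'
    return conn, session_id

def rtmpt_idle(conn, session_id, sequence):
    # Poll with /idle/<session id>/<sequence>; the sequence number
    # increments by one on every request.
    conn.request('POST', '/idle/%s/%d' % (session_id, sequence),
                 b'', FCS_HEADERS)
    return conn.getresponse().read()  # body carries the polling interval

# Assumes an RTMPT-speaking server on localhost, per the trace above.
conn, sid = rtmpt_open('127.0.0.1')
print(rtmpt_idle(conn, sid, 0))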
See also
Protected Streaming – info about RTMPS and RTMPE
Video on Demand (VoD)
Media Source Extensions (MSE)
WebSocket
References
External links
Adobe Developer page - RTMP - official specification
Adobe Flash
Multimedia
Network protocols
806169
https://en.wikipedia.org/wiki/DokuWiki
DokuWiki
DokuWiki is a wiki application licensed under GPLv2 and written in the PHP programming language. It works on plain text files and thus does not need a database. Its syntax is similar to the one used by MediaWiki. It is often recommended as a more lightweight, easier-to-customize alternative to MediaWiki. Because DokuWiki does not require a database, it can be installed on local PCs, flash drives, and folders synced with file hosting services (such as Dropbox) or file synchronization programs (such as Syncthing).
History
DokuWiki was created by Andreas Gohr in June 2004. In July of that year the first official release was published on Freshmeat (later known as Freecode). Originally DokuWiki used a simple list of regular expressions to transform wiki syntax into HTML. A big step forward in the development was the redesign of the parser and renderer mechanisms, based on contributions by Harry Fuecks in January 2005. The new design made use of the then-new object-oriented features of PHP 4. The new parser and the introduction of a cache mechanism led to significant performance improvements, making DokuWiki usable for larger projects. The new parser also prepared DokuWiki for the introduction of a generic plugin interface, which simplified the development and maintenance of syntax-based plugins. Over the years, additional plugin mechanisms followed which allowed third-party developers to extend nearly all aspects of the wiki software. The introduction of DokuWiki into the Debian and Gentoo Linux distributions, in April and July 2005 respectively, significantly increased the visibility of the software. The DokuWiki logo is the result of a design contest. The winning logo, designed by Esther Brunner, represents editing pages (by pens of different colors, i.e. different people) and linking them. For many years, DokuWiki's source code was managed with the Darcs distributed version control system. In 2010 a switch to Git was made, making use of GitHub for hosting. Today, DokuWiki is one of the most popular wiki engines available, with stable interest over time.
Release history
Since 2011, releases are named after Discworld characters.
Main features
Installation and Requirements
DokuWiki requires only a web server and PHP; no database is needed. It can run on cheap web hosting servers and is usually installed by simply unpacking the archive. Additional plugins may have additional requirements.
Revision control
DokuWiki stores all versions of each wiki page, allowing the user to compare the current version with any older version. The difference engine is the same as the one used in MediaWiki. Parallel editing of one page by multiple users is prevented by a locking mechanism.
Access control
Access control can be handled by a user manager, which allows users and groups of users to be defined, and an access control list in which an administrator can define permissions at the page and namespace level, giving DokuWiki more fine-grained control than MediaWiki. Besides the built-in user management, DokuWiki also provides mechanisms for authentication against databases, LDAP servers and Active Directory. Other authentication mechanisms are available as plugins.
Plugins
DokuWiki has a generic plugin interface which simplifies the process of writing and maintaining plugins. There are around 1,000 plugins available. These can be easily integrated and managed by an administrator with the help of the plugin manager.
Templates
The appearance of the wiki can be defined by a template.
There are various templates provided by the development community.
Internationalization and localization
DokuWiki supports Unicode (UTF-8) and properly handles right-to-left languages, so languages such as Chinese, Thai, and Hebrew can be displayed. DokuWiki can be configured in about 70 languages. Multilingual wikis can be configured through plugins. Users can contribute translations of the DokuWiki software and of plugins through a web interface.
Caching
DokuWiki uses a two-level cache mechanism which stores the parsed wiki page in an intermediate serialized format that is then rendered to the desired output format, such as HTML5. This rendered format is cached again. The two levels of caching expire under different conditions. The caching helps to reduce server load and speeds up access to the information.
Full text search
DokuWiki has an integrated indexed search with which a user can search for keywords and phrases on the wiki.
Wiki markup
DokuWiki uses a simple markup language similar to that of MediaWiki. Like MediaWiki it makes use of free links, but CamelCase links can optionally be enabled. WYSIWYG editors are available as plugins.
DokuWiki-based software projects
Some independent software projects based on DokuWiki have been created. These projects usually bundle the DokuWiki software, selected plugins, a customized design and sometimes pre-built content for specialized use cases.
The EinsatzleiterWiki is a German project bundling firefighting knowledge in a package that can be installed in fire departments and then customized to the needs of the specific department. The wiki is used by the professional fire services of Berlin, Kaiserslautern and Wuppertal and by many voluntary fire services in Germany.
open|SchulPortfolio is a German project aimed at the internal management of schools. It was created with input from the ministry of education of the German state of Baden-Württemberg.
ICKEWiki is a redistribution of DokuWiki with a focus on use in enterprises. It was originally developed in a research project focused on adding structured data to wikis and making them more usable in industrial production companies.
As required by DokuWiki's license, these projects are all licensed under the GPL version 2.
Notable uses
DokuWiki is used by various public and non-public wiki setups. Below is a list of more notable uses:
The PHP programming language
The Xfce desktop environment
The OpenWrt router software
The Slackware Linux distribution
SouthEastern Railways
See also
List of wiki software
Comparison of wiki software
References
Further reading
External links
Free wiki software
Technical communication tools
24964854
https://en.wikipedia.org/wiki/WebDNA
WebDNA
WebDNA is a server-side, interpreted scripting language with an embedded database system, specifically designed for the World Wide Web. Its primary use is in creating database-driven dynamic web page applications. The language was released in 1995, and the name was registered as a trademark in 1998. WebDNA is currently maintained by WebDNA Software Corporation.
Notable features
WebDNA contains a RAM-resident database system (a hybrid in-memory database) with searching and editing capabilities. A resilient and persistent backup of the RAM databases is maintained on disk. WebDNA code can interweave with CSS, HTML/HTML5 and JS/Ajax, allowing layout to be mixed with programming and server-side with client-side scripting. Some instructions allow interaction with remote servers. It is usually considered an easy-to-learn scripting language and has been designed for webmasters, web designers and programmers looking for quick results. WebDNA is made up of a syntax that uses square brackets ("[" "]") and the English language. For example, to display today's date on a web page, simply insert "[date]" within the HTML or CSS code where you want the live date to appear; likewise with "[time]". To show some text only to a specific client IP address, the 'showif' context can be used: [showif [ipaddress]=xxx.xxx.xxx.xxx]Some Text[/showif]. Most WebDNA tags, contexts and commands follow similar conventions.
Terminology
The WebDNA syntax is based on a simple format: key names surrounded by square brackets, such as: [showif [tvar]=yes]Yes[/showif]. WebDNA instructions are of two types:
Tag: a single key surrounded by square brackets, such as [ipaddress] (the IP address of a client request).
Context: an opening tag and a closing tag that surround what is to be parsed, e.g. [Format thousands .3d]7[/Format] (parses to '007').
Parameters can be included in many of the tags, contexts or commands.
Example code (connects to a whois server and shows the information, then stores it into a permanent database):
<!--HAS_WEBDNA_TAGS-->
<html>
[text]info=[tcpconnect host=whois.domaindiscover.com&port=43]
[tcpsend]webdna.us[unurl]%0D%0A[/unurl][/tcpsend]
[/tcpconnect][/text]
[append db=base.db]domain=webdna.us&whois=[info]
[/append]
</html>
History
According to Grant Hulbert, one of the Pacific Coast Software founders, WebCatalog (now WebDNA) began as a set of C macros to help accomplish website graphical tasks. Before WebDNA evolved into a general-purpose server-side language, it was a special-purpose server-side language designed to help create web pages that sold stock photography. It had shopping cart features, and a searchable fixed-field database with specialized fields for storing stock photo information. After that, Pacific Coast Software quickly saw the value in creating a web programming language. WebCatalog made its public debut in the mid-1990s on the Macintosh platform. As its name implies, it had an early development focus that allowed a webmaster or store administrator to migrate a traditional product catalog to an online catalog. This was most evident in 1997 and 1998, with its StoreBuilder and WebMerchant products that allowed a user to quickly build a store front online. The term "WebCatalog" referred to the entire product, whereas the term "WebDNA" referred to the scripting syntax only. Around the year 2000, WebCatalog and Pacific Coast Software were purchased by Smith Micro Software, Inc. Smith Micro Software, Inc.
then changed the name of WebCatalog to WebDNA, which at that point became the name for all aspects of the product. Starting with the release of WebDNA version 4.0 and ending with version 6.0a, the years 1999 to 2004 were very active for WebDNA, and the scripting language was adopted by many national and international names, including Disney, Chrysler, Kodak, Ben and Jerry's, the Pillsbury Dough Boy Shop, the NCAA Final Four and the Museum of Television and Radio. Also during this time, development of the language gained contemporary tools, such as [function] and [scope], that lend themselves to modular programming and structured programming. From 2005 to 2008, perhaps for various reasons including the success of Smith Micro Software with other products, WebDNA users began to lose support from Smith Micro. WebDNA lost users to free solutions such as PHP and MySQL. It was ultimately the developers of WebDNA who revived the language. In June 2008, they organized to establish WebDNA Software Corporation (WSC). WSC purchased the intellectual property that is WebDNA and, in 2009, released a new WebDNA version 6.2 (Cicada). In December 2011, a FastCGI version of the WebDNA engine was released. This version, along with offering compatibility with non-Apache installations, changes the scope of WebDNA from a server-wide application to a domain-specific application. This means that a website owner can now more easily install WebDNA for one specific domain, without affecting other domains that may reside on the server.
References
External links
Official website
Download page
Usage documentation
Scripting languages
2888401
https://en.wikipedia.org/wiki/Trusted%20Information%20Systems
Trusted Information Systems
Trusted Information Systems (TIS) was a computer security research and development company during the 1980s and 1990s, performing computer and communications (information) security research for organizations such as NSA, DARPA, ARL, AFRL, SPAWAR, and others.
History
TIS was founded in 1983 by NSA veteran Steve Walker, and at various times employed notable information security experts including David Elliott Bell, Martha Branstad, John Pescatore, Marv Schaefer, Steve Crocker, Marcus Ranum, Wei Xu, John Williams, Steve Lipner and Carl Ellison. TIS was headquartered in Glenwood, Maryland, in a rural location. The company was started in Walker's basement on Shady Lane in Glenwood, MD. As the company grew, rather than move to Baltimore or the Washington D.C. suburbs, a small office building was constructed on land next to Walker's new home on Route 97.
Products
TIS projects included the following:
Trusted Xenix, the first commercially available B2 operating system;
Trusted Mach, a research project that influenced DTOS and eventually SELinux;
Domain and Type Enforcement (DTE), which likewise influenced SELinux;
the FWTK Firewall Toolkit (the first open-source firewall software), in 1993;
the first whitehouse.gov e-mail server, hosted at TIS headquarters from June 1, 1993 to January 20, 1995;
the Gauntlet Firewall, in 1994, one of the first commercial firewall products, supporting a broad range of Internet standards including S/MIME, SNMP, DNS, DNSSEC, and many others. This firewall marked the inception of the third-generation firewall;
an IP Security (IPsec) product in late 1994, known as the first commercial IPsec VPN product;
encryption-recovery technology integrated with IPsec, ISAKMP, IKE, and RSA.
TIS's operating system work directly affected BSD/OS, on which the Gauntlet Firewall and the IPsec product were based, as well as Linux, FreeBSD, HP-UX, SunOS, Darwin, and others.
Post company
The company went public in 1996 and soon afterwards attempted to acquire PGP Inc.; it was instead acquired in 1998 by Network Associates (NAI), which later became McAfee and which had already bought PGP Inc. in 1997. The security research organization became NAI Labs, and the Gauntlet engineering and development organization was folded into Network Associates' engineering and development. NAI Labs went through a couple of branding changes which complemented Network Associates' branding efforts. In 2001 the name was changed to Network Associates Laboratories to better match the corporate identity. Then, in 2002-2003, there was a major branding initiative by Network Associates culminating in the selection of the flag brand, McAfee. As a result, the security research organization became McAfee Research. In 2003, SPARTA, Inc., an employee-owned company, acquired the network security branch of McAfee Research. In 2005, SPARTA acquired the remaining branches of McAfee Research, which were organized into the Security Research Division (SRD) of the Information Systems Security Operation (ISSO). In 2008, Cobham plc, a British aerospace company, acquired SPARTA. There have been no organizational changes to SRD or ISSO that affect the security research. On a separate path, TIS's primary commercial product, the Gauntlet Firewall, was acquired from McAfee in 1999 by Secure Computing Corporation (SCC), which had been one of TIS's major competitors, because at the time McAfee was not interested in being a firewall vendor.
The code base was integrated with Secure Computing's firewall product and branded the Sidewinder Firewall, which then returned to McAfee when Secure Computing was acquired by it in 2008, and was re-branded the McAfee Enterprise Firewall. The end of this product line came in 2013, following McAfee's acquisition of another major firewall vendor, Finland-based Stonesoft. McAfee announced in October 2013 the intention to migrate its existing installed base of firewalls to Stonesoft's own Stonegate.
References
External links
LinkedIn Alumni Group
Firewall Toolkit Archive
Stephen Walker Congressional Testimony
Computer security software companies
Defunct software companies of the United States
Software companies based in Maryland
McAfee
Defunct companies based in Maryland
Software companies established in 1983
Technology companies disestablished in 1998
1983 establishments in Maryland
1998 disestablishments in Maryland
35019705
https://en.wikipedia.org/wiki/German%20Informatics%20Society
German Informatics Society
The German Informatics Society (GI) (German: Gesellschaft für Informatik) is a German professional society for computer science, with around 20,000 personal and 250 corporate members. It is the biggest organized representation of its kind in the German-speaking world.
History
The German Informatics Society was founded in Bonn, Germany, on September 16, 1969. Initially aimed primarily at researchers, it expanded in the mid-1970s to include computer science professionals, and in 1978 it founded its journal Informatik Spektrum to reach this broader audience. The Deutsche Informatik-Akademie in Bonn was founded in 1987 by the German Informatics Society in order to provide seminars and continuing education for computer science professionals. In 1990, the German Informatics Society contributed to the founding of the International Conference and Research Center for Computer Science (since renamed the Leibniz Center for Informatics) at Dagstuhl; since its founding, Schloss Dagstuhl has become a major center for international academic workshops. In 1983, the German Informatics Society became a member society of the International Federation for Information Processing (IFIP), taking over the role of representing Germany from the Deutsche Arbeitsgemeinschaft für Rechenanlagen. In 1989, it joined the Council of European Professional Informatics Societies.
Activities
The main activity of the association is to support the professional development of its members in every aspect of the rapidly changing field of informatics. In order to realise this aim, the German Informatics Society maintains a large number of committees, special interest groups, and working groups in the fields of theory of computation, artificial intelligence, bioinformatics, software engineering, human-computer interaction, databases, technical informatics, graphics and information visualisation, business informatics, legal aspects of computing, computer science education, social computing, and computer security. The GI currently runs more than 30 local groups in cooperation with the German chapter of the Association for Computing Machinery. Other important GI activities include raising public awareness of informatics, including its benefits and risks. Lobbying activities have been organised by the office in Berlin since 2013. Additionally, the GI runs programmes designed for young people and women to foster interest in informatics. In addition to the Informatik Spektrum, which is the journal of the society, most of the society's special interest groups maintain their own journals. Overall, the society has approximately 40 regular publications, and it sponsors a similar number of conferences and events annually. Many of these conferences have their proceedings published in the GI's book series, Lecture Notes in Informatics, which also publishes Ph.D. thesis abstracts and research monographs. Every two years, the German Informatics Society awards the Konrad Zuse Medal to an outstanding German computer science researcher. It also offers prizes for the best Ph.D. thesis, for computer science education, for practical innovations, and for teams of student competitors. Each year since 2002, the GI has elected a small number of its members as fellows, its highest membership category.
Conferences
One of the biggest informatics conferences in the German-speaking world is INFORMATIK. The conference is organised in cooperation with universities, each year in a different location.
More than 1,000 participants visit workshops and keynotes on current challenges in the field of information technology. In addition, several special interest groups organise large meetings with an international reputation, for example the „Software Engineering (SE)“, the „Multikonferenz Wirtschaftsinformatik (MKWI)“, the „Mensch-Computer-Interaktion (MCI)“ and the „Datenbanksysteme für Business, Technologie und Web (BTW)“ conferences. The Detection of Intrusions and Malware, and Vulnerability Assessment event, designed to serve as a general forum for discussing malware and the vulnerability of computing systems to attacks, is another annual project under the auspices of the organization. Its most recent conference was held from 6 to 7 July in Bonn, Germany, sponsored by entities such as Google, Rohde & Schwarz, and VMRay.
Honorary members
The following people are honorary members of the German Informatics Society due to their achievements in the field of informatics:
Konrad Zuse, since 1985
Friedrich Ludwig Bauer, since 1987
Wilfried Brauer, since 2000
Günter Hotz, since 2002
Joseph Weizenbaum, since 2003
Gerhard Krüger, since 2007
Heinz Schwärtzel, since 2008
Associated societies
Swiss Informatics Society
Gesellschaft für Informatik in der Land-, Forst- und Ernährungswirtschaft (GIL)
German Chapter of the ACM (GChACM)
References
External links
Official website
1969 establishments in West Germany
Organizations established in 1969
Computer science organizations
Professional associations based in Germany
7856152
https://en.wikipedia.org/wiki/Killall
Killall
killall is a command-line utility available on Unix-like systems. There are two very different implementations. The implementation supplied with genuine UNIX System V (including Solaris) and with the Linux sysvinit tools kills all processes that the user is able to kill, potentially shutting down the system if run by root. The implementation supplied with FreeBSD (including Mac OS X) and the Linux psmisc tools is similar to the pkill and skill commands, killing only the processes specified on the command line. Both commands operate by sending a signal, like the kill program.
Example usage
Kill all processes named xmms:
killall xmms
See also
List of Unix commands
Signal (computing)
External links
Unix process- and task-management-related software
2261397
https://en.wikipedia.org/wiki/Xpress%20Pro
Xpress Pro
Avid Xpress Pro was non-linear video editing software aimed at professionals in the TV and movie industry. It was available for Microsoft Windows PCs and Apple Macintosh computers. Xpress Pro included many of the high-end editing features offered by other Avid editing systems and was closely based on Avid's Media Composer systems. In conjunction with the Avid Mojo hardware, it provided real-time uncompressed video editing at a professional level. Xpress Pro was capable of sharing media files with Avid's advanced Media Composer editing systems, making it a capable logging or offline editing system for larger projects. While Xpress Pro was originally aimed at DV and uncompressed standard-definition editors, the upgrade to Xpress Pro HD with version 5.0 of the software added support for high-definition editing with the 8-bit version of Avid's DNxHD codec or Panasonic's DVCPRO HD codec, and version 5.2 added support for HDV editing. Unlike some other editing packages, Xpress Pro HD edits HDV natively by decompressing the MPEG-2 stream on the fly, rather than transcoding into an intraframe codec. Xpress Pro was discontinued on March 17, 2008, and was no longer for sale after June 30, 2008. Avid offered Xpress Pro users a discounted upgrade price for its flagship non-linear editing software, Media Composer. One of the controversial aspects of the software was that it did not work on Microsoft's Windows Vista.
References
External links
Avid Xpress Pro
Video editing software
Video editing software for macOS