22193696
https://en.wikipedia.org/wiki/Tegiapa
Tegiapa
Tegiapa is a genus of moths of the family Noctuidae. The genus was described by Nye in 1975. Species Tegiapa aarviki Hacker, 2019 Gabon, Uganda, Ethiopia Tegiapa aberdarensis Hacker, 2019 Kenya Tegiapa agassizi Hacker, 2019 Ethiopia, Kenya, Tanzania, South Africa Tegiapa ambiguosa Hacker, Fiebig & Stadie, 2019 Ethiopia Tegiapa catalai Hacker, 2019 Madagascar Tegiapa comorana (Viette, 1981) Comoros Tegiapa craspedica (Hampson, 1910) Ghana, Nigeria, Congo, Equatorial Guinea, Gabon, Angola, Uganda Tegiapa forsteri Hacker, 2019 Tanzania Tegiapa goateri Hacker, 2019 South Africa, Swaziland, Zimbabwe Tegiapa griseaxea Hacker, 2019 Kenya, Uganda Tegiapa kingstoni Hacker, 2019 Kenya, Uganda, Zimbabwe Tegiapa larentiodes (Prout, 1922) South Africa, Zaire, Ghana Tegiapa lenzi Hacker, 2019 Zimbabwe Tegiapa melanochra Hacker, 2019 São Tomé & Principe Tegiapa melanoleuca Hacker, 2019 Zambia, South Africa Tegiapa microplexia (Viette, 1962) Madagascar Tegiapa nana Hacker, 2019 South Africa Tegiapa nigrilineata (Hampson, 1916) Somalia Tegiapa obliqua Hacker, Fiebig & Stadie, 2019 Ethiopia, Kenya Tegiapa politzari Hacker, 2019 Ethiopia, Kenya, Uganda, Malawi, Rwanda, Tanzania Tegiapa schreieri Hacker, 2019 Ethiopia, Tanzania, Uganda, Cameroon Tegiapa steganioides Hacker, Fiebig & Stadie, 2019 Nigeria, Gabon, Ethiopia, Uganda, Tanzania Tegiapa ugandana Hacker, 2019 Uganda Tegiapa vanjamanitra (Viette, 1981) Madagascar Tegiapa virescens (Hampson, 1910) South Africa References External links Acontiinae
6180875
https://en.wikipedia.org/wiki/City%20of%20Dublin%20Steam%20Packet%20Company
City of Dublin Steam Packet Company
The City of Dublin Steam Packet Company was a shipping line established in 1823. It served cross-channel routes between Britain and Ireland for over a century. For 70 of those years it transported the mail. It was wound up by a select committee of the House of Lords in 1922 and finally liquidated in 1930. The company operated from offices at Eden House, 15-18 Eden Quay, which were constructed in 1829. History The City of Dublin Steam Packet Company began in Dublin in 1822 as Charles Wye Williams & Company; Williams is one of the unrecognised pioneers of steam navigation. His company initially operated steam ships between Dublin and Liverpool. In 1826, the line added service to London and Belfast. Later, service was also provided between Glasgow and Belfast. Transatlantic service to New York started with the Royal William departing Liverpool on 5 July 1838, becoming the first steamer to depart for an Atlantic crossing from the River Mersey. In January 1839, the company was awarded a contract to provide a night mail service from Holyhead; its ships docked at the Admiralty Pier in Holyhead. In 1843, the company took over the routes of the St. George Steam Packet Company, extending service to Wales. The company also operated smaller steamers on the River Shannon. Up until 1850, the British Admiralty carried the Royal Mail, but in that year, contracts were awarded for the first time to private companies. Ships carrying mail on these contracts were authorized to use the designation RMS or Royal Mail Ship. The most valuable route, with the highest volume, was between Kingstown (now Dún Laoghaire), in Ireland, and Holyhead in Wales. The line won the contract and purchased RMS Saint Columba and RMS Llewellyn from the Admiralty. In 1859, the line ordered four additional steamers, named for the four provinces of Ireland: the RMS Connaught, RMS Leinster, RMS Munster and RMS Ulster; these four were commonly referred to as "The Provinces". In 1897, the line was awarded a further 21 years on its contract with the Post Office, and the company ordered four identical ships from Cammell Laird of Birkenhead to replace "The Provinces"; these carried the same names as the former ships. They were twin-propeller vessels powered by an eight-cylinder steam engine, capable of 24 knots. During World War I, the company lost two steamers sunk by the Germans, the greater loss being the second Leinster, which sank with over 500 lives in 30 metres of water just north-east of the Kish Light, the greatest single-incident loss of life in the Irish Sea. (The official death toll was 501. Research by Roy Stokes, author of Death in the Irish Sea: The Sinking of RMS Leinster, and Philip Lecane, author of Torpedoed! The RMS Leinster Disaster, suggests that the number lost was somewhat higher.) The company was unable to recover financially from this loss. Afterwards, the remaining fleet was taken over by the British & Irish Steam Packet Company. The City of Dublin Steam Packet Company was finally liquidated in 1924. Ships operated by the line Albert 1845 Athlone 1836 (in 1849, Thomas Carlyle sailed to Dublin from London on the Athlone. He gives a lengthy description of his three-day sail from London to Dublin Harbour. The Athlone stopped at Greenwich, Gravesend, Broadstairs, Ramsgate, Deal, Dover, Plymouth and Falmouth before heading out to Ireland. Carlyle, Reminiscences of My Irish Journey in 1849, 1882, pp. 7-37.)
Ballinasloe 1829 Banshee 1848 Belfast 1884 Britannia 1825 Carlow 1896 Cavan 1876 City of Dublin 1824 City of Londonderry 1824 Commerce 1825 Connaught (1) 1860 RMS Connaught (2) 1897 (torpedoed and sunk en route Le Havre to Southampton, 1917) Cork 1899 (torpedoed and sunk off Point Lynas, 1918) Diamond 1846 Duchess of Kent 1837 Duke of Cambridge 1837 Eblana 1849 Emerald 1846 Galway 1891 Gipsy 1828 Hibernia 1824 Ireland 1885 Iron Duke 1844 Kerry 1897 Kildare 1867 Kilkenny 1903 Leeds 1826 Leinster (1) 1860 RMS Leinster (2) 1897 (torpedoed and sunk off Kingstown, 1918; over 500 lives lost out of 771 onboard) Leitrim 1874 Liffey 1824 Liverpool 1846 RMS Llewellyn 1848 Longford 1870 Louth 1894 Manchester 1826 Mayo 1880 Meath 1884 Mersey 1824 Mona (1) 1825 Mona (2) 1832 Mullingar 1868 Munster (1) 1860 Munster (2) 1896 Nottingham 1827 Pearl 1845 Prince Arthur 1851 Princess 1839 Queen Victoria 1838 (wrecked near Howth Head in snowstorm, 15 February 1853; 80 lives lost) Roscommon 1845 Royal Adelaide 1838 (wrecked at Tongue Sands off Margate, 1849; 250 lives lost) Royal William 1837 Shamrock 1824 Sheffield 1827 RMS St. Columba 1848 St. Patrick (see Llewellyn) Thames 1827 Town of Liverpool 1824 Trafalgar 1848 Ulster (1) 1860 Ulster (2) 1896 1895 Windsor 1846 See also British and Irish Steam Packet Company - operated from offices at 27 Sir John Rogerson's Quay and 46 East Wall and North Wall Quay. It was previously called the Dublin & London Steam Packet Company. Dublin & Glasgow Steam Packet Company - operated from offices at 73 North Wall Quay References Freda Harcourt, "Charles Wye William & Irish Steam Shipping", The Journal of Transport History, Vol. 13, No 2, Sept.1992. Info on The Ships List Anglesey-Môn Info Web Sealink-Holyhead.com External links Ship Trafalgar Collection (#1063), East Carolina Manuscript Collection, J. Y. Joyner Library, East Carolina University Defunct shipping companies of the United Kingdom Shipping companies of Ireland Packet (sea transport) Dublin Docklands
13660031
https://en.wikipedia.org/wiki/Generic%20Security%20Service%20Algorithm%20for%20Secret%20Key%20Transaction
Generic Security Service Algorithm for Secret Key Transaction
GSS-TSIG (Generic Security Service Algorithm for Secret Key Transaction) is an extension to the TSIG DNS authentication protocol for secure key exchange. It is a GSS-API algorithm which uses Kerberos for passing security tokens to provide authentication, integrity and confidentiality. GSS-TSIG (RFC 3645) uses a mechanism like SPNEGO with Kerberos or NTLM. In Windows, this implementation is called Secure Dynamic Update. GSS-TSIG uses TKEY records to exchange keying material between the DNS client and server. For authentication between the DNS client and Active Directory, the AS-REQ, AS-REP, TGS-REQ and TGS-REP exchanges must take place to obtain a Kerberos ticket and establish a security context. The security context has a limited lifetime during which dynamic updates to the DNS server can take place. References Cryptographic protocols
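The update-signing flow that GSS-TSIG secures can be illustrated with a short, hedged sketch. The example below uses the dnspython library to send a TSIG-signed dynamic update; under GSS-TSIG the difference lies only in where the key comes from: instead of the statically configured shared secret shown here, the client and server negotiate Kerberos keying material and exchange it via TKEY records, then sign updates with the negotiated key. Zone names, addresses and the key value are placeholders, and this is an illustration rather than a reference implementation of RFC 3645.

# Sketch of a signed dynamic DNS update with dnspython. With plain TSIG the
# shared secret below is configured out of band; with GSS-TSIG (RFC 3645) the
# keying material would instead come from the Kerberos AS/TGS exchanges and a
# TKEY query to the DNS server. All names, addresses and the key are placeholders.
import dns.query
import dns.tsig
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({
    "host.example.com.": "aW5zZWN1cmUtZGVtby1rZXk=",  # placeholder base64 secret
})

update = dns.update.Update(
    "example.com",
    keyring=keyring,
    keyname="host.example.com.",
    keyalgorithm=dns.tsig.HMAC_SHA256,
)
update.replace("host", 300, "A", "192.0.2.10")  # the dynamic update itself

# The signed request is sent to the authoritative server, which verifies the
# TSIG (or GSS-TSIG) signature before applying the change.
response = dns.query.tcp(update, "192.0.2.53", timeout=5)
print(response.rcode())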
359403
https://en.wikipedia.org/wiki/Balance%20of%20Power%20%28video%20game%29
Balance of Power (video game)
Balance of Power is a computer strategy game of geopolitics during the Cold War, created by Chris Crawford and published in 1985 on the Macintosh by Mindscape, followed by ports to a variety of platforms over the next two years. In the game, the player takes the role of the President of the United States or General Secretary of the Soviet Union. The goal is to improve the player's country's standing in the world relative to the other superpower. During each yearly turn, random events occur that may have effects on the player's international prestige. The player can choose to respond to these events in various ways, which may prompt a response from the other superpower. This creates brinkmanship situations between the two nations, potentially escalating to a nuclear war, which ends the game. Crawford was already well-known, especially for Eastern Front (1941). His 1984 announcement that he was moving to the Macintosh platform to work on a new concept generated considerable interest. It was widely reviewed after its release, including an extremely positive review in The New York Times Magazine. It was praised for its inventive non-action gameplay that was nevertheless exciting and distinct. It has been named by Computer Gaming World as one of the most innovative computer games of all time. Balance of Power was successful on the Mac, and combined with ports it ultimately sold over a quarter million units. Gameplay The player may choose to be either the President of the United States or the General Secretary of the Communist Party of the Soviet Union, and must lead the chosen superpower for eight years, seeking to maximize "prestige" and avoiding a nuclear war. Each turn is one year long; at the beginning of each year, the player is presented with a set of incidents and crises in various countries around the globe and must choose a response to each one. Responses may range from no action to diplomatic notes to the other superpower, to military maneuvers. Each response is then met with a counter-response, which may vary from backing down to escalation. The player then gets a chance to initiate actions, and deal with the opponent's responses. This core mechanic is similar to that of Bruce Ketchledge's 1983 game Geopolitique 1990, published by SSI. One difference from the earlier game is how negotiations are resolved. In both games, backing down in a negotiation results in a loss of prestige, which will reverberate politically. Likewise, in both games brinkmanship may result in a global war. In Geopolitique, such wars were actually fought in-game, after which the game continued. In Balance of Power, such a war ends the game instantly, with the following message: "You have ignited a(n accidental) nuclear war. And no, there is no animated display of a mushroom cloud with parts of bodies flying through the air. We do not reward failure." Development Crawford had made a name for himself during his years at Atari, Inc. in the late 1970s and early 1980s. His smash hit Eastern Front (1941) made him one of the few game authors known by name, and helped ensure later releases were also successful to a degree. By early 1984, Atari was rapidly going bankrupt due to the effects of the video game crash of 1983. In March they laid off the majority of their staff, including Crawford. His ample severance was enough to allow him to write a new game as a freelancer. After considering a sequel to Eastern Front or a political game about the Inca empire, he eventually decided to write a cold war "game about peace". 
Crawford cited Bob Dylan's "Blowin' in the Wind" as an emotional inspiration for the game. Considering the Atari platform a lost cause, Crawford began looking at the new 16-bit platforms that were just coming to market; he dismissed the IBM PC as too crude, the Amiga as likely to fail for business reasons, and thus chose the Macintosh as the platform to develop on. At the time this was a non-trivial decision, as there were no development tools for the Mac, and programmers had to buy an Apple Lisa to write code for the platform. During an April 1984 interview at the Origins Game Fair, Crawford stated that he was "working on a game for the Macintosh entitled ARMS RACE based on the philosophy that 'H-bombs don't kill people, geopolitics kills people.'" By May the initial concept was fleshed out; the game would take place on top of a world map that could show various details, events would be presented as newspaper stories following an algorithmic concept he developed after considering the headlines in the National Enquirer, and these events and their outcomes would be expressed in terms of "prestige points". Crawford had difficulty in finding a publisher for Balance of Power, due in no small part to the lingering effects of the 1983 crash. Random House eventually agreed to publish it, but there was considerable ill will between Crawford and the editor assigned to him, who had no prior experience with video games. Random House ultimately cancelled the contract and demanded that Crawford return its $10,000 advance payment, almost causing him to lose his home. His wife demanded he get a "real job". The work was saved by a friend at InfoWorld, who heard of his troubles and published a two-part column on the game. The article was seen by a producer at the newly formed Mindscape, who agreed to publish it. Betas were ready in February 1985 and were polished through the spring and summer. The game was released in September and was an immediate sensation, given that the world was then at the height of the Cold War. Its fame was helped in no small part by a review in The New York Times Magazine written by David L. Aaron, Jimmy Carter's Deputy Assistant to the President for National Security Affairs, who called it "about as close as one might get to the cut-and-thrust of international politics without going through confirmation by the Senate." The game was an immediate hit on the initial Macintosh and Apple II platforms. A release for Microsoft Windows 1.0 in 1986 made it a multiplatform best seller. These were followed by versions on the Atari ST (1987) and Amiga, among others. In total, the game generated over $10 million in sales during its heyday, at a time when total sales for all gaming hardware and software combined were about $500 million. In 1986, Crawford published a book, also called Balance of Power, which details the internals of the game in great depth. It explains the background of the politics, describes the formulas used to calculate prestige and related parameters, and gives an account of the game's (lengthy) gestation. Around 2013, fulfilling a 2011 promise, Crawford released the source code of several of his games into the public domain, among them Balance of Power. Reception Computer Gaming World stated that Balance of Power "stands in a prestigious circle: that of the most innovative computer games of all time. If there were Academy Awards for computer games, BOP would get my votes for Best Picture and Best Director of 1985".
A 1992 survey of wargames with modern settings in the magazine gave the game four stars out of five, a 1994 survey gave it three stars, and in 1996 the magazine listed the game's Game Over scene as #11 on its list of "the 15 best ways to die in computer gaming". Roy Wagner reviewed the game for Computer Gaming World, and stated that "This game is HIGHLY recommended." Dragon recommended the game and called it a "superb game of global strategy that goes beyond other games, wherein thoughtful, calculated geopolitics could prevent a worldwide nuclear war!" as well as "a truly entertaining and thought-provoking simulation of world geopolitics which involves the player in a quest to prevent a worldwide nuclear holocaust, while promoting either the U.S.A. or Russia to world prominence. This is a marvelous program that everyone should experience at least once." BYTE praised the game in 1986, describing the Mac version as "the best game I had ever seen on any computer ... by all means, get the program". Info gave the Amiga version four-plus stars out of five, approving of the complex gameplay and good user interface. While complaining of the lack of sound from the "obvious Macintosh influence", the magazine concluded that it was "An excellent game". Bob Ewald reviewed Balance of Power in Space Gamer/Fantasy Gamer No. 81. Ewald commented that "In conclusion, if this type of game appeals to you most of the problems are worth suffering through. However, if you're a hardcore wargamer who isn't interested in diplomacy, I suggest that you pass on this one." Compute! presented opposing views of the game in May 1988. The magazine included Balance of Power in its list of "Our Favorite Games", calling it "an impressive recreation of the world's geopolitical landscape ... call one too many bluffs and you'll see the chilling message You have ignited a nuclear war". Orson Scott Card, however, wrote in his review of the game that Crawford—"the best designer of simulation games I've seen"—was "leaning over your shoulder and bullying you into playing the game his way. He has a sweet delusion that as long as the United States is very nice and doesn't do anything to offend them, the Russians will go home. And if you don't play that way, why, he'll stop the game with a nasty remark about how the world was just destroyed by nuclear war". He added that since no nuclear war had been fought Crawford could not know what would cause one, and "there are a lot of experts who claim that the Soviets seem to behave a lot nicer when we stand up to them than when we disarm", but that "[Crawford] is so sure he's right that Balance of Power isn't a game, it's propaganda". In 1989 Compute! stated that the 1990 edition's multipolar features were welcome additions to the game. That year Card wrote that the magazine's 1988 list had caused him to reevaluate the game when playing the 1990 edition, saying that it had "the most detailed, carefully extrapolated future world I've worked with". While still criticizing geopolitical "absurdities", such as forcing the United States either to passively accept Soviet troops in Syria (contrary to what happened during the Yom Kippur War) and advisors in Mexico or to start a nuclear war, Card now concluded that such outcomes probably reflected computer limitations rather than Crawford's political views. He advised players to pretend that Balance of Power was set on an alien planet "astonishingly similar" to Earth, and to play solely based on the game's assumptions about the world.
Chuck Moss disagreed with Card's revised view, describing Balance of Power in Computer Gaming World in 1992 as "reflect[ing] extreme bias on the part of [its] designers". He called it a "pacific treatise ... nuclear war erupted if the U.S. so much as sent five million dollars to Panama". Crawford stated in 1987 that he was most proud of his work on the game: "I feel [it] has made the world a better place ... I think it has made a small number of people much more realistic in their appraisal of world affairs". In 1996 Computer Gaming World ranked it as the 78th best game of all time, calling it "a wonderful game of political intrigue that felt just like the Cold War and didn't require the detail of Shadow President or CyberJudas." That year it was ranked as the 75th top game of all time by Next Generation, who commented "Although the game's premises are hopelessly outdated (U.S.S.R.? What's that?) and some disagree with the politics, the AI in Balance of Power, combined with its unique play style, make it a classic." In 2006, The Guardian listed it first on its list of "The 10 political games everyone should play". Legacy After Balance of Power, Crawford began work on an entirely new game, Trust & Betrayal: The Legacy of Siboot, released in 1987 and selling only a few thousand copies. Mindscape was disappointed, and pressured Crawford to do a follow-up to Balance of Power. The company pressed him hard on it, and Crawford felt that he "owed them one" after publishing Siboot. This sequel was released in 1989 as Balance of Power: The 1990 Edition on the Apple IIGS, Windows, Macintosh, Amiga and Atari ST. Crawford does not consider it a proper sequel, saying he was simply "tidying up, adding some bells and whistles." Balance of Power: The 1990 Edition adds more countries, advisors to help the player, a new "multi-polar" level that allows countries to generate events of their own (such as declaring war on other countries), and a two-player hotseat mode. In 1992 Crawford left the game industry to make something more interactive and artistic. After many false starts, this emerged as The Erasmatron, which was later re-released under the name Storytron. Both systems were used to make complex, interactive storylines. As a demo, Crawford used Storytron to build Balance of Power: 21st Century. This version begins on 12 September 2001 and mostly involves the US's interactions with Asia and other emerging powers. It was not a success; Crawford himself describes it as "crap". Balance of Power has been the basis for a number of play-by-mail versions, manually run and based on moves posted in internet forum systems. One long-running example is hosted on eRegime. See also Balance of power in international relations Brinkmanship (Cold War) References Citations Bibliography External links Balance of Power at MobyGames Retrospective at IGN 1985 video games Amiga games Anti-war video games Apple II games Apple IIGS games Atari ST games Chris Crawford (game designer) games Cold War video games DOS games Commercial video games with freely available source code Government simulation video games Classic Mac OS games MSX2 games NEC PC-8801 games NEC PC-9801 games Video games developed in the United States Windows games Mindscape games
39437548
https://en.wikipedia.org/wiki/RationalPlan
RationalPlan
RationalPlan is a project management software suite for planning, managing, and tracking projects. It is developed by Stand By Soft, a Romanian developer that specializes in desktop applications. RationalPlan runs on multiple platforms including Mac OS X, Windows, and Linux. History RationalPlan was released in 2007 for Linux and Windows. The suite was released for Mac OS X in 2008. In July 2012, Stand By Soft released RationalPlan 4.0, a major version upgrade. RationalPlan 4.1 was released in October 2012. RationalPlan earned third place in software solutions at Capterra’s Project Management Election in 2012. Product RationalPlan consists of a project guide with three main parts, labeled Project, Planning, and Controlling, each consisting of several subcategories. The project section covers basic information and scheduling, the planning section defines resources and materials, and the controlling section allows users to update and track tasks. The RationalPlan suite includes RationalPlan Single, RationalPlan Multi, RationalPlan Viewer, and RationalPlan Server. Combined, the RationalPlan suite provides management help for single projects, multiple projects, project viewing, and a centralized space for multiple users to access and manage projects. RationalPlan collaborates with IBM, Sony, and Cisco to establish rules to assist users in business management. RationalPlan is licensed as proprietary software: installations on Mac and Windows are fee-based, while installations on Linux are free to use. References External links Official Website Companies established in 1997 Project management software
1540855
https://en.wikipedia.org/wiki/Translation%20management%20system
Translation management system
A translation management system (TMS), formerly globalization management system (GMS), is a type of software for automating many parts of the human language translation process and maximizing translator efficiency. The idea of a translation management system is to automate all repeatable and non-essential work that can be done by software/systems, leaving only the creative work of translation and review to be done by human beings. A translation management system generally includes at least two types of technology: process management technology to automate the flow of work, and linguistic technology to aid the translator. In a typical TMS, process management technology is used to monitor source language content for changes and route the content to various translators and reviewers. These translators and reviewers may be located across the globe and typically access the TMS via the Internet. Translation management systems are most commonly used today for managing various aspects of the translation business. Naming Although translation management system (TMS) seems to be the currently favoured term in the language localisation industry, these solutions are also known as globalization management systems (GMS) or global content management systems (GCMS). They work with content management systems (CMS) as separate, but linked programs or as simple add-ons that can answer specific multilingual requirements. Overview A TMS typically connects to a CMS to manage foreign language content. It tends to address the following categories in different degrees, depending on each offering: Business administration: project management, resource management, financial management. This category is traditionally related to enterprise resource planning (ERP) tools. Business process management: workflow, collaboration, content connectors. This category is traditionally the domain of specialised project management tools. Language management: integrated translation memory, webtop translation tools, customer review and markup. This is traditionally performed with specialised translation tools. CMS excels at process management while ignoring business management and translation tools, which are strongholds of TMS. Features and benefits The measurable benefits of using a TMS are similar to those found in a CMS, but with a multilingual twist: the localization workflow is automated, thus reducing management and overhead costs and time for everyone involved; localization costs are reduced, time to market is decreased and translation quality improves; finally, the cooperation between headquarters and national branches increases thanks to more thorough reporting. A typical TMS workflow goes through the following steps: Change detection of updated or new materials is a must, either with standard off-the-shelf CMSs or with custom-developed connectors in the case of proprietary systems. Content is automatically extracted from the CMS and packaged for transmission to the TMS. In some cases, file manipulation may be needed for later analysis and translation. Project managers customise workflows to match their business needs. Every participant in the workflow receives a notification when there is new work to be done, and a unique number is assigned to every project and every task for traceability. Translators and revisers work either online or offline and their queries and comments are tracked through the system.
Translators or revisers receive comments from the customer's in-country reviewers to verify and implement any corrections. After the documents are approved, the TM is automatically updated for later reuse. Finally, the translated materials are returned to their CMS for publishing, and productivity and efficiency metrics are available through reports. Linguistic technology generally includes at least a translation memory and a terminology database; some systems also integrate machine translation technology. Translation memory is a database of all previously translated sentences. While a translator performs translation, he or she is automatically prompted with similar sentences from the memory that were previously translated (a minimal sketch of this lookup is given below). A terminology database is a glossary that contains specific words and phrases and their context-appropriate translations. A machine translation system is a program that uses natural language processing technology to automatically translate a text from one language to another. Future Future trends in TMSs include: interoperation with more CMS offerings: content managers should be able to order translations within their own environment tie in with text authoring environments: for existing multilingual content leverage against new writing incorporation of business management functions: to preview the localization cost and timeframe integration with enterprise systems: general ledger applications and sales force automation tools Target markets and licensing TMS vendors target two main buyers when marketing and selling their products. On the one hand, software developer-only companies attract content producers, and sell their offering with no strings attached. On the other hand, software developers can also be language service providers (LSPs), so they offer their language services over their custom-made technological offering for easier customer integration. The latter is commonly referred to as a captive solution, meaning that buyers must use the TMS developer's language services in order to take advantage of their platform. Content producers with preferred or previous language service agreements with third-party LSPs may prefer to maintain their independence and purchase software licences only. However, a combined option of technology solution and language services in one package is bound to be more cost effective. Similarly, LSPs may prefer to contact technology vendors who are not part of the competition, i.e., who do not also offer language services. Many LSPs got nervous when SDL bought Trados in 2005, becoming the biggest translation technology provider, while still having language services as part of their activities. As a result of this, competitive cloud translation management systems that combine TMS functionality with CAT tools and online translation editors started making their way to the market. See also Computer-assisted translation Internationalization and localization References Translation software Content management systems Internationalization and localization
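The translation-memory prompting referred to above (a translator being offered previously translated sentences that are similar to the current source sentence) can be sketched in a few lines. This is a minimal illustration assuming an in-memory dictionary as the store and difflib's similarity ratio as the fuzzy-match score; real TMSs use dedicated segmentation, indexing and scoring, so everything here is an assumption for demonstration only.

from difflib import SequenceMatcher

# Toy translation memory: previously translated segments (source -> target).
translation_memory = {
    "The server is not responding.": "Le serveur ne répond pas.",
    "Restart the application.": "Redémarrez l'application.",
}

def suggest(source_segment, threshold=0.7):
    """Return (stored source, stored translation, score) for the closest match, if any."""
    best = None
    for stored_source, stored_target in translation_memory.items():
        score = SequenceMatcher(None, source_segment, stored_source).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (stored_source, stored_target, score)
    return best

# A near-exact segment is matched against the memory and offered to the translator.
print(suggest("The server is not responding"))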
17885031
https://en.wikipedia.org/wiki/Pipeline%20forwarding
Pipeline forwarding
Pipeline forwarding (PF) applies to packet forwarding in computer networks the basic concept of pipelining, which has been widely and successfully used in computing (specifically, in the architecture of all major central processing units) and in manufacturing (specifically, in the assembly lines of industries ranging from automotive to many others). Pipelining is known to be optimal independent of the specific instantiation. In particular, PF is optimal from various points of view: High efficiency in utilization of network resources, which enables accommodating a larger amount of traffic on the network, thus lowering operation cost and being the foundation for accommodating the exponential growth of modern networks. Low implementation complexity, which enables the realization of larger and more powerful networking systems at low cost, thus offering further support to network growth. High scalability, which is an immediate consequence of the above two features. Deterministic and predictable operation with minimum delay and no packet loss even under full-load conditions, which is key in supporting the demanding requirements of the new and valuable services that are being deployed, or envisioned to be deployed, on modern networks, such as telephony, videoconferencing, virtual presence, video on demand, distributed gaming. Various aspects of the technology are covered by several patents issued by both the United States Patent and Trademark Office and the European Patent Office. Operating principles As in other pipelining implementations, a common time reference (CTR) is needed to perform pipeline forwarding. In the context of global networks the CTR can be effectively realized by using UTC (coordinated universal time), which is globally available via GPS (Global Positioning System) and, in the near future, Galileo. For example, the UTC second is divided into fixed-duration time frames, which are grouped into time cycles so that in each UTC second there is a predefined integer number of time cycles. Alternatively, or complementarily, the CTR can be obtained through the network by means of synchronization protocols such as IEEE 1588. Packets are forwarded from node to node according to predefined schedules, as shown in the figure below, i.e., each node forwards packets of a certain flow during predefined time frames. The time cycles define the periodic re-occurrence of the various predefined schedules. The periodic scheduling within each node results in a periodic packet forwarding across the network, which is referred to as pipeline forwarding for the ordered, step-by-step fashion with which packets travel toward their destination. Through a resource reservation procedure, transmission capacity is booked for a flow on each link it traverses during the time frame (or time frames) predefined for its forwarding, thus setting up a synchronous virtual pipe (SVP). The capacity of each time frame can be partially or totally reserved to one or more flows. Consequently, the time cycle provides the basis for a periodic repetition of the reservation that ensures enough transmission resources to be available on each link to forward the packets of each flow, which prevents delays due to resource contention and loss resulting from congestion.
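The common-time-reference bookkeeping described above can be made concrete with a small sketch. The frame duration, the number of frames per cycle and the reservation table below are made-up example values, not figures from the pipeline forwarding literature; the point is only to show how a node can decide, from UTC alone, whether the current time frame is one reserved for a given flow.

# Illustrative only: example frame/cycle sizes and a per-node reservation table.
TIME_FRAME_US = 125                      # one time frame, in microseconds (assumption)
FRAMES_PER_CYCLE = 80                    # time frames per time cycle (assumption)
CYCLE_US = TIME_FRAME_US * FRAMES_PER_CYCLE   # 10 ms cycle, i.e. 100 cycles per UTC second

# For each flow, the time frames (within the cycle) reserved for forwarding it.
schedule = {"flow_A": {3, 4}, "flow_B": {10}}

def frame_of(utc_microseconds):
    """Index of the time frame, within the current cycle, containing this instant."""
    return (utc_microseconds % CYCLE_US) // TIME_FRAME_US

def may_forward(flow, utc_microseconds):
    """True if the node's periodic schedule reserves the current time frame for this flow."""
    return frame_of(utc_microseconds) in schedule.get(flow, set())

print(frame_of(1_000_375), may_forward("flow_A", 1_000_375))  # frame 3 -> True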
Forwarding options As exemplified in the figure above, which depicts the journey of a packet from node A to node D along three pipeline forwarding switches, the forwarding delay may have different values for different nodes, due to different propagation delays on different links (e.g., Tab, Tbc, and Tcd), and different packet processing and switching times in heterogeneous nodes (e.g., Tbb and Tcc). Moreover, two variants of the basic pipeline forwarding operation are possible. When node n deploys immediate forwarding, the forwarding delay has the same value for all the packets received by node n on input link i, and it is the minimum necessary to accommodate the packet propagation, processing, and switching time. When implementing non-immediate forwarding, node n may use different forwarding delays for different packets. Two implementations of pipeline forwarding have been proposed: Time-Driven Switching (TDS), also known as Fractional lambda switching (FλS) in the context of optical networks, and Time-Driven Priority (TDP); both can be used to create a parallel pipeline forwarding network in the future Internet. References Computer networking
26566394
https://en.wikipedia.org/wiki/Donn%20B.%20Parker
Donn B. Parker
Donn B. Parker was an information security researcher and consultant and a 2001 Fellow of the Association for Computing Machinery. Parker had over 50 years of experience in the computer field, spanning computer programming, computer systems management, consulting, teaching, and research. Early life and education Parker earned BA (1952) and MA (1954) degrees in mathematics from the University of California, Berkeley. Career Parker was a senior research engineer and systems manager for General Dynamics for eight years and Control Data Corporation for eight years. He retired in 1997. In 2002, Parker proposed the Parkerian Hexad, six atomic and orthogonal elements of information security that extend the traditional model of Confidentiality, Integrity, and Availability (the CIA triad). He later became a retired emeritus senior consultant engaged in writing and lecturing, and his collected papers are archived at the Charles Babbage Institute at the University of Minnesota. Parker died at the age of 91 on September 22, 2021. Lectures Parker lectured at conferences, seminars, and universities worldwide. He was the subject writer on computer crime for the Encyclopædia Britannica, Groliers Encyclopedia, Microsoft Encarta Encyclopedia, and the Encyclopedia of Computer Science. Parker lectured for the Commonwealth Club of San Francisco, Churchill Club of Silicon Valley, many universities, and the World Organization of Detectives. Memberships Parker became active in the Association for Computing Machinery (ACM) in 1954. He was elected Secretary of the ACM from 1966 to 1970 while serving on the ACM Council from 1964 to 1974 and was chairman of the professional standards and practices committee for several years. In addition, he was a member of the Information Systems Security Association (ISSA) and a Certified Information Systems Security Professional (CISSP). He was the consulting editor and columnist for the Journal of Information Systems Security (Auerbach) from 1994 to 1997. Parker was involved with many other organizations. He was a grantee of the National Science Foundation and the US Department of Justice, and was the founder in 1986 (while at SRI International) of the International Information Integrity Institute (I-4), an ongoing confidential service to large, international corporations and governments, now owned and operated by KPMG-UK. Awards In 1992, Parker received the Information Systems Security Association's Individual Achievement Award. In 1994, the U.S. NIST and NSA awarded him the National Computer System Security Award and the Aerospace Computer Security Associates named him their Distinguished Lecturer. In 1996, he received MIS Infosecurity News' Lifetime Achievement Award.
In 1998, Information Security magazine profiled him as one of the top five “Infosecurity Pioneers.” In 2001 he was named a Fellow of the Association for Computing Machinery “[f]or contributions to information security and professional ethics.” Selected publications Books Crime by Computer (Charles Scribner’s Sons, 1976) Ethical Conflicts in Computer Science and Technology (AFIPS Press, 1979) Fighting Computer Crime (Charles Scribner’s Sons, 1983) Computer Security Management (Reston Publishing, 1983) Ethical Conflicts in Information and Computer Science, Technology, and Business (QED Information Sciences, 1990) Computer Security Reference Book (Butterworth, 1993) Fighting Computer Crime, a New Framework for Protecting Information (John Wiley & Sons, 1998) Major reports Computer Abuse (SRI International, 1971) Criminal Justice Resource Manuals on Computer Crime published by the US Department of Justice Computer Security Techniques (US Government Printing Office, 1980 and 1989) References External links Oral history interview with Donn B. Parker, Charles Babbage Institute, University of Minnesota. Donn B. Parker on Computer Chronicles - March 26, 1984 Donn B. Parker at the Los Angeles Times Shoulders of Infosec: Donn Parker Living people Fellows of the Association for Computing Machinery SRI International people General Dynamics University of California, Berkeley alumni Control Data Corporation Year of birth missing (living people)
48031036
https://en.wikipedia.org/wiki/Herb%20Coleman%20%28defensive%20lineman%29
Herb Coleman (defensive lineman)
Herbert Coleman (born September 4, 1971) is a former American football defensive lineman who played one season with the Saskatchewan Roughriders of the Canadian Football League (CFL). He was drafted by the San Francisco 49ers of the National Football League (NFL) in the seventh round of the 1995 NFL Draft. He played college football at Trinity International University and attended Hillcrest High School in Country Club Hills, Illinois. Coleman was also a member of the Kansas City Chiefs, New Orleans Saints, Rhein Fire, Milwaukee Mustangs, Denver Broncos, Orlando Predators and Colorado Crush. College career Coleman played for the Trinity International Trojans from 1989 to 1990. He then transferred to South Suburban College to improve his grades while also working at a post office. He later returned to Trinity International to finish his college education, while playing for the Trojans. Professional career Coleman was selected by the San Francisco 49ers of the NFL with the 238th pick in the 1995 NFL Draft and signed with the team on June 29, 1995. He was released by the 49ers on August 20 and signed to the team's practice squad on August 28, 1995. He was released by the 49ers on August 20, 1996. On August 27, 1996, Coleman was signed to the practice squad of the Kansas City Chiefs of the NFL. On December 4, 1996, he was signed to the practice squad of the New Orleans Saints of the NFL. He signed with the San Francisco 49ers on January 24, 1997. Coleman played for the Rhein Fire of the World League of American Football during the 1997 season. He was released by the 49ers on August 19, 1997. He was signed by the Milwaukee Mustangs of the Arena Football League (AFL) on January 9, 1998. Coleman was placed on the Other League Exempt list by the Mustangs on January 23, 1998 upon joining the CFL's Saskatchewan Roughriders. He signed with the Roughriders in 1998 and played in two games, starting both, for the team during the 1998 season. He was placed on the Other League Exempt list by the Mustangs on March 10, 1999 upon joining the Denver Broncos of the NFL. Coleman was signed by the Broncos in March 1999. He was released by the Broncos on August 10, 1999. The Milwaukee Mustangs traded his rights to the Orlando Predators for Chris Barber on April 1, 1999. The Predators traded Coleman and Alvin Ashley to the Mustangs for Ernest Allen on November 4, 1999. Coleman played for the Mustangs during the 2000 season. He signed with the AFL's Colorado Crush on November 22, 2002 and played for the team during the 2003 season. He was placed on the refused-to-report list on January 12, 2004 and released by the Crush on November 15, 2004. References External links Just Sports Stats Living people 1971 births American football defensive linemen Canadian football defensive linemen African-American players of American football African-American players of Canadian football Trinity International Trojans football players South Suburban College alumni San Francisco 49ers players Kansas City Chiefs players New Orleans Saints players Rhein Fire players Milwaukee Mustangs (1994–2001) players Saskatchewan Roughriders players Denver Broncos players Orlando Predators players Colorado Crush players Players of American football from Illinois Sportspeople from Cook County, Illinois People from Country Club Hills, Illinois 21st-century African-American sportspeople 20th-century African-American sportspeople
2205677
https://en.wikipedia.org/wiki/Bucolion
Bucolion
In Greek mythology, Bucolion (Ancient Greek: Βουκολίωνα) may refer to the following individuals: Bucolion, an Arcadian prince, one of the 50 sons of the impious King Lycaon either by the naiad Cyllene, by Nonacris, or by an unknown woman. He and his siblings were the most nefarious and carefree of all people. To test them, Zeus visited them in the form of a peasant. These brothers mixed the entrails of a child into the god's meal, whereupon the enraged Zeus threw the meal over the table. Bucolion was killed, along with his brothers and their father, by a lightning bolt of the god. Bucolion, also Boucolides, was the eldest but illegitimate son of the Trojan king Laomedon and the nymph Calybe. His wife was the naiad Abarbarea, and they had at least two sons, Aesepus and Pedasus. Aesepus and Pedasus participated in the Trojan War. According to Tzetzes, Bucolion and Abarbarea were the parents of the Trojan hero Euphorbus, who was otherwise known as the son of Panthous and Phrontis. Bucolion, an Achaean soldier who fought in the Trojan War. He was slain by the Mysian Eurypylus. Bucolion, a king of Arcadia who succeeded his father Holaeas, son of Cypselus. He was the father of Phialus. Notes Sons of Lycaon Achaeans (Homer) Trojans References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Dionysius of Halicarnassus, Roman Antiquities. English translation by Earnest Cary in the Loeb Classical Library, 7 volumes. Harvard University Press, 1937-1950. Online version at Bill Thayer's Web Site Dionysius of Halicarnassus, Antiquitatum Romanarum quae supersunt, Vol I-IV. Karl Jacoby. In Aedibus B.G. Teubneri. Leipzig. 1885. Greek text available at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Quintus Smyrnaeus, The Fall of Troy, translated by A. S. Way. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library. Tzetzes, John, Book of Histories, Book I translated by Ana Untila from the original Greek of T. Kiessling's edition of 1826. Online version at theio.com Princes in Greek mythology Kings in Greek mythology Characters in Greek mythology Arcadian mythology
33310062
https://en.wikipedia.org/wiki/Orange%20and%20Bronze%20Software%20Labs
Orange and Bronze Software Labs
Orange and Bronze Software Labs (O&B) is a privately owned computer software development company based in the Philippines. The company delivers software consulting, product engineering, and IT training services with a focus on Java software development. O&B specializes in developing and deploying web applications using Java-based frameworks and technologies such as Spring (framework), Grails (framework), Hibernate (Java), Google Maps API, Google App Engine, and Android. O&B develops software using the Agile software development methods. The company is the first Philippine-based SpringSource partner (now VMware vFabric) and listed as a Grails framework development company. O&B is also an authorized reseller of the Pentaho Business Intelligence Suite. History Orange and Bronze was founded in July 2005 by Calen Martin Legaspi and Renato "Butch" Landingin. The company started as a two-man consulting firm doing software training for local software companies. Their clients eventually hired them as software consultants, before the company ventured into offshore software development projects. This led to partnerships with Google, SpringSource and Pentaho. As of March 2011, Orange and Bronze's headcount is at 90 employees. Founders Calen Martin Legaspi is the co-founder and CEO of Orange and Bronze Software Labs. He co-founded PinoyJUG, or the Association of Philippine Java Developers. He is currently part of the Technology Council for the Philippine Software Industry Association (PSIA) as Director for Technology. He is also the official representative to the Commission on Higher Education (CHED) Technical Committee on Computer Science. Renato "Butch" Landingin is Calen Legaspi's co-founder. He serves as the company's Chief Technology Officer. Butch Landingin is the author of Squishdot, which is described as "a web based news publishing and discussion product that allows you to handle threaded discussions with a minimum of configuration and day-to-day management by building a web-based news site." Services O&B offers software consulting and offshore product development services with a focus on Enterprise Java and Agile software development. O&B is also listed as a Google Enterprise Partner in the Philippines. The company's Google Apps implementations include nationwide change management training and deployment of 600,000 licenses for a government institution, and migration from a legacy system for the Department of Finance and for the largest Filipino software firm - Pointwest Technologies. The company also offers software training classes in the Philippines, with courses that include Java Bootcamp, Spring and Hibernate Framework training, Groovy and Grails training, Apache Struts training, JavaServer Faces training, Enterprise Architecture, Agile with Scrum (development) and XP training, and Android training through its O&B University program. Incubation Through the company’s technopreneurship program, O&B incubated the Google Maps-based real estate search site, Haybol.ph in 2010, founded by O&B employees Edge Dalmacio and Lorenzo Dee. O&B also incubated Kandroid, an Android application developed by MilObjects Software, that is "a simple application used to visualize and control personal tasks." This application can be downloaded from Google Play. 
In 2012, it incubated Open for Business Online (O4BO), a subscription service that follows the SaaS delivery model and offers Liferay, an open source enterprise portal; Openbravo, a web-based ERP; Apache Jackrabbit, a content repository; and SugarCRM, a CRM system. O4BO was conceptualized by Michael Oliver. Open source Orange and Bronze is a proponent of open-source software technologies. The company encourages its employees to contribute to open source projects. Renato “Butch” Landingin, co-founder and chief technology officer, is the author of an open source application called Squishdot, written in Python and Zope. Squishdot is a news and publishing content management system used by KDE Dot News. He also authored the Batch Jobs Management Console (Batman-Con), a web-based application written in Grails (framework) to monitor batch runs of Spring Batch-based batch jobs. Michael Mallete, the vice president for consulting services, developed the open source applications S2PFace, Grails SoundManager Plugin and Robot Framework Maven Plugin. Lorenzo Dee, AVP for consulting services, developed the Java Computer Interface to Message Distribution API (JCIMD), a simple Java implementation of the CIMD protocol. References Java (programming language) software Software companies of the Philippines Companies based in Makati
42744952
https://en.wikipedia.org/wiki/Motithang%20Higher%20Secondary%20School
Motithang Higher Secondary School
Motithang Higher Secondary School (Dzongkha: མུ་ཏིག་ཐང་འབྲིང་རིམ་སློབ་གྲྭ་གོང་མ) is a government high school in the capital city of Thimphu, Bhutan. It was established in 1975. Motithang translates to The Meadow of Pearls in English. Mr KP Nayar was the first principal of the school (formerly known as Thimphu School). Location It is located 1 mile north of the main city, in Motithang. The school shares its border with Jigme Namgyel Lower Secondary School. Academic Session For all government schools in Bhutan, the academic year is February–December. The two-week summer break commences on the first day of July. Houses The school named each of its houses after the current King of Bhutan, His Majesty Jigme Khesar Namgyel Wangchuck, as a mark of utmost respect. They are: Druk House Gyalpo House Jigme House Khesar House Namgyel House Wangchuck House Each house is led by a house master, a house mistress and a group of teachers. A total of four student representatives are elected to work in the house administration. A boy and a girl vice captain are selected yearly from the eleventh graders and then promoted to full house captains in the following year. Education The subjects for ninth and tenth graders include mathematics, English, Dzongkha, physics, chemistry, biology, computer science, economics, history and geography. The medium of instruction is English for all subjects except for the Dzongkha course. Entrance to the eleventh and twelfth grades requires a student to obtain marks above the required baseline, i.e. the cut-off percentage for entry into government high school. The courses for eleventh and twelfth graders are divided into three major streams. They are: Science Courses in this field include English, Dzongkha, physics, chemistry, computer science (only for Information technology students) and mathematics/biology as a choice. Commerce Courses in this field include English, Dzongkha, commerce, accountancy, business-mathematics and economics/computer science as a choice. Arts and Humanities Courses in this field include English, Dzongkha, history, geography and Rigzhung. English literature/business mathematics and economics/computer science are offered as choices. Faculty The school is now headed by Madam Jigme Choden, formerly the principal of Changzamtog Lower Secondary School. The vice principal of the school is Mrs. Tshering Zam, and the school is run by a staff of 70. The faculty includes approximately fourteen non-teaching staff members helping in the school's management. Student Body – Student Representative Council The Council consists of two presidents, two vice presidents, the general captains and PYP (Police Youth Partnership) volunteers. The student council serves as an important consultative body in the school's management, helping the school in matters concerning the students. This body serves as an important link between the students and the teachers of the school. Achievements The Ministry of Education has also introduced a new system to rank schools in Bhutan called the Performance Management System (PMS). This system uses three criteria (practices, GNH and academic) to determine whether a school is performing to the expected level or not. At the Annual Education Conference held in December 2010 at Phuntsholing, Motithang Higher Secondary School was ranked the third best-performing school.
Students of grades ten and twelve in Bhutan have to sit for the examinations conducted by the Bhutan Council for School Examination and Assessment (BCSEA). In the BCSE and BHSEC standardized examinations set by the council for classes ten and twelve respectively, many students from the school have placed at the top. 2007 Mr. Bhuwan Giri (88.75%), a science student, topped the BHSEC examination. Mr. Tshering Penjore (92.40%) and Ms. Gitanjali Lamichaney (89.40%) took second and third place in the BCSE examination. 2008 Ms. Chimmi Dema (91.60%) took third position in the 2008 BCSE class 10 examination. 2009 Mr. Yeshey Samdrup (90%) of the science stream topped the BHSEC examination. 2010 Ms. Tshering Dolkar (87.5%) and Ms. Kesang Yuden (87.25%) from the arts stream and Mr. Tandin Dorji (86.5%) from the science stream took first, second and third place respectively in the BHSEC examination. 2012 Mr. Sonam Tsheten (87.75%) from the arts stream took third place in the BHSEC examination. Ms. Karma Choden (85.50%) took first position in the Commerce stream category in the BHSEC examination. Ms. Sonam Tshomo (94%) and Ms. Divya Chhetri tied for third place in the BCSE examination. 2013 Ms. Deki Tshomo Rinzin (92%) took second position in the BCSE examination. 2014 Ms. Ngawang Phuntsho (93%) topped the country in the BCSE examination. 2016 Ms. Ugyen Lhamo (86%) took second position in the country from the science stream in the BHSEC examination. References Schools in Bhutan Educational institutions established in 1975
39469106
https://en.wikipedia.org/wiki/History%20of%20numerical%20control
History of numerical control
The history of numerical control (NC) began when the automation of machine tools first incorporated concepts of abstractly programmable logic, and it continues today with the ongoing evolution of computer numerical control (CNC) technology. The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the controls to follow points fed into the system on punched tape. These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized the machining processes. Earlier forms of automation Cams The automation of machine tool control began in the 19th century with cams that "played" a machine tool in the way that cams had long been playing musical boxes or operating elaborate cuckoo clocks. Thomas Blanchard built his gun-copying lathes (1820s–30s), and the work of people such as Christopher Miner Spencer developed the turret lathe into the screw machine (1870s). Cam-based automation had already reached a highly advanced state by World War I (1910s). However, automation via cams is fundamentally different from numerical control because it cannot be abstractly programmed. Cams can encode information, but getting the information from the abstract level (engineering drawing, CAD model, or other design intent) into the cam is a manual process that requires machining or filing. In contrast, numerical control allows information to be transferred from design intent to machine control using abstractions such as numbers and programming languages. Various forms of abstractly programmable control had existed during the 19th century: those of the Jacquard loom, player pianos, and mechanical computers pioneered by Charles Babbage and others. These developments had the potential for convergence with the automation of machine tool control starting in that century, but the convergence did not happen until many decades later. Tracer control The application of hydraulics to cam-based automation resulted in tracing machines that used a stylus to trace a template, such as the enormous Pratt & Whitney "Keller Machine", which could copy templates several feet across. Another approach was "record and playback", pioneered at General Motors (GM) in the 1950s, which used a storage system to record the movements of a human machinist, and then play them back on demand. Analogous systems are common even today, notably the "teaching lathe" which gives new machinists a hands-on feel for the process. None of these were numerically programmable, however, and required an experienced machinist at some point in the process, because the "programming" was physical rather than numerical. Servos and synchros One barrier to complete automation was the required tolerances of the machining process, which are routinely on the order of thousandths of an inch. Although connecting some sort of control to a storage device like punched cards was easy, ensuring that the controls were moved to the correct position with the required accuracy was another issue. The movement of the tool resulted in varying forces on the controls that would mean a linear input would not result in linear tool motion. In other words, a control such as that of the Jacquard loom could not work on machine tools because its movements were not strong enough; the metal being cut "fought back" against it with more force than the control could properly counteract. 
The key development in this area was the introduction of the servomechanism, which produced powerful, controlled movement, with highly accurate measurement information. Attaching two servos together produced a synchro, where a remote servo's motions were accurately matched by another. Using a variety of mechanical or electrical systems, the output of the synchros could be read to ensure proper movement had occurred (in other words, forming a closed-loop control system). The first serious suggestion that synchros could be used for machining control was made by Ernst F. W. Alexanderson, a Swedish immigrant to the U.S. working at General Electric (GE). Alexanderson had worked on the problem of torque amplification that allowed the small output of a mechanical computer to drive very large motors, which GE used as part of a larger gun laying system for US Navy ships. Like machining, gun laying requires very high accuracy – fractions of a degree – and the forces during the motion of the gun turrets was non-linear, especially as the ships pitched in waves. In November 1931 Alexanderson suggested to the Industrial Engineering Department that the same systems could be used to drive the inputs of machine tools, allowing it to follow the outline of a template without the strong physical contact needed by existing tools like the Keller Machine. He stated that it was a "matter of straight engineering development". However, the concept was ahead of its time from a business development perspective, and GE did not take the matter seriously until years later, when others had pioneered the field. Parsons Corp. and Sikorsky The birth of NC is generally credited to John T. Parsons and Frank L. Stulen, working out of Parsons Corp. of Traverse City, Michigan. For this contribution, they were jointly awarded the National Medal of Technology in 1985 for "Revolutioniz[ing] Production Of Cars And Airplanes With Numerical Controls For Machines". In 1942, Parsons was told that helicopters were going to be the "next big thing" by the former head of Ford Trimotor production, Bill Stout. He called Sikorsky Aircraft to inquire about possible work, and soon got a contract to build the wooden stringers in the rotor blades. At the time, rotor blades (rotary wings) were built in the same fashion that fixed wings were, consisting of a long tubular steel spar with stringers (or more accurately ribs) set on them to provide the aerodynamic shape that was then covered with a stressed skin. The stringers for the rotors were built from a design provided by Sikorsky, which was sent to Parsons as a series of 17 points defining the outline. Parsons then had to "fill in" the dots with a French curve to generate an outline. A wooden jig was built up to form the outside of the outline, and the pieces of wood forming the stringer were placed under pressure against the inside of the jig so they formed the proper curve. A series of trusswork members were then assembled inside this outline to provide strength. After setting up production at a disused furniture factory and ramping up production, one of the blades failed and it was traced to a problem in the spar. At least some of the problem appeared to stem from spot welding a metal collar on the stringer to the metal spar. The collar was built into the stringer during construction, then slid onto the spar and welded in the proper position. Parsons suggested a new method of attaching the stringers directly to the spar using adhesives, never before tried on an aircraft design. 
That development led Parsons to consider the possibility of using stamped metal stringers instead of wood. These would not only be much stronger, but far easier to make as well, as they would eliminate the complex layup and glue and screw fastening on the wood. Duplicating this in a metal punch would require the wooden jig to be replaced by a metal cutting tool made of tool steel. Such a device would not be easy to produce given the complex outline. Looking for ideas, Parsons visited Wright Field to see Frank L. Stulen, the head of the Propeller Lab Rotary Wing Branch. During their conversation, Stulen concluded that Parsons didn't really know what he was talking about. Parsons realized Stulen had reached this conclusion, and hired him on the spot. Stulen started work on 1 April 1946 and hired three new engineers to join him.

Stulen's brother worked at Curtiss-Wright Propeller, and mentioned that they were using punched card calculators for engineering calculations. Stulen decided to adapt the idea to run stress calculations on the rotors, the first detailed automated calculations on helicopter rotors. When Parsons saw what Stulen was doing with the punched card machines, he asked Stulen if they could be used to generate an outline with 200 points instead of the 17 they were given, and offset each point by the radius of a mill cutting tool. Cutting at each of those points would produce a relatively accurate cutout of the stringer, which could be cut in tool steel and then easily filed down to a smooth template for stamping metal stringers. Stulen had no problem writing such a program, and used it to produce large tables of numbers that would be taken onto the machine floor. Here, one operator read the numbers off the charts to two other operators, one on each of the X- and Y-axes. For each pair of numbers the operators would move the cutting head to the indicated spot and then lower the tool to make the cut. This was called the "by-the-numbers method", or more technically, "plunge-cutting positioning". It was a labor-intensive prototype of today's 2.5-axis machining (two-and-a-half-axis machining).

Punch cards and first tries at NC
At that point Parsons conceived of a fully automated machine tool. With enough points on the outline, no manual working would be needed to clean it up. However, with manual operation the time saved by having the part more closely match the outline was offset by the time needed to move the controls. If the machine's inputs were attached directly to the card reader, this delay, and any associated manual errors, would be removed and the number of points could be dramatically increased. Such a machine could repeatedly punch out perfectly accurate templates on command. But at the time Parsons had no funds to develop his ideas.

When one of Parsons's salesmen was on a visit to Wright Field, he was told of the problems the newly formed US Air Force was having with new jet-powered designs. He asked if Parsons had anything to help them. Parsons showed Lockheed their idea of an automated mill, but they were uninterested. They decided to use 5-axis template copiers to produce the stringers, cutting from a metal template, and had already ordered the expensive cutting machine. But as Parsons noted:

Now just picture the situation for a minute. Lockheed had contracted to design a machine to make these wings. This machine had five axes of cutter movement, and each of these was tracer controlled using a template.
Nobody was using my method of making templates, so just imagine what chance they were going to have of making an accurate airfoil shape with inaccurate templates.

Parsons's worries soon came true, and Lockheed's protests that they could fix the problem eventually rang hollow. In 1949 the Air Force arranged funding for Parsons to build his machines on his own. Early work with Snyder Machine & Tool Corp proved that the system of directly driving the controls from motors failed to give the accuracy needed to set the machine for a perfectly smooth cut. Since the mechanical controls did not respond in a linear fashion, they could not simply be driven with a given amount of power: the differing forces meant the same amount of power would not always produce the same amount of motion in the controls. No matter how many points were included, the outline would still be rough. Parsons was confronted by the same problem that had prevented convergence of Jacquard-type controls with machining.

First commercial numerically controlled machine
In 1952, Arma Corporation, which had done much defense work on rangefinders during the war, announced the first commercial numerically controlled lathe, developed by Dr. F. W. Cunningham. Arma's first automated lathe was made in 1948, and announced in 1950.

Parsons Corp. and MIT
This was not an impossible problem to solve, but it would require some sort of feedback system, like a selsyn, to directly measure how far the controls had actually turned. Faced with the daunting task of building such a system, in the spring of 1949 Parsons turned to Gordon S. Brown's Servomechanisms Laboratory at MIT, which was a world leader in mechanical computing and feedback systems. During the war the Lab had built a number of complex motor-driven devices, like the motorized gun turret systems for the Boeing B-29 Superfortress and the automatic tracking system for the SCR-584 radar. They were naturally suited to technological transfer into a prototype of Parsons's automated "by-the-numbers" machine.

The MIT team was led by William Pease, assisted by James McDonough. They quickly concluded that Parsons's design could be greatly improved: if the machine did not simply cut at points A and B, but instead moved smoothly between the points, then not only would it make a perfectly smooth cut, but it could do so with many fewer points – the mill could cut lines directly instead of having to define a large number of cutting points to "simulate" a line. A three-way agreement was arranged between Parsons, MIT, and the Air Force, and the project officially ran from July 1949 to June 1950. The contract called for the construction of two "Card-a-matic Milling Machines", a prototype and a production system, both to be handed to Parsons for attachment to one of their mills in order to develop a deliverable system for cutting stringers. Instead, in 1950 MIT bought a surplus Cincinnati Milling Machine Company "Hydro-Tel" mill of their own and arranged a new contract directly with the Air Force that froze Parsons out of further development. Parsons would later comment that he "never dreamed that anybody as reputable as MIT would deliberately go ahead and take over my project."

In spite of the development being handed to MIT, Parsons filed for a patent on "Motor Controlled Apparatus for Positioning Machine Tool" on 5 May 1952, sparking a filing by MIT for a "Numerical Control Servo-System" on 14 August 1952. Parsons received US Patent 2,820,187 on 14 January 1958, and the company sold an exclusive license to Bendix.
IBM, Fujitsu and General Electric all took sub-licenses after having already started development of their own devices. MIT's machine MIT fitted gears to the various handwheel inputs and drove them with roller chains connected to motors, one for each of the machine's three axes (X, Y, and Z). The associated controller consisted of five refrigerator-sized cabinets that, together, were almost as large as the mill they were connected to. Three of the cabinets contained the motor controllers, one controller for each motor, the other two the digital reading system. Unlike Parsons's original punched card design, the MIT design used standard 7-track punch tape for input. Three of the tracks were used to control the different axes of the machine, while the other four encoded various control information. The tape was read in a cabinet that also housed six relay-based hardware registers, two for each axis. With every read operation the previously read point was copied into the "starting point" register, and the newly read one into the "ending point" register. The tape was read continually and the number in the registers incremented with each hole encountered in their control track until a "stop" instruction was encountered, four holes in a line. The final cabinet held a clock that sent pulses through the registers, compared them, and generated output pulses that interpolated between the points. For instance, if the points were far apart the output would have pulses with every clock cycle, whereas closely spaced points would only generate pulses after multiple clock cycles. The pulses were sent into a summing register in the motor controllers, counting up by the number of pulses every time they were received. The summing registers were connected to a digital to analog converter that increased power to the motors as the count in the registers increased, making the controls move faster. The registers were decremented by encoders attached to the motors and the mill itself, which would reduce the count by one for every one degree of rotation. Once the second point was reached the counter would hold a zero, the pulses from the clock would stop, and the motors would stop turning. Each 1 degree rotation of the controls produced a 0.0005 inch movement of the cutting head. The programmer could control the speed of the cut by selecting points that were closer together for slow movements, or further apart for rapid ones. The system was publicly demonstrated in September 1952, appearing in that month's Scientific American. MIT's system was an outstanding success by any technical measure, quickly making any complex cut with extremely high accuracy that could not easily be duplicated by hand. However, the system was terribly complex, including 250 vacuum tubes, 175 relays and numerous moving parts, reducing its reliability in a production environment. It was also expensive; the total bill presented to the Air Force was $360,000.14 ($2,641,727.63 in 2005 dollars). Between 1952 and 1956 the system was used to mill a number of one-off designs for various aviation firms, in order to study their potential economic impact. Proliferation of NC The Air Force Numeric Control and Milling Machine projects formally concluded in 1953, but development continued at the Giddings and Lewis Machine Tool Co. and other locations. In 1955 many of the MIT team left to form Concord Controls, a commercial NC company with Giddings' backing, producing the Numericord controller. 
Numericord was similar to the MIT design, but replaced the punch tape with a magnetic tape reader that General Electric was working on. The tape contained a number of signals of different phases, which directly encoded the angle of the various controls. The tape was played at a constant speed in the controller, which set its half of the selsyn to the encoded angles while the remote side was attached to the machine controls. Designs were still encoded on paper tape, but the tapes were transferred to a reader/writer that converted them into magnetic form. The magtapes could then be used on any of the machines on the floor, where the controllers were greatly reduced in complexity. Developed to produce highly accurate dies for an aircraft skinning press, the Numericord "NC5" went into operation at G&L's plant at Fond du Lac, WI in 1955.

Monarch Machine Tool also developed a numerically controlled lathe, starting in 1952. They demonstrated their machine at the 1955 Chicago Machine Tool Show (predecessor of today's IMTS), along with a number of other vendors offering punched card or paper tape machines that were either fully developed or in prototype form. These included Kearney and Trecker's Milwaukee-Matic II, which could change its cutting tool under numerical control, a common feature on modern machines. A Boeing report noted that "numerical control has proved it can reduce costs, reduce lead times, improve quality, reduce tooling and increase productivity."

In spite of these developments, and glowing reviews from the few users, uptake of NC was relatively slow. As Parsons later noted:

The NC concept was so strange to manufacturers, and so slow to catch on, that the US Army itself finally had to build 120 NC machines and lease them to various manufacturers to begin popularizing its use.

In 1958 MIT published its report on the economics of NC. It concluded that the tools were competitive with human operators, but simply moved the time from the machining to the creation of the tapes. In Forces of Production, Noble claims that this was the whole point as far as the Air Force was concerned: moving the process off the highly unionized factory floor and into the non-unionized white-collar design office. The cultural context of the early 1950s, a second Red Scare with a widespread fear of a bomber gap and of domestic subversion, sheds light on this interpretation. It was strongly feared that the West would lose the defense production race to the Communists, and that syndicalist power was a path toward losing, either by "getting too soft" (less output, greater unit expense) or even by Communist sympathy and subversion within unions (arising from their common theme of empowering the working class).

Aside from whatever economic inefficiencies the first attempts at NC displayed, the time and effort required in the creation of the tapes also introduced possibilities for production errors. This was one motivation for the Air Force contracts under way in 1958, such as the Automatically Programmed Tool project, and for Douglas (Doug) T. Ross's 1960 report, and later project, Computer-Aided Design: A Statement of Objectives.

CNC arrives
Many of the commands for the experimental parts were programmed "by hand" to produce the punch tapes that were used as input. During the development of Whirlwind, MIT's real-time computer, John Runyon coded a number of subroutines to produce these tapes under computer control.
Users could enter a list of points and speeds, and the program would calculate the points needed and automatically generate the punch tape. In one instance, this process reduced the time required to produce the instruction list and mill the part from 8 hours to 15 minutes. This led to a proposal to the Air Force to produce a generalized "programming" language for numerical control, which was accepted in June 1956. Doug Ross was given leadership of the project and was made head of another newly created MIT research department. He chose to name the unit the Computer Applications Group feeling the word "application" fit with the vision that general purpose machines could be "programmed" to fill many roles. Starting in September, Ross and Pople outlined a language for machine control that was based on points and lines, developing this over several years into the APT programming language. In 1957 the Aircraft Industries Association (AIA) and Air Material Command at Wright-Patterson Air Force Base joined with MIT to standardize this work and produce a fully computer-controlled NC system. On 25 February 1959 the combined team held a press conference showing the results, including a 3D machined aluminum ash tray that was handed out in the press kit. In 1959 they also described the use of APT on a 60-foot mill at Boeing since 1957. Meanwhile, Patrick Hanratty was making similar developments at GE as part of their partnership with G&L on the Numericord. His language, PRONTO, beat APT into commercial use when it was released in 1958. Hanratty then went on to develop MICR magnetic ink characters that were used in cheque processing, before moving to General Motors to work on the groundbreaking DAC-1 CAD system. APT was soon extended to include "real" curves in 2D-APT-II. With its release into the Public Domain, MIT reduced its focus on NC as it moved into CAD experiments. APT development was picked up with the AIA in San Diego, and in 1962, by Illinois Institute of Technology Research. Work on making APT an international standard started in 1963 under USASI X3.4.7, but any manufacturers of NC machines were free to add their own one-off additions (like PRONTO), so standardization was not completed until 1968, when there were 25 optional add-ins to the basic system. Just as APT was being released in the early 1960s, a second generation of lower-cost transistorized computers was hitting the market that were able to process much larger volumes of information in production settings. This reduced the cost of programming for NC machines and by the mid-1960s, APT runs accounted for a third of all computer time at large aviation firms. CADCAM meets CNC While the Servomechanisms Lab was in the process of developing their first mill, in 1953, MIT's Mechanical Engineering Department dropped the requirement that undergraduates take courses in drawing. The instructors formerly teaching these programs were merged into the Design Division, where an informal discussion of computerized design started. Meanwhile, the Electronic Systems Laboratory, the newly rechristened Servomechanisms Laboratory, had been discussing whether or not design would ever start with paper diagrams in the future. In January 1959, an informal meeting was held involving individuals from both the Electronic Systems Laboratory and the Mechanical Engineering Department's Design Division. Formal meetings followed in April and May, which resulted in the "Computer-Aided Design Project". 
In December 1959, the Air Force issued a one-year contract to ESL for $223,000 to fund the project, including $20,800 earmarked for 104 hours of computer time at $200 per hour. This proved to be far too little for the ambitious program they had in mind, although in 1959 it was a considerable sum: newly graduated engineers were making perhaps $500 to $600 per month at the time. To augment the Air Force's commitment, Ross repeated the cooperative model that had made APT a success. The AED Cooperative Program, which ultimately ran for a five-year period, drew on deeply experienced design staff loaned by outside companies, some of whom relocated to MIT for anywhere from half a year to 14 or 18 months at a time. Ross later estimated the value of this support for AED development work, systems research and compilers at almost six million dollars. AED was a machine-independent software engineering effort and an extension of ALGOL 60, the standard used by research computer scientists for the publication of algorithms. Development started out in parallel on the IBM 709 and the TX-0, which later enabled projects to run at various sites. The engineering calculation and systems development system, AED, was released to the Public Domain in March 1965.

In 1959, General Motors started an experimental project to digitize, store and print the many design sketches being generated in the various GM design departments. When the basic concept demonstrated that it could work, they started the DAC-1 – Design Augmented by Computer – project with IBM to develop a production version. One part of the DAC project was the direct conversion of paper diagrams into 3D models, which were then converted into APT commands and cut on milling machines. In November 1963 a design for the lid of a trunk moved from 2D paper sketch to 3D clay prototype for the first time. With the exception of the initial sketch, the design-to-production loop had been closed.

Meanwhile, MIT's offsite Lincoln Labs was building computers to test new transistorized designs. The ultimate goal was essentially a transistorized Whirlwind known as TX-2, but in order to test various circuit designs a smaller version known as TX-0 was built first. When construction of TX-2 started, time on TX-0 freed up and this led to a number of experiments involving interactive input and use of the machine's CRT display for graphics. Further development of these concepts led to Ivan Sutherland's groundbreaking Sketchpad program on the TX-2. Sutherland moved to the University of Utah after his Sketchpad work, but it inspired other MIT graduates to attempt the first true CAD system: the Electronic Drafting Machine (EDM), sold to Control Data and known as "Digigraphics", which Lockheed used to build production parts for the C-5 Galaxy – the first example of an end-to-end CAD/CNC production system. By 1970 there was a wide variety of CAD firms including Intergraph, Applicon, Computervision, Auto-trol Technology, UGS Corp. and others, as well as large vendors like CDC and IBM.

Proliferation of CNC
The price of computer cycles fell drastically during the 1960s with the widespread introduction of useful minicomputers. Eventually it became less expensive to handle the motor control and feedback with a computer program than it was with dedicated servo systems. Small computers were dedicated to a single mill, placing the entire process in a small box. PDP-8s and Data General Nova computers were common in these roles.
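The claim that motor control and feedback could be handled "with a computer program" can be made concrete with a small sketch. The following Python fragment is a minimal, hypothetical illustration – not any historical controller's actual code – of a proportional position loop of the kind a dedicated minicomputer could run for each axis: read the encoder, compare it against the commanded position, and issue a velocity command. The SimulatedAxis class and its method names are invented stand-ins for the real machine interface so the example runs on its own.

class SimulatedAxis:
    """Stand-in for one machine axis: integrates the commanded velocity."""
    def __init__(self):
        self.position = 0.0   # encoder counts
        self.velocity = 0.0   # counts per second

    def read_encoder(self):
        return self.position

    def set_motor_velocity(self, counts_per_second):
        self.velocity = counts_per_second

    def advance(self, dt):
        # In a real machine the physics does this; here we simply integrate.
        self.position += self.velocity * dt

def servo_step(axis, target, gain=5.0, max_velocity=2000.0):
    """One pass of a proportional software servo: position error in, velocity command out."""
    error = target - axis.read_encoder()
    command = max(-max_velocity, min(max_velocity, gain * error))
    axis.set_motor_velocity(command)
    return error

if __name__ == "__main__":
    axis = SimulatedAxis()
    target = 1000.0            # desired position, in encoder counts
    dt = 0.01                  # control period: 100 updates per second
    for _ in range(2000):
        error = servo_step(axis, target)
        axis.advance(dt)
        if abs(error) < 0.5:   # within half a count of the target
            break
    print(f"final position: {axis.read_encoder():.1f} counts")

A real controller would run such a loop for every axis at a fixed rate and layer interpolation, acceleration limits and tool-offset handling on top, but the essential change – feedback closed in software rather than in dedicated electronics – is what made a single small computer per mill practical.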
The introduction of the microprocessor in the 1970s further reduced the cost of implementation, and today almost all CNC machines use some form of microprocessor to handle all operations. The introduction of lower-cost CNC machines radically changed the manufacturing industry. Curves are as easy to cut as straight lines, complex 3-D structures are relatively easy to produce, and the number of machining steps that require human action has been dramatically reduced. With the increased automation of manufacturing processes through CNC machining, considerable improvements in consistency and quality have been achieved with no strain on the operator. CNC automation reduced the frequency of errors and provided CNC operators with time to perform additional tasks. CNC automation also allows more flexibility in the way parts are held during manufacturing and reduces the time required to change the machine over to produce different components. Additionally, as CNC operators become more in demand, automation becomes a more viable choice than labor.

During the early 1970s the Western economies were mired in slow economic growth and rising employment costs, and NC machines started to become more attractive. The major U.S. vendors were slow to respond to the demand for machines suitable for lower-cost NC systems, and into this void stepped the Germans. In 1979, sales of German machines (e.g. Siemens Sinumerik) surpassed the U.S. designs for the first time. This cycle quickly repeated itself, and by 1980 Japan had taken a leadership position, with U.S. sales dropping all the while. Once sitting in the #1 position in terms of sales on a top-ten chart consisting entirely of U.S. companies in 1971, by 1987 Cincinnati Milacron was in 8th place on a chart heavily dominated by Japanese firms.

Many researchers have commented that the U.S. focus on high-end applications left American firms in an uncompetitive position when the economic downturn of the early 1970s led to greatly increased demand for low-cost NC systems. Unlike the U.S. companies, which had focused on the highly profitable aerospace market, German and Japanese manufacturers targeted lower-profit segments from the start and were able to enter the low-cost markets much more easily. Additionally, large Japanese companies established their own subsidiaries or strengthened their machine divisions to produce the machines they needed. This was seen as a national effort and was largely encouraged by MITI, the Japanese Ministry for International Trade and Industry. In the early years of the development, MITI provided focused resources for the transfer of technological know-how.

National efforts in the US remained focused on integrated manufacturing, reflecting the historical perspective maintained by the defense sector. This evolved in the later 1980s, as the so-called machine tool crisis was recognized, into a number of programs that sought to broaden the transfer of know-how to domestic tool makers; the Air Force-sponsored Next Generation Controller Program of 1989 is one example. This process has continued through the 1990s to the present day, through DARPA incubators and myriad research grants.

As computing and networking evolved, so did direct numerical control (DNC). Its long-term coexistence with less networked variants of NC and CNC is explained by the fact that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited.
This explains why machine tool models and tape storage media persist in grandfathered fashion even as the state of the art advances. DIY, hobby, and personal CNC Recent developments in small scale CNC have been enabled, in large part, by the Enhanced Machine Controller project in 1989 from the National Institute of Standards and Technology (NIST), an agency of the US Government's Department of Commerce. EMC [LinuxCNC] is a public domain program operating under the Linux operating system and working on PC based hardware. After the NIST project ended, development continued, leading to LinuxCNC which is licensed under the GNU General Public License and Lesser GNU General Public License (GPL and LGPL). Derivations of the original EMC software have also led to several proprietary low cost PC based programs notably TurboCNC, and Mach3, as well as embedded systems based on proprietary hardware. The availability of these PC based control programs has led to the development of DIY CNC, allowing hobbyists to build their own using open source hardware designs. The same basic architecture has allowed manufacturers, such as Sherline and Taig, to produce turnkey lightweight desktop milling machines for hobbyists. The easy availability of PC based software and support information of Mach3, written by Art Fenerty, lets anyone with some time and technical expertise make complex parts for home and prototype use. Fenerty is considered a principal founder of Windows-based PC CNC machining. Eventually, the homebrew architecture was fully commercialized and used to create larger machinery suitable for commercial and industrial applications. This class of equipment has been referred to as Personal CNC. Parallel to the evolution of personal computers, Personal CNC has its roots in EMC and PC based control, but has evolved to the point where it can replace larger conventional equipment in many instances. As with the Personal Computer, Personal CNC is characterized by equipment whose size, capabilities, and original sales price make it useful for individuals, and which is intended to be operated directly by an end user, often without professional training in CNC technology. Today Tape readers may still be found on current CNC facilities, since machine tools have a long operating life. Other methods of transferring CNC programs to machine tools, such as diskettes or direct connection of a portable computer, are also used. Punched mylar tapes are more robust. Floppy disks, USB flash drives and local area networking have replaced the tapes to some degree, especially in larger environments that are highly integrated. The proliferation of CNC led to the need for new CNC standards that were not encumbered by licensing or particular design concepts, like proprietary extensions to APT. A number of different "standards" proliferated for a time, often based around vector graphics markup languages supported by plotters. One such standard has since become very common, the "G-code" that was originally used on Gerber Scientific plotters and then adapted for CNC use. The file format became so widely used that it has been embodied in an EIA standard. In turn, while G-code is the predominant language used by CNC machines today, there is a push to supplant it with STEP-NC, a system that was deliberately designed for CNC, rather than grown from an existing plotter standard. 
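To give a feel for the notation discussed above, the sketch below uses Python to emit a tiny G-code program tracing a rectangular outline. It is illustrative only: G21, G90, G0, G1 and M30 are common RS-274-style codes, but dialects vary between controllers, and the rectangle_program function, its parameters and the coordinates are hypothetical. Driving the same logic with different parameters also hints at the parametric style of programming described in the next paragraph.

def rectangle_program(width, height, depth, feed=300.0):
    """Return a list of G-code lines tracing a width x height rectangle."""
    return [
        "G21",                              # millimetre units
        "G90",                              # absolute coordinates
        "G0 X0 Y0 Z5",                      # rapid move to the start point, tool raised
        f"G1 Z{-depth:.3f} F{feed:.0f}",    # plunge to cutting depth at the given feed rate
        f"G1 X{width:.3f} Y0",              # cut along the bottom edge
        f"G1 X{width:.3f} Y{height:.3f}",   # right edge
        f"G1 X0 Y{height:.3f}",             # top edge
        "G1 X0 Y0",                         # back to the start, closing the rectangle
        "G0 Z5",                            # retract the tool
        "M30",                              # end of program
    ]

if __name__ == "__main__":
    # The same logic, driven by different parameters, yields a whole family of
    # parts -- the idea behind the parametric (macro) programming described below.
    for scale in (1.0, 2.5):
        print(f"--- {60 * scale:.0f} x {40 * scale:.0f} mm outline ---")
        print("\n".join(rectangle_program(60 * scale, 40 * scale, depth=2.0)))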
While G-code is the most common method of programming, some machine-tool/control manufacturers have also invented their own proprietary "conversational" methods of programming, trying to make it easier to program simple parts and to make set-up and modifications at the machine easier (such as Mazak's Mazatrol, Okuma's IGF, and Hurco). These have met with varying success.

A more recent advancement in CNC interpreters is support for logical commands, known as parametric programming (also known as macro programming). Parametric programs include both device commands and a control language similar to BASIC. The programmer can write if/then/else statements, loops and subprogram calls, perform various arithmetic operations, and manipulate variables to create a large degree of freedom within one program. An entire product line of different sizes can be programmed using logic and simple math to create and scale an entire range of parts, or to create a stock part that can be scaled to any size a customer demands.

Since about 2006, the idea has been suggested and pursued of bringing to CNC and DNC several trends from elsewhere in the world of information technology that have not yet much affected them. One of these trends is the combination of greater data collection (more sensors), greater and more automated data exchange (via new, open industry-standard XML schemas), and data mining to yield a new level of business intelligence and workflow automation in manufacturing. Another is the emergence of widely published APIs together with the aforementioned open data standards to encourage an ecosystem of user-generated apps and mashups, which can be both open and commercial – in other words, taking the new IT culture of app marketplaces that began in web development and smartphone app development and spreading it to CNC, DNC, and the other factory automation systems that are networked with them. MTConnect is a leading effort to bring these ideas into successful implementation.

See also
History of manufacturing

References

Cited sources

Further reading
Herrin, Golden E. "Industry Honors The Inventor Of NC", Modern Machine Shop, 12 January 1998.
Siegel, Arnold. "Automatic Programming of Numerically Controlled Machine Tools", Control Engineering, Volume 3, Issue 10 (October 1956), pp. 65–70.
Vasilash, Gary. "Man of Our Age", Automotive Design & Production.

Computer-aided engineering CNC
39791384
https://en.wikipedia.org/wiki/CarPlay
CarPlay
CarPlay is an Apple standard that enables a car radio or head unit to be a display and a controller for an iOS device. It is available on all iPhone models beginning with the iPhone 5 running iOS 7.1 or later. According to Apple's website, all major vehicle manufacturers are using CarPlay. Vehicles without CarPlay can be fitted with vehicle audio products from automotive aftermarket suppliers.

Software
Apple's own CarPlay-enabled apps include:
Phone
Apple Music
Apple Maps
Calendar
Messages
Audiobooks (part of Apple Books)
Podcasts
Settings
News
Developers must request permission from Apple to develop CarPlay-enabled apps. Such apps fall into several categories:
Audio: primarily provide audio content, such as music or podcasts. Examples: Amazon Music, Audible, Google Play Music, iHeartRadio, QQ Music, Spotify, and Overcast.
Navigation: turn-by-turn guidance, including searching for points of interest and navigating to a destination. Examples: AutoNavi, Baidu Maps, Google Maps, and Waze.
Automaker-made apps: allow a user to control vehicle-specific features such as climate controls, gas levels, or radio via CarPlay.
Messaging/Voice over IP (VoIP): listen to new messages and reply using dictation in an audio-only interface. Messaging apps on CarPlay integrate with third-party Siri support (known as SiriKit), while VoIP apps integrate with the iOS calling interface using CallKit. Examples: Telegram, WhatsApp, and Zoom.
Food ordering and parking services apps.
To discourage distracted driving, Siri is used extensively, providing voice turn-by-turn navigation guidance and voice input for text messages. Newscast-style weather and stock results are announced instead of displayed visually. Requests that bring up visual information may be blocked when CarPlay is in use; most native CarPlay apps deliver audio content with minimal interaction. CarPlay-enabled apps installed on the device also appear on the CarPlay home screen.

Hardware
While most of the CarPlay software runs on the connected iPhone, the CarPlay interface provides the audio and display connection to the car's infotainment system. CarPlay adapts to various display sizes and control interfaces for each vehicle: touch screen, rotary dials, buttons, steering-wheel controls and hands-free microphones. Aftermarket head units may support CarPlay and/or Android Auto; they can be purchased from Alpine, Clarion, Kenwood, Pioneer, Sony and JVC. The iPhone can connect to the car through a USB cable or wirelessly in two ways: by exchanging network credentials with a supporting CarPlay receiver over Bluetooth, establishing a two-way Wi-Fi connection; or by using a dongle adapter to enable a wireless connection to the system's USB port.

Manufacturers
Most major automakers offer vehicles with CarPlay. Manufacturers with no CarPlay models include Lada and Tesla Motors. Honda offers CarPlay on the Gold Wing motorcycle and on the Africa Twin.

History

Predecessor
The concept of CarPlay was based on the iOS 4 feature called "iPod Out" that was produced through several years of joint development by Apple and the BMW Group's Technology Office USA. iPod Out enabled vehicles with the necessary infrastructure to "host" the analog video and audio from a supporting iOS device while receiving inputs, such as button presses and knob rotations, from a car's infotainment system, to drive the "hosted" user interface in the vehicle's built-in display. It was announced at WWDC 2010 and first shipped in BMW Group vehicles in early 2011.
The BMW and Mini option was called "PlugIn" and paved the way for the first cross-OEM platforms, introducing the concept of requiring a car-specific interface for apps (as opposed to MirrorLink's simple and insufficient mirroring of what was shown on the smartphone's screen). Development CarPlay's codename was Stark. Apple's Eddy Cue announced it as iOS in the Car at WWDC 2013. In January 2014 it was reported that Apple's hardware-oriented corporate culture had led to release delays. iOS in the Car was then rebranded and launched as "CarPlay" at the Geneva Motor Show in March 2014 with Ferrari, Mercedes-Benz, and Volvo among the first car manufacturers. Features by iOS version iOS 9 iOS 9 added the ability to link car and iPhone wirelessly, not just a wired USB connector. It also allowed vehicle manufacturers to load apps that allow a user to control vehicle-specific features such as climate controls or radio via CarPlay. iOS 10 iOS 10's Messages app allows the user to listen to new messages and reply using dictation in an audio-only interface. iOS 12 iOS 12 added turn-by-turn guidance, including searching for points of interests and navigating to a destination, as well as support for third-party navigation apps like Google Maps or Waze. iOS 13 iOS 13 added Dashboard, an alternative to the app home screen, that presents a split layout of maps, media information, Calendar, or Siri Suggestions. It also added Calendar to the home screen, allowing suggested events to link towards map directions to the event location. A new Settings app enabled users to configure certain CarPlay specific settings, such as switching between light and dark modes, adjusting album art in CarPlay’s Now Playing screen, or enabling Do Not Disturb While Driving while in a CarPlay session. Third-party maps may also be displayed on Dashboard starting with iOS 13.4. It also added Apple's News app. iOS 14 iOS 14 added new preset wallpapers and the ability to run food ordering and parking services apps. iOS 15 iOS 15 brought improvements to Apple Maps and Focus modes, allowing users to customize (prioritize or postpone) notification delivery, particularly while driving. Adoption June 2013: BMW officials announced that their cars would not support CarPlay; they later changed their minds. November 2013: Siri Eyes Free mode was offered as a dealer-installed accessory in the US to some Honda Accord and Acura RDX & ILX models. In December, Honda offered additional integration, featuring new HondaLink services, on some US and Canada models of the Civic and the Fit. September 2014: A Ferrari FF was the first car with a full version of CarPlay. November 2014: Hyundai announced the Sonata sedan will be available with CarPlay by the end of the first quarter of 2015. December 2015: Volvo implemented CarPlay in the 2016 XC90. January 2016: Apple released a list detailing the car models which support CarPlay. April 2017: The new generation Scania range became the first heavy duty truck in Europe to support CarPlay. July 2017: The new Volvo VNL became the first heavy duty truck in the United States to support CarPlay. October 2017: The 2018 Honda Gold Wing became the first motorcycle to support CarPlay. January 2018: The 2019 Toyota Avalon became the first Toyota model with Apple CarPlay. July 2018: Mazda added CarPlay support to vehicles newer than 2013 and equipped with the MZD-Connect system. August 2018: Harley-Davidson CarPlay support was added to 2019 Touring models equipped with Boom! Box GTS radio. 
December 2019: BMW no longer requires a subscription to use CarPlay. Competition The Open Automotive Alliance's Android Auto is a similar implementation used for Android devices. Some vehicle manufacturers have their own systems for syncing the car with smartphones, for example: BMW ConnectedDrive, Hyundai Blue Link, iLane, MyFord Touch, Ford SYNC, OnStar, and Toyota Entune. General Motors has released an API to allow the development of apps that interact with vehicle software systems. MirrorLink is a standard for car-smartphone connectivity, currently implemented in vehicles by Honda, Volkswagen, SEAT, Buick, Skoda, Mercedes-Benz, Citroën, and Smart with phones by multiple manufacturers including Apple, HTC, Sony, and Samsung. Baidu CarLife is an automotive infotainment platform for iOS and Android for the Chinese market. References External links Apple Inc. software Automotive technology tradenames Dashboard head units IOS software Natural language processing software SRI International software Virtual assistants
46262052
https://en.wikipedia.org/wiki/Operation%20Cleaver
Operation Cleaver
Operation Cleaver, as labelled in a report by the American firm Cylance Inc. in late 2014, was a covert cyberwarfare operation targeting critical infrastructure organizations worldwide, allegedly planned and executed by Iran. Cylance's report was later tacitly acknowledged in a confidential report by the Federal Bureau of Investigation (FBI), though Iranian officials denied involvement in the operation.

Cylance report
In December 2014, the California-based cyber security firm Cylance Inc. published the results of a two-year investigation in an 86-page technical report, indicating that an operation, called "Operation Cleaver", had targeted military, oil and gas, energy and utilities, transportation, airline, airport, hospital and aerospace organizations worldwide. The title "Operation Cleaver" alludes to the frequent use of the word "cleaver" in the malware's code. According to the report, over 50 entities in 16 countries were hit by the campaign, based in the United States, Israel, China, Saudi Arabia, India, Germany, France and England, among others. Cylance's research does not name individual companies, but Reuters, citing "a person familiar with the research", reported that the Navy Marine Corps Intranet, Calpine, Saudi Aramco, Pemex, Qatar Airlines and Korean Air were among the specific targets. Stuart McClure, Cylance founder and CEO, believes that the hackers are sponsored by Iran and have ties to the Islamic Revolutionary Guard Corps.

FBI report
According to Reuters, the Federal Bureau of Investigation has filed a confidential "Flash" report, providing technical details about the malicious software and techniques used in the attacks. The technical document said the hackers typically launch their attacks from two IP addresses in Iran, but it does not attribute the attacks to the Iranian government. The FBI warned businesses to stay vigilant and to report any suspicious activity spotted on their computer systems.

Alleged victims' reaction
A Pemex spokesman said the company had not detected any attacks from the Iranian groups but was constantly monitoring. Muhammad Haneef Rana, a spokesman for Pakistan International Airlines, said he was not aware of any threat from hackers, adding, "We are well secured and our firewall is in place". Korean Air declined to comment.

Iran's reaction
Iran has officially denied involvement in the hacking campaign. "This is a baseless and unfounded allegation fabricated to tarnish the Iranian government image, particularly aimed at hampering current nuclear talks", said Hamid Babaei, spokesman for the Permanent Mission of the Islamic Republic of Iran to the United Nations.

See also
Cyberwarfare in Iran

References

External links
Cylance Operation Cleaver Report

Cyberwarfare in Iran Cyberwarfare in the United States Cyberattacks Hacking in the 2010s Cyberattacks on energy sector
747041
https://en.wikipedia.org/wiki/Civilization%3A%20Call%20to%20Power
Civilization: Call to Power
Civilization: Call to Power is a turn-based strategy game developed by Activision for Microsoft Windows as an attempt to capitalize on the success of the Civilization computer games by Sid Meier. It was ported to Linux by Loki Software, as well as BeOS, becoming one of the very few commercial games for that operating system. A sequel, Call to Power II, was released 18 months after the original. The sequel could not have "Civilization" in its title because Activision did not have a license for the "Civilization" name for a second game. Gameplay Timeline and epochs One of the most noticeable differences from the previous Civilization games is that the timeline of the game does not end in the 21st century, but rather goes to the year 3000. There are five epochs in this game: Ancient Age, Renaissance, Modern Age, Genetic Age, and Diamond Age. Call to Power adds a more thorough space colonization as well as sea colonization, with the appropriate technological advances (available in the Genetic Age). Terrain features Similar to Civilization II, the game uses an isometric view, although each tile is actually two separate tiles: the space level on top of a "terrestrial" level (thus, this game has a z-coordinate to represent position). Players can toggle between "Earth view" and "space view". All land and naval units are exclusively terrestrial, although most land units can be launched into space inside a cargo pod by a rail launcher in cities or via a space plane. Space fighters and space planes can freely travel in space and in the atmosphere. While the SWARM warrior can survive in space as well as the earth, it cannot launch itself into space. There are also some units that exist in space exclusively (i.e. cannot make a re-entry into the atmosphere) such as the Star Cruiser, the Phantom and the Space Bomber. Space produces no resources, as it is a vast void. However, once a space colony is built, players can build food pods and assembly bays to produce resources for the colony. "Water tiles" are also divided into several types. After submarines are available, the type of tile in oceans can be seen (e.g. continental shelf, deep sea trench, rift, etc.). Once the technology for sea colonies is discovered, undersea tunnels can be built to link to other sea colonies and dry land. Fisheries and undersea mines can also be built to produce resources. Pollution In Call to Power, pollution is produced in meaningful quantities after the Industrial Revolution advance. Cities that produce a lot of pollution will start to produce "dead tiles" within their city radius. Such tiles produce no resources. If pollution is left unchecked, eventually the game will give a warning that global disasters will occur. Disasters include change in climate, ozone deterioration, and global warming. In the case of global warming, the game informs the player that "ice caps have melted" and sea levels have risen. Tiles affected are turned into either coast or shallow water, and cities on tiles that become shallow water or coast are destroyed. The destruction of the ozone layer causes a large number of land tiles to become dead tiles. If a nation is appropriately technologically advanced, then that nation can repair dead tiles, but only at a significant cost of industrial production. The "Gaia Controller" wonder removes all pollution in the game but can be built only in the Diamond Age. Pollution is exacerbated by several city facilities such as factories and oil refineries. 
On the flip side, some facilities such as recycling plants and nuclear reactors will reduce the production of pollution. Additionally, certain events such as space launches and the use of nuclear weapons will result in one-time additions of pollution each time they occur. An initial setup option allows game play without pollution problems.

Happiness
While playing the game, the happiness level of the citizens must be maintained. If a city is far away from the capital, or is overcrowded, polluted, overworked, starved or underpaid, the happiness level will drop, with riots and revolts occurring. Cities also experience unhappiness during wartime, especially if they have just been conquered by another empire. Many terrorist units can decrease the happiness of an enemy city. For example, if a city is "infected", it will lose both population and happiness. If the happiness level drops below 75, then the city is in danger of rioting. If a city riots, it does not produce any work during that turn. If happiness in cities continues to decline, revolution becomes a possibility. If that happens, the city's inhabitants become "barbarians", or change nationality to another country if another country has the Egalitarian Act Wonder. However, if the happiness of a city reaches very high levels, it "celebrates". Some buildings can increase happiness (e.g. temples, cathedrals, body transplants), and wonders can increase happiness (e.g. Immunity Chip, Ramayana).

Wonders
These can be built by any civilization that has acquired the technology to do so. They take a longer time to build than ordinary buildings or units of that age, but have a greater effect, and a cinematic is shown when the player builds one. The wonders usually affect the civilization as a whole (with exceptions, such as Galileo's Telescope, which effectively doubles scientific production in the city in which it is built), and can only be built by one civilization. As with previous games, wonders can become obsolete with technological advances; if someone researches "Age of Reason", the Stonehenge wonder no longer has any benefits. Generally, wonders of the future have a greater effect than wonders of the past. If a city containing a wonder is taken by another player, then ownership of the wonder and its benefits go to the conqueror.

Victory conditions
Apart from conquering all opponents, players can achieve victory by completing the Alien Life Project, which is triggered by the "wormhole sensor" wonder. After a wormhole probe is sent through the wormhole, an alien life lab and a series of upgrades must be built to achieve victory. A player can also win by converting all enemy cities on the map. The final option is to have the highest score by the year 3000.

Release
Call to Power was one of the most anticipated strategy games of 1999. It was published by Activision for Windows in 1999. A Linux port was announced on January 26, 1999, by Loki Software, Inc. Pre-orders were collected by Loki and via GameCellar. The port was set to ship in the week of April 26, but the date was postponed by a week to include bug fixes by Activision. On May 15, Loki announced it had begun shipping. It was distributed in Europe via SUSE by Media Markt and other retailers. It marked the availability of the first major commercial video game for the Linux platform. A BeOS port is also known to exist. In January 2000, the game was also released for Macintosh.
Reception
The PC version received above-average reviews, while the Mac version received unfavorable reviews, according to the review aggregation website GameRankings. Next Generation said that the former version was "not without its good points, but in the end it's difficult to enjoy. Gamers hungry for a worthy sequel to Civ II will find it in Sid Meier's Alpha Centauri." The game was the 16th-best-selling computer game of the first half of 1999. According to PC Data, a firm that tracked sales in the U.S., it sold 293,046 units by September 2000. The game was a nominee for GameSpot's "Most Disappointing Game of the Year" award, which went to Ultima IX: Ascension.

References

External links

1999 video games 4X video games Activision games BeOS games Call to Power Linux games Classic Mac OS games Turn-based strategy video games Video games developed in the United States Video games scored by Mark Morgan Windows games Loki Entertainment games
24339658
https://en.wikipedia.org/wiki/Trojans%20Korfball%20Club
Trojans Korfball Club
Trojans is a korfball club based in Croydon, England. The club was established in 1972 and is part of the England Korfball Association. They are 12-time National League Champions (2007–08, 2008–09, 2009–10, 2010–11, 2011–12, 2012–13, 2013–14, 2014–15, 2015–16, 2016–17, 2017–18 and 2018–19) and 4-time EKA Cup Champions (2006–07, 2007–08, 2008–09 & 2009–10), and have also won the Europa Shield in 2007 in Vilanova i la Geltrú as well as bronze medals at the Europa Cup in 2012 in Warsaw, in 2017 in Sassenheim and in 2019 in Kortrijk.

League history

1999–2002
The club has been established in the National League for many years and has only recently begun to reap the rewards of its hard labour. From early 2000, Trojans had languished toward the bottom of the National League, often finishing second from bottom. This resulted in them playing a playoff match with the runners-up of the then Reserve (2nd) Division for the right to play top-flight Korfball the following year. At the time, the 1st team had a mixture of young, inexperienced males and older, more experienced females. The males displayed talent and potential at Korfball but needed the guidance of the older, experienced players to point them in the right direction. At this time, the average age of the 1st team males was just 17 compared to the females at 34. For 3 years in a row (1999–2000, 2000–01 and 2001–02) Trojans managed to win the playoff match and maintain their National League status, with the male players showing more and more promise as the years progressed. They would often give the top teams a run for their money (Mitcham, Invicta, Croydon & Nomads) but they could not sustain a decent enough challenge and win the matches.
Despite not being able to defeat these 2 teams, they were finally able to defeat their local rivals Croydon for the first time in both league meetings (8–21 and 15–12) as well as in the EKA Cup (16–11). Chris Spice top scored in the first match with 10 goals (9 from open play). After reaching the semi final stage of the EKA Cup, Trojans went on to beat the remaining teams in the league and finish 3rd overall, booking a place in the Europa Shield 2005, which was to be held in their hometown of Croydon, England. Invicta won the National League that season and so the other English team that joined Trojans in the Shield were their other local rivals, Mitcham. Whilst Trojans were confident that they could do better than their previous appearance in this competition, they had to face Mitcham in their 1st match of the tournament. Again, like in Trebon, they were overwhelmed by the occasion and ended up losing 12–6. This paved the way for the rest of their matches and once again they ended up finishing the tournament bottom and still without a win in Europe. To make matters worse, Mitcham won the overall competition and in doing so became the first English team to win the Europa Shield. 2005–06 This was the first season that Trojans believed they had a genuine chance at winning the league title. They felt that this season would be their coming of age and, finally, the potential shown by the group for so long would be realised and put into practice. Their first match of the season was to be against the current National League Champions Invicta, who hosted a team full of international players (including Trojans' coach). Despite holding their own early on, a couple of costly errors allowed the game to get away from them and they ended up losing out 11–9. Trojans took heart from running the League Champions so close and the performance in general, something which held them in good stead for the rest of the season. They went on to record double wins against the likes of Croydon, Nomads, Bec, Kwiek & Nottingham and even managed a 9–9 draw against Mitcham in an exhilarating finale (2 goals were scored in the final 30 seconds, first Mitcham and then Trojans with the leveller right at the whistle). This was the first time Trojans had ever managed more than a loss against their London rivals. Just when they thought that performance couldn't be bettered, they managed to defy the odds and defeat Invicta 11–10 in their 2nd to last league match and in doing so gave the title to Mitcham. Trojans ended the season in 3rd place again, only 1 point behind Invicta in 2nd and 5 behind the winners Mitcham. There was disappointment that they didn't improve on their overall league position from the previous season but the improvements were there in the performances against the top 2 teams. Once again, Trojans qualified for the Europa Shield 2006, this year being played in Selm, Germany. Determined to finally get their European credentials underway despite 2 previous lacklustre outings, Trojans travelled to Germany, riding off the confidence gained from their ability to mix it with the best in England. In their opening match against the Hungarian side MAFC, they were able to play to their strengths and overcome their opponents 9–5. The next game put them up against one of the tournament favourites and previous Europa Shield winners, Czech side TJ Znojmo. In a hotly fought contest, neither team could press home their advantage and with Trojans 1 goal down, they were awarded a penalty. 
David Brooks stepped up to coolly convert it and tie the match at 11–11. With a draw against the Czech side, it meant that Trojans realistically only needed to avoid defeat against the home side KV Selmer (Znojmo had previously beaten Selmer 13–10) to progress to the 3rd/4th playoff spot. They also had an outside chance of making the 1st/2nd playoff if they beat KV Selmer by 8 goals or more. The match started off pretty even with both teams trading goals however it was the home side that took a one-goal lead at half time, 6–5. The second half carried on in pretty much the same vein until KV Selmer managed to pull into a 4-goal lead at 11–7. Trojans hung on in there and with some smart, clever passing pulled back 2 quick goals to bring the score to 11–9. With the momentum seemingly back with them, Trojans pushed for another goal and got it with a smart long shot. With less than a minute remaining, the home side looked to play "keep ball" and in doing so caused Trojans to concede a penalty and effectively the match. The final score ended 12–10 to KV Selmer who advanced through to the 3rd/4th playoffs. Trojans were mightily disappointed after coming so close, but again the performances they had produced against the other teams was very pleasing. Not once were they outclassed (this was evident by their 11–11 draw with TJ Znojmo who ran out eventual winners of the competition). Trojans finished off their European campaign with victories over English rivals Invicta (10–5) and for the second time of asking Hungarian side MAFC (9–8) to claim 5th spot. 2006–07 With the team gaining some valuable experience in close fought competition (both in the league and Europe), their confidence was ever growing. They maintained that belief in their opening league fixtures which included a 1st time win over reigning National League Champions Mitcham (16–12). They were top of the league 6 games into the 14 game season and let their guard down against the less fancied but renewed Kwiek. They were missing club captain Chris Spice through injury and added to that, the coherence in the team wasn't there. This was the first game back after the Christmas period (the last of which was the victory over Mitcham) and it showed. From the beginning of the match Trojans were second best. Kwiek were playing at the peak of their game and they managed to pull off a shock victory of 8–11. Trojans were devastated at letting slip the advantage they had over rivals Mitcham. The hard work that was done beating the reigning champions had been undone and Trojans had slipped down to 2nd place on goal difference. The home match against Mitcham towards the end of the season was now a must win game (as was the other remaining 6 matches). They managed to win against the likes of Nomads, Croydon, Bec, Kingfishers & Nottingham to keep the dream alive of winning the National League for the first time in the club's history. Trojans had lost their highly influential player David Brooks through a broken metatarsal whilst training with the GB senior team - this was a big loss for them. And so it came to their home match against Mitcham, it was clear that this was going to be a hotly contested battle. Mitcham started the stronger and raced into a 0–4 lead. Trojans managed to peg back a couple of goals but they were always playing catch up. With the score 3–7 at half time, coach Gary Brooks tried installing some belief in his young players, sensing an uphill struggle in the 2nd half. 
They came out and tried their best to get back into the match but it wasn't enough and the experience of the Mitcham team eventually won through 8–13. With the League title effectively all but lost, Trojans had to re-group if they were to take 2nd spot. Their previous conquerors Kwiek were level on points and so the remaining match between the two sides would decide the runners up spot, the loser taking 3rd place. Trojans had some more injuries to contend with, their captain Chris Spice was again sidelined and David Brooks had still not recovered from his metatarsal problem. With a beleaguered squad, Trojans battled for the full 60 minutes but were eventually beaten by several goals and with it, secured 3rd place for the 3rd year in a row. Trojans had a chance at revenge in the EKA Cup final where they faced off against the new National Champions Mitcham. The match was tight throughout the 60 minutes and neither side could be separated, forcing the game into a Golden Goal situation. Both teams had opportunities to score and both failed to capitalise until ex-player Sinead Cafferkey found herself unmarked from 3 metres out and slotted the ball home to seal the victory for Trojans. It was their first piece of senior silverware in English Korfball. The 2007 Europa Shield was held in Barcelona, Spain and was to be Trojans 4th time in the competition. They had just signed 2 new female players over the close season which had helped to bolster their squad. Their 1st match was against English rivals Kwiek who had beaten them to 2nd place in the domestic league. Trojans led from the beginning and never looked back winning 12–6. Next up they faced a familiar opponent in MAFC. With the Hungarians playing their strong style of rebound defence, it took a while before Trojans could find their shooting boots and eventually pull away 10–5 winners. Their final match of day 1 was against KV Selmer, the team that beat them by 2 goals the previous year. Much to the satisfaction of the coach, Trojans came firing out of the blocks and put on their best performance in a while to win convincingly 14–7. The 2nd day saw Trojans face the 2 home teams of CEVG and CK Vallparadís. In the 1st match it was a tense affair. Both teams traded goals equally and with literally the last shot of the match, David Brooks levelled the score at 10–10 and took the game to a Golden Goal situation. After 5 minutes of no one scoring, the dreaded penalty shoot out was introduced. CEVG converted their 1st 2 penalties and Trojans obliged by scoring their 1st one but veteran Rob Williams failed to put away his effort, handing victory to the Catalans and with it, Trojans 1st defeat of the tournament (he later puts the penalty miss down to his shirt being too tight). The loss meant that Trojans had to beat CK Vallparadís in their final match if they were to win the tournament (CK Vallparadís had previously won all their matches). The game was very tight and understandably tense as both teams knew that the victor would be crowned the Europa Shield Champions. The players were nervous and the scoring was low throughout the match with half time coming at 3–3. The second half was very much the same, both teams working hard in defence and not allowing the attack to create many scoring opportunities. With 2 minutes remaining, Trojans worked a nice attack which saw the ball fly through their opponents basket and put them into a tantalising lead of 5–6. 
For the remaining 90 seconds, the Spanish team pressed and looked for the equaliser, twice coming close with efforts. Trojans managed to steal the ball and play out the final moments for a resounding and hard-fought victory. They had become only the 2nd English team to have ever won the Europa Shield and the 1st to do it on foreign soil. It seemed that all of the hard work put in by the players and coaching team was finally paying off. 2007–08 And so Trojans began their 2007–08 league campaign with renewed confidence, looking to add to their already growing collection of silverware. They picked up wins against Bec, Invicta, Croydon, Nottingham, Kwiek and Nomads before heading into the first of two big games against reigning champions Mitcham. The match was close throughout with both teams playing to their respective strengths but it was Trojans who went into half time with a slender lead of 8–7. The second half continued in the same vein with Mitcham working the block enabling their shooters to have numerous chances, however the accuracy of their shots was not as deadly. Trojans on the other hand was shooting well and extended their lead, adding further pressure on their opponents. With Mitchams chances fading with the clock, Trojans played with experience and were able to see out the match 15–13 to give the club ‘first blood’ in the head to heads. Both teams continued to win their other matches against the other teams in the National League, ensuring that the title would be decided in the final game of the season when the two teams faced off again. In what was a slow and low-scoring affair, Trojans came from 3 goals down at half time to take the match 11–10 and with it their first ever National League title. With 2 pieces of silverware already won that season, Trojans went for the treble when they reached the semi finals of the EKA Cup. This would prove to be a tough ask though as they were drawn against Mitcham 2, the winner of which would face Mitcham 1 in the final. The Trojans team fought hard and played some very exciting Korfball, defeating Mitcham 2 by 4 goals. For the third time that season, Trojans played off against Mitcham in an all important tie. With the final being played on the same day as the 2 semi finals, both teams were fatigued from their earlier escapades. Once again the match was very close and neither team managed to assert their dominance on proceedings until the latter stages of the match where Trojans pulled away with a 2-goal lead. With Mitcham chasing the game, Trojans were able to capitalise on the situation and hold out for the win. They had done it, Trojans had completed the treble and with it, their most successful season in the club's history. Trojans were at the top of the English Korfball world. 2008–09 Now that Trojans were the reigning National League and EKA Cup Champions, they were the team to beat and with this came the stigma of expectancy. This season the league format had changed whereby the top 4 teams would compete in “The Playoffs”. The team that finished 1st would play the team that finished 4th and 2nd would play 3rd in one off matches. The winners of these games would face each other in the final and whoever won this would be crowned National League Champions. Trojans progressed through the season well, finishing 2nd behind rivals Mitcham and in doing so, qualifying for the Playoffs. 
In their semi final, they swept aside Nottingham 20–14 and progressed to the final to face Kwiek, who had surprisingly overcome Mitcham in the other semi final. With the team full of confidence, they put in an inspired performance to win with a convincing scoreline of 25–17, thereby retaining their National League title for a 2nd year. Trojans quickly followed this success up with another EKA Cup title, this time defeating Mitcham 16–15 in the final. 2009–10 Trojans EKL Winners for the 3rd time in a row 2010–11 Trojans EKL Winners for the 4th time in a row 2011–12 Trojans EKL Winners for the 5th time in a row 2012–13 Trojans EKL Winners for the 6th time in a row 2013–14 Trojans EKL Winners for the 7th time in a row. Trojans qualified as top favourites for the English Korfball League Play Offs, having won all of their league matches during the regular season. In the Semi Final Trojans beat Norwich Nights to secure their place in the final, where they met Bec. Bec took an early 0–5 lead in what was probably the best EKL Final ever, and just after half time Trojans were trailing by 7 goals. Trojans made some substitutions just before half time which gave them a bit more control over the game, and they never really looked out of the game after this point. They managed to get back into the game and even took the lead. At the final whistle the scores were level at 17–17 and extra time was needed to separate the two teams. In extra time Tony Woodvine narrowly missed a running-in shot from the side within seconds of the restart, but managed to collect his own rebound and pass it to Dave Brooks, who scored from a long shot. Bec now had one attack to try to level the scores; however, after some hesitant play Kat Goodridge intercepted the ball, and Trojans had to play the ball out into their attacking half under great pressure from the Bec players. Eventually Sam Brooks made the final pass to Dave, ending the game with another victory for Trojans. Squad (current) Head coach Gary Brooks Fitness coach Christine Brooks Honours 2007–08, 2008–09, 2009–10, 2010–11, 2011–12, 2012–13, 2013–14, 2014–15, 2015–16, 2016–17, 2017–18 and 2018–19 - EKA National League Champions (12 times) 2006–07, 2007–08, 2008–09 & 2009–10 - EKA Cup Champions (4 times) 2006–07 - Europa Shield Champions (1 time) 2012, 2017, 2019 - Europa Cup Bronze Medal (3 times) 2003–04, 2004–05, 2005–06 - EKA u23 Champions (3 times) References External links Trojans website England Korfball Korfball teams in England Korfball teams Sport in the London Borough of Croydon Sports clubs established in 1972
64919262
https://en.wikipedia.org/wiki/Movie%20Maker%20%28Reston%20Publishing%29
Movie Maker (Reston Publishing)
Movie Maker (also referred to as Reston Movie Maker) is a computer program published by Reston Publishing Company in 1984 that allows users to author computer-animated visual sequences with audio. Self-playing movies can be viewed without the Movie Maker software. It was developed by Interactive Picture Systems for the Atari 8-bit family. In 1985 it was re-published by Electronic Arts, including a port to the Commodore 64. Reception David P. Stone reviewed the program for Computer Gaming World and stated that "if you have a need, or desire, for presenting non-game, impressive animation sequences, then MMTK won't let you down. But, to fully enjoy MMTK you must have the deep personal conviction that 'getting there is half the fun'." A 1984 Antic review contained an addendum from the editor: "ANTIC was so impressed with Movie Maker that we asked the Interactive Picture Systems people to design an animated greeting card for us, which they did to the delight of all who have seen it." The only major dislike from the reviewer was having to use sounds from the existing, fixed library. Gregg Williams, also reviewing the program for Computer Gaming World, described it as "a so-called animation studio that promises 'dazzling animated graphics made easy.' The reality is four-color 'movies' (four colors--on an Atari?!) so amateurish and primitive that they would impress only a 4-year-old." References External links Review in the Christian Science Monitor 1984 Software Encyclopedia from Electronic Games Review in Electronic Fun with Computers & Games Review of Electronic Arts version in Antic Review in ANALOG Computing Review in Compute!'s Gazette Review in Creative Computing Review in Page 6 1984 software Animation software Atari 8-bit family software Commodore 64 software Electronic Arts
4720061
https://en.wikipedia.org/wiki/Oregan%20Networks
Oregan Networks
Oregan Networks Ltd is a global software and services company, providing an embedded middleware platform for delivery of multi-network Digital TV services to hybrid broadcast receivers. The company's software is utilised in devices deployed by leading telecoms and broadcast operators including British Telecom, Telefonica, CAT Telecom, NTT Plala and Batelco. A derivative of Oregan's software is also used in retail streaming video products, such as media players and Connected TV's that are sold at retail by brands including Sony, Philips, Dixons and Sharp. Oregan's headquarters and primary R&D centre are located in London UK, with branch offices in Singapore, Mexico, Korea, Taiwan and Argentina. History In its early days, Oregan provided web browser and media streaming software to Sony Computer Entertainment Europe for enabling PlayStation 2 to render the Sony games portal, Central Station. In December 2009 Oregan launched Onyx, an Oregan branded solution for Connected TVs, which was deployed by UK based LCD TV manufacturer Cello Electronics for distribution by UK retailers such as Marks & Spencers and Dixons. These Onyx powered TVs are understood to be the first Connected TVs to deliver BBC iPlayer content as well as YouTube and other OTT video services into UK homes, but were quickly followed by similar products from LCD TV market behemoths Samsung, Sony and LG due to significant market demand for accessing BBC iPlayer on Connected TVs. In more recent years, Oregan has seemingly focused on providing technology and professional services to telecoms and broadcast operators, for delivery of Pay TV solutions over DVB and IP networks. Aside from providing software for the BT Vision 2.0 STB from British Telecom in the UK, the company is active in emerging market territories including South America, SE Asia and the Middle East. In May 2018, Oregan launched advanced set-top box SparQ middleware for Linux and Android TV. See also Digital Television Personal Video Recorder IPTV Video On Demand Interactive Television Smart TV Set Top Box References External links Oregan Networks website Digital television Interactive television Conditional-access television broadcasting Companies established in 1997 Software companies of the United Kingdom
37509127
https://en.wikipedia.org/wiki/Martin%20Wolf%20%28investment%20banker%29
Martin Wolf (investment banker)
Martin Wolf is an American investment banker who is currently head of Martin Wolf M&A advisors, a global M&A Advisory firm specializing in buy/sell transactions in information technology. He is known for his commentary on current industry trends, as well as for his firm's white papers and materials tracking and measuring information technology valuations across key IT sectors in the geographies of the United States, India, and China. Wolf graduated from the Ross School of Business at the University of Michigan. His proprietary analysis and commentary has been featured on news sites such as The Wall Street Journal, CNET, Fast Company, GigaOM, CNBC, CRN, Yahoo! Finance and other tech news organizations. Wolf has also been a keynote speaker at multiple industry conferences, including the China International Software and Information Service Fair in Dalian, two conferences organized with the Bangalore Chamber of Industry and Commerce on Cross-Border M&A, HP’s American Partners Conference, AmCham Shanghai IT Industry Committee, and other trade conferences such as Brand Tech Forum, CRN Roundtable, Software Publishers Association, VARBusiness Roundtable, Information Technology Alliance and the ASP Channel Conference. Career Wolf's tech M&A background began in the early 1990s when he served as President of the Franchise and Distribution Division of ComputerLand. He subsequently founded its Datago distribution business, which was sold in 1994 to Merisel, Inc. In 1997, he founded martinwolf | M&A Advisors, a middle market IT M&A specialist, and serves as its president. Located in Scottsdale, AZ, martinwolf has been involved in more than 180 transactions with companies located in 20 countries. The firm is a recognized authority on companies in the IT Supply Chain, IT Services, Tech-Enabled BPO, and Software and SaaS spaces, and has completed more transactions with companies listed as members of CRN's Solution Provider 500 list than any other investment bank. Clients have included eight Fortune 500 companies. Notable transactions completed by the firm include advising Inacom on its sale to Datatec's Westcon group in 2002, creating the company's "Voda One" division and making Datatec the largest Lucent voice distributor in the industry. The company also advised Saber Software in their acquisition by private equity firm Accel-KKR, which was acquired by EDS one year later and ultimately folded into Hewlett-Packard. Other notable transactions include advising Insight Enterprises on its acquisition of Comark in 2002 and advising Bain Capital in its 2012 investment in VXI Global Solutions. In addition to its regular analysis, which includes the quarterly martinwolf IT Index, the company has interviewed CEOs from companies including Logicalis, SS&C Technologies, ChinaSoft International, Arrow Electronics, Systemax, Softchoice, CDW, Insight Enterprises and the Herjavec Group regarding their views on contemporary market trends. References Ross School of Business alumni Living people Year of birth missing (living people)
36585623
https://en.wikipedia.org/wiki/Edraw%20Max
Edraw Max
EdrawMax is 2D business and technical diagramming software that helps users create flowcharts, organizational charts, mind maps, network diagrams, floor plans, workflow diagrams, business charts, and engineering diagrams. The current version, EdrawMax 11.5.0, was released in November 2021 for Microsoft Windows, macOS, and Linux. EdrawMax is a Visio-like diagramming tool. Main features EdrawMax can be used to create diagrams or charts with its built-in editable symbols and templates for a range of categories. The current version, EdrawMax, is available in two editions: a Free Viewer Version and a Professional Editable Version. The latter has additional templates and examples for creating diagrams. Compatibility Windows: 2000/2003/2008/Vista/7/8/10, 32-bit and 64-bit (XP users should use EdrawMax version 8.4). Mac: Mac OS X 10.10 and later. Linux: Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Mint, Knoppix, RedHat, Gentoo and more. Minimum System Requirements 1 GB of RAM, 1 GHz processor, 800 MB of hard disk space, 1024 × 768 monitor resolution. File format EdrawMax saves content in an XML-based file format. The .eddx suffix is the default file format. The .edxz suffix is a compressed XML file format used for sharing (an illustrative sketch for inspecting such files follows this entry). Versions A version history lists updates from EdrawMax 1.0 to the present. See also List of concept- and mind-mapping software References External links Diagramming software Technical communication tools Windows graphics-related software UML tools
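The file format description above lends itself to a brief illustration. The following Python sketch shows one plausible way to inspect an EdrawMax document from the outside; it only assumes what the entry states (that the content is XML and that .edxz files are compressed), and it guesses that the container could be a ZIP archive or gzip stream, which is not documented here. The function name peek_edraw and the file name diagram.eddx are placeholders, not part of the product.

```python
import gzip
import zipfile
import xml.etree.ElementTree as ET

def peek_edraw(path):
    """Report the top-level XML structure of an EdrawMax document.

    Assumptions (not confirmed by the entry): the payload is plain XML,
    a ZIP archive containing XML members, or gzip-compressed XML.
    """
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            # Read the first XML member found in the archive, if any.
            xml_names = [n for n in zf.namelist() if n.lower().endswith(".xml")]
            data = zf.read(xml_names[0]) if xml_names else b"<empty/>"
    else:
        with open(path, "rb") as fh:
            data = fh.read()
        if data[:2] == b"\x1f\x8b":  # gzip magic number
            data = gzip.decompress(data)
    root = ET.fromstring(data)
    print("root element:", root.tag)
    for child in list(root)[:10]:
        print("  child element:", child.tag)

# Example usage with a placeholder file name:
# peek_edraw("diagram.eddx")
```

Such a sketch is only a starting point; the actual XML schema would have to be inspected before anything useful could be extracted from it.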
65079310
https://en.wikipedia.org/wiki/Vampire%3A%20The%20Masquerade%20%28Choice%20of%20Games%29
Vampire: The Masquerade (Choice of Games)
Vampire: The Masquerade is a line of interactive fiction video games based on the tabletop game of the same name, and is part of the World of Darkness series. They are developed by Choice of Games for Android, iOS, Linux, MacOS, Microsoft Windows, and web browsers, and include Night Road (2020) by Kyle Marquis, Out for Blood (2021) by Jim Dattilo, Parliament of Knives (2021) by Jeffrey Dean, and the upcoming Sins of the Sires (2022) by Natalia Theodoridou. The games are text-based, and involve the player creating and customizing a character, and making choices that affect the direction of the plot. In Night Road, the player takes the role of a courier delivering secrets for vampire elders in the Southwestern United States; in Out for Blood, a vampire hunter who protects their town; in Parliament of Knives, a vampire navigating vampire politics following a coup in Ottawa, Canada; and in Sins of the Sires, a vampire in Athens, Greece, where an ancient vampire is rumored to plan to rule over humans as a god. Choice of Games approached Paradox Interactive, the owner of the World of Darkness series, to pitch a video game adaptation as they considered the series and its Storyteller System foundation to match up well with Choice of Games' design philosophies of game mechanics used to support rather than dominate stories. The writers differed in how they chose to adapt the tabletop game: Marquis adhered closely to the source material, while Dean adapted game mechanics more loosely. Gameplay Vampire: The Masquerade – Night Road, Out for Blood, Parliament of Knives, and Sins of the Sires are text-based interactive fiction video games where the player makes choices that affect the direction of the plot, resulting in one of several endings; these include both main endings and failure states. The player characters are created and customized by the player, including choosing their attributes, gender (non-binary, male, or female), and sexuality (bi, gay, or straight); Sins of the Sires additionally lets the player choose to be asexual, and to choose between androgyny, femininity, and masculinity. In Night Road, the player uses their vampiric powers ("disciplines") with the goal of staying ahead of pursuers. The protagonist has affinities for different disciplines depending on which of the five vampire clans Ventrue, Toreador, Brujah, Gangrel, and Banu Haqim the player chooses to belong to; disciplines include magical, mental, and physical ones. As the player completes jobs, they gain money, which can be used to buy cars with different speed, acceleration, and handling statistics; the player can also tune their car throughout the game. In Out for Blood, the player can choose to ally with one vampiric side, try to play the sides against each other, or fight against both. Depending on player choices, the protagonist can become a vampire. In Parliament of Knives, the player has to watch out for other vampires who are trying to gain power; in doing so they can choose to support or betray their character's sire, ally with another vampire faction, work independently, or sabotage local politics. Synopsis Night Road follows an elite Camarilla courier driving across the Southwestern United States to deliver vampire elders' secrets, meeting several vampires of different clan and social status in the process. The courier is based in Tucson, Arizona, and visits cities including Fort Worth, Dallas, and Phoenix, as well as desert ruins and an abandoned research facility used by the vampire clan Tremere in the 1970s. 
The story has four main endings. Out for Blood follows a suburban vampire hunter, who tries to protect their town from the hundred-year-old vampire Chastain, and gets drawn into a feud between her and younger vampires. Parliament of Knives follows a vampire in Ottawa, Canada following a coup that has destabilized local vampire politics: the Ottawa Camarilla's prince is missing, and his second-in-command, the protagonist's sire, is suspected. Sins of the Sires follows a vampire in Athens, Greece, where the ancient vampire Aristovoros is rumored to have plans to subjugate humans and rule over them as a god. Development The games are developed by Choice of Games in collaboration with Paradox Interactive, the owner of the World of Darkness series, and are based on the fifth edition of the tabletop role-playing game Vampire: The Masquerade. They are edited by Jason Stevan Hill, and are written by Kyle Marquis (Night Road), Jim Dattilo (Out for Blood), Jeffrey Dean (Parliament of Knives), and Natalia Theodoridou (Sins of the Sires). The project, one of many World of Darkness video game adaptations at the time, began after Paradox Interactive brought management of the series in-house and expressed a desire to license it out, after which Hill reached out to them; he considered the tabletop games and the Storyteller System they are built upon to match up with Choice of Games' design philosophy of stories enhanced by rather than dominated by mechanics, with decisions impacting outcomes and where even failure leads to good stories. In the beginning of the partnership, Marquis pitched a lot of different games, including a hardboiled Mummy: The Resurrection adaptation where the player investigates serial killers by assuming the form of their favored type of victims, and a few Vampire: The Masquerade ideas including one where the player controls an elder vampire defending their backwoods domain in Maine and one about an occult treasure hunter; Paradox liked the courier idea that was eventually used for Night Road, and adapted its core concept for their Vampire: The Masquerade web series Vein Pursuit. Marquis described Paradox as very accepting of eccentricities in his story pitches; he ended up playing the story "pretty straight" when the player is in their base of operations, but getting "weird" with it in the desert, where vampires do not need to worry as much about being found out. Paradox would also answer questions about the setting, such as whether there was anything to keep in mind while writing about a specific location or whether there was anything in the way of picking up story threads from early Vampire: The Masquerade material. Production Producing the games within the given time constraints was a challenge, as the developers had to handle not only game design and writing a story, but also managing World of Darkness lore. In adapting the tabletop game to an interactive fiction game format, the writers took different approaches: Marquis adhered closely to the tabletop game's mechanics, intending to replicate the experience of playing the tabletop game rather than a Choose Your Own Adventure book, including using the tabletop game system of dots on the Night Road character statistics page; Dattilo, meanwhile, used a more loose interpretation of the mechanics for Out for Blood. 
In either case, some compromises had to be made for game balance, for example with the vampire clan banes, whose effects had to be tweaked while ensuring they still felt substantial; Marquis also noted that he was unable to include two vampire clans from the tabletop game: Nosferatu, as their appearance would make them unable to work as couriers without revealing themselves as vampires, and Malkavian, as it would have required all dialogue in the game to be rewritten for them in "delusion speak", as had been done in the earlier game Vampire: The Masquerade – Bloodlines. The design of Night Road was in part influenced by the adventure game Sam & Max Hit the Road, with its forward momentum combined with a sense of being able to go wherever one wants. Marquis enjoyed writing a World of Darkness game set in the present, due to how the world has changed since the World of Darkness tabletop games were originally published in the 1990s. One of the first things he did when beginning development was to write a list of things he would not include in the story, due to editorial request, personal disinterest, or because they would not work in a courier story. He also looked through developer notes for other World of Darkness material to avoid overlap with other stories in the series; he specifically wanted to avoid including "ancient tomb" tropes, considering this something that had never been done better than the Ankaran sarcophagus in Bloodlines, and therefore not wanting to compete with it. Instead, he opted for writing a story about "mad science", forgotten experiments, and the eeriness of unexplained disappearances. He cited novels including Jack Kerouac's On the Road, Hunter S. Thompson's Fear and Loathing in Las Vegas, and Richard Kadrey's Sandman Slim series as influences. He also took inspiration from mythology, with the shapeshifter Coyote serving as inspiration for a polymorphic Gangrel vampire. A challenge for Marquis was to balance horror with the World of Darkness tabletop games' systematized setting; he tried to counter this by drawing from more obscure parts of the setting, to populate the game's world with "the unknown" – things that belong to the setting but are less categorized than the Camarilla vampires. Release The first three games were announced in April 2020, and were published by Choice of Games through 2020–2021 for Android, iOS, Linux, MacOS, Microsoft Windows, and web browsers: Night Road was released on September 24, 2020, Out for Blood on July 29, 2021, and Parliament of Knives on October 28, 2021. In 2020, Hill said that it was possible that Choice of Games would develop further World of Darkness games; Sins of the Sires was announced in November 2021 for release in early 2022. "Usurpers and Outcasts", a piece of downloadable content for Night Road that adds the option to play as a Tremere or Caitiff vampire, was released alongside the main game on September 24, 2020. Reception TouchArcade was excited for the games, considering World of Darkness a great space for storytelling and Choice of Games a developer with a good track record for narrative-focused video games. They liked the games' premises, and appreciated Parliament of Knives' political theme as one of their favorite aspects of Vampire: The Masquerade. Den of Geek, too, looked forward to them, following Choice of Games' earlier vampire-themed game series Choice of the Vampire, and encouraged readers to play it while waiting for Night Road. 
Comic Book Resources considered the car-related gameplay mechanics in Night Road, such as the ability to unlock different types of cars and tune them, among the most appealing mechanics of the game. References External links 2020 video games 2021 video games Upcoming video games scheduled for 2022 Android (operating system) games Asexuality in fiction Athens in fiction Bisexuality-related fiction Browser games Gothic video games Homosexuality-related mass media 2020s interactive fiction IOS games LGBT-related video games Linux games MacOS games Ottawa in fiction Single-player video games Transgender-related video games Vampire: The Masquerade Video games about vampires Video games featuring protagonists of selectable gender Video games set in Arizona Video games set in Canada Video games set in Greece Video games set in Illinois Video games set in Texas Windows games World of Darkness video games
859711
https://en.wikipedia.org/wiki/Siemens%20SX1
Siemens SX1
The Siemens SX1 is a GSM mobile phone running version 1.2 of the Series 60 platform for the Symbian OS. It was the first such smartphone from Siemens following its licensing agreement with Nokia for the use and development of Series 60. Though unveiled in February 2003, it was not launched until December of that year. Features The phone had an extensive feature set for its time of release. It had three built-in games and support for more via downloads. The three games bundled with the phone were Mozzies, which was awarded the title of best mobile game in 2003, Typegun, a game made for acquainting users with the keypad layout, and Sitris, a Tetris version with multiplayer support (via Bluetooth). In Mozzies, the camera is used to detect motion: the player aims a gun sight at flying mosquitoes superimposed on the video feed from the camera, and the objective is to shoot them down by moving the phone around and clicking when aiming correctly. The phone also has a good number of connectivity options, from Bluetooth to IrDA. It can also be used as a fax machine by connecting it to an appropriate device, such as a computer, through the software provided. The File Manager allows sending and receiving of various files over either infrared or Bluetooth. The phone has two shortcut keys on its side. One is used to start the camera and take pictures, while the other can be used to make a voice-command-driven call or start the voice memo. There is a built-in picture editor which allows modification of images taken by the camera or of any other image stored on the phone. The phone has a few organizer capabilities. There are applications to take down short notes, a to-do list, a scheduler application with reminders, and a converter for currency and different measuring units. The phone has also been popular because of a Linux port called "Linux on SX1". Linux on SX1 A Linux port for the phone exists; one can boot a modified Linux kernel on the device using Das U-Boot as a bootloader. There have been successful attempts to get Qtopia working on the device. External links The Symbian OS Smartphones SX1 Symbian devices Mobile phones introduced in 2003 Mobile phones with infrared transmitter
4410993
https://en.wikipedia.org/wiki/ArchiCAD
ArchiCAD
ARCHICAD is an architectural BIM CAD software for Macintosh and Windows developed by the Hungarian company Graphisoft. ARCHICAD offers computer aided solutions for handling all common aspects of aesthetics and engineering during the whole design process of the built environment — buildings, interiors, urban areas, etc. Development of ARCHICAD started in 1982 for the Apple Lisa, the predecessor of the original Apple Macintosh. Following its launch in 1987, with Graphisoft's "Virtual Building" concept, ARCHICAD became regarded by some as the first implementation of BIM. ARCHICAD has been recognized as the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers and considered "revolutionary" for the ability to store large amounts of information within the 3D model. Today, it has over 120,000 users. Product overview ARCHICAD is a complete design suite with 2D and 3D drafting, visualization and other building information modeling functions for architects, designers and planners. A wide range of software applications are integrated in ARCHICAD to cover most of the design needs of an architectural office: 2D CAD software — drawing tools for creating accurate and detailed technical drawings 3D Modeling software — a 3D CAD interface specially developed for architects capable of creating various kind of building forms Architectural rendering and Visualization software — a high performance rendering tool to produce photo-realistic pictures or videos Desktop publishing software — with similar features to mainstream DTP software to compose printed materials using technical drawings pixel-based images and texts Document management tool — a central data storage server with remote access, versioning tool with backup and restore features Building Information Modeling software — not just a collection of the above-mentioned applications with an integrated user interface but a novel approach to building design called BIM. Features Working with parametric objects ARCHICAD allows the user to work with data-enhanced parametric objects, often called "smart objects" by users. This differs from the operational style of other CAD programs created in the 1980s. The product allows the user to create a "virtual building" with virtual structural elements like walls, slabs, roofs, doors, windows and furniture. A large variety of pre-designed, customizable objects come with the program. ARCHICAD allows the user to work with either a 2D or 3D representation on the screen. Two-dimensional drawings can be exported at any time, even though the model in the program's database always stores data in three dimensions. Plans, elevations, and sections are generated from the three-dimensional virtual building model and are constantly updated if the user 'rebuilds' the view. Detail drawings are based on enlarged portions of the model, with 2D detail added in. Collaboration and remote access ARCHICAD released its first file exchange based Teamwork solution in its version 5.1 in 1997, which allowed more architects to work on the same building model simultaneously. A completely rewritten Teamwork "2.0" solution with a new database approach came out in version 13 in 2009 named Graphisoft BIM Server. Since only the changes and differences are sent to the central storage, this solution allows remote access to the same project over the Internet, thus allowing worldwide project collaboration and coordination. 
In 2014, with the introduction of BIMcloud, better integration was provided with standard IT solutions: browser-based management, LDAP connection, and HTTP/HTTPS-based communication. New scalability options also became available, allowing multi-server layouts to be created with optional caching servers. APIs and scripting Third-party vendors and some manufacturers of architectural products have compiled libraries of architectural components for use in ARCHICAD. The program includes the Geometric Description Language (GDL), used to create new components. API (Application Programming Interface) and ODBC database connections are also supported for third-party add-on developers. Via direct API links to 4D and 5D software such as Vico Office Suite or Tocoman iLink, the ARCHICAD model can be exported for BIM-based cost estimation and scheduling. ARCHICAD is also directly linked via API to Solibri's model checking and quality assurance tools. In addition, Graphisoft provides a direct link to Grasshopper 3D, enabling a visual programming environment for parametric modelling and design. Data interchange ARCHICAD can import and export DWG, DXF, IFC and BCF files, among others (a brief, illustrative sketch of reading an exported IFC file appears at the end of this entry). Graphisoft is an active member of BuildingSMART (formerly the International Alliance for Interoperability, IAI), an industry organization that publishes standards for file and data interoperability for built environment software. Graphisoft was one of the founders of the Open BIM concept, which supports 3D BIM data exchange between the different design disciplines on open-source platforms. ARCHICAD can also export the 3D model and its corresponding 2D drawings to the BIMx format, which can be viewed on a number of desktop and mobile platforms with native BIMx viewers. License types and localizations License types Commercial, educational and fully functional 30-day trial versions can be installed with the same installer. If no hardware protection key is present and the software has not been activated with a trial or an educational serial number, ARCHICAD launches in demo mode. The installer files can be downloaded without registration; educational and trial serial numbers can be obtained after registration. The commercial version is protected by either a hardware protection key or a software key. If no key is present, ARCHICAD switches to demo mode, where the Save, Copy and Teamwork features are disabled (printing/plotting is still enabled, even if the project file has been modified since opening). START Edition is a streamlined version of ARCHICAD for smaller practices or offices that do not need collaboration and advanced rendering functionality. Educational versions are protected by serial numbers. Files saved in ARCHICAD educational versions are compatible with commercial ARCHICAD versions, but carry a watermark identifying the license type. Once a project has been edited with an educational version, the watermark will persist in the file. The trial version is a 30-day, fully functional version in which projects can be saved, printed and published. Its files become fully compatible with the commercial version once that copy of ARCHICAD is used with a commercial license; otherwise the files created by a trial version are only readable by the same ARCHICAD instance with which they were created. The trial version is protected by a serial number. Languages and localizations ARCHICAD is available in a number of localized versions. 
In addition to a translated user interface and documentation, these versions have a set of parametric objects (object libraries) developed considering the specific requirements of the regional market, and different default values for object properties, menu arrangements, etc. Extensions Various free and commercial add-on products and extensions add extra functionality to ARCHICAD or provide further data exchange possibilities with other software applications. Some of these extensions are developed by Graphisoft, such as the freely available Trimble SketchUp, Google Earth or Maxon's Cinema 4D import/export add-ons or other extensions sold separately such as Graphisoft MEP Modeler, Graphisoft EcoDesigner or Graphisoft Virtual Building Explorer; while there are a number of add-ons provided by third party vendors, such as Cigraph or Cadimage. Version history For a detailed version history see the help center article. References External links Graphisoft.com - The manufacturer's website ARCHICAD installer downloads Graphisoft Help Center GDL / BIM developer for ARCHICAD Lachmi Khemlani wrote a review of ARCHICAD 17 for AEC bytes Facility management software for ARCHICAD 3D graphics software BIM software Building information modeling Computer-aided design software Computer-aided design software for Windows MacOS MacOS computer-aided design software
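As a brief, hedged illustration of the data interchange capability mentioned in this entry: IFC files exported from ARCHICAD (or any other BIM authoring tool) can be read back with the open-source IfcOpenShell library for Python. IfcOpenShell is not part of ARCHICAD and is used here only as one possible independent consumer of the exported data; the file name model.ifc is a placeholder, and the element types queried are standard IFC classes rather than anything specific to ARCHICAD.

```python
import ifcopenshell  # open-source IFC toolkit, independent of ARCHICAD

def summarize_ifc(path):
    """Print a rough inventory of a few common element types in an IFC file."""
    model = ifcopenshell.open(path)
    for ifc_class in ("IfcWall", "IfcSlab", "IfcDoor", "IfcWindow"):
        elements = model.by_type(ifc_class)
        print(f"{ifc_class}: {len(elements)} element(s)")
        for element in elements[:3]:
            # GlobalId and Name are standard attributes on IFC rooted entities.
            print("   ", element.GlobalId, element.Name)

# Example usage with a placeholder file name:
# summarize_ifc("model.ifc")
```

This kind of round trip (export from the authoring tool, re-read with an independent parser) is a common way to check that an interchange file actually carries the expected model data.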
6109772
https://en.wikipedia.org/wiki/Verner%20W.%20Clapp
Verner W. Clapp
Verner Warren Clapp (June 3, 1901 – June 15, 1972) was a librarian, writer, and polymath. Starting as a summer clerk at the Library of Congress in 1922, Clapp rose to chief assistant librarian and acting Librarian of Congress. In 1956, he left the Library to serve as the first President of the Council on Library Resources. In these and other capacities, Clapp significantly contributed to the administrative and technological modernization of the Library of Congress and to librarianship generally. Known to his peers as "Mr. Librarian", a "library giant", "the librarian's librarian" and, among other accolades, "the library world's Da Vinci", Clapp earned across his varied career tremendous professional and personal respect and many of the library industry's highest honors and awards. Librarian of Congress Lawrence Quincy Mumford said of Clapp, "His contributions to the Library of Congress and to the library world are so varied and numerous that one is staggered at the knowledge that a single person in his lifetime could accomplish this." William S. Dix, Librarian of Princeton University, said of Clapp that he was "close to the center of almost every important development in scholarly librarianship for at least 30 years". Clapp never formally trained in librarianship, having received an A.B. from Trinity College, Hartford, Connecticut, and having studied graduate-level philosophy at Harvard University. Instead, he applied a practical mind and insatiable curiosity to problem solving, coordination and technological solutions. Clapp's professional focus and accomplishments include materials preservation, library cooperation, technology (including microfilm and computerization), copyright, fair use, Cataloging in Publication (CIP), inter-library networking and cooperation, and user access. Clapp also played significant roles in the preservation of the Declaration of Independence and other foundational documents during World War II, post-War library acquisitions, and the creation of both the United Nations Library (now Dag Hammarskjöld Library) and the Japanese National Diet Library. Early life Clapp was born in Johannesburg, South Africa (then the "Transvaal") to an English mother, Mary Sybil Helms Clapp, and an American father, George Herbert Clapp. Mary Helms, daughter of Danish mining engineer Ludvig Verner Helms, was born in Sarawak in Borneo, educated in Europe, and had moved to the Transvaal to work in a law office when she met George Clapp, an American entrepreneur from Dover, New Hampshire, who was in Johannesburg to sell bicycles. They were married in 1898. During the Second Boer War, Mary Clapp served as a British army nurse. In 1905 the family moved to the United States, settling in Poughkeepsie, New York, where George Clapp's brother ran a drug store. Clapp and his father took long hikes across the countryside and swam in the Hudson River, which led to Clapp's life-long passion for brisk, extended walks and the outdoors. Clapp credited a foundry across from his home for his fascination with machinery and mechanical processes, as the foundry workers entertained his questions and curiosity. Throughout his childhood Clapp was an avid reader who, he said, "finished most of Dickens, and Scott by the time I was fourteen", as well as Herodotus, given to him by his mother for his fourteenth birthday and which he later read in Greek. After Poughkeepsie High School, Clapp attended Trinity College, graduating with an A.B. in 1922. 
Clapp was captain of the Trinity track team, member of Sigma Nu fraternity, joined the Student Army Training Corps in 1918 (see Reserve Officers' Training Corps) and was inducted into Phi Beta Kappa in 1921. In 1922, he enrolled at Harvard University to study graduate-level philosophy, where he studied under Bertrand Russell. He additionally audited literature courses by John Livingston Lowes and Irving Babbitt, "to whom", Clapp recollected, "I suppose I am more indebted than I know. And of course, there is T.S. Eliot – and so many others." Career at Library of Congress Clapp's 33-year span at the Library of Congress was described by a biographer as "a career of immense diversity and depth, a career not likely to be matched in today's age of specialization". Such rise in "specialization" came hand-in-hand with enormous technological change, making Clapp's entrance to librarianship, although unintended, propitious. Clapp scoffed at distinctions in the field between "the librarian as bookman and the librarian as retriever of information", as an interviewer queried of him in 1965, calling it: "sheer baloney". Educated in the liberal arts and philosophy, fascinated by books, gadgets, technology, and ideas—and untrained in librarianship, Clapp's contribution to what he called the "professional memory," was enormous, to which his peers would repeatedly attest, such as his entry in the Dictionary of American Library Biography: "His interests were so broad and his knowledge so extensive that one is baffled in trying to highlight his accomplishments." While promoting the study of librarianship for others, it was his overarching view of the field as a human and not just professional endeavor that yielded such a diverse and acclaimed career that started, quite literally, at the reference desk of the Library of Congress's Main Reading Room. At the 1956 dedication to the National Library of Canada, Clapp described his vision for libraries and, as such, his professional purpose: "Libraries are the repositories of man's tradition, and so, in a manner of speaking, of his collective soul." Summer internship and career start During the summer of 1922, Clapp joined his parents, who had moved to Washington, D.C., and found temporary work at the Library of Congress as a cataloger in the Manuscript Division. There, Clapp filled in for another cataloger who, Clapp recalled, "usually turned out in a year somewhat less than a third of what I turned out that summer". Clapp cataloged collections relating to the American Revolution, the Civil War and the papers of Reverdy Johnson, whose "execrable penmanship", Clapp said, "was even worse than that of Horace Greeley, which was notably bad". "Disenchanted" with study of philosophy at Harvard, Clapp returned to the Library of Congress the following year, "thinking then that I would put in a year there before deciding on a profession". Employed as an assistant reference librarian in the Main Reading Room, Clapp recalled, "They turned me loose and I began to explore this extraordinary institution into which I had fallen." Clapp engaged his innate curiosity to learn about the Library, as "I had no qualifications for the job whatsoever except a simple B.A. and I had had no library experience except my stint during the summer." On "slow evenings", Clapp would seek out chief cataloger and famed librarian, Charles Martel (creator of the Library of Congress classification system), "And then I would talk to him about cataloging problems for an hour before slipping back to work." 
From there Clapp immersed himself in librarianship, crediting significant influence from Librarian of Congress Herbert Putnam, whose reports to Congress Clapp read studiously, Charles C. Jewett, Librarian of the Smithsonian, John Shaw Billings, creator of the Library of the Surgeon General's Office, Harry Miller Lydenberg of the New York Library, and Melvil Dewey. Mid-career and promotions After five years at the Main Reading Room, Clapp was tasked as the first head of the Congressional Unit, which had started in 1914 as the Legislative Reference Bureau and which was the precursor to the Congressional Research Service. The Congressional Unit was charged with responding to requests for information from Congress, primarily by collating research and publications from government organizations and agencies. In 1931, Clapp was named Special Assistant to the Superintendent of the Reading Room. In this position, Clapp and his colleague, David C. Mearns, researched the early locations of the Supreme Court, cataloged the existing books in the Library from Jefferson's personal library, and helped catalog President Calvin Coolidge's books in the White House and the library of Oliver Wendell Holmes Jr. First Lady Eleanor Roosevelt requested of Clapp and Mearns a history of the White House egg roll, for which the pair produced a monograph with two editions, one listing Clapp and Mearns as authors and the other Mearns and Clapp, which caused a "minor panic" among catalogers. Clapp additionally supervised the Division of Books for the Adult Blind, for which he was credited with expanding the program, including to earn it "separate annual appropriation" to "provide books for the use of the adult blind readers in the United States", as well as to develop standards for Braille and other Library services for the blind. In 1937, Clapp was promoted to Assistant Superintendent of the Reading Room, and in 1940 was named Director of the Administrative Department. Library of Congress reorganization In 1939, Archibald MacLeish took over as Librarian of Congress from Putnam, who had served as Librarian for forty years. Following recommendations of the Librarian's Committee, consisting of three outside technical experts, and with Clapp's close guidance and direct input, MacLeish reorganized the Library's 35 administrative units into three departments (excepting statutory offices of Copyright and the Law Library), Administrative, Reference, and Processing, placing Clapp as budget officer and administrative assistant of the Administrative Department, overseeing personnel, accounting, buildings and grounds and publications. In 1943, MacLeish moved all acquisition activities into a new Acquisitions Department headed by Clapp (and abolishing the Administrative Department). Clapp remained at the Library until 1956, having served as Chief Assistant Librarian and Acting Librarian, roles and events for which are outlined below. Preservation of the Declaration of Independence during World War II In 1940, Great Britain sent to the United States the Lincoln Cathedral copy of the Magna Carta for its protection. Following U.S. entry to the War, Clapp was charged with oversight and protection of that document and other important documents, including the Declaration of Independence, the Articles of Confederation, and the U.S. Constitution, as well as a Gutenberg Bible and the Library's extensive collection of Stradivarius violins. 
Clapp coordinated the move of these documents and artifacts to the United States Bullion Depository at Fort Knox, Kentucky, and, most importantly, their proper protection from atmospheric exposure and deteriorating conditions by placing them in waterproof, hermetically sealed containers. Other materials were moved to fireproof buildings at several universities, and over 8 million catalog cards were microfilmed for duplication. Deteriorated condition In 1971, Clapp published "The Declaration of Independence: A Case Study in Preservation", in which he described the Declaration as "one of the most abused documents in the history of preservation of documents". He chronicled how, following the August 2, 1776 signing by most members of the Continental Congress, as absent members arrived to add their signatures the Declaration was "pulled out, unrolled, signed and rolled up again", and how it was subsequently "stuffed into a barrel and carried off in a cart, through all types of weather" as the document was evacuated during the Revolutionary War from Philadelphia to Baltimore, Annapolis, York, and Trenton, and again, in 1814, during the British invasion of Washington, D.C., to a hiding place in Leesburg, Virginia. Worst of all, President John Quincy Adams had ordered a wet-pressing to reprint the document and distribute copies across the country. "The document has never quite recovered from that wet-pressing," noted Clapp. In 1880, a committee at the National Academy of Sciences recommended protecting the document from light, as it had been on display at the State Department in the open air. In 1921, it was delivered to the Library of Congress, which cataloged and photographed it, then placed it on public display under glass. On January 9, 1940, the Captain of the Guards of the Declaration reported a crack in its upper right-hand corner, the result, it was determined, of expansion and contraction of the document from exposure to changing atmospheric conditions over time. Preparation and storage at Fort Knox Accordingly, Librarian MacLeish created the position of "Keeper of the Collections" to monitor the Declaration and other important historical documents. Meanwhile, the Library engaged experts in the preservation of art works and historical documents from the Isabella Stewart Gardner Museum, Harvard's Fogg Art Museum and the British Museum. The Library set in place its contingencies for war, including reserving forty cubic feet of storage space at Fort Knox, where a bronze, hermetically sealed case was to be placed to hold the Declaration and the Constitution. To move the documents, the case was placed into an oak box packed with "rock wool" to protect against fire and water damage. Along with two Secret Service agents, Clapp accompanied the documents' transfer by train to Fort Knox, where the lower vaults maintained 59% humidity. With the documents secured, Clapp next oversaw implementation of prior plans to properly restore and preserve the Declaration. On May 14, 1942, conservator George L. Stout of the Fogg Museum and Evelyn Ehrlich of the Gardner Museum extracted the document from its container and commenced the delicate process of removing it from its mount, "a pulp board with green velvet, and various glues, Scotch tape and other adhesives". After various consultations, it was decided to encapsulate the Declaration and the Constitution in sealed containers with slightly humidified helium, along with an additional paper, created by the Bureau of Standards, to act as an absorbent should a dew point arise. 
In 1952, these and other documents were transferred to the National Archives.

Focus on preservation

Clapp's role in the preservation of the Declaration of Independence exemplified his talents as a librarian and administrator: setting priorities, seeking expert consultation, and coordinating implementation. The larger experience informed Clapp of the need for building a "body of literature" for document restoration and preservation, where none had previously existed. Clapp coordinated its collation from experts at the National Academy of Sciences, the Gardner Museum, and the Library of Congress itself. The experience solidified Clapp's later-career focus on preservation. He wrote,

Some conclusions may be drawn from this story. First, it is apparent that the topic of preservation has been neglected in the professional discussions of librarians .... We need manuals; we need research .... We need training. There should be more jobs such as that of the Keeper of the Collections in the Library of Congress. Preservation must become an important part of the profession.

World War II and aftermath

WWII efforts

As first director of the Acquisitions Department, starting in 1943, Clapp focused on "programs in support of the war effort". Such wartime efforts included creation of records for teaching Arabic to troops heading to Africa, handbooks for troops, and supplying maps from the Library collections. Clapp bragged, "We prepared and distributed for use of government agencies a 96-page bibliography on North Africa within two weeks of the landing of U.S. troops."

Post-War military book distribution and acquisitions

After the war, Clapp supervised the collection of textbooks from the military and their redistribution to "institutions of higher learning at minimal cost". Clapp reoriented the search for materials from subject to geographic origin, which greatly enhanced the efforts of the postwar Cooperative Acquisitions Project, which "aimed at securing European publications produced during the war but hitherto unavailable" and "buying all they could". As head of the Library of Congress mission to Europe to secure collections following wartime devastation, Clapp negotiated the release of $100,000 in pre-war orders placed by American libraries with booksellers in Russian-occupied Leipzig. The Cooperative Acquisitions Project yielded some two million European publications that were acquired for the Library of Congress and over 100 other U.S. libraries. These experiences enhanced Clapp's appreciation of the practical utility of libraries and of their role as agents for the common good, a view he argued in his seminal work, The Future of the Research Library: "The democratization of the library—the great achievement to which American librarianship should pride itself—need not result in diluted scholarship, but should result in greatly widened usefulness of these institutions wherein the experience of mankind is recorded."

United Nations Library

San Francisco Conference

In 1945, Clapp was charged with curating a library for the United Nations conference at San Francisco, thus establishing the foundations for the United Nations Library. He coordinated distribution of U.N. founding documents to libraries around the world, which initially required translation of the U.N. Charter into five languages.

Establishment of the United Nations Library

In August 1946, the United Nations Secretary-General ordered formation of a formal library division.
The UN library holdings had been housed at Hunter College in New York and then at the temporary United Nations facility at Lake Success, New York, and supervised by the United Nations Information Office, which was disbanded in 1947. In April of that year, the Secretariat organized a committee of experts to set criteria for establishment of a permanent library. The committee recommended focus on reference and bibliographic services "rather than on accumulation and preservation of a large collection" and that a "competent librarian be engaged" to guide decisions, especially regarding collection size and required space. Along with John E. Burchard of the Massachusetts Institute of Technology, a "library building expert", Clapp was engaged to set criteria for the collection. Clapp interviewed principals at the Secretariat and issued a report on May 3, 1947, recommending "stack space for 500,000 volumes" for ten to twenty years and for 1,000,000 volumes for needs over thirty-eight years. Clapp additionally recommended that the Library make use of interlibrary loans in order to access the extensive existing collections in New York City and to avoid "unnecessary duplication". Based on the recommendations of Clapp and Burchard (who felt that a maximum space for 500,000 volumes would suffice), the Division of Library Services within the Department of Public Information was established in 1948. Clapp next served as vice-chairman of a committee organized to create a formal policy statement that was subsequently adopted by the General Assembly in December 1949. The report followed Clapp's original recommendation that "the emphasis will be on service and immediate usefulness, not on accumulation and preservation."

Creation of the Japanese Diet Library

In December 1947, Clapp left as head of the Library of Congress mission to Tokyo to coordinate legislation for, and the establishment of, the Japanese National Diet Library. His work garnered praise from the Supreme Commander for the Allied Powers, Douglas MacArthur, and from Hitoshi Ashida, Japanese Foreign Minister and, later, Prime Minister. The National Diet Library (NDL) inherited collections from the Imperial Diet libraries (established 1890) and the Imperial Library (established 1872). Following the direction of the U.S. Library mission, Article 130 of the Diet Law declared, "The National Diet Library shall be established in the Diet by a separate law, in order to assist Diet Members in their study and research."

Clapp Collection at Diet Library

On November 12, 1968, on the twentieth anniversary of the creation of the Diet Library, Clapp and his wife, Dorothy Devereaux Clapp, traveled to Japan, where he was awarded the Order of the Sacred Treasure, Second Class, "which became one of his most cherished honors among many professional awards". Following his death, in 1973 Dorothy Clapp donated books and materials to form the "Clapp Collection" at the National Diet Library. According to the Library, "In doing this she was carrying out the wishes of Clapp, who always expressed his gratitude for having been involved in the foundation of the NDL. The collection has been named 'Clapp Collection' to honor his achievements." Additionally, the National Diet Library website notes that Clapp "was remembered by many Japanese for his passionate devotion to the job, his warm-hearted sense of humor and his sincere modesty towards his Japanese colleagues even though his stay in Japan was only a little less than two months."
Chief Assistant Librarian

Appointment

In March 1947, before the U.S. Mission to Japan, Librarian of Congress Luther H. Evans named Clapp Chief Assistant Librarian of Congress, an appointment "responsive to a wide popular demand", according to Evans. From this position, Clapp engaged with broader Library policies and priorities. The position allowed Clapp "to keep a finger in every pie inside the library and out".

Loyalty program

In 1947, President Harry S. Truman issued Executive Order 9835 to ensure the "loyalty" of all Executive department employees. Librarian Evans voluntarily included the Library of Congress, which is not an executive branch agency, over concerns about Congressional appropriations and in keeping with its voluntary compliance with Civil Service regulations. In June 1948, the American Library Association (ALA) "adopted a Resolution Protesting Loyalty Investigations in Libraries", thus opposing Library of Congress enforcement of the Loyalty program. As the political demands increased, the Library fell into the position of having to balance the protection of employees against the political and appropriations pressures of Congress. Evans tasked Clapp with implementation of the Loyalty program, requiring Clapp to navigate its political and legal demands on employees, which came to include scrutiny of homosexuality on the grounds of susceptibility to blackmail. Clapp, whose time and attention were exhausted by the issue, wrote to a professional colleague that "loyalty investigations are just about as 'desirable' as jails and sewers" but "owe their existence to the same cause – man's imperfection". In 1956, Clapp published a nine-page letter to the editors of The Washington Post and the Washington Times-Herald denouncing the "Star Chamber proceedings... to which penalties are attached (the penalties of dismissal and disbarment from employment) without being permitted to confront their accusers or even know their identity". In her review of the episode, Louise Robbins chronicled Clapp's navigation of competing and contradictory policies, politics, and demands, public and private, and within the Library, and concluded that "Clapp used his power within ALA to keep the association's position on loyalty programs from condemning the very practices he was charged with implementing". Fellow librarian Frederick H. Wagman recalled, "It was a painful experience made agonizing by our insights into the weaknesses of the procedure, the ineptitude of the investigators, the recording of the permanent record of gossip and hearsay, the opportunity for character assassination. Verner was one of the very few government officials who had the courage to speak out and do so forcibly." Ann L. Hallstein, Chief of the Records Management Division of the Copyright Office (and formerly a planning assistant to the Librarian), wrote, "Clapp (and Wagman) placed a high value on the individual employee's right to be trusted until proven unworthy of that trust, in matters of loyalty or anything else."

Projects

Among other projects and priorities, Clapp assisted in the launch of the Library's National Union Catalog of Manuscript Collections, promoted "microforms as a means of preservation and medium for storage", and investigated copyright law and library deposit requirements, including exploring the possibilities of cataloging-in-source. Clapp additionally reoriented Library acquisition policies, advanced interlibrary cooperation, especially regarding microfilm, and addressed issues of space and stack maintenance.
Most importantly, Clapp promoted the idea that "all library service is local library service – it is where the reader is, and 999 parts out of 1000 of him is in his home, his office, his lab, his school or his library. National library development as we are finding out... is the sum of local library development." Typically, Clapp responded to objections to his program with insight and humor, such as his note to the ALA regarding distribution of materials to depository libraries: "I inquired whether the association wanted us to burn the books rather than distribute them free." Clapp was enormously popular within the library community and among the LC staff. On the day of his appointment to Chief Assistant Librarian, the Manuscript Division wore black. Clapp's popularity amongst the staff was reciprocated: "Busy, totally involved with his work, always perched on the cutting edge of all new developments in library science, Clapp nonetheless had time and concern for staff."

CIA liaison

Following World War II, the U.S. government became deeply concerned with managing documents and information, especially top-secret materials. Starting in the early 1950s, the Central Intelligence Agency (CIA) reached out to the Library of Congress for its expertise. As Chief Assistant Librarian, Clapp was assigned as the Library's liaison to the CIA. Reflecting their common interest in the development of document management technologies, Clapp maintained the relationship with the CIA after moving to the Council on Library Resources, especially regarding microfilm and other document replication and management systems. For example, at a 1961 meeting of the National Microfilm Association, Clapp gave a talk titled "The Place for Microfilm in Records Management" and presided over a session on "library-archival application of microforms" that also featured Paul W. Howerton of the CIA, who spoke on "A Micro-film Unit Record System; a Case Study."

Acting Librarian of Congress

In July 1953, Clapp was appointed Acting Librarian of Congress following Luther Evans's departure, at the request of President Truman, to head UNESCO. As Acting Librarian, Clapp carried on with existing Library and personal priorities and policies. A notable event of his tenure was the honor of presenting at the White House a "specially inscribed" bibliography of the works of Winston Churchill prepared by the Reference Division. The Library's Annual Report to Congress noted, "Sir Winston remarked that he planned to add to this bibliography a 'modest' work—a history of the English-speaking peoples, to run approximately 800,000 words." The Report also noted publication by the Library of "the most comprehensive list available of motion pictures from the birth of the industry in 1894 to 1949", an important collection of titles for the bibliographic record and copyright purposes.

Denial of Librarian of Congress appointment

It was understood by the Library staff, the librarian community, and by Clapp himself that he would be appointed the next Librarian of Congress. Clapp "enlisted the aid of friends across the country" to promote his candidacy, which would require President Eisenhower's appointment and Senate confirmation. However, an episode related to his governance as Acting Librarian interfered. In that capacity, Clapp had approved an exhibit at the Library on propagandistic photo manipulation.
Called "Libraries in a Free world", the exhibit highlighted the infamous Soviet manipulation to remove Trotsky from a photo of him with Stalin, an example of Damnatio memoriae, by which an important historical actor is removed from the historical record in order to change the actual historical record. However, the exhibit compared the Soviet example with that of Maryland Senator John Marshall Butler, whose campaign, along with that of Senator Joseph McCarthy, had distributed a manipulated photograph of his opponent, Millard Tydings, superimposed with American Communist Party figure, Earl Browder. Wagman wrote, "It is said that the repercussions were resounding, in that this one defense of truth may have prevented a certain senator from supporting Clapp as a nominee for the position of librarian of Congress." Eisenhower nominated Quincy Mumford, who took office on September 1, 1954. While Evans told Clapp, "Verner, good things await you elsewhere if you feel uncomfortable in remaining at LC," Clapp continued as Chief Assistant Librarian until 1956. During his final two years at the Library, Clapp published two important articles that highlighted "the development of Clapp's thinking on library problems ... Each emphasized Clapp's view of the seamless nature of library problems, 'access' and 'organization' being primary, and that libraries should 'direct attention to doing the best with what ... [they have] ... and under the conditions given.'" Council on Library Resources In September 1956, Clapp resigned from the Library of Congress to become the first President of the Council on Library Resources (CLR) (now Council on Library and Information Resources). Inspired by Louis Booker Wright of the Folger Shakespeare Library and funded by the Ford Foundation, the primary mission of the CLR was to promote inter-library cooperation and development of library technologies with focus on the needs of academic and research libraries. Clapp advanced this mission through promotion of "machine-readable cataloging" technologies, development of the MARC format, cataloging-in-source, standards and technologies in permanent, durable paper and other preservation projects, cataloging of music, microfilm, and, above all, inter-library cooperation and coordination. Origin and appointment as president of CLR The direct origin of the CLR came of two conferences held at the Folger Library that Wright had organized to discuss his concerns about university library coordination, photoduplication technologies, and research librarianship (as opposed to general librarianship), and for which, at the suggestion of Fred Cole, he solicited and received funding from the Ford Foundation ($6,000 and $10,000 respectively). The first, held on January 15, 1955 with fifty participants, included Librarian of Congress Mumford and Chief Assistant Librarian Clapp, whom Mumford insisted be included. The second conference, held, on March 31, 1955 included a narrower group of fifteen, with, again, Mumford and Clapp, and produced a firm proposal that the Ford Foundation fund with a $5 million grant with its specific organization and purposes to be drafted by Clapp, who, according to former CLR president, Deanna B. Marcum, "had impressed the group with his grasp of library issues and was considered particularly rational in his approach". The Council was formally launched In September 1956 with Clapp as its first president. Marcum observes, "From boyhood, Clapp had been a tinkerer... 
loved gadgets, and was forever thinking about what could be invented to make library jobs more efficient or streamlined." Furthermore, Clapp was an academic outsider, never trained as a librarian: "Clapp's fascination with applied technology brought into sharp focus the conflict that existed within the board between those who advocated pure research and those who wanted to pursue practical applications. From the day Clapp moved to the Council, he was anxious to do practical things."

Dispute over CLR direction

By 1958, Wright, serving on the CLR Board of Directors, "grew increasingly concerned about its influence and its results" and appealed to the Ford Foundation to "take action to redirect the council". Clapp oversaw a review of the Council's standing and reported finding "two variant, although not necessarily contradictory, views within the membership of the Board of Directors concerning the direction which the Council's program should take... [to] support basic research aimed at making the fullest use of the capabilities of modern technology... [or] immediate development of devices and systems aimed at the practical solution of obvious and immediate problems." Marcum claims that "Instead of choosing one direction over the other, the board and the advisors to the council yielded to the opinions of the 'professional librarian,' Verner Clapp." Clapp explained that the CLR board membership "was selected to represent the public interest in libraries, rather than that of librarians; and librarians are a minority in it". At that time, members of the CLR Board of Directors represented private industry, directors of university and private libraries, university presidents and vice presidents, and a president of the American Bar Association (the two non-academics were the former and current chairmen of the National Book Committee). Wright's primary objection was to Gilbert Chapman, whom the Ford Foundation insisted upon placing as chairman. Chapman was president of Yale and Towne (formerly the Yale Lock Manufacturing Co.) and served as chairman of the National Book Committee and president of the New York Public Library. According to Marcum, Wright held a "deep bitterness toward Chapman, not as an individual, but as a type". Nevertheless, Clapp proceeded with his "very practical agenda, and he worked exceedingly hard to accomplish specific tasks... for he methodically chose what projects needed attention and then sought individuals and institutions that could work on the problem". Clapp articulated his approach, observing that "Our ranks have included physicians, lawyers, chemists, and physicists, as well as philosophers, historians, and men of letters. There is no doubt that we have benefited from the cross-fertilization."

Focus on problem solving

In a March 1960 ALA Bulletin article titled "'To Assist in solving the problems of libraries': The program of the Council on Library Resources", intended to inform ALA members about the purposes and activities of the CLR, Clapp plainly described the CLR's purpose as being "for getting some things done that need to be done". To identify those "things", Clapp explained, "First the Council went looking for problems. It found them by the score," including requests for "how to prepare for a censorship attack", lighting for lower shelves in bookstacks, and "how to break up dating parties in the reading room", among issues that "didn't make sense". "Then..."
Clapp wrote, "It received a group of proposals which were potentially so important that they could not be ignored, consisting of grants for funding ALA representation at a German conference "on the subject of code revision", a Rutgers University exploratory committee to identify "the points at which research and development might be profitably brought to bear", an investigation "to identify the causes of deterioration in paper in libraries and to attempt to find correctives", and "to establish a closed-circuit TV system between the main and a number of departmental libraries at the University of Virginia". Clapp reported that these projects were each fruitful, and some received additional funding from the CLR. Clapp noted that these initial projects helped the CLR identify "that its field of activity could best be projected" in four areas: cataloguing, reader access to materials, administrative support, and technological research. CLR projects and grants These projects included development of a "cooperative processing center for a number of independent libraries", development of "Cataloguing in Source" at the Library of Congress, book preservation, "studies of the most efficient use of library space" and "copying as a mechanism of physical access", including "an automatic book cradle/page turner", employment of facsimile transmission of text, "dry photocopying", and "microcopy". Marcum characterized these projects as "using technology to streamline cataloging procedures" and funding of "various educational and commercial institutions that promised to develop a new gadget – a new technological method – for solving a library problem". Clapp, on the other hand, conceived of these efforts as essential application of existing, large-scale industrial technologies to the more narrow purposes of libraries and their users, as well as to address the implications of these uses on copyright. Clapp summarized his purpose as aimed to resolve any "problem of library work", hoping that nevermore need a book be thrown away just because its paper deteriorated; that cataloging information in internationally accepted form now accompanies each book wherever it goes; that techniques are now available to reduce enormously the costs of book circulation, or to speed up the provision of material to readers (without losing control of the reference copy) through cheap and rapid copying processes; or that the techniques of telefacsimile have been so adapted to library work that duplication of any but the books most used locally can immediately become a thing of the past. To these ends, the CLR targeted grants for assisting in the development of the MEDLARS system for indexing medical literature at the National Library of Medicine, the "Machine Readable Cataloging" program, MARC, at the Library of Congress, which served as "an essential premise of a nationwide bibliographic network" and which "increased librarians' familiarity with computer technology" and "automated services", and Cataloging-in-Source initiatives to standardize cataloging of newly published books. In 1961, the CLR funded $100,000 to the Library of Congress for "a survey of the possibilities of automating the organization, storage, and retrieval of information in a large research library... not only from the point of view of the functioning of an individual institution but also from that of a research library whose activities are interrelated with those of other research libraries." 
Clapp's CLR legacy

In Librarian Mumford's 1963 Report to Congress, the acknowledgements recognized, "In particular, the team appreciates the guidance and support given to it personally by Verner W. Clapp." The report affirmed many of Clapp's priorities as he had articulated them in the 1960 ALA Bulletin article outlining the goals of the CLR. Indeed, as Hallstein observed in Clapp's entry in the Encyclopedia of Library and Information Science, "Clapp left his mark on the council, to the benefit of all. The council gave him the opportunity to extend his vision to the library world at large in a lasting way, for it funded dozens of projects which bore Clapp's imprimatur." Other important programs initiated or envisioned by and during Clapp's tenure at the CLR include the National Union Catalog of Manuscript Collections, the Union List of Serials, the National Preservation Board, the Northeast Document Conservation Center, founded in 1973 by a grant from the CLR and other organizations, and the Brittle Books Project of 1967 and the subsequent Preservation Microfilming Office, established in 1968 at the Library of Congress, from which the Digital, Sound, and Film preservation offices at the Library of Congress arose. Continuing Clapp's interest in book preservation and paper technologies, in 1985 the CLR, under president Warren J. Haas, coordinated a joint task force with the Association of American Universities to address "the problem of decaying scholarly materials in the nation's libraries and develop a national plan for collective action". The task force led to the creation in 1986 of the Commission on Preservation and Access (CPA), which, in conjunction with the National Endowment for the Humanities, led to "massive microfilming projects in major research libraries", as well as, through the Brittle Books Program of the 1980s and early 1990s, to the creation of the Digital Library Federation in 1994 under the renamed Council on Library and Information Resources.

Book preservation

William Barrow

Clapp was ever concerned with preservation and paper deterioration; thus, among the CLR's first grants was one to the Virginia State Library to support the work of William J. Barrow. The grants laid the basis for the establishment of the Barrow Laboratory at the University of Virginia. While Barrow had earned his reputation as an expert on paper deterioration and restoration, including creating an industry-standard restoration process, Clapp's recognition of his work and the CLR grants institutionalized Barrow's work and brought his expertise to mainstream librarian and publishing entities.

"The Story of Permanent/Durable Book-Paper"

Clapp's book, The Story of Permanent/Durable Book-Paper, 1115–1970, reviewed the technologies, techniques, and materials of book-paper, from parchment to rag paper to wood pulp, and their economy, utility, and durability. Over time, Clapp explained, paper production evolved to meet the needs of increasing volume. "To meet the ever-increasing demand for paper, accompanied by a correspondingly severe competition for high-quality raw materials, manufacturers began to adopt improvements in manufacturing technique which too often led to degradation of the product..." however, "The processes of degradation were slow at first and not immediately apprehended." Clapp credited Barrow with recognizing the role of acidity in paper degradation and applying the principle towards restoration in the "Barrow laminating process".
Increasing demand for paper into the 19th century led to time- and material-saving measures, such as the use of alum or bleach, which allowed manufacturers to ignore problems of degradation while meeting demand. Clapp noted, "Even after the relationship of alum to deterioration had been demonstrated, it kept on conveniently slipping from notice." Into the 20th century, the focus on materials took precedence over the effects of acidity, which were instead blamed on atmospheric pollution. Clapp credited Barrow with forcing recognition of "internal elements", rather than atmospheric pollution, as the cause of acid-driven deterioration. Barrow's next project, Clapp documented, was setting standards for acid-free paper, which the publishing industry subsequently embraced.

Future of the Research Library

In 1964, Clapp published The Future of the Research Library, a compendium of his 1963 Windsor Lectures plus an appendix titled "The Problems of Research Libraries: Programs for Solution". The lectures encapsulated Clapp's overall missions of advancing "local self-sufficiency" and "sharing the resources", from which he derived the challenges facing libraries, individually and collectively: identifying and addressing collections, preservation, cooperation, dissemination, and self-sufficiency, ever with an eye on expenditures and on leveraging technologies from other industries. As Clapp wrote, "The real advantages should derive not from space-saving but from inexpensiveness of dissemination." Clapp considered libraries essential to democracy, which informed his view of their very role in the collection, preservation, and dissemination of information. Technology, for Clapp, had to conform to this purpose. Contemporaneous readers understood Clapp's purpose, but many entirely misunderstood his method. Donald Brown Engley, librarian at Clapp's alma mater, Trinity College, wrote that "while Clapp's words were written in the context of the development of computers to make the knowledge buried in printed books more readily accessible to potential users everywhere... they carry the same force in the more elementary matter of sharing of the basic responsibilities of acquisition and storage of resources for common and community use." Reviewer Nathan Reingold wrote that "Historians of technology... will not find this volume very enlightening; it is written by a librarian's librarian for his fellow practitioners." Ironically, Reingold criticized Clapp for the "assumption that the library of the future will either have a computer at its center or around its perimeter but that the store of information, miniaturized or not, will receive practically the same treatment as today." W. L. Williamson of Columbia University (biographer of William Frederick Poole) recognized Clapp's concern for "local self-sufficiency... micro-photography, storage collections, and deterioration of book stock" but dismissed his "brief chapter" in which "he suggests some of the principal uses he expects for electronic computers and other new devices." Clapp, on the other hand, saw those technological solutions as integral to and consistent with the overall mission to promote "the democratization of the library", whatever form they might take. Where his contemporaries saw limits, Clapp saw possibility.
Automation and technological solutions

While the solutions Clapp reviewed were drawn from existing CLR and other projects, especially the Library of Congress "Automation" report, his larger contribution in The Future of the Research Library was to clarify the needs of research libraries and their solutions in abstract form and as a set of principles. As such, Clapp ventured into the realm of "electronic computers", focusing not on their limits but on their possibilities. For example, Clapp wrote that "Telefacsimile has long offered an attractive possibility; for more than three decades it has been successfully used in newspaper work; why not also for libraries?" Ever practical-minded, Clapp recognized the cost of such systems (an entire day would be required for "scanning articles in journals while the whole pile of books could have been delivered by truck in an hour"), noting that, instead, "machine-readable form" could be "rapidly and inexpensively" transmitted by telephone. As a biographer noted, "It seems Clapp foresaw the possibilities of networking and the necessity for strong regional and local systems before such terms were even fashionable, let alone discussed seriously."

Microfilm

While discussing the technical potential of microcopy (see Microform), Clapp wondered whether current applications "failed to discriminate between the value of miniaturizing for the mere sake of space-saving, and its value in facilitating dissemination and in the consequent local availability of books." Regardless of the form of "massive acquisitions", Clapp reasoned, "its readers should be given the same opportunities for identifying relevant material among them as if they were originals." Clapp clarified the benefit of microfacsimile not just for its savings in cost, i.e., inexpensive reproduction and preservation, but for its "important extension of the bibliographic and physical access which is afforded by immediate availability." Clapp's vision resolved these competing and overlapping needs of reproduction, preservation and dissemination:

I should like to be able to say that nevermore need a book be thrown away just because its paper deteriorated; that cataloging information in internationally accepted form now accompanies each book wherever it goes; that techniques are now available to reduce enormously the costs of book circulation, or to speed up the provision of material to readers (without losing control of the reference copy) through cheap and rapid copying processes; or that the techniques of telefacsimile have been so adapted to library work that duplication of any but the books most used locally can immediately become a thing of the past.

Computerization

Clapp foresaw that the most effective research library would "make available to its users the informational records of mankind... But since the still-increasing rate of production of informational records will make it even more difficult than now for libraries to acquire and catalog significant portions of the total, other means must be devised." Thus, Clapp considered the "possible application of computer techniques", writing that "There can be little doubt that methods will gradually be found for employing computers or computer-like machines in the operations of libraries." For dissemination purposes, Clapp realized, "It should be possible not only to update and to weed this catalog electronically at long distance, but also to consult it electronically from any point in the country."
By extension, Clapp explained, "The spark needed to fire the succession of developments leading to this result is very simple – it is the commencement of the conversion of a great catalog to machine-readable form. Although conditions are not quite ripe for this, they are impending." Using the example of the bibliographic database MEDLARS, a CLR-funded project that saw breakthroughs in font control, Clapp saw its potential not just in terms of automation and dissemination but also for reducing reliance on expensive books that quickly become outdated, thereby easing a library's burdensome requirement of "comprehensiveness" in its holdings. Clapp wrote, "Now, though the impracticability of comprehensiveness may thus be acknowledged, yet it would appear that local self-sufficiency should bear some quantitative as well as qualitative relationship to comprehensiveness," thus anticipating the necessity of affordable dissemination. At the same time, Clapp recognized the "not so shiny side" of these new technologies, principally "that dependence upon the resources of a distant library involves so much in the way of formalities, delay, cost and the frustration and indignity of having one's request subordinated protractedly or absolutely to the prior claims of the immediate users of that library."

Copyright

From the time of his roles as Chief Assistant Librarian and Acting Librarian through to his death in 1972, Clapp vigorously promoted, discussed, and published articles on copyright reform, especially as it was affected by the developing technologies of photocopying and computerized reproduction of text. Into the 1960s, in response to those developing technologies, the expanding claims of publishing and multimedia companies challenged library purposes and processes regarding duplication, dissemination, preservation, and access to copyrighted materials. Clapp wrote significant articles on these issues defending the interests and needs of librarians, including his 1962 "Library photocopying and copyright: recent developments" and his 1968 "Copyright—A Librarian's View" (prepared for the National Advisory Commission on Libraries) and "The Copyright Dilemma: A Librarian's View". Consistent with Clapp's comprehensive approach to library needs and potential solutions, in "The Copyright Dilemma" Clapp conceptualized the copyright "dilemma" from the librarian's point of view:

a librarian must now increasingly view copyright as a form of exterior control which threatens serious limitations of his library's freedom to put its collections to work in the service of its users.

He delineated the alternatives for libraries as "(a) to abandon completely the affected services; (b) to dispense with the use of modern technology and to revert to archaic and inefficient methods for providing such services; or (c) to pay in burdensome and discriminatory fees and red tape in order to be permitted to use the more effective methods." Clapp next reviewed the constitutional and statutory purpose of copyright in U.S. law, including Federalist No. 43 ("a brief discussion of this section") and, especially, the Copyright Act of 1909 and its legislative purposes, which Clapp framed as a balance between the constitutional purpose "to promote the progress of Science and Useful Arts" and the public need for access to information. He clarified, "First, how much will the legislation stimulate the producer and so benefit the public; and second, how much will the monopoly granted be detrimental to the public?"
Clapp next tied the purpose of copying to the essence of a library, writing that libraries "may also be said to exist for a principal purpose of facilitating further copying", a purpose he identified as essentially that of preserving copies. Clapp's 1962 work focused on the inherent problem for libraries in reliance upon the copyright fair use exception in the common law, since "Fair use has no statutory basis .... [and] since there is no way of predicting what a court might deem absurd, it is impossible to give a satisfactory definition of what constitutes fair use." Clapp explained that the "Gentlemen's Agreement of 1935" (see Robert C. Binkley: Joint Committee and Documentary Reproduction) between librarians and book publishers, while asserting that publishers maintained absolute rights over copyrighted material, provided a liability exemption for "a single photographic reproduction or reduction of a part of a book or periodical volume in lieu of loan or manual transcription, solely for purposes of research." Clapp referenced the 1961 report to the House Judiciary Committee from the Register of Copyrights that called for statutory resolution of the issue. The report reflected Clapp's views on the need for a statutory definition of fair use. By 1968 the issue of fair use was still unsettled, prompting Clapp's subsequent publications on copyright, which coincided with Congressional hearings on the topic. That year, Congress tabled the issue, as it would until 1976. No longer president of the CLR, in 1970 Clapp presented his undiluted views on the issue at the annual Rutgers University Graduate School of Library Service symposium, titled "Copyright—the Librarian and the Law". The preface states, "This symposium, the eighth in the series, was so successful, productive and meaningful in its content, particularly in light of the existing problems in the area of copyright and copyright legislation, that the School and the Alumni Association decided that it was virtually imperative to publish the complete proceedings of that day's events .... Furthermore, much of the emphasis during the day's discussion was on the subject of photocopying in libraries and on the resultant question of copyright definition with which most librarians are confronted." The symposium, and its perceived importance, reflected the ongoing debates in Congress over copyright reform and spoke directly to them. Charles H. Lieb, counsel to the Association of American Publishers and chairman of the American Bar Association, presented the industry perspective. Lieb argued that the then-current Senate version of copyright reform would, "for the first time", create statutory fair use as well as statutory exemption from liability for copyright violations for librarians—a significant concession by the industry. However, Lieb argued that statutory limits of fair use must prohibit "photocopying in excess of fair use." His overall argument came down to the financial burdens of fair use upon the publishing industry. Two other speakers represented the author and "information industry" perspectives. Clapp then presented that of the librarian. Clapp reviewed the history of copyright law and the present discussion, which was animated by ongoing Congressional debates. Clapp observed, "From the point of view of Congress, which gets its directive in this matter from the Constitution, the copyright monopoly is not merely for the purpose of making money. Its primary purpose, by contrast, is to serve the public interest."
Reviewing the House of Representatives' passage of H.R. 2512 in April 1967, Clapp observed that the bill limited photocopying to "archives" (thus negating other existing fair use exemptions for copying), which resulted in a "clamor from the library world". In response, he said, the Senate adjusted the terms of Section 108 of the bill, which came "close to meeting library needs (But—alas—a miss is as good as a mile!)" The Senate version extended photocopy authorization for libraries to include "replacement copying" and "for service to readers", but, he noted, "There are, of course, a few little conditions", which he listed. "All but two of these conditions are quite acceptable to libraries. Of these, condition d. is crippling but not fatal. But condition e. is fatal." The "fatal" condition e. stated, "The user must establish to the satisfaction of the library that an unused copy cannot be obtained at a normal price from commonly-known trade sources in the United States, including authorized producing services." Clapp explained,

It would send the users of libraries back to the Middle Ages, compelled to copy out by manual drudgery materials needed for their research (and there is no showing that this is any more legal than photocopying). It would kill the vast exchange of interlibrary loans of journal articles which is possible only because libraries can send photocopies in lieu of the bound volumes which they are unwilling to lend on account of costs, risk of damage or loss and deprivation of their use.

Researcher Jonathan Miller produced a study of the impact of research libraries on shaping the Copyright Act of 1976. According to Miller, Clapp's focus on technology and practical use deeply shaped the argument over fair use leading up to its statutory clarification in the Copyright Act. He observes, "Verner Clapp and Ralph Shaw also had a significant impact with their ideas about the changing definition of 'copy' and the distinction between private copying and publishing." Furthermore, Clapp and McCarthy participated in focused lobbying efforts in which they targeted the members of relevant Congressional committees. The association staff and leaders encouraged members to contact their representatives, especially if those members sat on key committees. Individuals like Verner Clapp produced a series of publications for the professional literature keeping the membership abreast of developments and encouraging them to participate in the effort. For example, in 1967 Clapp vigorously opposed House Bill 2512, calling it "an industry bill" that would have deep implications for libraries. Additionally, Clapp was quoted in arguments in the landmark intellectual property case, Williams & Wilkins Co. v. United States. While defending the library industry and its public purposes, as Princeton University Librarian William Dix pointed out, Clapp "recognized the critical importance of copyright reform in the encouragement of scholarship."

Legacy

"Change agent"

In his study of Clapp's contribution to materials preservation, "Verner W. Clapp, as Opinion Leader and Change Agent in the Preservation of Library Materials", William Crowe observed, "Clapp functioned as opinion leader and change agent in one important area of librarianship."
Clapp, Crowe concluded, "must be credited with calling attention, articulately and over an extended period, to a long-recognized challenge: assuring the good beginnings of the search for verifiable knowledge about it; and exhorting a diffuse community of interest to act to meet the challenge." As to his legacy in this regard, "Clapp's role as a change agent may thus fairly be judged to have had effect into the early 1980s." In a 2011 interview, Crowe explained why he selected Clapp as the subject of his dissertation: "It came out of my exposure at the University of Michigan to Verner Clapp, through the stories of the director, who had known this great man, who had died in 1971, at the Library of Congress in the 1940s and I heard a lot of stories about this man, who had been his mentor. I was very interested in understanding how the burgeoning interest in preservation in the 1970s and early 1980s was [taking] off. Where did this come from? Where might it be going? .... So I did a study of Clapp's life. He died 40 years ago. He was almost all but forgotten. But he had been the number two guy at the Library of Congress, head of the Council on Library Resources, and was the behind the scenes person, I discovered and proved, in most of the major developments in preservation in the 1930s, 40s, 50s and 60s. His fingerprints were in the background on almost everything, which was the substance of my dissertation."

Cataloging in Publication

In order to address problems of cost, delay, and standardization in cataloging new publications, in 1958 the Library of Congress, with CLR funding, launched an experimental program, "Cataloging-in-Source", in which Library catalogers recorded bibliographic data directly onto book proofs before publication. In all, 1,200 publications from 157 publishers were cataloged in the experiment. However, the program yielded errors and a tremendous cost of $25 per book, and was dropped after eight months. Nevertheless, after the "Cataloging-in-Source" project, Clapp wrote that the "demonstration ... extends the prospect of at last realizing—100 years later—the possibilities of providing cataloging information in the very book to which that information refers, envisaged by Charles C. Jewett in the earliest days of American library development". Clapp had promoted the idea since at least 1950, when he wrote of cataloging-in-source as having "advantages [that] would accrue not only to the thousands of libraries using LC cards, but we believe to the publishers themselves, since their books would be available in libraries on release date or very shortly thereafter, and would not be withheld from the public." As head of the CLR, Clapp continued to promote the concept, which took the form of the 1971 Cataloging in Publication (CIP) trial program, funded by matching grants from the CLR and the National Endowment for the Humanities. CIP yielded enormous benefits in bibliographic entry for new publications, especially regarding accuracy of entries and cost. Thomas M. Schmid of the University of Utah reported that the program had reduced cataloguing costs for his library from $5.80 per title to $0.75, yielding annual savings of up to $15,000, which he anticipated would increase "as more publishers are brought into the program". CIP became a fully funded program at the Library of Congress in 1973. According to the Library, as of 2021 the program had "created more than 2 million CIP data records".
In a 2001 historical review of the CIP program, Librarian Charles Fenly credits Clapp's essential role in the development of CIP. Fenly points to a report that William J. Walsh of the Library and Clapp jointly prepared, which served as a "'new look' at CIS, to be renamed cataloging-in-publication, or CIP" and which convinced Librarian Mumford to go forward with the program. Glen A. Zimmerman, who at the time was Acting Chief of the Descriptive Cataloging Division, which implemented the CIP project, reported that Clapp was "a staunch CIP supporter; he used his LC experience and management skills to further the CIP efforts". William A. Gosling, an early CIP Program Manager, noted that Clapp's contribution was essential to the program's success, as "The idea [was] conceived and kept alive by Verner Clapp, who had been an advocate for Cataloging-in-Source, CIP's prototype..." and that Clapp, along with publishing industry leaders and other librarians, had "all worked diligently to shape the program and secure early publisher buy-in". Integration of CIP with MARC advanced digitization and electronic sharing of the data, including online entry starting in 1987 and, from 1993, online transmission directly to publishers. Additionally, CIP's impact extended beyond the U.S. library and publishing industries, as it set an international example and standard adopted by other countries, including Australia, Brazil, Canada, the former Soviet Union, and the United Kingdom.

Computerization and digital dissemination of records

Clapp did not singularly or specifically predict the digital revolution; however, as advocate, organizer, and thinker, his most significant contribution to it was to articulate the needs of libraries (whom they serve, how, and for what purposes) and to identify, in both process and concept, possible solutions for meeting those needs and goals. While Clapp's 1964 The Future of the Research Library, and subsequent talks and articles up to his death in 1972, articulated his cumulative professional assessment of the present and future of library technologies, it was the product of the vision he had developed over his entire career. Having worked his way from the Main Reading Room, assisting patrons in retrieving Library holdings, to administrative responsibility for building, maintaining, and disseminating those holdings, Clapp came to understand a library in terms of two core functions, which he expressed in 1955 as "organization", or bibliographic functions, defined as "all the activities whereby all the source of information is identified", and "access", or records handling, defined as "the activities relating to their publication and distribution". The library profession recognized potential technological solutions for bibliographic functions, or "organization", as Clapp called it, via computerization and other methods of sharing and disseminating bibliographic records. But few, if any, librarians appreciated that the same could be accomplished for viewing of the records themselves, i.e., "access".
In 1955, while still Chief Assistant Librarian, Clapp published an article, "Implications for Documentation and the Organization of Knowledge", that outlined the implications of the dual challenges of "organization" and "access" for libraries as a whole rather than, as was understood at the time, for individual libraries alone:

There will come a time—earlier or later in proportion to the effectiveness and economy of the needed technical developments—when it will be cheaper to get a book from a thousand miles away, by telefacsimile perhaps, than to go and fetch it from an inaccessible stackroom. At this point the great collections (I am speaking of the collections of published works) will begin to weed, without loss of accessibility, just as collections with lower ambitions have done long since. Indeed, this process has already begun.

In other words, Clapp understood that individual libraries were incapable of keeping pace with the exponential growth in materials and that existing solutions for sharing and dissemination were impractical. Thus, as early as 1955 Clapp had envisioned the digital information world of the 21st century, in which the storage of materials is permanent and access to them instantaneous from anywhere in the world:

Under these circumstances [of limitations upon "organization" and "access"] I shall not dwell upon my favorite solution for the problem—and it is a primary problem—of access. This is to seal one copy of everything in concrete in a big hole in Kentucky. Next to each copy would be sealed a scanning device. To consult the item, the user would merely dial the appropriate number, and the item would be shown on his television screen. Simple controls would enable him to flip the pages; and, of course, any number of persons could consult the same item simultaneously! Perhaps this idea is not so extravagant as it sounds, especially since we already have three of the components – the books, the television screens, and the big hole in Kentucky. Shouldn't we ask the engineers to go on from there?

Starting with his experience in delivering books stored locally to patrons at the Main Reading Room, Clapp essentially envisioned not just the concept of a computerized world, an idea not unique to him, but its practical import for individual access to information.

Copyright

Clapp foresaw the confluence and conflicts between copying technology and copyright and promoted codification of expanded fair use provisions. His efforts at the organizational level, direct lobbying, and publications were deeply influential in shaping the debate within the library community and in Congress. While fair use in statutory law did not resolve the divide between copyright holders and libraries, it settled their immediate disputes and clarified the rules governing them. Most importantly, as Clapp plainly foresaw, it created the terms for addressing the challenges to copyright created by the digital revolution. In the 1968 report to the National Advisory Commission on Libraries, Clapp advised, "It is recommended that a full study of copying, not just of photocopying, be assigned, if not to the commission, then to some other body with assurance that it be dealt with, not simply as a question to be arbitrated between adversaries, but as an important question of public policy." Just as presciently, Clapp pointed to the utility of copying for "preservation" and "performance", key elements of modern multimedia copyright use.

Library integration

Clapp's contemporaries lauded his role in shaping various areas of librarianship.
In her 1965 article in The Library Quarterly on the Library Technology Project, Gladys Piez observed that whatever the accomplishments of the Project, "Clapp's prestige in the library world has opened doors which might otherwise have been at least partly closed to the project. It has certainly made the way easier." Regarding Clapp's work on collaboration and sharing of resources, Hallstein wrote, "It seems Clapp foresaw the possibilities of networking and the necessity for strong regional and local systems before such terms were even fashionable, let alone discussed seriously."

Controversies

As a government and industry insider, Clapp contributed to or was directly responsible for many public policy and library industry decisions, some of which have been criticized contemporaneously or subsequently. During the 1960s and 1970s, the publishing industry attacked Clapp for his promotion of library copying and the fair use doctrine. During the Loyalty program episodes, Clapp was forced to walk a fine line between public policy, library industry politics, and his personal and interpersonal relations and beliefs. In the controversial book Double Fold, writer Nicholson Baker accused Clapp of contributing significantly to the destruction of original materials, especially newspapers, after their pages were microfilmed, a practice that, Baker claims, destroyed the originals while creating possibly incomplete or corrupted microfilm copies of them. Baker claims that decisions to destroy the original materials were the result of librarians who "lied shamelessly" about their condition. As observed by contemporaries and by subsequent scholarship, Clapp's characteristic approach in these episodes, as in others, was to seek moderation, cooperation, and consent.

General influence

Overall, and beyond his specific accomplishments, Clapp influenced librarianship generally, guiding the profession towards practical applications and solutions. His contemporaries described his indefatigable effort and deep-seated optimism. After compiling a substantial list of advances and new technologies, Clapp advised the ALA, "No; no millennia can be announced at this time. A number of good advances have been made, and others impend. To describe them must be the task of another report." Frederick Wagman concluded of Clapp that "Of all his memorable traits, however, the one that was most characteristic of Verner was his enthusiasm ... Verner had it in full measure."

Personal life

In 1929, Clapp married Dorothy Devereaux Ladd, who worked as an assistant book buyer at Woodward & Lothrop. In 1936, the couple moved to Chevy Chase, Maryland, and there raised three children: Nancy, Verner, and Judith. Clapp published a volume on his wife's family genealogy and relished summers at the family camp in Maine, where, with his father and, later, his children and grandchildren, he pursued his childhood fascination with geology, nature, and the outdoors. Whatever Clapp took on, he took on fully. Washington Evening Star journalist Mary McGrory wrote, "Practically everyone [at the Library] has memories of Mr. Clapp spiritedly dashing down the corridor in search of a subordinate" and, regarding his departure from the institution, "The Library of Congress is a sadder, sedater place these days. Verner Clapp, the Chief Assistant Librarian, has departed, and Government has lost its most bubbling bureaucrat, and the Library, its yeastiest official. Mr. Clapp, puckish of face, unquenchable of spirit, left last month...
His going from the Library which has known him, man and boy for 33 years, left everyone from Librarian Quincy Mumford down to the stackboys feeling bereft." King Features Syndicate columnist George Dixon wrote playfully of Clapp, "Contrary to popular report, Verner Clapp is a man – a prodigy, a plymath [sic], perhaps – but for the time being, happily mortal and endowed with unmistakably human eccentricities. Witness, for example, his habit of writing on tramways, confident in jostled legibility; his uncontrollable passion for vaulting over barricadoes [sic]; his experiments with vinification." Princeton Librarian William Dix described Clapp as "a serious man in the sense that whatever engaged his attention – and what did not? – involved him fully and even passionately". Lee Grove recalled that while waiting on delivery of his first child, Nancy, Clapp read Ludwig Hain's "Repertorium, that standard bibliography of incunabula". A biographic entry for Clapp notes, "He was the author of over 200 monographs and articles; he enjoyed book-binding, playing the flute, working with wood, sketching, writing poems, inventing devices to assist the library user, building a cupola – he was a true Renaissance man!" On Clapp's departure from the Library, David Mearns wrote to the staff, "Aside from the loss of his outstanding contributions to the Library of Congress, there is a personal loss of that uncommon individual, Verner Clapp. He is that singular man whose mind never grows stale or fails to find fresh stimulation in the work of the day; always accessible, always willing to devote himself to the problems of colleagues, he has an extraordinary gift for infusing others with his enthusiasm and imagination... We are not only better librarians – we are better human beings because of this journey." Awards and honors 1954: Verner W. Clapp Publication Fund, Library of Congress, for "issuing facsimiles of historic and rare materials in the Library of Congress", established. 1957: Honorary Member, District of Columbia Library Association. 1960: Lippincott Award for "distinguished service to the profession of librarianship", American Library Association. 1967: Honorary Membership for Professional Recognition, American Library Association, the ALA's highest award. 1968: Order of the Sacred Treasure, Gold and Silver Star (Kun-Nito Zuihosho), conferred on Clapp by the government of Japan. 1972: "The Librarian's Librarian", Special Citation from the Special Libraries Association. 1973: The Clapp Collection, Japanese National Diet Library, established. 1973: Verner W. Clapp papers, Library of Congress, established. undated: Verner Clapp Distinguished Research Fellowship, Council on Library and Information Resources Publications For a full bibliography of Clapp's publications, see Verner Warren Clapp, 1901–1972: a memorial tribute 1937. The Constitution of the United States; an account of its travels since September 17, 1787, with David Chambers Mearns. U.S. Government Printing Office (1937). 1950. Library of Congress Bibliographical Survey: A Report Prepared for Unesco (1950). 1953. Current trends in libraries of the United States government Editors: Verner W. Clapp and Scott Adams (1953). 1955. Implications for Documentation and the Organization of Knowledge, The Library Quarterly, Vol 25, No. 4 (Oct 1955). 1958. The University library and the wise man: addresses by Theodore R. McKeldin and Verner W. Clapp (1958). 1960. 
"'To assist in solving the problems of libraries': The program of the Council on Library Resources", ALA Bulletin, American Library Association, Vol 54, No.3 (March 1960). 1962. The United Nations Library, 1945–1961, Libri, Vol. 12 (1962). 1962. "Library Photocopying and Copyright: recent developments", Law Library Journal, Vol. 55 (Feb 1962). 1964. The Future of the Research Library, University of Illinois Press (1964). 1968. "Copyright: A Librarian's View" Washington, Copyright Committee, Association of Research Libraries (1968). 1968. "The Copyright Dilemma: A Librarian's View" Library Quarterly, Vol 38 (Oct. 1968). 1971. "The Declaration of Independence: a case study in preservation", Special Libraries Vol. 62, issue 12. (December 1971). 1971. The Story of Permanent /Durable Book-Paper 1115–1970, Scholarly Publishing (1971), reprinted by Restaurator Press, (1972). 1971. "The Greatest Invention Since the Title-Page?", Wilson Library Bulletin, Vol 46 (Dec 1971). References External links "Verner W. Clapp, Librarian For Library of Congress, Dies", The New York Times, June 16, 1972 Verner W. Clapp Papers, Library of Congress Library of Congress Profile "A Biography of Verner Warren Clapp" 1901 births 1972 deaths Harvard Graduate School of Arts and Sciences alumni American librarians Librarians at the Library of Congress Trinity College (Connecticut) alumni
2724842
https://en.wikipedia.org/wiki/Software%20craftsmanship
Software craftsmanship
Software craftsmanship is an approach to software development that emphasizes the coding skills of the software developers. It is a response by software developers to the perceived ills of the mainstream software industry, including the prioritization of financial concerns over developer accountability. Historically, programmers were encouraged to see themselves as practitioners of a scientific approach rooted in computational theory, with its connotations of well-defined statistical analysis and mathematical rigor. This has changed to an engineering approach, with connotations of precision, predictability, measurement, risk mitigation, and professionalism. The practice of engineering led to calls for licensing, certification and codified bodies of knowledge as mechanisms for spreading engineering knowledge and maturing the field. The Agile Manifesto, with its emphasis on "individuals and interactions over processes and tools", questioned some of these assumptions. The Software Craftsmanship Manifesto extends and further challenges the assumptions of the Agile Manifesto, drawing a metaphor between modern software development and the apprenticeship model of medieval Europe. Overview The movement traces its roots to the ideas expressed in written works. The Pragmatic Programmer by Andy Hunt and Dave Thomas and Software Craftsmanship by Pete McBreen explicitly position software development as heir to the guild traditions of medieval Europe. The philosopher Richard Sennett wrote about software as a modern craft in his book The Craftsman. Freeman Dyson, in his essay "Science as a Craft Industry", expands the software craft to include mastery of using software as a driver for economic benefit: "In spite of the rise of Microsoft and other giant producers, software remains in large part a craft industry. Because of the enormous variety of specialized applications, there will always be room for individuals to write software based on their unique knowledge. There will always be niche markets to keep small software companies alive. The craft of writing software will not become obsolete. And the craft of using software creatively is flourishing even more than the craft of writing it." Following initial discussion, conferences were held in both London and Chicago, after which a manifesto was drafted and put online to gather signatories. This was followed by the development of practices to further the movement, including the exchange of talent in "Craftsman Swaps" and the assessment of skills in "Craftsmanship Spikes". Manifesto From the Software Craftsmanship website http://manifesto.softwarecraftsmanship.org/ History The origins of software craftsmanship lie in the Agile software development movement, which aimed to reform software project management in the 1990s. In 1992, Jack W. Reeves' essay "What Is Software Design?" suggested that software development is both a craft and an engineering discipline. Seven years later, in 1999, The Pragmatic Programmer was published. Its subtitle, "From Journeyman to Master", suggested that programmers go through stages in their professional development akin to the medieval guild traditions of Europe. In 2001, Pete McBreen's book Software Craftsmanship was published. It suggested that software developers need not see themselves as part of the engineering tradition and that a different metaphor would be more suitable. In his August keynote at Agile 2008, Robert C. Martin proposed a fifth value for the Agile Manifesto, namely "Craftsmanship over Crap". 
He later changed his proposal to "Craftsmanship over Execution". In December 2008, a number of aspiring software craftsmen met in Libertyville, Illinois, with the intent of establishing a set of principles for software craftsmanship. Three months later, a summary of the general conclusions was decided upon. It was presented publicly, for both viewing and signing, in the form of a Manifesto for Software Craftsmanship. In April 2009, two of the companies in the software craftsmanship movement, 8th Light and Obtiva, experimented with a "Craftsman Swap." The Chicago Tribune covered this event on 15 June 2009. In January 2010, a second Craftsman Swap was held between Obtiva and Relevance. The 'London Software Craftsmanship Community' (LSCC) was founded in 2010 and is today the largest and most active Software Craftsmanship community in the world, with more than 5000 craftspeople. In 2014, Sandro Mancuso, one of its co-founders, published a book The Software Craftsman: Professionalism, Pragmatism, Pride. It brought the software craftsmanship movement additional visibility, reinforcing the efforts to achieve higher technical excellence and customer satisfaction. References Further reading External links Software Craftsmanship North America Software Craftsmanship Manifesto London Software Craftsmanship Community Latvian Software Craftsmanship Community Software Craftsmanship Community in Barcelona Software Craftsmanship Discussion Group Applying Software Craftsmanship to Big Teams Computer programming
40111866
https://en.wikipedia.org/wiki/ObjecTime%20Developer
ObjecTime Developer
ObjecTime Developer (or ObjecTime or OTD, for short) is a software automation tool designed to meet the development needs of real-time software development teams. The tool was created by ObjecTime Limited of Kanata, Ontario, and was aimed at aiding software developers in building applications from real-time, graphical design models based on Real-Time Object-Oriented Modeling (ROOM). From these design models, ObjecTime generated production-quality applications for real-time operating systems in the C and C++ programming languages. An important aspect of the development process using OTD was the capability to observe the execution of the generated software as an animation of the design models. This was true whether the software was running on the development platform or on the target platform. The concepts behind the modeling language described in ROOM (actors and structure/state) were gradually incorporated into UML, first as a profile for real-time (capsules and structure/state) and then as an integral part of UML 2 (composite classes). When Rational Software acquired ObjecTime Limited in 2000, their respective products — Rational Rose and ObjecTime Developer — were merged into a product rechristened Rational Rose RealTime. This change resulted in the loss of a commercial implementation of ROOM. Today, only the recently started Eclipse open source project eTrice implements ROOM. Literature Bran Selic, Garth Gullekson, Paul T. Ward: "Real-Time Object-Oriented Modeling", New York, John Wiley & Sons Inc, 1994, References External links IBM legacy documentation: ObjecTime — legacy documentation Rational Rose — current and legacy documentation Rational Rose RealTime — current and legacy documentation Unified Modeling Language UML tools
4070102
https://en.wikipedia.org/wiki/Source%20code%20escrow
Source code escrow
Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee) to ensure maintenance of the software rather than abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement. Necessity of escrow As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, for example because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets. As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions. Escrow agreements Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties: one or several licensors, one or several licensees, and the escrow agent. The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met. Source code escrow agreements provide for the following: They specify the subject and scope of the escrow. This is generally the source code of a specific piece of software, accompanied by everything that the licensee requires to independently maintain the software, such as documentation, software tools or specialized hardware. They oblige the licensor to put updated versions of the software in escrow at specified intervals. They specify the conditions that must be met for the agent to release the source code to the licensee. Typical conditions include the bankruptcy of the licensor, the cancellation of a software development project or the express unwillingness of the licensor to fulfil its contractual maintenance obligations. Because it is often important to the licensee that the code be released as soon as possible once the conditions are met, the conditions tend to be worded as plainly and unambiguously as possible. They circumscribe the rights obtained by the licensee with respect to the source code after the release of the software. These rights are generally limited and may include the right to modify the source code for the purpose of fixing errors, or the right to continue independent development of the software. They specify the services provided by the escrow agent beyond simple custody of the source code. Specialised agents may, for instance, verify that the source code storage media is readable, or even build the software based on the source code, verifying that its features match the binary version used by the licensee. They may provide that non-compete clauses in the license agreement, such as any that prohibit the licensee from employing the licensor's employees, are void in the event of the release conditions being met, enabling the licensee to acquire the know-how required for the maintenance of the software. 
They also provide for the fees due to the escrow agent for its services. Whether a source code escrow agreement is entered into at all, and who bears its costs, is subject to agreement between the licensor and the licensee. Software license agreements often provide for a right of the licensee to demand that the source code be put into escrow, or to join an existing escrow agreement. Bankruptcy laws may interfere with the execution of a source code escrow agreement if the bankrupt licensor's creditors are legally entitled to seize the licensor's assets – including the code in escrow – upon bankruptcy, preventing the release of the code to the licensee. Third party escrow agents Museums, archives and other GLAM organizations have begun to act as independent escrow agents due to growing digital obsolescence. Notable examples are the Internet Archive in 2007, the Library of Congress in 2006, ICHEG, the Computer History Museum, and the MoMA. There are also some cases where software communities act as escrow agents, for instance for the Wing Commander video game series or Ultima 9 of the Ultima series. Software open-sourcing to the public The escrow agreements described above are most applicable to custom-developed software which is not available to the general public. In some cases, source code for commercial off-the-shelf software may be deposited into escrow to be released as free and open-source software under an open source license when the original developer ceases development and/or when certain fundraising conditions are met (the threshold pledge system). For instance, the Blender graphics suite was released in this way following the bankruptcy of Not a Number Technologies; the widely used Qt toolkit is covered by a source code escrow agreement secured by the "KDE Free Qt Foundation". There are many cases of end-of-life open-sourcing which allow the community continued self-support; see List of commercial video games with later released source code. See also Source code repository for open source Orphan works References Further reading Computerworld (7/20/92, page 99): Don't Rush Into Source Code Escrow A Guide to IT Contracting: Checklists, Tools, and Techniques (2013), page 262 Escrow Computer law
52638102
https://en.wikipedia.org/wiki/Linux%20Spike%20Trojan%20malware
Linux Spike Trojan malware
Linux Spike Trojan malware, more widely known as MrBlack, is a type of malware that infects routers and eventually spreads to other routers. Incapsula, an internet security firm, first saw this malware in December 2014. The malware tends to attack devices that still use their default credentials. A "bot" is a type of malware that allows an attacker to take control over an affected computer. Also known as "Web robots," bots are usually part of a network of infected machines, known as a "botnet," which is typically made up of victim machines that stretch across the globe. Once a router has been infected, the attacker injects malware in order to carry out DDoS attacks. The infected router can also be used for MITM attacks, cookie hijacking and other attacks. MrBlack is regarded as one of the most widespread malware varieties, having infected over forty thousand routers. References Linux malware IoT malware Denial-of-service attacks
1505975
https://en.wikipedia.org/wiki/PCBoard
PCBoard
PCBoard (PCB) was a bulletin board system (BBS) application first introduced for DOS in 1983 by Clark Development Company. Clark Development was founded by Fred Clark. PCBoard was one of the first commercial BBS packages for DOS systems, and was considered one of the "high end" packages during the rapid expansion of BBS systems in the early 1990s. As with many BBS companies, the rise of the Internet starting around 1994 led to a serious downturn in fortunes, and Clark Development went bankrupt in 1997. Most PCB sales were of two-line licenses; additional line licenses (in ranges of 5, 10, 25, 50, 100, 250 and 1000) were also available. A native 32-bit IBM OS/2 version also became available with PCB V15.22 and higher. There were also a few tools available for PCBoard which were specifically developed for the OS/2 2.0 and OS/2 Warp operating systems. Multinode support PCBoard supported 16C550-compatible UARTs (universal asynchronous receiver-transmitters), such as the 16550 UART ("FIFO"), 16554 UART and 16650 UART, which made it possible to run multiple nodes of the BBS on a single (multitasking) computer using either IBM OS/2 or the DOS multitasking tool DESQview in combination with the memory manager QEMM. Some sysops tried to run PCBoard on the (then) new Windows 95 operating system by Microsoft and reported mixed results. Stability was critical for a BBS, which was usually running 24/7, and the early version of the Microsoft 32-bit operating system lacked it. Windows 95 was never officially supported by CDC. Standard PCs then and today have only one or two (if any) serial ports (COM ports), which are needed to connect an external modem to a computer. This made multiport cards like the G-Tek "BlackBoard", "BBS550" or "SmartCard" and the "DigiCard" by Digi International popular among sysops. Other options were internal multi-modem cards and multiple computers connected by local area network. PCBoard also supported ISDN (Integrated Services Digital Network) and Telnet access via the Internet. The open source terminal emulator SyncTERM, available for Win32, Linux, FreeBSD, NetBSD, OpenBSD, Solaris and Mac OS X, can be used, for example, to connect to the few remaining PCBoard BBS installations that are connected to the Internet. Multi-BBS networks Starting in 1988, the RelayNet, also known as RIME for RelayNet International Mail Exchange, allowed BBSes running PCBoard to join a network that exchanged messages with other BBSes in a system similar to the older FidoNet. History The first version of PCBoard was released in 1983. Clark Development Company (CDC) pioneered the FILE_ID.DIZ format as well as a powerful scripting language (PPL), which supported modifications and to a large degree replacement of most standard commands and processes. A compiled script written in PPL was called a PPE (PCBoard Programming Executable). PPEs were generated by the PCBoard Programming Language Compiler (PPLC), which was an optional tool provided by Clark Development Company and was also available for purchase as a stand-alone tool. It was less than $100 by itself and less than $50 in combination with any BBS license. This allowed programmers to develop PPEs for PCBoard without having to purchase a PCBoard BBS license. Also optional and available separately were the printed PCBoard manual and the printed PPLC reference handbook. The script language was introduced with version 15.0 and made this version of PCBoard even more successful than PCBoard V14.5. 
Various door programs were in use, including Sam Smith's Prodoor, which added a full-screen editor and other features which were later included in PCBoard itself. The PPL scripting language and the growing number of available PPEs increased the popularity of PCBoard, which emerged by the mid-1990s as the de facto standard BBS system for warez BBSes on the IBM PC. The warez BBSes mostly used pirated versions of the BBS software and thus did not appear in any official sales or usage statistics for the software. What PCBoard was for warez BBSes on the IBM PC, Amiexpress was for BBSes running on Commodore Amiga computers. Despite the high price tag, Clark Development Company had sold more than 50,000 PCBoard licenses by 1995. The last full release of PCBoard by Clark Development Company was version 15.3 in September 1996. Clark Technologies, a division of Clark Development Company, announced on July 29, 1996, the availability of source code and OEM licenses for the PCBoard BBS software. The final release was 15.4 beta, which had a one-month trial period. Later, the lead software engineer from Clark Development Company released information on how to bypass the trial period timeout; the timeout had been inserted as a reminder and had not been intended to permanently disable the software. Clark Development Company went bankrupt in July 1997 and closed its offices without prior warning, leaving a great number of upset customers behind. Customers were never notified by the company, and those who had just purchased licenses for the software were neither refunded nor provided access to the software they had paid for. PCBoard after CDC Sysops continued to use PCBoard around the world, even after support by CDC stopped when the company went out of business. Help was available from many individuals who created tools and documentation for the PCBoard system. The company no longer existed when the Year 2000 problem, also known as the Y2K problem or millennium bug, made headline news. However, PCBoard had only a few minor problems with the year 2000 (and 2001), and fixes were made available by several individuals. The last full release version of PCBoard, version 15.3, never really caught on, and most systems that were online after 1997 continued to use the previous 15.2x versions of the software. PCBoard is still in use today by nostalgic BBS fans. There is a freeware FOSSIL driver called NetFoss which allows PCBoard to be accessible via telnet under Windows. There was also a DOS-based PCBoard add-on "PCB Internet Collection" which allowed telnet access by installing a (DOS-only) packet driver. 
Awards PCBoard programming language / PPLC compiler 1994 - Dvorak Award for "Outstanding PC Telecommunications Technology" BBS Software 1995 - PC Magazine Editors Choice Award (August 1995 Issue) 1995 - Dvorak Award for "Best OS/2-based BBS software" 1997 - Inducted (Clark Development) into the Shareware Hall of Fame in 1997 by the SIAF board Features Packet switch support Full Internationalization of dates & code page FOSSIL support for virtually any intelligent serial card(/M code) File attachment to messages Multiple daily events Full support for 2 byte international character sets Built-in .QWK message packet support Jukebox & "slow" CD-ROM support Incoming fax support Carbon-copy list support Return receipt message support Caller-ID support ALIAS support by conference area RIPscrip support for remote callers PPL (PCBoard programming language compiler) (optional) Automatic 16550 UART recognition & support Intelligent & non-intelligent multi-port serial card support Full screen text editor ANSI graphics support Full color operation Thread reading of messages Supports up to 65,535 conference (message base) areas 36 file transfer protocols supported Supports 32,767 DOORS per conference Real-time 255 channel node chat (CB chat) Long message headers for all NetMail programs including Internet, Usenet & others. Local network logins for in-house e-mail support Direct connect support for in-house serial networks & PADS Communicates directly with OS/2 COMM drivers to allow large number of multiple nodes under OS/2 Automatically detects OS/2 operation for time-slice control Full network support for any NetBIOS compatible network, including NetWare, LANtastic, 3-Com, Vines & more. Full remote DOS access for SysOp if desired True & complete multi-lingual language support Requirements IBM PC compatible Minimum 320k RAM DOS 3.1 or higher Modem to support remote dial-in, a Virtual Modem such as NetSerial, or the NetFoss telnet FOSSIL for Windows. PCBoard/M Needs 80386 CPU or higher since code is written using 80386 instructions for maximum speed. PPLC versions PCBoard V15.00 PPLC V1.00 PCBoard V15.10 PPLC V2.00 PCBoard V15.20 PPLC V3.00 PCBoard V15.21 PPLC V3.10 PCBoard V15.22 PPLC V3.20 PCBoard V15.30 PPLC V3.30 PCBoard V15.40 PPLC V3.40 (both beta) PPE/PPL groups Thousands of PPEs were developed and published, often available free of charge, by individuals or scene groups. A number of commercial PPEs were also developed, mostly under shareware licensing. A number of release groups who were specialized in PPE and other PCB/BBS tool developments were formed, such as the French group Aegis Corp, the Russian group Brutal PPE/PCE/PRO Coders (BPC) and the German groups Peanuts (PNS) and Paranoia (PNA). Warez groups Such as PWA and DOD released several PPEs which were used by many sysops, including the PWA "NewScan" PPE, the PWA "Files-Reverse" PPE and the DOD "LARS Upload Processor". The French group Aegis Corp distributed a PPE De-compiler (PPLX) written by Lone Runner, which allowed the de-compilation of PPE binaries back to human-readable PPL code. Lone Runner also wrote the Aegis PPL compiler, which produced smaller and faster code than Clark Development's original PPL compiler. Other tools for PPE developers followed. Another PPE De-compiler was PPLDecompiler (PPLD) written by CHiCKEN, a member of the "Swiss Coding Division" of the group Electronic Rats (EcR). 
PCBoard Metaworlds Metaworlds was an attempt by CDC to establish a BBS-like environment on the Internet, basically a closed mailbox in HTML format online. Access to the content was only possible for registered users who had to authenticate themselves with username and password to access the system. Metaworlds supported the parallel operation with the standard ANSI based PCBoard BBS software and used Microsoft SQL Server as underlying database. A runtime-version of Microsoft SQL Server came with the Metaworlds software. CDC went out of business before Metaworlds was finished. Metaworlds was developed by CDC to make the transition to the Internet when the decline of the BBS became apparent. See also FILE_ID.DIZ (for DESC.SDI files) References External links Brief history of PCBoard from bbsdocumentary.com (and software downloads) 153 PCBoard PPE's by PWA at The BBS Archive PCboard File Archive A-C (556 PPE's), D-F (333 PPE's), G-I (188 PPE's), J-O (358 PPE's), P-R (354 PPE's), S-Z (337 PPE's) at The BBS Archive PCBoard file archive,Filegate.net DOS BBS software, Mpoli.fi Sysop's corner list of PCB related sites Original Install Disks and Collection of over 4,000 PPE's & Tools Bulletin board system software DOS software OS/2 software 1983 software
16453227
https://en.wikipedia.org/wiki/4138%20Kalchas
4138 Kalchas
4138 Kalchas is a large Jupiter trojan from the Greek camp, approximately 53 kilometers in diameter. It was discovered on 19 September 1973, by Dutch astronomers Ingrid and Cornelis van Houten at Leiden, on photographic plates taken by Tom Gehrels at the Palomar Observatory in California. The assumed C-type asteroid is the principal body of the proposed Kalchas family and has a rotation period of 29.2 hours. It was named after the seer Calchas from Greek mythology. Orbit and classification Kalchas is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the leading Greek camp at the Gas Giant's L4 Lagrangian point, 60° ahead of its orbit. It orbits the Sun at a distance of 4.9–5.4 AU once every 11 years and 9 months (4,286 days; semi-major axis of 5.16 AU). Its orbit has an eccentricity of 0.04 and an inclination of 2° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar in May 1956, more than 17 years prior to its official discovery observation. Palomar–Leiden Trojan survey While the discovery date aligns with the second Palomar–Leiden Trojan survey, Kalchas did not receive a prefixed survey designation, which was assigned to the discoveries made by the fruitful collaboration between the Palomar and Leiden observatories in the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope), and shipped the photographic plates to Ingrid and Cornelis van Houten at Leiden Observatory where astrometry was carried out. The trio are credited with the discovery of several thousand asteroids. Kalchas family Fernando Roig and Ricardo Gil-Hutton identified Kalchas as the principal body of a small Jovian asteroid family, using the hierarchical clustering method (HCM), which looks for groupings of neighboring asteroids based on the smallest distances between them in the proper orbital element space. According to the astronomers, the Kalchas family belongs to the larger Menelaus clan, an aggregation of Jupiter trojans which is composed of several families, similar to the Flora family in the inner asteroid belt. However, this family is not included in David Nesvorný's HCM-analysis from 2014. Instead, Kalchas is listed as a non-family asteroid of the Jovian background population on the Asteroids Dynamic Site (AstDyS), which is based on another analysis by Milani and Knežević. Naming This minor planet was named from Greek mythology after Calchas, a Greek prophet during the Trojan War. The official naming citation was published by the Minor Planet Center on 11 March 1990. Physical characteristics Kalchas is an assumed C-type asteroid, with a V–I color index of 0.81. Rotation period In December 2011, a rotational lightcurve of Kalchas was obtained by Robert Stephens at GMARS in California. Lightcurve analysis gave a well-defined rotation period of about 29.2 hours with a brightness variation of 0.40 magnitude. In July and August 2015, photometric observations by the Kepler space observatory determined two concurring periods of 29.13 and 29.411 hours. Diameter and albedo According to the surveys carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer and the Japanese Akari satellite, Kalchas measures 46.46 and 61.04 kilometers in diameter and its surface has an albedo of 0.082 and 0.057, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 53.16 kilometers based on an absolute magnitude of 10.1. 
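The orbital period and eccentricity quoted above follow directly from the listed orbital elements via Kepler's third law. The short Python sketch below is an illustrative check added here rather than part of the cited sources; the variable names are ours and the element values are the rounded figures given in the text.

# Orbital elements for 4138 Kalchas as given above (rounded)
a_au = 5.16            # semi-major axis in astronomical units
q_au, Q_au = 4.9, 5.4  # perihelion and aphelion distances in AU

# Kepler's third law for a small body orbiting the Sun: P[years] = a[AU] ** 1.5
period_years = a_au ** 1.5
period_days = period_years * 365.25

# Eccentricity from perihelion and aphelion: e = (Q - q) / (Q + q)
eccentricity = (Q_au - q_au) / (Q_au + q_au)

print(f"period: {period_years:.2f} yr ({period_days:.0f} d), e = {eccentricity:.3f}")
# prints roughly 11.72 yr (about 4,281 d) and e = 0.049, consistent with the
# 11 years 9 months (4,286 days) and e = 0.04 quoted above; the small
# differences come from rounding of the orbital elements.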
References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Asteroid 4138 Kalchas at the Small Bodies Data Ferret 004138 Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels Minor planets named from Greek mythology Named minor planets 19730919
418646
https://en.wikipedia.org/wiki/H-1B%20visa
H-1B visa
The H-1B is a visa in the United States under the Immigration and Nationality Act, section 101(a)(15)(H), that allows U.S. employers to temporarily employ foreign workers in specialty occupations. A specialty occupation requires the application of specialized knowledge and a bachelor's degree or the equivalent of work experience. The duration of stay is three years, extendable to six years, after which the visa holder may need to reapply. Laws limit the number of H-1B visas that are issued each year: 188,100 new and initial H-1B visas were issued in 2019. Employers must generally withhold Social Security and Medicare taxes from the wages paid to employees in H-1B status. The H-1B visa has its roots in the H1 visa of the Immigration and Nationality Act of 1952; the split between H-1A (for nurses) and H-1B was created by the Immigration Act of 1990. 65,000 H-1B visas were made available each fiscal year, for which employers could apply through Labor Condition Applications. Additional modifications to H-1B rules were made by legislation in 1998, 2000, in 2003 for Singapore and Chile, in the H-1B Visa Reform Act of 2004, 2008, and 2009. United States Citizenship and Immigration Services has modified the rules in the years since then. Structure of the program An H-1B visa allows an individual to enter the United States to work temporarily for an employer in a specialty occupation. The regulations define a specialty occupation as requiring theoretical and practical application of a body of highly specialized knowledge in a field of human endeavor including but not limited to biotechnology, chemistry, computing, architecture, engineering, statistics, physical sciences, journalism, medicine and health (doctors, dentists, nurses, physiotherapists, etc.), economics, education, research, law, accounting, business specialties, technical writing, theology, and the arts, and requiring the attainment of a bachelor's degree or its equivalent as a minimum (with the exception of fashion models, who must be "of distinguished merit and ability"). Likewise, the foreign worker must possess at least a bachelor's degree or its equivalent and state licensure, if required to practice in that field. H-1B work-authorization is strictly limited to employment by the sponsoring employer. A person in H-1B status must continue to be employed by their employer in order to stay in H-1B status. If the person's employment ends for any reason, the person must leave the United States, unless the person applies for and is granted a change of status or finds another employer compatible with the H-1B status. Effective January 17, 2017, the United States Citizenship and Immigration Services allows a grace period of up to 60 days to stay in the United States after the end of the person's employment. Duration of stay The duration of stay is three years, extendable to six years. An exception to the maximum length of stay applies in certain circumstances: If a visa holder has submitted an I-140 immigrant petition or a labor certification prior to their fifth anniversary of having the H-1B visa, they are entitled to renew their H-1B visa in one-year increments until a decision has been rendered on their application for permanent residence. This is backed up by the Immigration and Nationality Act 106(a). 
If the visa holder has an approved I-140 immigrant petition, but is unable to initiate the final step of the green card process due to their priority date not being current, they may be entitled to a three-year extension of their H-1B visa until their adjustment of status can finish. This exception originated with the American Competitiveness in the 21st Century Act of 2000 section 104a (AC21 104a). The maximum duration of the H-1B visa is ten years for exceptional United States Department of Defense project-related work. A time increment of less than three years has sometimes applied to citizens of specific countries. For example, during Melania Trump's time as an H-1B visa holder, she was limited to one-year increments, which was the maximum time allowed then per H-1B visa for citizens of Slovenia. Melania Trump became a U.S. citizen in 2006. H-1B holders who want to continue to work in the U.S. after six years, but who have not obtained permanent residency status, must remain outside of the U.S. for one year before reapplying for another H-1B visa if they do not qualify for one of the exceptions noted above allowing for extensions beyond six years. Despite a limit on length of stay, no requirement exists that the individual remain for any period in the job the visa was originally issued for. This is known as H-1B portability or transfer, provided the new employer sponsors another H-1B visa, which may or may not be subject to the quota. Congressional yearly numerical cap and exemptions The Immigration Act of 1990 established a limit of 65,000 foreign nationals who may be issued a visa or otherwise provided H-1B status each fiscal year; the annual limit is often called a quota or a cap. An additional 20,000 H-1Bs are available to foreign nationals holding a master's or higher degree from U.S. universities. In addition, excluded from the ceiling are all H-1B non-immigrants who work at (but not necessarily for) universities, non-profit research facilities associated with universities, and government research facilities. Institutions of higher education, nonprofit organizations that are related to or affiliated with institutions of higher education, nonprofit research organizations, and governmental research organizations are exempt from H-1B annual quotas. Contractors working at, but not directly employed by, these institutions may be exempt from the annual quotas as well. However, employers must show that, first, the majority of the worker's duties will be performed at the qualifying institution, organization or entity and, second, the job duties directly and predominantly further the essential purpose, mission objectives or functions of the qualifying institution, organization, or entity. The Chile–United States Free Trade Agreement and the Singapore–United States Free Trade Agreement created a separate quota of 1,400 H-1B1 visas for Chilean nationals and 5,400 H-1B1 visas for Singapore nationals. If these reserved visas are not used, however, then they are added to the following fiscal year's H-1B annual quota. Due to these exemptions and rollovers, the number of H-1B visas issued each year is often greater than 65,000, such as when 117,828 H-1B visas were issued in fiscal year 2010, 129,552 in fiscal year 2011, and 135,991 in fiscal year 2012. In some years, the cap was not reached. 
For example, in fiscal year 1996, the Immigration and Naturalization Service (now known as USCIS) announced on August 20, 1996, that a preliminary report indicated that the quota had been exceeded, and processing of H-1B applications was temporarily halted. However, when more accurate numbers became available on September 6, it became apparent the quota had not been reached after all, and processing resumed for the remainder of the fiscal year. The United States Citizenship and Immigration Services starts accepting applications on the first business day of April for visas that count against the fiscal year starting in October. For instance, H-1B visa applications that count against the fiscal year 2013 cap were submitted starting on Monday, April 2, 2012. USCIS accepts H-1B visa applications no more than 6 months in advance of the requested start date. Beneficiaries not subject to the annual cap are those who currently hold cap-subject H-1B status or have held cap-subject H-1B status at some point in the past six years. Lottery Each year, generally on April 1, the H-1B season commences for the following federal fiscal year; employment authorizations are granted on October 1. Due to a pre-employment application limit window of six months, the first weekday in April is the earliest that an applicant may legally apply for the next year's allotment of cap-subject H-1B visas. H-1B "cap cases" are delineated on the envelope's label, preferably in red ink, with "Regular Cap" for the bachelor's degree cap, "C/S Cap" for H-1B1 treaty cases and "U.S. Master's" for the U.S. master's degree or higher exemption. USCIS publishes a memo when enough cap-subject applications have been received, indicating the closure of the cap-subject application season; the associated random selection process is often referred to as the H-1B lottery (a simplified simulation of the selection appears below). Those who have the U.S. master's exemption have two chances to be selected in the lottery: first, a regular lottery is held to award the 65,000 visas available to all H-1B applicants, and those not selected are then entered in another lottery for 20,000 extra spots. Those without a U.S. master's are entered only in the first, regular, lottery. Pro-H-1B pundits claim that the early closure and the number of applications received (172,500 in fiscal year 2015) are indications of employment demand and advocate increasing the 65,000 bachelor's degree cap. David North, of the Center for Immigration Studies, claimed that unlike other immigration categories, H-1B filing fees for applications which are not randomly selected are refunded to the intending employer. However, applications that are not selected are simply returned unopened to the petitioner, with no money changing hands or refunded. Computerworld and The New York Times have reported on the inordinate share of H-1B visas received by firms that specialize in offshore outsourcing, the subsequent inability of employers to hire foreign professionals with legitimate technical and language skill combinations, and the outright replacement of American professionals already performing their job functions and being coerced to train their foreign replacements. The United States Chamber of Commerce maintains a list of years when the random selection process (lottery) was implemented. The lottery process was challenged in Tenrec v. USCIS, a class action lawsuit in Oregon, but the case was decided against the plaintiffs. On June 28, 2021, the lottery process was challenged again in LIU et al. v. 
MAYORKAS et al., a lawsuit filed in the United States District Court for the District of Columbia by more than 500 FY 2022 H-1B applicants who were not selected in March 2021. Tax status of an individual with H-1B status The taxation of income for an individual with H-1B status depends on whether that person is categorized as a nonresident alien or a resident alien for tax purposes. A nonresident alien for tax purposes is only taxed on income that is effectively connected with a trade or business in the United States and United States-source income that is fixed, determinable, annual, or periodical. A resident alien for tax purposes is taxed on all income, including income from outside the United States. The classification is determined based on the substantial presence test (a day-counting rule illustrated in the sketch below). If the substantial presence test indicates that the individual is a resident, then income taxation is like that of any other U.S. person and may be filed using Form 1040 and the necessary schedules. Otherwise, the individual must file as a non-resident alien using Form 1040NR or Form 1040NR-EZ; the individual may claim a benefit from tax treaties that exist between the United States and the individual's country of citizenship. An individual in the first year in the U.S. may choose to be considered a resident for taxation purposes for the entire year, and must pay taxes on their worldwide income for that year. This first-year choice can only be made once in an individual's lifetime. A spouse, regardless of visa status, must include a valid Individual Taxpayer Identification Number or Social Security number on a joint tax return with the individual in H-1B status. Tax filing rules for an individual in H-1B status may be complex, depending on the individual situation. A tax professional who is knowledgeable about the rules for foreigners may be consulted. Social Security tax and Medicare tax Employers must generally withhold Social Security and Medicare taxes from the wages paid to employees in H-1B status. Similarly to U.S. citizens, a person who worked in H-1B status may be eligible to receive Social Security benefit payments at retirement. Generally, a worker must have worked in the U.S. and paid Social Security taxes, earning at least 40 credits, before retirement. The person will not be eligible for payments if the person moves outside the U.S. and is a citizen of a country with a social insurance system or a pension system that pays periodic payments upon old age, retirement, or death. The U.S. has bilateral agreements with several countries to ensure that credit granted toward the U.S. Social Security system, even if it is fewer than 40 credits, is taken into account in the foreign country's comparable system and vice versa. H-1B and intent to immigrate permanently Even though the H-1B visa is a non-immigrant visa, it is one of the few temporary visa categories recognized as dual intent, meaning an H-1B holder could legally have an immigration intent (apply for and obtain the green card) while still being a holder of the H-1B visa. However, this was only allowed in special cases by the USCIS, such as EB-1 visas. Effectively, the non-immigrant visa may eventually lead to permanent residence; companies often support this path by agreeing to sponsor the employee's green card petition. 
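The two-round selection described in the Lottery section above can be made concrete with a short simulation. The Python sketch below only illustrates the process as described there (a regular draw for the 65,000 cap followed by a second draw of 20,000 slots limited to unselected U.S. advanced-degree petitions); it is not the actual USCIS system, and the function and variable names are invented for the example.

import random

def run_h1b_lottery(petitions, regular_cap=65_000, masters_cap=20_000, seed=None):
    """Simulate the two-round H-1B selection described above.

    petitions: list of (petition_id, has_us_advanced_degree) tuples.
    Returns the set of selected petition ids.
    """
    rng = random.Random(seed)

    # Round 1: all cap-subject petitions compete for the 65,000 regular slots.
    pool = list(petitions)
    rng.shuffle(pool)
    selected = {pid for pid, _ in pool[:regular_cap]}

    # Round 2: unselected U.S. master's-or-higher petitions compete for the
    # additional 20,000 advanced-degree slots.
    masters_pool = [pid for pid, advanced in pool[regular_cap:] if advanced]
    rng.shuffle(masters_pool)
    selected.update(masters_pool[:masters_cap])
    return selected

# Example: 172,500 petitions (the FY 2015 figure quoted above), with a
# hypothetical 40% from U.S. advanced-degree holders.
petitions = [(i, i % 5 < 2) for i in range(172_500)]
print(len(run_h1b_lottery(petitions, seed=1)))  # at most 85,000 selections

With the figures used in this example, roughly half of all petitions are selected, which reflects the general point made above that demand regularly exceeds the statutory caps.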
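The substantial presence test referred to in the Tax status section above is a day-counting rule published by the IRS: an individual is a resident for tax purposes if present in the United States at least 31 days in the current year and if the current year's days, plus one-third of the prior year's days, plus one-sixth of the days in the year before that, total at least 183. The Python sketch below illustrates only that arithmetic and ignores exempt-individual and other special rules.

def meets_substantial_presence(days_current: int, days_prior: int, days_second_prior: int) -> bool:
    """Return True if the basic IRS substantial presence test is met."""
    if days_current < 31:  # must be physically present at least 31 days this year
        return False
    # Weighted total: all current-year days, 1/3 of prior year, 1/6 of the year before.
    weighted_days = days_current + days_prior / 3 + days_second_prior / 6
    return weighted_days >= 183

# 120 days in each of the last three years gives 120 + 40 + 20 = 180 < 183,
# so the test is not met; 183 days in the current year alone is sufficient.
print(meets_substantial_presence(120, 120, 120))  # False
print(meets_substantial_presence(183, 0, 0))      # True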
In the past, the employment-based green card process took only a few years, less than the duration of the H-1B visa itself, because the requirement to maintain a foreign address for this non-immigrant classification was removed in the Immigration Act of 1990. The Trump administration expressed its dislike of the use of the H-1B visa, a nonimmigrant visa, as a pathway to permanent residence, and it has said it intends to restructure the immigration/permanent residence pathway with more efficient systems such as a points-based immigration system. In apparent response, some green card seekers look to alternatives, like the EB-5 visa, which offers better prospects for permanent immigration than the H-1B visa. As a response to the abuse of H-1B visas, groups like Progressives for Immigration Reform advertised opposition posters throughout San Francisco's Bay Area Rapid Transit (BART) stations and trains. Dependents of H-1B visa holders H-1B visa holders can bring immediate family members (spouse and children under 21) to the United States under the H-4 visa category as dependents. An H-4 visa holder may remain in the U.S. as long as the H-1B visa holder retains legal status. An H-4 visa holder is allowed to attend school, apply for a driver's license, and open a bank account in the United States. Effective May 26, 2015, United States Citizenship and Immigration Services allows some spouses of H-1B visa holders to apply for eligibility to work in the United States. The spouse would need to file Form I-765, Application for Employment Authorization, with supporting documents and the required filing fee. The spouse is authorized to work in the United States only after the Form I-765 is approved and the spouse receives an Employment Authorization Document card. Administrative processing When an H-1B worker travels outside the U.S. for any reason (other than to Canada or Mexico), he or she must have a valid visa stamped on his or her passport for re-entry into the United States. If the worker has an expired stamp but an unexpired I-797 petition, he or she will need to appear at a U.S. embassy to get a new stamp. In some cases, H-1B workers can be required to undergo "administrative processing," involving extra background checks of different types. Under current rules, these checks are supposed to take ten days or less, but in some cases have lasted years. Complications for entrepreneurs and consultants The United States immigration system's EB-5 visa program permits foreign entrepreneurs to apply for a green card if they make a sufficient investment in a commercial enterprise and intend to create 10 or more jobs in the United States. Dissatisfied with this mechanism, and taking advantage of the lack of a cap for H-1B visas issued to educational institutions, the University of Massachusetts began a program in 2014 that allows entrepreneurs to found U.S. companies while fulfilling visa requirements by teaching and mentoring on campus, with the university as sponsoring employer. Likewise, self-employed consultants have no visa that would allow them to enter the country and perform work independently for unspecified, extended periods (although a B-1 visa does permit temporary travel to the U.S. to consult for specific periods), so consulting companies have been formed for the sole purpose of sponsoring employees on H-1B visas to allow them to perform work for clients, with the company sharing the resulting profit. 
Application process The process of getting a H-1B visa has three stages: The employer files with the United States Department of Labor a Labor Condition Application (LCA) for the employee, making relevant attestations, including attestations about wages (showing that the wage is at least equal to the prevailing wage and wages paid to others in the company in similar positions) and working conditions. With an approved Labor Condition Application, the employer files a Form I-129 (Petition for a Nonimmigrant Worker) requesting H-1B classification for the worker. This must be accompanied by necessary supporting documents and fees. Once the Form I-129 is approved, the worker may begin working with the H-1B classification on or after the indicated start date of the job, if already physically present in the United States in valid status at the time. If the employee is outside the United States, he/she may use the approved Form I-129 and supporting documents to apply for the H-1B visa. With a H-1B visa, the worker may present himself or herself at a United States port of entry seeking admission to the United States, and get a Form I-94 to enter the United States. (Employees who started a job on H-1B status without a H-1B visa because they were already in the United States still need to get a H-1B visa if they ever leave and wish to reenter the United States while on H-1B status.) Premium processing An applicant may choose to pay for Premium Processing Service. U.S. Citizenship and Immigration Services guarantees processing of applications and petitions within 15 calendar days. U.S. Citizenship and Immigration Services will send an approval notice, a denial notice, a notice of intent to deny, a request for additional evidence, or open an investigation for fraud or misrepresentation within 15 calendar days of receiving Form I-907: Request for Premium Processing Service. If the 15-calendar-day window is not satisfied, then the Premium Processing Fee will be refunded. On March 3, 2017, U.S. Citizenship and Immigration Services announced on its website that beginning on April 3, 2017, it would temporarily suspend premium processing for all H-1B visa petitions until further notice. Premium processing resumed on October 3, 2017. On March 20, 2018, premium processing was suspended for all cap-subject H-1B petitions for fiscal year 2019. On August 28, 2018, the premium processing suspension was extended and expanded to include all H-1B petitions filed at the Vermont Service Center and the California Service Center. OPT STEM extension and cap-gap extension On April 2, 2008, the U.S. Department of Homeland Security (DHS) Secretary Michael Chertoff announced a 17-month extension to the Optional Practical Training for students in qualifying STEM fields. The Optional Practical Training extension was included in the rule-change commonly referred to as the H-1B Cap-Gap Regulations. The OPT extension only benefits foreign STEM (Science, Technology, Engineering, or Mathematics) students and is not available to foreign students of other disciplines. The 17-month work-authorization extension allows the foreign STEM student to work up to 29 months in total on the student visa, allowing the STEM student multiple years to obtain an H-1B visa. To be eligible for the 12-month work-permit, any bachelor's degree in any field of study is valid. For the 17-month OPT extension, a student must have received a STEM degree in one of the approved majors listed on the USCIS website. 
The STEM extension can be combined with the cap-gap extension. On March 11, 2016, the U.S. Department of Homeland Security (DHS) published its final rule on the science, technology, engineering and mathematics (STEM) optional practical training (OPT) extension. The final rule went into effect on May 10, 2016, replacing the 17-month STEM OPT extension. DHS extended the period from 17 months to 24 months and strengthened oversight of the program by adding new features. In 2014, a federal court denied the government's motion to dismiss the Washington Alliance of Technology Workers (Washtech) and three other plaintiffs' case against the OPT STEM extension. Judge Huvelle noted that the plaintiffs had standing due to increased competition in their field, that OPT participation had ballooned from 28,500 in 2008 to 123,000, and that while students work under OPT on student visas, employers are not required to pay Social Security and Medicare contributions or the prevailing wage. On August 9, 2018, U.S. Citizenship and Immigration Services issued a policy memorandum changing how the agency calculates unlawful presence for students and exchange visitors in F, J, and M nonimmigrant status, including F-2, J-2, and M-2 dependents, who fail to maintain status in the United States. History Creation On June 27, 1952, Congress passed the Immigration and Nationality Act after overriding a veto by President Harry S. Truman. For the first time, the Immigration and Nationality Act codified United States immigration, naturalization, and nationality law into permanent statutes, and it introduced a system of selective immigration by giving special preference to foreigners possessing skills that were urgently needed by the country. Several different types of visas were established, including an H-1 visa for "an alien having a residence in a foreign country which he has no intention of abandoning who is of distinguished merit and ability and who is coming temporarily to the United States to perform temporary services of an exceptional nature requiring such merit and ability." The term "distinguished merit and ability" was not new to United States immigration law; it had previously been used as a qualification for musicians and artists who had wanted to enter the United States. The visa was called an H-1 visa because it had been made into law by section 101(15)(H)(1) of the Immigration and Nationality Act. Immigration Act of 1990 The Immigration Act of 1990 was signed into law by President George H. W. Bush on November 20, 1990. The H-1 visa was replaced by two different visas. An H-1A visa was created for nurses, and the H-1B visa was established for workers in a "specialty occupation". The Immigration Act defined a specialty occupation as "an occupation that requires theoretical and practical application of a body of highly specialized knowledge, and attainment of a bachelor's or higher degree in the specific specialty (or its equivalent) as a minimum for entry into the occupation in the United States". In order to qualify, a visa applicant needed any applicable state license for the particular occupation and either an educational degree related to the occupation or an equivalent amount of professional experience. For the first time, a quota of 65,000 H-1B visas per fiscal year was established, where previously there had been no numerical limit. 
An employer was required by law to pay such employees at least the prevailing wage for the position, and employers were required to make certain attestations by way of a Labor Condition Application. American Competitiveness and Workforce Improvement Act of 1998 President Bill Clinton signed the American Competitiveness and Workforce Improvement Act into law on October 21, 1998. The law required that each application for an H-1B must include an additional $500 payment, which would be used for retraining U.S. workers in order to reduce the need for H-1B visas in the future. The quota of H-1B visas was increased from 65,000 to 115,000 for fiscal years 1999 and 2000 only. An employer with a large number of employees in H-1B status, or one that had committed a willful misrepresentation in the recent past, was required to attest that the additional H-1B worker would not displace any U.S. workers. The act also gave investigative authority to the United States Department of Labor. American Competitiveness in the 21st Century Act of 2000 On October 17, 2000, the American Competitiveness in the 21st Century Act was signed by President Bill Clinton. Under the law, the required retraining fee was increased from $500 to $1,000. The quota was increased to 195,000 H-1B visas in fiscal years 2001, 2002, and 2003 only. Nonprofit research institutions sponsoring workers for H-1B visas became exempt from the H-1B visa quotas. Under the law, a worker in H-1B status who had already been subject to a visa quota would not be subject to quotas if requesting a transfer to a new employer or if applying for a three-year extension. An H-1B worker was allowed to change employers if the worker had an I-485 application pending for six months and an approved I-140 and if the new position was substantially comparable to their current position. In the case of an H-1B holder's spouse in H-4 status, the spouse may be eligible to work in the United States under certain circumstances. The H-1B holder must be the beneficiary of an approved "Immigrant Petition for Alien Worker" form or must have been granted H-1B status under sections 106(a) and (b) of the American Competitiveness in the 21st Century Act of 2000. Singapore–United States and Chile–United States Free Trade Agreements in 2003 Congress ratified the Singapore–United States Free Trade Agreement in 2003. It ratified the Chile–United States Free Trade Agreement later that year. With these free trade agreements, a new H-1B1 visa was established as being available solely for people from either Singapore or Chile. Unlike H-1B visas, which had a limited renewal time, H-1B1 visas can be renewed indefinitely. H-1B1 visas are subject to a separate quota of 6,000 per fiscal year. Unlike H-1B visas, an H-1B1 visa is not a dual-intent visa, and an H-1B1 applicant must convince the visa officer that they have no intention of permanently immigrating to the United States. H-1B Visa Reform Act of 2004 The H-1B Visa Reform Act of 2004 was a part of the Consolidated Appropriations Act, 2005, which President George W. Bush signed on December 6, 2004. For employers with 26 or more employees, the required retraining fee was increased from $1,000 to $1,500, and it was reduced to $750 for all other employers. A new $500 "anti-fraud fee" was established that was required to be paid by the employer with the visa application. While the H-1B quota returned to 65,000 per year, the law added 20,000 visas for applicants with J-1 status with either a master's degree or a doctorate degree from a U.S. 
graduate school. Governmental entities became exempt from H-1B visa quotas. According to the law, H-1B visas that were revoked due to either fraud or willful misrepresentation would be added to the H-1B visa quota for the following fiscal year. The law also allowed one-year extensions of H-1B for H-1B visa holders who were applying for permanent residency and whose petitions had been pending for a long time. The United States Department of Labor had more investigative authority, but an employer could defend against misdeeds by using either the Good Faith Compliance Defense or the Recognized Industry Standards Defense. Consolidated Natural Resources Act of 2008 The Consolidated Natural Resources Act of 2008 federalized immigration in the U.S. territory of the Commonwealth of the Northern Mariana Islands, and it stipulated that, during a transition period, numerical limitations would not apply to otherwise qualified workers in the H visa category in the U.S. territories of Guam and the Commonwealth of the Northern Mariana Islands. The exemption does not apply to any employment to be performed outside of Guam or the Commonwealth of the Northern Mariana Islands. Employ American Workers Act of 2009 The Employ American Workers Act, as part of the American Recovery and Reinvestment Act of 2009, was signed into law by President Barack Obama on February 17, 2009. For employers who applied to sponsor a new H-1B and who had received funds under either the Troubled Asset Relief Program (TARP) or the Federal Reserve Act Section 13, the employers were required to attest that the additional H-1B worker would not displace any U.S. workers and that the employer had not laid off, and would not lay off, any U.S. worker in a job equivalent to the H-1B position in the area of intended employment of the H-1B worker within the period beginning 90 days prior to the filing of the H-1B petition and ending 90 days after its filing. 2010 memorandum about employee-employer relationship On January 8, 2010, United States Citizenship and Immigration Services issued a memorandum stating that there must be a clear employee-employer relationship between the petitioner (employer) and the beneficiary (prospective visa holder). The memorandum outlines what the employer must do to be considered in compliance as well as putting forth the documentation requirements to back up the employer's assertion that a valid relationship exists. An employer must maintain a valid employee-employer relationship throughout the period of the worker's stay in the United States. The employer must have actual control or the right to control the employee and to be able to decide when, where, and how the employee will be employed and performs work. A valid employee-employer relationship typically includes many of the following: supervising the worker on- and off-site; maintaining such supervision through calls, reports, or visits; having a right to control the work on a day-to-day basis if such control is required; providing the worker with the tools and equipment needed for the job; having the ability to hire, pay, and terminate the worker's job; evaluating the worker's products, progress, and performance; provision of some type of employee benefits; allowing the worker to use the employer's proprietary information when performing work; assigning work to the worker that produces an end-product related to the employer's business; and having the ability to control the manner and means in which the worker accomplishes tasks. 
The memorandum further states that "common law is flexible about how these factors are to be weighed". Third-party placement firms and staffing firms do not qualify for H-1B status because they fail the control test. 2015 rule about work authorization for certain H-4 holders In 2015, the U.S. Department of Homeland Security issued a rule that allowed certain spouses of H-1B workers to be eligible for work authorization. Under the rule, the H-1B worker must either be the principal beneficiary of an approved Immigrant Petition for Alien Worker (Form I-140) or have H-1B status under the American Competitiveness in the Twenty-first Century Act of 2000, as amended by the 21st Century Department of Justice Appropriations Authorization Act, and the spouse must be in the United States with H-4 status. The U.S. Department of Homeland Security stated that it issued the rule in order to ease the financial burden that these families may experience as they transition from nonimmigrant to lawful permanent resident status. The rule also reduces the incentive for H-1B workers to leave the United States, a departure that would disrupt the United States businesses employing these workers. 2015 final guidance on changes of work site In 2015, United States Citizenship and Immigration Services issued final guidance stating that when an H-1B worker's work site changes to a different metropolitan area, the change is considered a material change that requires the employer to obtain a newly certified Labor Condition Application and file an amended petition with the Department of Homeland Security. Temporary changes of work site do not require a new Labor Condition Application, such as when the H-1B worker attends a training session, a seminar, or a conference of short duration or when the H-1B worker is temporarily moved to a short-term placement of less than 30 days. If the amended H-1B petition is disapproved but the original petition remains valid, then the H-1B worker retains their H-1B status as long as they return to work at the original work site. 2016 clarification on maximum period of stay On December 5, 2016, United States Citizenship and Immigration Services issued a memorandum to provide guidance regarding periods of admission for an individual in H-1B status. The memorandum stated that time spent as either an H-4 dependent or an L-2 dependent does not reduce the maximum allowable period of stay available to individuals in H-1B status. 2017 rule allowing a grace period after end of employment On November 18, 2017, United States Citizenship and Immigration Services released a rule that affects individuals in H-1B status whose employment ends. In these cases, the individual has a grace period of 60 days to leave the United States or change to another legal status that allows them to remain in the United States. 2017 work authorization for victims of domestic violence In 2005, the Violence Against Women and Department of Justice Reauthorization Act of 2005 allowed work authorization for victims of domestic violence who are in H-4 status. On February 17, 2017, United States Citizenship and Immigration Services instituted a process for these victims to apply for work authorization. 
Eligible individuals include an individual who is currently married to an H-1B worker; an individual whose deceased spouse was in H-1B status at death; an individual whose spouse lost H-1B status because of an incident of domestic violence; or an individual whose marriage to an H-1B worker ended because of battery or extreme cruelty perpetrated by the individual's former spouse. The individual must also have entered the United States in an H status, must continue to be in H-4 status, and they or their child must have been battered or subjected to extreme cruelty perpetrated by the H-1B spouse. Among other things, the spouse's application must include evidence of the abuse, such as protection orders, police reports, court records, medical records, reports from social services agencies, a signed statement from the applicant detailing the abuse suffered since their admission to the United States, and/or affidavits from other people who can knowledgeably attest to the abuse experienced. Before this policy was implemented, an abused spouse in H-4 status would be required to leave the United States on the date the person divorced the abusive spouse. The divorced spouse now has a way to remain in the United States legally and to work legally while the divorce is pending or after it is finalized. If approved, the authorization is valid for two years. 2017 memorandum on computer-programming positions A memorandum from December 22, 2000, stated that, because most computer-programming positions required a bachelor's degree, computer programming was considered a specialty occupation that qualified for an H-1B visa. On March 31, 2017, United States Citizenship and Immigration Services released a memorandum stating that computer programming would no longer be automatically considered a specialty occupation, partly because a bachelor's degree was no longer typically required of these positions. To qualify, an application for an H-1B visa for a computer programmer must sufficiently describe the duties, level of experience, and responsibilities of the position in order to demonstrate that it is a senior, complex, specialized, or unique computer-programming position rather than an entry-level one. In addition, the United States Department of Justice warned employers not to discriminate against U.S. workers by showing a preference for hiring H-1B workers. Proposed reform in 2017 and 2018 In 2017, the U.S. Congress considered more than doubling the minimum wage required for an H-1B holder from the $60,000 (USD) established in 1989 and unchanged since then. The High Skilled Integrity and Fairness Act, introduced by U.S. Rep. Zoe Lofgren of California, would raise H-1B holders' minimum salaries to $130,000. The action was criticized in the Indian press for confirming "the worst fears of [Indian] IT companies" in the wake of the reforms discussed during the 2016 presidential election by both major candidates, and for causing a 5% drop in the BSE SENSEX index. India in general, though, has been welcoming of this change and requirement since 2015. Lofgren's office described it as a measure to "curb outsourcing abuse", citing unfair tech hiring practices by employers including Disney and the University of California, San Francisco. On April 18, 2017, President Trump signed an executive order that directed federal agencies to implement a "Buy American, Hire American" strategy. 
At a press briefing, the Department of Labor, Department of Justice, Department of Homeland Security, and Department of State said that an executive order would direct federal agencies to implement a new system that favors higher-skilled, higher-paid applicants. The executive order was the first initiative in response to a key pledge made by Trump during his presidential campaign to promote a "Buy American, Hire American" policy. The executive order is intended to order federal agencies to review and propose reforms to the H-1B visa system. Through the executive order, Trump stated his broad policy of economic nationalism without having to go through Congress. The Secretary of Labor, Attorney General, Secretary of Homeland Security, and Secretary of State will "fill in the details with reports and recommendations about what the administration can legally do." Trump said that the executive order would "end the theft of American prosperity," which he said had been brought on by low-wage immigrant labor. On January 9, 2018, USCIS said that it was not considering any proposal that would force H-1B visa holders to leave the United States during the green-card process. USCIS said an employer could request extensions in one-year increments under section 106(a)–(b) of the American Competitiveness in the 21st Century Act instead. 2018 effect of extension disapprovals On June 28, 2018, United States Citizenship and Immigration Services announced that, when a person's request for a visa extension is rejected, the person may be placed in deportation (removal) proceedings. The Trump administration said it was not considering any proposal that would force H-1B visa holders to leave the country. Suspension of entry in 2020 On April 22, 2020, President Donald Trump signed a presidential proclamation that temporarily suspended the entry of people with non-immigrant visas, including H-1B visas. On June 22, 2020, President Trump extended the suspension for H-1B visa holders until December 31, 2020. On December 31, 2020, President Trump issued a presidential proclamation extending the suspension of entry until March 31, 2021, on the grounds that the affected workers would pose "a risk of displacing and disadvantaging United States workers during the economic recovery following the COVID-19 outbreak". President Joe Biden allowed the suspension to expire on March 31, 2021, which allowed H-1B visa holders to enter the U.S. beginning on April 1, 2021. Protections for U.S. workers Labor Condition Application The U.S. Department of Labor (DOL) is responsible for ensuring that foreign workers do not displace U.S. workers or adversely affect their wages or working conditions. Every H-1B petition filed with USCIS must include a Labor Condition Application (LCA) (not to be confused with the labor certification) certified by the U.S. Department of Labor. The LCA is designed to ensure that the wage offered to the non-immigrant worker meets or exceeds the "prevailing wage" in the area of employment. ("Immigration law has a number of highly technical terms that may not mean the same thing to the average reader.") The LCA also contains an attestation section designed to prevent the program from being used to import foreign workers to break a strike or replace U.S. citizen workers. 
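To illustrate the wage rule that the LCA enforces, the following minimal Python sketch computes the required wage floor as the higher of the prevailing wage and the wage the employer actually pays to similarly situated employees. The figures in the example are invented for illustration and are not drawn from any official wage survey.

def lca_wage_floor(prevailing_wage, actual_wage):
    # Wage floor: the higher of the prevailing wage for the occupation and
    # area of employment, and the wage the employer pays to similarly
    # situated employees (both inputs are illustrative, not official data).
    return max(prevailing_wage, actual_wage)

# Hypothetical example: prevailing wage $95,000, actual wage $102,000.
print(lca_wage_floor(95000, 102000))  # prints 102000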
While an employer is not required to advertise the position before hiring an H-1B non-immigrant pursuant to the H-1B visa approval, the employer must notify the employee representative about the Labor Condition Application (LCA), or, if there is no such representation, the employer must publish the LCA at the workplace and the employer's office. Under the regulations, LCAs are a matter of public record. Corporations hiring H-1B workers are required to make these records available to any member of the public who requests to look at them. Copies of the relevant records are also available from various web sites, including that of the Department of Labor. History of the Labor Condition Application form The LCA must be filed electronically using Form ETA 9035E. Over the years, the complexity of the form increased from one page in 1997 to three pages in 2008, and to five pages as of August 2012. Employer attestations By signing the LCA, the employer attests that: The employer pays H-1B non-immigrants the same wage level paid to all other individuals with similar experience and qualifications for that specific employment, or the prevailing wage for the occupation in the area of employment, whichever is higher. The employment of H-1B non-immigrants does not adversely affect working conditions of workers similarly employed. On the date the application is signed and submitted, there is not a strike, lockout, or work stoppage in the course of a labor dispute in the occupation in which H-1B non-immigrants will be employed at the place of employment. If such a strike or lockout occurs after this application is submitted, the employer must notify the DOL's Employment and Training Administration (ETA) within three days, and the application is not used to support petition filings with USCIS (formerly known as INS) for H-1B non-immigrants to work in the same occupation at the place of employment until ETA determines the strike or lockout is over. A copy of this application has been, or will be, provided to each H-1B non-immigrant employed pursuant to this application, and, as of the application date, notice of this application has been provided to workers employed in the occupation in which H-1B non-immigrants will be employed: Notice of this filing has been provided to the bargaining representative of workers in the occupation in which H-1B non-immigrants will be employed; or There is no such bargaining representative; therefore, a notice of this filing has been, or will remain, posted for 10 days in at least two conspicuous locations where H-1B non-immigrants will be employed. The law requires H-1B workers to be paid the higher of the prevailing wage for the same occupation and geographic location, or the same as the employer pays to similarly situated employees. Other factors, such as age and skill, were not permitted to be taken into account for the prevailing wage. Congress changed the program in 2004 to require the Department of Labor to provide four skill-based prevailing wage levels for employers to use. This is the only prevailing wage mechanism the law permits that incorporates factors other than occupation and location. The approval process for these applications is based on employer attestations and documentary evidence submitted. The employer is advised of their liability if they are replacing a U.S. worker. Limits on employment authorization USCIS clearly states the following concerning H-1B nonimmigrants' employment authorization. 
When an H-1B nonimmigrant works for multiple employers, each employer must file its own petition; employment with an employer that has not filed a petition is considered unauthorized employment and causes the nonimmigrant to fail to maintain status. H-1B fees earmarked for U.S. worker education and training In 2007, the U.S. Department of Labor, Employment and Training Administration (ETA), reported on two programs, the High Growth Training Initiative and Workforce Innovation Regional Economic Development (WIRED), which have received or will receive $284 million and $260 million, respectively, from H-1B training fees to educate and train U.S. workers. According to the Seattle Times, $1 billion from H-1B fees has been distributed by the Labor Department to further train the U.S. workforce since 2001. Impact Studies have shown that H-1B visas have welfare-improving effects for Americans, leading to significant wage gains, lower consumer prices, greater innovation, and greater total factor productivity growth. While H-1B visas lead to lower wages and employment for competing U.S. workers, H-1B visas have led to lower prices for consumers, greater output, and improved performance by companies. H-1B visa holders have been associated with greater innovation and economic performance. Criticisms of the program The H-1B program has been criticized on many grounds. It was the subject of a hearing, "Immigration Reforms Needed to Protect Skilled American Workers," by the United States Senate Committee on the Judiciary on March 17, 2015. According to Senator Chuck Grassley of Iowa, chairman of the committee: According to the editorial board of The New York Times, speaking in June 2015, loopholes and lax enforcement of the H-1B visa program have resulted in exploitation of both visa holders and American workers. Use for outsourcing In some cases, rather than being used to hire talented workers not available in the American labor market, the program is being used for outsourcing. Senators Dick Durbin of Illinois and Charles Grassley of Iowa began introducing "The H-1B and L-1 Visa Fraud & Prevention Act" in 2007. According to Durbin, speaking in 2009, "The H-1B visa program should complement the U.S. workforce, not replace it;" "The…program is plagued with fraud and abuse and is now a vehicle for outsourcing that deprives qualified American workers of their jobs." The proposed legislation has been opposed by Compete America, a tech industry lobbying group. In June 2015, ten Senators requested that the U.S. Department of Labor open an investigation into Southern California Edison's outsourcing of technical tasks to Tata Consultancy Services and Infosys, followed by the layoff of 500 of its technology workers. After a ten-month investigation, the U.S. Department of Labor determined that no charges should be filed against any of the firms. No labor shortages Paul Donnelly, in a 2002 article in Computerworld, cited Milton Friedman as stating that the H-1B program acts as a subsidy for corporations. Others holding this view include Norman Matloff, who testified to the U.S. House Judiciary Committee Subcommittee on Immigration on the H-1B subject. Matloff's paper for the University of Michigan Journal of Law Reform claims that there has been no shortage of qualified American citizens to fill American computer-related jobs, and that the data offered as evidence of American corporations needing H-1B visas to address labor shortages was erroneous. The United States General Accounting Office found in a report in 2000 that controls on the H-1B program lacked effectiveness. 
The GAO report's recommendations were subsequently implemented. High-tech companies often cite a tech-worker shortage when asking Congress to raise the annual cap on H-1B visas, and have succeeded in getting various exemptions passed. The American Immigration Lawyers Association (AILA) described the situation as a crisis, and the situation was reported on by the Wall Street Journal, BusinessWeek and the Washington Post. Employers applied pressure on Congress. Microsoft chairman Bill Gates testified in 2007 on behalf of the expanded visa program on Capitol Hill, "warning of dangers to the U.S. economy if employers can't import skilled workers to fill job gaps." Congress considered a bill to address the claims of a shortfall but in the end did not revise the program. According to a study conducted by John Miano and the Center for Immigration Studies, there is no empirical data to support the claim of a worker shortage. Citing studies from Duke, the Alfred P. Sloan Foundation, Georgetown University and others, critics have also argued that in some years, the number of foreign programmers and engineers imported outnumbered the number of jobs created by the industry. Hire Americans First has also posted hundreds of first-hand reports of harm from the H-1B visa program directly from individuals negatively impacted by it, many of whom are willing to speak with the media. A study by economist Morgan Raux found that domestic employers are 28 percent more likely to apply for an H-1B visa when job postings remain open for longer periods of time. That effect is stronger in engineering and computer science applications, where employers are 50 percent more likely to apply for visas. Studies carried out from the 1990s through 2011 by researchers from Columbia U, Computing Research Association (CRA), Duke U, Georgetown U, Harvard U, National Research Council of the NAS, RAND Corporation, Rochester Institute of Technology, Rutgers U, Alfred P. Sloan Foundation, Stanford U, SUNY Buffalo, UC Davis, UPenn Wharton School, Urban Institute, and U.S. Dept. of Education Office of Education Research & Improvement have reported that the U.S. has been producing sufficient numbers of able and willing STEM (Science, Technology, Engineering and Mathematics) workers, while several studies from Hal Salzman, B. Lindsay Lowell, Daniel Kuehn, Michael Teitelbaum and others have concluded that the U.S. has been employing only 30% to 50% of its newly degreed able and willing STEM workers to work in STEM fields. A 2012 IEEE announcement of a conference on STEM education funding and job markets stated "only about half of those with under-graduate STEM degrees actually work in the STEM-related fields after college, and after 10 years, only some 8% still do." Ron Hira, a professor of public policy at Howard University and a longtime critic of the H-1B visa program, has called the IT talent shortage "imaginary," a front for companies that want to hire relatively inexpensive foreign guest workers. Wage depression Wage depression is a chronic complaint critics have about the H-1B program. The Department of Homeland Security's annual report indicates that H-1B workers in the field of computer science were paid a mean salary of $75,000 annually (2014), almost $25,000 below the average annual income for software developers, and studies have found that H-1B workers are paid significantly less than U.S. workers. It is claimed that the H-1B program is primarily used as a source of cheap labor. A paper by George J. 
Borjas for the National Bureau of Economic Research found that "a 10 percent immigration-induced increase in the supply of doctorates lowers the wage of competing workers by about 3 to 4 percent." A 2016 study found that H-1B visas kept wages for U.S. computer scientists 2.6% to 5.1% lower, and employment in computer science for U.S. workers 6.1% to 10.8% lower, but resulted in greater production efficiency, lowered the prices of IT products, raised the output of IT products and caused substantially higher profits for IT firms. The Labor Condition Application (LCA) included in the H-1B petition is supposed to ensure that H-1B workers are paid the prevailing wage in the labor market, or the employer's actual average wage (whichever is higher), but evidence exists that some employers do not abide by these provisions and avoid paying the actual prevailing wage despite stiff penalties for abusers. Theoretically, the LCA process appears to offer protection to both U.S. and H-1B workers. However, according to the U.S. General Accounting Office, enforcement limitations and procedural problems render these protections ineffective. Ultimately, the employer, not the Department of Labor, determines which sources are used to determine the prevailing wage for an offered position, and it may choose among a variety of competing surveys, including its own wage surveys, provided that such surveys follow certain defined rules and regulations. The law specifically restricts the Department of Labor's approval process of LCAs to checking for "completeness and obvious inaccuracies." In FY 2005, only about 800 LCAs were rejected out of over 300,000 submitted. Hire Americans First has posted several hundred first-hand accounts of individuals negatively impacted by the program. According to attorney John Miano, the H-1B prevailing wage requirement is "rife" with loopholes. Ron Hira, assistant professor of public policy at the Rochester Institute of Technology, compiled the 2005 median wage for new H-1B information technology (IT) workers; it was found to be $50,000, lower than starting wages for IT graduates with a B.S. degree. The U.S. government OES office's data indicates that 90% of H-1B IT wages were below the median U.S. wage and that 62% were in the 25th percentile for the same occupation. In 2002, the U.S. government began an investigation into Sun Microsystems' hiring practices after an ex-employee, Guy Santiglia, filed complaints with the U.S. Department of Justice and U.S. Department of Labor alleging that the Santa Clara firm discriminates against American citizens in favor of foreign workers on H-1B visas. Santiglia accused the company of bias against U.S. citizens when it laid off 3,900 workers in late 2001 and at the same time applied for thousands of visas. In 2002, about 5 percent of Sun's 39,000 employees had temporary work visas, he said. In 2005, it was decided that Sun violated only minor requirements and that neither of these violations was substantial or willful. Thus, the judge only ordered Sun to change its posting practices. 
Candidate Hillary Clinton spoke negatively of H-1B workers being hired because they are less expensive and since they are reliant on the employer, more likely to be compliant during abuse. Clinton said that she was unlikely to consider the H-1B system individually and only would look towards reforms as part of a comprehensive immigration reform. Presidential candidate Bernie Sanders opposed guest worker programs and was also skeptical about skilled immigrant (H-1B) visas, saying, "Last year, the top 10 employers of H-1B guest workers were all offshore outsourcing companies. These firms are responsible for shipping large numbers of American information technology jobs to India and other countries." In an interview with Vox he stated his opposition to an open borders immigration policy, describing it as: ...a right-wing proposal, which says essentially there is no United States...you're doing away with the concept of a nation-state. What right-wing people in this country would love is an open-border policy. Bring in all kinds of people, work for $2 or $3 an hour, that would be great for them. I don't believe in that. I think we have to raise wages in this country, I think we have to do everything we can to create millions of jobs. Risks for employees Historically, H-1B holders have sometimes been described as indentured servants, and while the comparison is no longer as compelling, it had more validity prior to the passage of American Competitiveness in the Twenty-First Century Act of 2000. Indeed, guest workers on H-1B visas in the IT sector have a significant degree of inter-firm mobility. Although immigration generally requires short- and long-term visitors to disavow any ambition to seek the green card (permanent residency), H-1B visa holders are an important exception, in that the H-1B is legally acknowledged as a possible step towards a green card under what is called the doctrine of dual intent. H-1B visa holders may be sponsored for their green cards by their employers through an Application for Alien Labor Certification, filed with the U.S. Department of Labor. In the past, the sponsorship process has taken several years, and for much of that time the H-1B visa holder was unable to change jobs without losing their place in line for the green card. This created an element of enforced loyalty to an employer by an H-1B visa holder. Critics alleged that employers benefit from this enforced loyalty because it reduced the risk that the H-1B employee might leave the job and go work for a competitor, and that it put citizen workers at a disadvantage in the job market, since the employer has less assurance that the citizen will stay at the job for an extended period of time, especially if the work conditions are tough, wages are lower or the work is difficult or complex. It has been argued that this makes the H-1B program extremely attractive to employers, and that labor legislation in this regard has been influenced by corporations seeking and benefiting from such advantages. Some recent news reports suggest that the recession that started in 2008 will exacerbate the H-1B visa situation, both for supporters of the program and for those who oppose it. The process to obtain the green card has become so long that during these recession years it has not been unusual that sponsoring companies fail and disappear, thus forcing the H-1B employee to find another sponsor, and lose their place in line for the green card. 
An H-1B employee could be just one month from obtaining their green card, but if the employee is laid off, he or she may have to leave the country, or go to the end of the line and start over the process to get the green card, and wait as much as 15 more years, depending on the nationality and visa category. The American Competitiveness in the Twenty-First Century Act of 2000 provides some relief for people waiting for a long time for a green card, by allowing H-1B extensions past the normal 6 years, as well as by making it easier to change the sponsoring employer. Some workers who come to the U.S. on H-1B visas receive poor, unfair, and illegal treatment by brokers who place them with jobs in the US, according to a report published in 2014. The United States Trafficking Victims Protection Reauthorization Act of 2013 was passed to help protect the rights of foreign workers in the United States, and the U.S. Department of State distributes pamphlets to inform foreign workers of their rights. The out-sourcing/off-shoring visa In his floor statement on H-1B visa reform, Senator Dick Durbin stated "The H-1B job visa lasts for three years and can be renewed for three years. What happens to those workers after that? Well, they could stay. It is possible. But these new companies have a much better idea for making money. They send the engineers to America to fill spots—and get money to do it—and then after the three to six years, they bring them back to work for the companies that are competing with American companies. They call it their outsourcing visa. They are sending their talented engineers to learn how Americans do business and then bring them back and compete with those American companies." Critics of H-1B use for outsourcing have also noted that more H-1B visas are granted to companies headquartered in India than companies headquartered in the United States. Of all computer systems analysts and programmers on H-1B visas in the U.S., 74 percent were from Asia. This large scale migration of Asian IT professionals to the United States has been cited as a central cause for the quick emergence of the offshore outsourcing industry. In FY 2009, due to the worldwide recession, applications for H-1B visas by offshore outsourcing firms were significantly lower than in previous years, yet 110,367 H-1B visas were issued, and 117,409 were issued in FY2010. Departure requirement on job loss If an employer lays off an H-1B worker, the employer is required to pay for the laid-off worker's transportation outside the United States. If an H-1B worker is laid off or quits, the worker has a grace period of 60 days or until the I-94 expiration date, whichever is shorter, to find a new employer or leave the country. There also is a 10-day grace period for an H-1B worker to depart the United States at the end of his/her authorized period of stay. This grace period only applies if the worker works until the H-1B expiration date listed on his/her I-797 approval notice, or I-94 card. American workers are ordered to train their foreign replacements There have been cases where employers used the program to replace their American employees with H-1B employees, and in some of those cases, the American employees were even ordered to train their replacements. Fraud The United States Citizenship and Immigration Services "H-1B Benefit Fraud & Compliance Assessment" of September 2008 concluded 21% of H-1B visas granted originate from fraudulent applications or applications with technical violations. 
Fraud was defined as a willful misrepresentation, falsification, or omission of a material fact. Technical violations, errors, omissions, and failures to comply that are not within the fraud definition were included in the 21% rate. In 2009, federal authorities arrested people for a nationwide H-1B visa scam in which they allegedly submitted false statements and documents in connection with petitions for H-1B visas. Fraud has included acquisition of a fake university degree for the prospective H-1B worker, coaching the worker on lying to consular officials, hiring a worker for whom there is no U.S. job, charging the worker money to be hired, benching the worker with no pay, and taking a cut of the worker's U.S. salary. The workers, who have little choice in the matter, are also engaged in fraud, and may be charged, fined, and deported. Similar visas In addition to H-1B visas, there are a variety of other visa categories that allow foreign workers to come into the U.S. to work for some period of time. L-1 visas are issued to foreign employees of a corporation. Under recent rules, the foreign worker must have worked for the corporation for at least one year within the three years preceding the visa application. An L-1B visa is appropriate for non-immigrant workers who are being temporarily transferred to the United States based on their specialized knowledge of the company's techniques and methodologies. An L-1A visa is for managers or executives who either manage people or an essential function of the company. There is no requirement to pay prevailing wages for L-1 visa holders. For Canadian residents, a special L visa category is available. TN-1 visas are part of the North American Free Trade Agreement (NAFTA), and are issued to Canadian and Mexican citizens. TN visas are only available to workers who fall into one of a preset list of occupations determined by the NAFTA treaty. There are specific eligibility requirements for the TN visa. E-3 visas are issued to citizens of Australia under the Australia free-trade treaty. H-1B1 visas are a subset of the H-1B category issued to residents of Chile and Singapore. H-1B1 visas for residents of Chile were part of the Chile–United States Free Trade Agreement; PL 108-77 Section 402(a)(2)(B), 117 Stat. 909, 940; S 1416, HR 2738, which was passed by the U.S. House of Representatives on July 24, 2003. H-1B1 visas for residents of Singapore were part of the Singapore–United States Free Trade Agreement, PL 108-78 Section 402(2), 117 Stat. 948, 970-971; S 1417, HR 2739, which passed the U.S. House of Representatives on July 24, 2003, passed the U.S. Senate on July 31, 2003, and was signed by President George W. Bush on May 6, 2003. According to U.S. Citizenship and Immigration Services, if there are any unused H-1B1 visas during a particular year, that number is added to the following year's H-1B base quota. One recent trend in work visas is that various countries attempt to get special preference for their nationals as part of treaty negotiations. Another trend is for changes in immigration law to be embedded in large Authorization or Omnibus bills to avoid the controversy that might accompany a separate vote. H-2B visa: The H-2B non-immigrant program permits employers to hire foreign workers to come to the U.S. and perform temporary non-agricultural work, which may be one-time, seasonal, peak load or intermittent. There is a 66,000 per year limit on the number of foreign workers who may receive H-2B status. 
H-1B demographics and tables H-1B applications approved H-1B visas issued per year Top H-1B employers by visas approved Use for other countries An individual with a valid H-1B visa does not need a visa to enter Costa Rica for tourism for up to 30 days. The H-1B visa must be stamped in the passport and be valid for at least six months. The passport needs to be valid for at least six months after entering Costa Rica. See also SKIL Bill Free trade debate Labor shortage Immigration Voice L-1 visa Disney litigation Immigration Innovation Act of 2015 Notes References United States Citizenship and Immigration Service, "Characteristics of Specialty Occupation Workers (H-1B)," for FY 2004 and FY 2005, November 2006. "Microsoft Cuts 5,000 Jobs as Recession Curbs Growth (Update5)", Bloomberg, 22 Jan 2009 (Microsoft Lays off 5,000 even as they use 3,117 visas in 2006.) Bill Gates, Chairman of Microsoft, Testimony to the U.S. Senate Committee Health, Education, Labor, and Pensions. Hearing "Strengthening American Competitiveness for the 21st Century." March 7, 2007 Business Week, Immigration: Google Makes Its Case, 7 Jun 2007. Business Week, Who Gets Temp Work Visas? 7 Jun 2007 (Top 200 H-1B Visa Users Chart) Business Week, Immigration Fight: Tech vs. Tech, 25 May 2007. Business Week, Crackdown on Indian Outsourcing Firms, 15 May 2007. Dr. Norman Matloff, Debunking the Myth of a Desperate Software Labor Shortage, Testimony to the U.S. House Judiciary Committee, April 1998, updated December 2002 CNN, Lou Dobbs, Programmers Guild Interview & Transcript, August 26, 2005 Congressional Record: Illegal Aliens Taking American Jobs, June 18, 2003 (House) Center for Immigration Studies, Backgrounder: The bottom of the pay scale, Wages for H-1B Computer Programmer's, John Milano, 2005. Further reading Pittsburgh law firm's immigration video sparks an Internet firestorm, Pittsburgh Post-Gazette, June 22, 2007 "Lawmakers Request Investigation Into YouTube Video" Sen. Chuck Grassley and Rep. Lamar Smith ask the Labor Department to look into a video they say documents H-1B abuse by companies. Information Week, June 21, 2007 Oct. 2007 study by The Urban Institute – ''Into the Eye of the Storm: Assessing the Evidence on Science and Engineering Education, Quality, and Workforce Demand' B. Lindsay Lowell and Hal Salzman' Guestworkers in the High-Skill U.S. Labor Market: An Analysis of Supply, Employment, and Wage Trends "(2013) Hal Salzman, Daniel Kuehn, B. Lindsay Lowell Economic Policy Institute" External links U.S. Department of State information on H-1B visa U.S. GAO Report on H-1B Problems, PDF format H-1B Quota Updates from USCIS United States visas by type Employment of foreign-born 1990 introductions
1127046
https://en.wikipedia.org/wiki/Linux%20Virtual%20Server
Linux Virtual Server
Linux Virtual Server (LVS) is load balancing software for Linux kernel–based operating systems. LVS is a free and open-source project started by Wensong Zhang in May 1998, subject to the requirements of the GNU General Public License (GPL), version 2. The mission of the project is to build a high-performance and highly available server for Linux using clustering technology, which provides good scalability, reliability and serviceability. Overview The major work of the LVS project is now to develop advanced IP load balancing software (IPVS), application-level load balancing software (KTCPVS), and cluster management components. IPVS: advanced IP load balancing software implemented inside the Linux kernel. The IP Virtual Server code is merged into versions 2.4.x and newer of the Linux kernel mainline. KTCPVS: implements application-level load balancing inside the Linux kernel, still under development. LVS can be used for building highly scalable and highly available network services, such as web, email, media and VoIP services, and for integrating scalable network services into large-scale reliable e-commerce or e-government applications. LVS-based solutions have already been deployed in many real applications throughout the world, including Wikipedia. The LVS components depend upon the Linux Netfilter framework, and their source code is available in the net/netfilter/ipvs subdirectory within the Linux kernel source. LVS is able to handle the UDP and TCP layer-4 protocols, as well as FTP passive connections by inspecting layer-7 packets. It provides a hierarchy of counters in the /proc directory. The userland utility program used to configure LVS is called ipvsadm, which requires superuser privileges to run. Schedulers LVS implements several balancing schedulers, listed below with the relevant source files: Round-robin (ip_vs_rr.c) Weighted round-robin (ip_vs_wrr.c) Least-connection (ip_vs_lc.c) Weighted least-connection (ip_vs_wlc.c) Locality-based least-connection (ip_vs_lblc.c) Locality-based least-connection with replication (ip_vs_lblcr.c) Destination hashing (ip_vs_dh.c) Source hashing (ip_vs_sh.c) Shortest expected delay (ip_vs_sed.c) Never queue (ip_vs_nq.c) Maglev hashing (ip_vs_mh.c) Glossary Commonly used terms include the following: LVS director: load balancer that receives all incoming client requests for services and directs them to a specific "real server" to handle the request Real servers: nodes that make up an LVS cluster which are used to provide services on behalf of the cluster Client computers: computers requesting services from the virtual server VIP (Virtual IP address): the IP address used by the director to provide services to client computers RIP (Real IP address): the IP address used to connect to the cluster nodes DIP (Director's IP address): the IP address used by the director to connect to the network of real IP addresses CIP (Client IP address): the IP address assigned to a client computer, which it uses as the source IP address for requests being sent to the cluster Examples Setting up a virtual HTTP server with two real servers:
ipvsadm -A -t 192.168.0.1:80 -s rr
ipvsadm -a -t 192.168.0.1:80 -r 172.16.0.1:80 -m
ipvsadm -a -t 192.168.0.1:80 -r 172.16.0.2:80 -m
The first command assigns TCP port 80 on IP address 192.168.0.1 to the virtual server. The chosen scheduling algorithm for load balancing is round-robin (-s rr). The second and third commands add the IP addresses of the real servers to the LVS setup. The forwarded network packets will be masqueraded (-m), i.e. forwarded using NAT. 
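To illustrate how a weight-based scheduler such as the weighted round-robin listed above decides where successive connections go, the following is a minimal Python sketch. It shows the general idea only, not the kernel algorithm in ip_vs_wrr.c, and the server addresses and weights are hypothetical.

from itertools import cycle

def weighted_round_robin(servers):
    # servers: list of (address, weight); a server with weight 2 receives
    # twice as many new connections as a server with weight 1.
    expanded = [address for address, weight in servers for _ in range(weight)]
    return cycle(expanded)

scheduler = weighted_round_robin([("172.16.0.1:80", 2), ("172.16.0.2:80", 1)])
for _ in range(6):
    print(next(scheduler))  # 172.16.0.1:80 appears twice as often as 172.16.0.2:80

The real IPVS schedulers run inside the kernel and interleave the servers more evenly than this simple expansion does.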
Querying the status of the above configured LVS setup:
# ipvsadm -L -n
IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port    Forward Weight ActiveConn InActConn
TCP  192.168.0.1:80 rr
  -> 172.16.0.2:80         Masq    1      3          1
  -> 172.16.0.1:80         Masq    1      4          0
See also IP Virtual Server Netfilter and nftables Network scheduler References External links Linux Virtual Server Configuration Wiki IBM eServer BladeCenter, Linux, and Open Source: Blueprint for e-business on demand (includes LVS as a component) High-availability cluster computing Free network-related software Internet software for Linux Linux-only free software Parallel computing
25252662
https://en.wikipedia.org/wiki/SHR%20%28operating%20system%29
SHR (operating system)
SHR (formerly Stable Hybrid Release) was a community-driven Linux distribution for smartphones which was based on OpenEmbedded, Xorg, and the freesmartphone.org (FSO) framework. Several different graphical toolkits were made available, such as GTK+ and Qt. Supported devices The unstable and testing releases were released for Openmoko's Neo 1973 and FreeRunner smartphones. Later on, SHR Core supported the FreeRunner and the GTA04. SHR Core was also being ported, with various degrees of completion, to several devices such as the HTC Dream, the Nexus S, the Palm Pre and the Nokia N900. Applications SHR had several rudimentary applications specially made for it, such as address book software, a dialer, an SMS application and so on. It was also possible to install these applications on Debian. Many graphical Linux applications, such as Midori and Pidgin, were also available, and it was also possible to use the terminal with the ash shell through a terminal application. Front-ends for MPlayer and other software such as FoxtrotGPS that were developed for the Openmoko devices and/or the distributions that ran on them were also available on SHR. Software stack The use of Xorg made it possible to run many Linux applications, with various degrees of usability due to the hardware constraints of the supported devices. Most or all of the supported devices had small displays with high pixel density, and most of them had only touchscreen-based input and very few buttons. SHR used the FSO framework middleware to handle the smartphones' power management and various peripherals. Several SHR developers were also contributing to the FSO framework in order to port SHR to newer devices. See also Openmoko Replicant (operating system) Rooting (Android OS) Comparison of Linux distributions References External links SHR official website SHR on Openmoko Wiki ARM Linux distributions Embedded Linux Smartphones Mobile operating systems Free mobile software Openmoko Linux distributions
24074788
https://en.wikipedia.org/wiki/Cynthia%20Richards
Cynthia Richards
Cynthia Richards (born Sinthia Richards, 1944) is a Jamaican singer whose career began in the 1960s. Biography Born in Duhaney Park, Kingston, Jamaica, in 1944, Richards attended the Denham Town Primary School where after impressing teachers with a performance at an end-of-term concert she was encouraged to appear on the Vere Johns Talent Show. While working as a magistrate's clerk, she continued her career, working in the 1960s with Bobby Aitken's Carib Beats band and later the Falcons (with a young Dennis Brown), Byron Lee & the Dragonaires, and the Mighty Vikings. She recorded her debut single, "How Could I", in 1969, produced by Clement "Coxsone" Dodd for his Studio One label. Although this wasn't a success, it brought her to the attention of Clancy Eccles, who produced "Foolish Fool", which became a major hit in Jamaica. She also recorded as one half of the duo Cynthia & Archie. She moved on to work with several producers, including Duke Reid, for whom she recorded "Jungle Fever", "Sentimental Reason", and "Aily I", the latter a hit with UK reggae audiences in 1972. Other producers she worked with included Alvin Ranglin and Larry Lawrence, before moving into self-production with singles such as "Mr. Postman", and had further chart success in Jamaica with her version of The Staple Singers' "If You're Ready (Come Go With Me)". She was voted Top Female Artist of 1973 in Jamaica and toured with Elmo and Toots and the Maytals. Later in the 1970s she worked as a backing singer, contributing to Beres Hammond's 1976 album Soul Reggae and Pat Kelly's 1978 album Lonely Man. Discography Foolish Fool (1969), Clan Disc (Trojan) Conversation (1970), Clan Disc (Trojan) Promises (1970), Clan Disc (Trojan) Can't Wait (1970), Clan Disc (Trojan) I Was Just Thinking About You (1970), Clan Disc (Trojan) United We Stand (1970), Pressure Drop (Trojan) Stand By Your Man (1971), Clan Disc (Trojan) Is There A Place in Your Heart For Me (1971), GG Records (Trojan) Moving On (1971), Big Shot Aily I (1973), Attack (Trojan) Sentimental Reason (1973), Lord Koos If You're Ready Come Go With Me (1974) Think About Me (1974), Cactus Keep It in the Family (1974), Cactus I Shall Not Remove (1977), Grove Music As Clancy and Cynthia Tomorrow (1971), Clan Disc (Trojan) References 1944 births Living people Musicians from Kingston, Jamaica 20th-century Jamaican women singers
1543423
https://en.wikipedia.org/wiki/Eye%20tracking
Eye tracking
Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, marketing, as an input device for human-computer interaction, and in product design. Eye trackers are also being increasingly used for rehabilitative and assistive applications (related,for instance, to control of wheel chairs, robotic arms and prostheses). There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram. History In the 1800s, studies of eye movement were made using direct observations. For example, Louis Émile Javal observed in 1879 that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations) and quick saccades. This observation raised important questions about reading, questions which were explored during the 1900s: On which words do the eyes stop? For how long? When do they regress to already seen words? Edmund Huey built an early eye tracker, using a sort of contact lens with a hole for the pupil. The lens was connected to an aluminum pointer that moved in response to the movement of the eye. Huey studied and quantified regressions (only a small proportion of saccades are regressions), and he showed that some words in a sentence are not fixated. The first non-intrusive eye-trackers were built by Guy Thomas Buswell in Chicago, using beams of light that were reflected on the eye, then recording on film. Buswell made systematic studies into reading and picture viewing. In the 1950s, Alfred L. Yarbus performed eye tracking research, and his 1967 book is often quoted. He showed that the task given to a subject has a very large influence on the subject's eye movement. He also wrote about the relation between fixations and interest: "All the records ... show conclusively that the character of the eye movement is either completely independent of or only very slightly dependent on the material of the picture and how it was made, provided that it is flat or nearly flat." The cyclical pattern in the examination of pictures "is dependent on not only what is shown on the picture, but also the problem facing the observer and the information that he hopes to gain from the picture." "Records of eye movements show that the observer's attention is usually held only by certain elements of the picture.... Eye movement reflects the human thought processes; so the observer's thought may be followed to some extent from records of eye movement (the thought accompanying the examination of the particular object). It is easy to determine from these records which elements attract the observer's eye (and, consequently, his thought), in what order, and how often." "The observer's attention is frequently drawn to elements which do not give important information but which, in his opinion, may do so. Often an observer will focus his attention on elements that are unusual in the particular circumstances, unfamiliar, incomprehensible, and so on." "... when changing its points of fixation, the observer's eye repeatedly returns to the same elements of the picture. 
Additional time spent on perception is not used to examine the secondary elements, but to reexamine the most important elements." In the 1970s, eye-tracking research expanded rapidly, particularly reading research. A good overview of the research in this period is given by Rayner. In 1980, Just and Carpenter formulated the influential Strong eye-mind hypothesis, that "there is no appreciable lag between what is fixated and what is processed". If this hypothesis is correct, then when a subject looks at a word or object, he or she also thinks about it (process cognitively), and for exactly as long as the recorded fixation. The hypothesis is often taken for granted by researchers using eye-tracking. However, gaze-contingent techniques offer an interesting option in order to disentangle overt and covert attentions, to differentiate what is fixated and what is processed. During the 1980s, the eye-mind hypothesis was often questioned in light of covert attention, the attention to something that one is not looking at, which people often do. If covert attention is common during eye-tracking recordings, the resulting scan-path and fixation patterns would often show not where our attention has been, but only where the eye has been looking, failing to indicate cognitive processing. The 1980s also saw the birth of using eye-tracking to answer questions related to human-computer interaction. Specifically, researchers investigated how users search for commands in computer menus. Additionally, computers allowed researchers to use eye-tracking results in real time, primarily to help disabled users. More recently, there has been growth in using eye tracking to study how users interact with different computer interfaces. Specific questions researchers ask are related to how easy different interfaces are for users. The results of the eye tracking research can lead to changes in design of the interface. Another recent area of research focuses on Web development. This can include how users react to drop-down menus or where they focus their attention on a website so the developer knows where to place an advertisement. According to Hoffman, current consensus is that visual attention is always slightly (100 to 250 ms) ahead of the eye. But as soon as attention moves to a new position, the eyes will want to follow. We still cannot infer specific cognitive processes directly from a fixation on a particular object in a scene. For instance, a fixation on a face in a picture may indicate recognition, liking, dislike, puzzlement etc. Therefore, eye tracking is often coupled with other methodologies, such as introspective verbal protocols. Thanks to advancement in portable electronic devices, portable head-mounted eye trackers currently can achieve excellent performance and are being increasingly used in research and market applications targeting daily life settings. These same advances have led to increases in the study of small eye movements that occur during fixation, both in the lab and in applied settings. In the 21st century, the use of artificial intelligence (AI) and artificial neural networks has become a viable way to complete eye-tracking tasks and analysis. In particular, the convolutional neural network lends itself to eye-tracking, as it is designed for image-centric tasks. With AI, eye-tracking tasks and studies can yield additional information that may not have been detected by human observers. 
The practice of deep learning also allows for a given neural network to improve at a given task when given enough sample data. This requires a relatively large supply of training data, however. The potential use cases for AI in eye-tracking cover a wide range of topics from medical applications to driver safety to game theory. While the CNN structure may fit relatively well with the task of eye-tracking, researchers have the option to construct a custom neural network that is tailored for the specific task at hand. In those instances, these in-house creations can outperform pre-existing templates for a neural network. In this sense, it remains to be seen if there is a way to determine the ideal network structure for a given task. Tracker types Eye-trackers measure rotations of the eye in one of several ways, but principally they fall into one of three categories: (i) measurement of the movement of an object (normally, a special contact lens) attached to the eye; (ii) optical tracking without direct contact to the eye; and (iii) measurement of electric potentials using electrodes placed around the eyes. Eye-attached tracking The first type uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor, and the movement of the attachment is measured with the assumption that it does not slip significantly as the eye rotates. Measurements with tight-fitting contact lenses have provided extremely sensitive recordings of eye movement, and magnetic search coils are the method of choice for researchers studying the dynamics and underlying physiology of eye movement. This method allows the measurement of eye movement in horizontal, vertical and torsion directions. Optical tracking The second broad category uses some non-contact, optical method for measuring eye motion. Light, typically infrared, is reflected from the eye and sensed by a video camera or some other specially designed optical sensor. The information is then analyzed to extract eye rotation from changes in reflections. Video-based eye trackers typically use the corneal reflection (the first Purkinje image) and the center of the pupil as features to track over time. A more sensitive type of eye-tracker, the dual-Purkinje eye tracker, uses reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. A still more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates. Optical methods, particularly those based on video recording, are widely used for gaze-tracking and are favored for being non-invasive and inexpensive. Electric potential measurement The third category uses electric potentials measured with electrodes placed around the eyes. The eyes are the origin of a steady electric potential field which can also be detected in total darkness and if the eyes are closed. It can be modelled to be generated by a dipole with its positive pole at the cornea and its negative pole at the retina. The electric signal that can be derived using two pairs of contact electrodes placed on the skin around one eye is called Electrooculogram (EOG). If the eyes move from the centre position towards the periphery, the retina approaches one electrode while the cornea approaches the opposing one. This change in the orientation of the dipole and consequently the electric potential field results in a change in the measured EOG signal. 
Inversely, by analysing these changes, eye movement can be tracked. Due to the discretisation given by the common electrode setup, two separate movement components – a horizontal and a vertical – can be identified. A third EOG component is the radial EOG channel, which is the average of the EOG channels referenced to some posterior scalp electrode. This radial EOG channel is sensitive to the saccadic spike potentials stemming from the extra-ocular muscles at the onset of saccades, and allows reliable detection of even miniature saccades. Due to potential drifts and variable relations between the EOG signal amplitudes and the saccade sizes, it is challenging to use EOG for measuring slow eye movement and detecting gaze direction. EOG is, however, a very robust technique for measuring saccadic eye movement associated with gaze shifts and detecting blinks. Contrary to video-based eye-trackers, EOG allows recording of eye movements even with the eyes closed, and can thus be used in sleep research. It is a very lightweight approach that, in contrast to current video-based eye-trackers, requires low computational power, works under different lighting conditions and can be implemented as an embedded, self-contained wearable system. It is thus the method of choice for measuring eye movement in mobile daily-life situations and REM phases during sleep. The major disadvantage of EOG is its relatively poor gaze-direction accuracy compared to a video tracker. That is, it is difficult to determine with good accuracy exactly where a subject is looking, though the time of eye movements can be determined. Technologies and techniques The most widely used current designs are video-based eye-trackers. A camera focuses on one or both eyes and records eye movement as the viewer looks at some kind of stimulus. Most modern eye-trackers use the center of the pupil and infrared / near-infrared non-collimated light to create corneal reflections (CR). The vector between the pupil center and the corneal reflections can be used to compute the point of regard on a surface or the gaze direction. A simple calibration procedure for the individual is usually needed before using the eye tracker (a minimal illustrative mapping is sketched below). Two general types of infrared / near-infrared (also known as active light) eye-tracking techniques are used: bright-pupil and dark-pupil. Their difference is based on the location of the illumination source with respect to the optics. If the illumination is coaxial with the optical path, then the eye acts as a retroreflector as the light reflects off the retina, creating a bright-pupil effect similar to red eye. If the illumination source is offset from the optical path, then the pupil appears dark because the retroreflection from the retina is directed away from the camera. Bright-pupil tracking creates greater iris/pupil contrast, allowing more robust eye-tracking regardless of iris pigmentation, and greatly reduces interference caused by eyelashes and other obscuring features. It also allows tracking in lighting conditions ranging from total darkness to very bright. Another, less used, method is known as passive light. It uses visible light for illumination, which may cause some distraction to users. Another challenge with this method is that the contrast of the pupil is lower than in the active-light methods; therefore, the center of the iris is used for calculating the gaze vector instead. This calculation needs to detect the boundary of the iris and the white sclera (limbus tracking). 
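The pupil–corneal-reflection vector and calibration step described above are typically combined by fitting a regression from measured vectors to known on-screen calibration targets. The following Python sketch is a minimal illustration of one common approach, a second-order polynomial mapping fitted with least squares; the function names, the nine-point grid and the numeric values are assumptions made for the example, not any vendor's API.

```python
import numpy as np

def fit_calibration(pcr_vectors, screen_points):
    """Fit a 2nd-order polynomial mapping from pupil-corneal-reflection (PCR)
    vectors to screen coordinates, using calibration targets the subject fixated."""
    vx, vy = pcr_vectors[:, 0], pcr_vectors[:, 1]
    # Design matrix of polynomial terms: 1, x, y, x*y, x^2, y^2
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    # Solve one least-squares problem per screen axis
    coeff_x, *_ = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)
    return coeff_x, coeff_y

def gaze_point(pcr_vector, coeff_x, coeff_y):
    """Map a single PCR vector to an estimated on-screen gaze point."""
    vx, vy = pcr_vector
    terms = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return float(terms @ coeff_x), float(terms @ coeff_y)

# Example: nine-point calibration grid (PCR vectors in camera pixels,
# targets in screen pixels), then map a new sample.
pcr = np.array([[-12, -8], [0, -8], [12, -8],
                [-12,  0], [0,  0], [12,  0],
                [-12,  8], [0,  8], [12,  8]], dtype=float)
targets = np.array([[160, 120], [960, 120], [1760, 120],
                    [160, 540], [960, 540], [1760, 540],
                    [160, 960], [960, 960], [1760, 960]], dtype=float)
cx, cy = fit_calibration(pcr, targets)
print(gaze_point((6.0, -4.0), cx, cy))  # estimated (x, y) on screen
```

In practice the mapping is refit for each recording session, since even small changes in camera or head position shift the measured pupil–corneal-reflection vectors.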
Limbus tracking also presents a challenge for vertical eye movements, due to obstruction by the eyelids. Eye-tracking setups vary greatly. Some are head-mounted, some require the head to be stable (for example, with a chin rest), and some function remotely and automatically track the head during motion. Most use a sampling rate of at least 30 Hz. Although 50/60 Hz is more common, today many video-based eye trackers run at 240, 350 or even 1000/1250 Hz, speeds needed to capture fixational eye movements or correctly measure saccade dynamics. Eye movements are typically divided into fixations and saccades – when the eye gaze pauses in a certain position, and when it moves to another position, respectively. The resulting series of fixations and saccades is called a scanpath. Smooth pursuit describes the eye following a moving object. Fixational eye movements include microsaccades: small, involuntary saccades that occur during attempted fixation. Most information from the eye is made available during a fixation or smooth pursuit, but not during a saccade. Scanpaths are useful for analyzing cognitive intent, interest, and salience. Other biological factors (some as simple as gender) may affect the scanpath as well. Eye tracking in human–computer interaction (HCI) typically investigates the scanpath for usability purposes, or as a method of input in gaze-contingent displays, also known as gaze-based interfaces. Data presentation Interpretation of the data recorded by the various types of eye-trackers employs a variety of software that animates or visually represents it, so that the visual behavior of one or more users can be graphically summarised. The video is generally coded manually to identify the AOIs (areas of interest), or more recently with the help of artificial intelligence. Graphical presentation is rarely the basis of research results, since such presentations are limited in terms of what can be analysed – research relying on eye-tracking, for example, usually requires quantitative measures of the eye movement events and their parameters. The following visualisations are the most commonly used: Animated representations of a point on the interface This method is used when the visual behavior is examined individually, indicating where the user focused their gaze at each moment, complemented with a small path that indicates the previous saccade movements, as seen in the image. Static representations of the saccade path This is fairly similar to the one described above, with the difference that this is a static method. A higher level of expertise than with the animated ones is required to interpret this. Heat maps An alternative static representation, used mainly for the aggregated analysis of the visual exploration patterns in a group of users. In these representations, the ‘hot’ zones or zones with higher density designate where the users focused their gaze (not their attention) with a higher frequency. Heat maps are the best-known visualization technique for eye-tracking studies. Blind zone maps, or focus maps This method is a simplified version of the heat maps, in which the zones that received less visual attention are displayed clearly, thus allowing for an easier understanding of the most relevant information; that is to say, it shows which zones were not seen by the users. Saliency maps Similar to heat maps, a saliency map illustrates areas of focus by brightly displaying the attention-grabbing objects over an initially black canvas. The more focus is given to a particular object, the brighter it will appear. 
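Before scanpaths, heat maps or focus maps can be produced, the raw gaze samples are usually segmented into fixations and saccades. The Python sketch below is a minimal, illustrative dispersion-threshold (I-DT style) fixation detector; the thresholds, sample format and function names are assumptions for the example rather than the behaviour of any particular eye tracker or analysis package.

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Group raw gaze samples into fixations using a dispersion threshold.

    samples: list of (timestamp_seconds, x, y) tuples, in degrees or pixels.
    max_dispersion: maximum (x-range + y-range) allowed within one fixation.
    min_duration: minimum fixation duration in seconds.
    Returns a list of fixations as (start_time, end_time, centroid_x, centroid_y).
    """
    fixations = []
    window = []
    for sample in samples:
        window.append(sample)
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # The window no longer looks like a fixation: emit it if long enough,
            # then start a new window from the current sample.
            emit(window[:-1], fixations, min_duration)
            window = [sample]
    emit(window, fixations, min_duration)
    return fixations

def emit(window, fixations, min_duration):
    """Append the window as a fixation if it satisfies the duration criterion."""
    if len(window) < 2:
        return
    duration = window[-1][0] - window[0][0]
    if duration >= min_duration:
        cx = sum(s[1] for s in window) / len(window)
        cy = sum(s[2] for s in window) / len(window)
        fixations.append((window[0][0], window[-1][0], cx, cy))

# Example with a 60 Hz recording: two fixations separated by a saccade.
gaze = [(i / 60.0, 10.0, 10.0) for i in range(12)] + \
       [(i / 60.0, 25.0, 5.0) for i in range(12, 24)]
print(detect_fixations(gaze))
```

The resulting fixation centroids and durations are the quantities that are typically aggregated across users to build the heat maps and blind zone maps described above.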
Eye-tracking vs. gaze-tracking Eye-trackers necessarily measure the rotation of the eye with respect to some frame of reference. This is usually tied to the measuring system. Thus, if the measuring system is head-mounted, as with EOG or a video-based system mounted to a helmet, then eye-in-head angles are measured. To deduce the line of sight in world coordinates, the head must be kept in a constant position or its movements must be tracked as well. In these cases, head direction is added to eye-in-head direction to determine gaze direction. If the measuring system is table-mounted, as with scleral search coils or table-mounted camera (“remote”) systems, then gaze angles are measured directly in world coordinates. Typically, in these situations head movements are prohibited. For example, the head position is fixed using a bite bar or a forehead support. Then a head-centered reference frame is identical to a world-centered reference frame. Or colloquially, the eye-in-head position directly determines the gaze direction. Some results are available on human eye movements under natural conditions where head movements are allowed as well. The relative position of eye and head, even with constant gaze direction, influences neuronal activity in higher visual areas. Practice A great deal of research has gone into studies of the mechanisms and dynamics of eye rotation, but the goal of eye tracking is most often to estimate gaze direction. Users may be interested in what features of an image draw the eye, for example. It is important to realize that the eye tracker does not provide absolute gaze direction, but rather can measure only changes in gaze direction. To determine precisely what a subject is looking at, some calibration procedure is required in which the subject looks at a point or series of points, while the eye tracker records the value that corresponds to each gaze position. (Even those techniques that track features of the retina cannot provide exact gaze direction because there is no specific anatomical feature that marks the exact point where the visual axis meets the retina, if indeed there is such a single, stable point.) An accurate and reliable calibration is essential for obtaining valid and repeatable eye movement data, and this can be a significant challenge for non-verbal subjects or those who have unstable gaze. Each method of eye-tracking has advantages and disadvantages, and the choice of an eye-tracking system depends on considerations of cost and application. There are offline methods and online procedures like AttentionTracking. There is a trade-off between cost and sensitivity, with the most sensitive systems costing many tens of thousands of dollars and requiring considerable expertise to operate properly. Advances in computer and video technology have led to the development of relatively low-cost systems that are useful for many applications and fairly easy to use. Interpretation of the results still requires some level of expertise, however, because a misaligned or poorly calibrated system can produce wildly erroneous data. Eye-tracking while driving a car in a difficult situation The eye movements of two groups of drivers were filmed with a special head camera by a team of the Swiss Federal Institute of Technology: novice and experienced drivers had their eye movements recorded while approaching a bend on a narrow road. The series of images has been condensed from the original film frames to show two eye fixations per image for better comprehension. 
Each of these stills corresponds to approximately 0.5 seconds in real time. The series of images shows an example of eye fixations #9 to #14 of a typical novice and an experienced driver. Comparison of the top images shows that the experienced driver checks the curve and even has fixation No. 9 to spare to look aside, while the novice driver needs to check the road and estimate his distance to the parked car. In the middle images, the experienced driver is now fully concentrating on the location where an oncoming car could be seen. The novice driver concentrates his view on the parked car. In the bottom image the novice is busy estimating the distance between the left wall and the parked car, while the experienced driver can use his peripheral vision for that and still concentrate his view on the dangerous point of the curve: if a car appears there, he has to give way, i.e. stop to the right instead of passing the parked car. More recent studies have also used head-mounted eye tracking to measure eye movements during real-world driving conditions. Eye-tracking of younger and elderly people while walking While walking, elderly subjects depend more on foveal vision than do younger subjects. Their walking speed is decreased by a limited visual field, probably caused by deteriorated peripheral vision. Younger subjects make use of both their central and peripheral vision while walking. Their peripheral vision allows faster control over the process of walking. Applications A wide variety of disciplines use eye-tracking techniques, including cognitive science; psychology (notably psycholinguistics; the visual world paradigm); human-computer interaction (HCI); human factors and ergonomics; marketing research and medical research (neurological diagnosis). Specific applications include the tracking of eye movement in language reading, music reading, human activity recognition, the perception of advertising, the playing of sports, distraction detection and cognitive load estimation of drivers and pilots, and as a means of operating computers by people with severe motor impairment. In the field of virtual reality, eye tracking is used in head-mounted displays for a variety of purposes, including to reduce processing load by only rendering the graphical area within the user's gaze. Commercial applications In recent years, the increased sophistication and accessibility of eye-tracking technologies have generated a great deal of interest in the commercial sector. Applications include web usability, advertising, sponsorship, package design and automotive engineering. In general, commercial eye-tracking studies function by presenting a target stimulus to a sample of consumers while an eye tracker records eye activity. Examples of target stimuli may include websites, television programs, sporting events, films and commercials, magazines and newspapers, packages, shelf displays, consumer systems (ATMs, checkout systems, kiosks) and software. The resulting data can be statistically analyzed and graphically rendered to provide evidence of specific visual patterns. By examining fixations, saccades, pupil dilation, blinks and a variety of other behaviors, researchers can determine a great deal about the effectiveness of a given medium or product. While some companies complete this type of research internally, there are many private companies that offer eye-tracking services and analysis. One field of commercial eye-tracking research is web usability. 
While traditional usability techniques are often quite powerful in providing information on clicking and scrolling patterns, eye-tracking offers the ability to analyze user interaction between the clicks and how much time a user spends between clicks, thereby providing valuable insight into which features are the most eye-catching, which features cause confusion and which are ignored altogether. Specifically, eye-tracking can be used to assess search efficiency, branding, online advertisements, navigation usability, overall design and many other site components. Analyses may target a prototype or competitor site in addition to the main client site. Eye-tracking is commonly used in a variety of different advertising media. Commercials, print ads, online ads and sponsored programs are all conducive to analysis with current eye-tracking technology. One example is the analysis of eye movements over advertisements in the Yellow Pages. One study focused on what particular features caused people to notice an ad, whether they viewed ads in a particular order and how viewing times varied. The study revealed that ad size, graphics, color, and copy all influence attention to advertisements. Knowing this allows researchers to assess in great detail how often a sample of consumers fixates on the target logo, product or ad. Hence an advertiser can quantify the success of a given campaign in terms of actual visual attention. Another example of this is a study that found that in a search engine results page, authorship snippets received more attention than the paid ads or even the first organic result. Yet another example of commercial eye-tracking research comes from the field of recruitment. A study analyzed how recruiters screen Linkedin profiles and presented results as heat maps. Safety applications Scientists in 2017 constructed a Deep Integrated Neural Network (DINN) out of a Deep Neural Network and a convolutional neural network. The goal was to use deep learning to examine images of drivers and determine their level of drowsiness by "classify[ing] eye states." With enough images, the proposed DINN could ideally determine when drivers blink, how often they blink, and for how long. From there, it could judge how tired a given driver appears to be, effectively conducting an eye-tracking exercise. The DINN was trained on data from over 2,400 subjects and correctly diagnosed their states 96%-99.5% of the time. Most other artificial intelligence models performed at rates above 90%. This technology could ideally provide another avenue for driver drowsiness detection. Game theory applications In a 2019 study, a Convolutional Neural Network (CNN) was constructed with the ability to identify individual chess pieces the same way other CNNs can identify facial features. It was then fed eye-tracking input data from 30 chess players of various skill levels. With this data, the CNN used gaze estimation to determine parts of the chess board to which a player was paying close attention. It then generated a saliency map to illustrate those parts of the board. Ultimately, the CNN would combine its knowledge of the board and pieces with its saliency map to predict the players' next move. Regardless of the training dataset the neural network system was trained upon, it predicted the next move more accurately than if it had selected any possible move at random, and the saliency maps drawn for any given player and situation were more than 54% similar. 
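The drowsiness work described above ultimately reduces to per-frame eye-state classification followed by simple statistics over time. As a rough illustration of that second step only, the Python sketch below computes blink counts and a PERCLOS-style eyelid-closure percentage from a stream of per-frame open/closed predictions; the threshold values and function names are assumptions for the example and are not taken from the 2017 study.

```python
def drowsiness_metrics(eye_closed, fps=30, perclos_threshold=0.15):
    """Summarise a sequence of per-frame eye-state predictions.

    eye_closed: list of booleans, True if the eye was classified as closed
                in that frame (e.g. by a CNN eye-state classifier).
    fps: frame rate of the video stream.
    Returns blink count, mean blink duration (s), PERCLOS, and a crude flag.
    """
    blinks = []
    run = 0
    for closed in eye_closed:
        if closed:
            run += 1
        elif run > 0:
            blinks.append(run)   # a closed run ended: count it as one blink
            run = 0
    if run > 0:
        blinks.append(run)

    perclos = sum(eye_closed) / len(eye_closed)        # fraction of frames closed
    mean_blink_s = (sum(blinks) / len(blinks) / fps) if blinks else 0.0
    return {
        "blink_count": len(blinks),
        "mean_blink_duration_s": round(mean_blink_s, 3),
        "perclos": round(perclos, 3),
        "possibly_drowsy": perclos > perclos_threshold or mean_blink_s > 0.5,
    }

# Example: 10 seconds of video at 30 fps with two short blinks and one long one.
frames = [False] * 300
for start, length in [(50, 4), (150, 5), (250, 20)]:
    for i in range(start, start + length):
        frames[i] = True
print(drowsiness_metrics(frames))
```

A classifier such as the DINN mentioned above would supply the per-frame eye-state values; in practice the thresholds used to flag drowsiness would be tuned against labelled data rather than chosen by hand as here.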
Assistive technology People with severe motor impairment can use eye tracking for interacting with computers, as it is faster than single-switch scanning techniques and intuitive to operate. Motor impairment caused by cerebral palsy or amyotrophic lateral sclerosis often affects speech, and users with severe speech and motor impairment (SSMI) use a type of software known as an Augmentative and Alternative Communication (AAC) aid, which displays icons, words and letters on screen and uses text-to-speech software to generate spoken output. In recent times, researchers have also explored eye tracking to control robotic arms and powered wheelchairs. Eye tracking is also helpful in analysing visual search patterns, detecting the presence of nystagmus and detecting early signs of learning disability by analysing eye gaze movement during reading. Aviation applications Eye tracking has already been studied for flight safety by comparing scan paths and fixation duration to evaluate the progress of pilot trainees, for estimating pilots' skills, and for analyzing a crew's joint attention and shared situational awareness. Eye-tracking technology has also been explored for interacting with helmet-mounted display systems and multi-functional displays in military aircraft. Studies were conducted to investigate the utility of eye trackers for head-up target locking and head-up target acquisition in helmet-mounted display systems (HMDS). Pilots' feedback suggested that even though the technology is promising, its hardware and software components have yet to mature. Research on interacting with multi-functional displays in a simulator environment showed that eye tracking can significantly improve response times and reduce perceived cognitive load compared with existing systems. Further, research has also investigated utilizing measurements of fixation and pupillary responses to estimate a pilot's cognitive load. Estimating cognitive load can help in designing next-generation adaptive cockpits with improved flight safety. Eye tracking is also useful for detecting pilot fatigue. Automotive applications In recent times, eye-tracking technology has been investigated in the automotive domain in both passive and active ways. The National Highway Traffic Safety Administration measured glance duration for undertaking secondary tasks while driving and used it to promote safety by discouraging the introduction of excessively distracting devices in vehicles. In addition to distraction detection, eye tracking is also used to interact with IVIS. Though initial research investigated the efficacy of eye-tracking systems for interaction with HDDs (head-down displays), these still required drivers to take their eyes off the road while performing a secondary task. Recent studies have investigated gaze-controlled interaction with HUDs (head-up displays), which eliminates eyes-off-road distraction. Eye tracking is also used to monitor the cognitive load of drivers to detect potential distraction. Though researchers have explored different methods of estimating drivers' cognitive load from different physiological parameters, the use of ocular parameters opened up a new way of using existing eye trackers to monitor drivers' cognitive load in addition to supporting interaction with IVIS. Entertainment applications The 2021 video game Before Your Eyes registers and reads the player's blinking, and uses it as the main way of interacting with the game. 
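Gaze-based interaction of the kind described above for AAC users and for in-vehicle or cockpit displays is commonly implemented with dwell-time selection: an on-screen element is activated when the gaze rests on it for long enough. The Python sketch below is a minimal, assumed illustration of that idea; the region layout, dwell threshold and function names are invented for the example and do not describe any particular AAC product or display system.

```python
def dwell_select(gaze_samples, regions, dwell_time=0.8):
    """Yield (timestamp, region_name) whenever gaze has rested on one region
    for at least `dwell_time` seconds.

    gaze_samples: iterable of (timestamp_seconds, x, y).
    regions: dict mapping a name to a bounding box (x0, y0, x1, y1).
    """
    current, entered = None, None
    fired = False
    for t, x, y in gaze_samples:
        hit = None
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        if hit != current:                 # gaze moved to a different region
            current, entered, fired = hit, t, False
        elif hit is not None and not fired and t - entered >= dwell_time:
            fired = True                   # trigger the selection only once
            yield t, hit

# Example: an on-screen keyboard row; gaze rests on "B" for about one second.
keys = {"A": (0, 0, 100, 100), "B": (100, 0, 200, 100), "C": (200, 0, 300, 100)}
samples = [(i / 60.0, 150, 50) for i in range(60)]   # 1 s of gaze inside "B"
for event in dwell_select(samples, keys):
    print("selected", event)
```

The dwell threshold trades speed against accidental activations (the so-called Midas touch problem), and real systems usually make it configurable per user.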
Engineering applications The widespread use of eye-tracking technology has also drawn attention to its application in empirical software engineering in recent years. Researchers use eye-tracking technology and data-analysis techniques to investigate the understandability of software engineering concepts. These include the understandability of business process models and of diagrams used in software engineering, such as UML activity diagrams and EER diagrams. Eye-tracking metrics such as fixation, scan-path, scan-path precision, scan-path recall, and fixations on areas of interest or relevant regions are computed, analyzed and interpreted in terms of model and diagram understandability. The findings are used to enhance the understandability of diagrams and models with appropriate model-related solutions, and by improving person-related factors such as the working-memory capacity, cognitive load, learning style and strategy of the software engineers and modelers. Privacy concerns With eye tracking projected to become a common feature in various consumer electronics, including smartphones, laptops and virtual reality headsets, concerns have been raised about the technology’s impact on consumer privacy. With the aid of machine learning techniques, eye-tracking data may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health condition. If such inferences are drawn without a user's awareness or approval, this can be classified as an inference attack. Eye activities are not always under volitional control, e.g., "stimulus-driven glances, pupil dilation, ocular tremor, and spontaneous blinks mostly occur without conscious effort, similar to digestion and breathing". Therefore, it can be difficult for eye-tracking users to estimate or control the amount of information they reveal about themselves. See also AttentionTracking Eye movement Eye movement in language reading Eye movement in music reading Eye tracking on the ISS Fovea Foveated imaging Gaze-contingency paradigm Marketing research Mouse-Tracking Peripheral vision Saccade Screen reading visage SDK Notes References Romano Bergstrom, Jennifer (2014). Eye Tracking in User Experience Design. Morgan Kaufmann. Bojko, Aga (2013). Eye Tracking the User Experience: A Practical Guide to Research. Rosenfeld Media. Commercial eye tracking Attention Cognitive science Articles containing video clips Human eye History of human–computer interaction Market research Multimodal interaction Promotion and marketing communications Usability Vision Web design Applications of computer vision
645730
https://en.wikipedia.org/wiki/MontaVista
MontaVista
MontaVista Software is a company that develops embedded Linux system software, development tools, and related software. Its products are made for other corporations developing embedded systems such as automotive electronics, communications equipment, mobile phones, and other electronic devices and infrastructure. MontaVista is based in Santa Clara, California, and was founded in 1999 by James "Jim" Ready (formerly at Mentor Graphics and creator of Versatile Real-Time Executive (VRTX)) and others. On November 10, 2009, Cavium Networks announced that it had signed a definitive agreement to purchase MontaVista for $50 million. After Cavium was acquired by Marvell, MontaVista operated as an independent entity. Products Linux On May 12, 2009, MontaVista announced MontaVista Linux 6 (MVL6) comprising Market Specific Distributions, MontaVista Integration Platform, Software Development Kit, MontaVista Zone Content Server, and support and services. There are several differences between MVL6 and prior MontaVista Linux products. The main ones are: Market Specific Distributions (MSD) – Linux operating systems (kernel + userland) optimized for each specific semiconductor vendor's hardware. MontaVista Integration Platform – based on BitBake, analogous to make, which analyzes a set of directives and then builds a task dependency tree to satisfy a user command. BitBake then executes the defined tasks to completion. MontaVista Zone Content Server – accessed from behind a proxy server, or local web mirror for offline operations, to fetch software and updates. Rather than depending on a mix of public HTTP, Concurrent Versions System (CVS), Git, and Subversion servers across the Internet, there is one source for each original source archive and patch. MontaVista Linux (formerly named Hard Hat Linux) is a Linux distribution that has been enhanced to become a full real-time operating system. The work on real-time performance has since continued to a point where MontaVista claims to support hard real-time tasks on embedded Linux as of MontaVista Linux 4.0, with response times as fast as other real-time operating systems. MontaVista sells subscriptions, which consist of software, documentation, and technical support. The software includes a Linux kernel and toolchain aimed at a specific hardware configuration, collectively called a Linux Support Package (LSP), and other integrated tools including the Eclipse-based DevRocket integrated development environment (IDE). The distribution is available in three editions, each aimed at different market segments: Professional Edition, Carrier Grade Edition, and Mobilinux. The MontaVista Linux toolkit includes specific code libraries to easily migrate from Wind River Systems' VxWorks and the pSOS operating systems. OpenCGX Project OpenCGX is an open and free-to-use embedded Linux distribution from MontaVista Software LLC. OpenCGX is based on MontaVista’s eleventh generation Carrier Grade Linux. Engineers can quickly jumpstart their ARM and x86 development with a full embedded Linux distribution that is easily customizable. At its introduction, OpenCGX was based on Yocto 2.4 with Linux kernel 4.14 (or the latest LTS kernel) and the GNU 7.2 toolchain. Carrier Grade Express (CGX) MontaVista Linux Carrier Grade eXpress (CGX) is MontaVista's main operating system product that delivers Carrier Grade reliability, security, and serviceability in a highly configurable, flexible package with consistent high quality. 
CGX features address a large embedded device segment including networking and communications, instrumentation and control, aerospace and defense, SOHO devices, medical electronics and the "Internet of Things (IoT)" market, besides general embedded devices. CGX is a Carrier Grade distribution, validated for CGL spec version 5.0. MVShield MVShield is MontaVista's professional maintenance and support program available for Carrier-Grade Linux products and other distributions such as CentOS or Yocto. One of MontaVista’s most popular support programs has been MVShield for CentOS, which is best suited for customers who utilize CentOS in markets like network and wireless infrastructure, medical, and military-aerospace. MVShield for Yocto is MontaVista’s professional services offering to support customers’ open-source Yocto baselines, allowing access to industry-leading support and maintenance services without any migration effort to commercial Linux baselines. Carrier Grade Edition MontaVista Linux Carrier Grade Edition (CGE) is a commercial-grade Linux development platform for developers working with reliability, availability, and serviceability (RAS) managed hardware (Hardware Platform Interface (HPI), Intelligent Platform Management Interface (IPMI)) or custom hardware, who need long-term support and high availability. Carrier Grade Linux is governed by the Linux Foundation CGL working group. DevRocket MontaVista DevRocket is a set of Eclipse plug-ins for facilitating application and system development with MontaVista Linux. The DevRocket integrated development environment (IDE) runs on Linux, Solaris and Windows. It uses the Eclipse C++ Development Toolkit (CDT). Starting with DevRocket 5.0, users can add MontaVista's plug-ins into an existing Eclipse installation, or install Eclipse with the plugins already loaded. DevRocket is available in two varieties: a Platform Developer Kit (PDK) and an Application Developer Kit (ADK). The Platform Developer Kit includes the ability to communicate with a target (RSE, SSH), create and manage file systems, debugging (kgdb), and performance tuning (memory leak, memory use, system profiling). The Application Developer Kit includes a virtual target for developing applications earlier in the development cycle, one-click edit/compile/debug, and performance tuning. Services Custom Hardware Enablement MontaVista supports the x86, ARM, MIPS, and PowerPC architectures with Board Support Packages (BSPs) and associated kernel, driver, and userland adaptations on SoCs or third-party reference hardware boards. However, most companies develop and then ship their products based on custom hardware created for their specific requirements. MontaVista can adapt MontaVista Linux products to fit customers' software and hardware environments. RTOS/Legacy Migration MontaVista's RTOS/Legacy Migration services help development teams move from legacy products using a home-grown or commercial real-time operating system (RTOS) to embedded Linux. System Certification MontaVista Linux is being used in applications within IT and telecom that need to be certified according to Common Criteria, up to the EAL4+ level. Evaluation Assurance Level 4+ (EAL4+) is commonly used to secure connectivity in critical environments, such as fire and police departments, or aviation and industrial control systems. With the help of EAL4+ these critical environments can, for example, ensure secure and safe phone calls, control communication in-flight and for the assembly line, and achieve secure internet access. 
Training MontaVista customer education provides education and training in developing intelligent device applications using MontaVista Linux. With the skills gained, development teams are able to reduce application development cycles while minimizing development risk. Legacy products Several legacy products are available from MontaVista under long-term support agreements. Mobilinux MontaVista Mobilinux is for wireless handsets and other mobile devices such as Global Positioning System (GPS) devices, portable medical devices, and wireless POS terminals. Mobilinux's key features include dynamic power management, real-time performance, fast booting, and small memory footprint. Professional Edition MontaVista Professional Edition (Pro) is for general embedded Linux developers who want all the benefits of an open source development platform (open source, Linux, easily accessible software, etc.) and added MontaVista benefits including higher quality (fewer bugs), integration with open source tools for a given hardware architecture, and support. Pro is for intelligent device markets, including networking and communications, instrumentation and control, aerospace and defense, small office/home office (SOHO) devices, and medical devices. Future development of MontaVista Pro has been folded into MontaVista Linux, effectively ending this as a separate edition starting version 5.0.24. Open source contributions MontaVista has a history of being a major contributor to the Linux kernel and the open source community. From the start, Jim Ready said he wanted to make it "100% pure Linux" under the GPL. The core changes to make MontaVista Linux into a real-time operating system were made by Nigel Gamble and later updated by Robert Love. Robert Love submitted the changes to the Linux kernel in 2001. The Linux 2.6 stable kernel series is the first to include similar features, such as priority-based preemption. As of 2008, MontaVista had contributed 1.2% of the Linux kernel, making it the 9th-largest corporate contributor to the Linux kernel, according to a survey by the Linux Foundation. MontaVista has also spun off independent open source projects based on several of its features, including dynamic power management, high resolution POSIX timers, the pramfs file system, and the openais implementation of the Service Availability Forum's (SA Forum) Application Interface Specification. Distribution Other versions of MontaVista Linux are used in devices made by a number of partners, including Sony Bravia TVs, NEC routers, and others, especially in Japan. A version of MontaVista Linux OS is used in Dell Latitude E4200 and E4300 notebooks to provide the Latitude ON feature. Cisco NX-OS is based on HardHat Linux. Mobile phones Motorola became the first company to use Linux on a mobile phone when it released the Motorola A760 to the Chinese market on February 16, 2003. Motorola chose to use MontaVista Linux in the Motorola A760 and future Linux-based phones, despite the fact that Motorola was a founding member of the competing Symbian OS. Since then, Motorola has increased focus on its Linux platform and publicly stated that the future platform for all its mid- and high-tier mobile phones will be Linux with Java, and other phone manufacturers NEC and Panasonic have developed a common platform based on MontaVista Linux. 
See also Green Hills Software LynuxWorks Versatile Real-Time Executive (VRTX) TimeSys Cavium Networks References External links Linux companies Embedded Linux Software companies based in California Companies based in Santa Clara, California Software companies established in 1999 Embedded Linux distributions Software companies of the United States 1999 establishments in California 1999 establishments in the United States Companies established in 1999
13315805
https://en.wikipedia.org/wiki/Skyhook%20Wireless
Skyhook Wireless
Skyhook is a location technology company based in Boston, MA, that specializes in location positioning, context and intelligence. Founded in 2003, Skyhook originally began by geolocating Wi-Fi access points and evolved with the idea that hybrid positioning technology, which incorporates Wi-Fi, GPS, cell towers, IP address and device sensors, could improve device location. History Skyhook was founded in 2003 by Ted Morgan and Michael Shean. Skyhook's database was initially gathered through wardriving, when the company sent teams of drivers around the United States, Canada, Western Europe and selected Asian countries to map out Wi-Fi hot spots. Skyhook powers location-based services for companies such as Apple, Samsung, Sony, HP, Dell, Sharp, Philips and MapQuest. The firm received its first patent in 2007, and as of early 2020 holds over 650 patents across the United States and foreign markets. In 2010 Skyhook sued Google over the use of Wi-Fi locator technology in cell phones. The complaint claimed that Andy Rubin, Google's Vice President for Engineering, gave Sanjay K. Jha, Chief Executive of Motorola's mobile devices division, a “stop ship” order, preventing Motorola from shipping phones with the Android operating system using the Skyhook software. The litigation was settled in 2015: Skyhook received $90 million in a settlement with the tech giant, a third of which was consumed by legal fees. The figure was revealed in a securities filing by Liberty Broadband Corp., Skyhook's Colorado-based parent company. In February 2014, Skyhook Wireless was acquired by TruePosition Inc, a subsidiary of Liberty Broadband. In 2016, the two companies merged under the Skyhook brand, which now sits under Liberty Broadband, part of the Liberty Media family. In 2016 Skyhook launched new products dedicated to the advertising technology market: Retailer Personas, Power Personas, and On-Demand Personas. These solutions, based on Skyhook's processing of billions of location points, help marketers finely target consumers based on where they have been in order to personalize mobile marketing campaigns. In February 2019 Skyhook announced that it was working closely with Qualcomm Technologies to bring Wi-Fi positioning and location-assistance services to platforms based on Qualcomm Snapdragon Wear. In September 2019 Mozilla announced changes to commercial use of its Mozilla Location Service, which meant that SailfishOS location services could no longer use the service. The changes were made due to patent infringement allegations by Skyhook. In February 2020 Deutsche Telekom announced that Skyhook was among its new technology partners for IoT Solution Optimizer, an ecosystem the company developed to "scale-up IoT business faster, and support enterprises of all sizes wanting to succeed in the Internet of Things." In April 2020 Skyhook partnered with Kyocera to provide accurate location services to the DuraXV Extreme, a rugged flip phone. Coverage To pinpoint location, Skyhook uses a reference network composed of the known locations of over 5 billion Wi-Fi access points and 250 million cellular towers. Skyhook's coverage area includes most major metro areas in North America, Europe, Asia and Australia. The system can also be used to enhance the performance of GPS-enabled devices where GPS reception is weak. 
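Wi-Fi positioning systems of this general kind estimate a device's location from the known positions of the access points it can hear and from received signal strength. As a rough, generic illustration (not Skyhook's proprietary algorithm or API), the Python sketch below computes a signal-strength-weighted centroid over a small reference database of access points; the MAC addresses, coordinates and weighting rule are invented for the example.

```python
# Known access-point locations, keyed by BSSID (values are invented).
AP_DATABASE = {
    "aa:bb:cc:00:00:01": (42.3601, -71.0589),
    "aa:bb:cc:00:00:02": (42.3605, -71.0580),
    "aa:bb:cc:00:00:03": (42.3598, -71.0595),
}

def estimate_position(scan, ap_db=AP_DATABASE):
    """Estimate (lat, lon) from a Wi-Fi scan using a weighted centroid.

    scan: dict mapping BSSID -> RSSI in dBm (e.g. -45 is strong, -90 is weak).
    Stronger access points (less negative RSSI) receive larger weights.
    """
    total_weight = 0.0
    lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in ap_db:
            continue                          # unknown AP: no contribution
        weight = 1.0 / max(1.0, -rssi - 30)   # crude monotonic weighting
        ap_lat, ap_lon = ap_db[bssid]
        lat += weight * ap_lat
        lon += weight * ap_lon
        total_weight += weight
    if total_weight == 0:
        return None                           # nothing recognised; fall back to GPS/cell/IP
    return lat / total_weight, lon / total_weight

print(estimate_position({
    "aa:bb:cc:00:00:01": -48,   # strong
    "aa:bb:cc:00:00:02": -71,   # weaker
    "ff:ff:ff:00:00:09": -60,   # not in the reference database
}))
```

A production system would blend such Wi-Fi estimates with GPS, cell-tower and IP-based signals, which is the hybrid positioning approach described above.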
Precision Location SDK Skyhook offers a software development kit (SDK), which allows developers to create location-enabled applications using Skyhook's software-only Hybrid Positioning System on the platform of their choice. The SDK supports Android 2.2 (Froyo), 2.3.x (Gingerbread), 4.0.x (Ice Cream Sandwich), and 4.1.x (Jelly Bean), 4.4 (KitKat), 5.0-5.1 (Lollipop), and 6.0 (Marshmallow) including forked platforms such as the Kindle Fire, along with Linux, Windows, and Mac OS X. Context SDK In 2013, Skyhook launched Context, which uses Skyhook's location network along with venue data to give mobile users awareness of the world around them. It enables personalized mobile experiences and enhanced revenue opportunities using 1st Party Location Network and precisely located venues. The SDK requires iOS 6.0+ and a device with region monitoring support: iPhone 4+, iPad (Wi-Fi only) 3+, iPad (Wi-Fi+Cell) 2+, iPad mini, iPad Air, iPod Touch 5+. The Android SDK is supported on Android 2.2 (Froyo), 2.3.x (Gingerbread), 4.0.x (Ice Cream Sandwich), 4.1.x-4.3 (Jelly Bean), 4.4 (KitKat), 5.0-5.1 (Lollipop), and 6.0 (Marshmallow) including forked platforms such as the Kindle Fire. Competitors Skyhook's main competitors include Google, HERE, Unwired Labs, Mozilla and Combain. The latter was sued by Skyhook in 2019. They finally announced a collaboration in a common press release issued in April 2020. See also Hybrid positioning system Mobile phone tracking Local Positioning Systems Wi-Fi positioning system References External links Skyhook Wireless corporate website Meet The Location Data Company That, Literally, Put The Blue Dot On The Map Skyhook Wireless brings location services to wearables in a petit package- TechCrunch Skyhook Launches its new Personas to help solve a $100B industry problem - VentureBeat Skyhook Gets Shoutout by Steve Jobs at MacWorld USA Today: Steve Jobs, iPhone have Skyhook pointed in right direction Geomarketing Wi-Fi Companies based in Boston Software companies based in Massachusetts Location-based software Software companies of the United States American companies established in 2003 2003 establishments in Massachusetts Software companies established in 2003
2175221
https://en.wikipedia.org/wiki/Geeqie
Geeqie
Geeqie is a free software image viewer and image organiser program for Unix-like operating systems, which include Linux-based systems and Apple's OS X. It was first released in March 2010, having been created as a fork of GQview, which appeared to have ceased development. It uses the GTK toolkit. In September 2015, development was moved from SourceForge to GitHub. Features Viewing raster and vector images, in the following formats: 3FR ANI APM ARW AVIF BMP CR2 CR3 CRW CUR DDS DjVu DNG ERF GIF HEIC HEIF ICNS ICO JP2/JPC/JPX/J2K/JPF JPE/JPEG/JPG JPS JPEGXL KDC MEF MOS MPO MRW NEF ORF PBM/PGM/PNM/PPM PEF PNG PSD PTX QIF/QTIF (QuickTime Image Format) SCR (ZX Spectrum) RAF RAW RW2 SR2 SRF SVG/SVGZ TGA/TARGA TIF/TIFF WEBP WMF X3F XBM XPM Images can be displayed singly in normal or fullscreen mode; static or slideshow mode; in sets of two or four per page for comparison; or as thumbnails of various sizes. Animated images are supported. All available metadata and Exif/IPTC/XMP data can be displayed, as well as colour histograms and assigned tags, keywords and comments. Panels can be docked or floating. Tags, both predefined and custom, can be assigned to images, and stored either as image metadata (where the file format allows), sidecar files, or in directory metadata files. Keywords and comments can also be assigned. Basic editing in the form of lossless 90/180-degree rotation and flipping is supported; external programs such as GIMP, Inkscape, and custom scripts using ImageMagick can be linked to allow further processing. Advanced searching is available using criteria such as filename, file size, age, image dimensions, similarity to a specified image, or by keywords or comments. If images have GPS coordinates embedded, you may also search for images within a radius of a geographical point. Geeqie supports applying the colour profile embedded in an image along with the system monitor profile (or a user-specified monitor profile). Geeqie sessions can be remotely controlled from external software, so it can be used as an image-viewer component of a bigger application. Geeqie includes a 'find duplicates' tool which can compare images using a variety of criteria (filename, file size, visual similarity, etc.), either within a single folder or between two folders. Images may be given a rating value (also known as a "star rating"). Maps from http://www.openstreetmap.org may be displayed in a side panel. If an image has GPS coordinates embedded, its position will be displayed on the map; if Image Direction is encoded, that will be displayed also. If an image does not have embedded GPS coordinates, it may be dragged and dropped onto the map to encode its position. A more extensive list of features may be found in the project's documentation. Reception Geeqie has been generally well received in the technical press. A 2012 review in Free Software Magazine said it is "highly recommended, if not best in class". A 2011 Linux Insider review awarded it 5 out of 5 stars. A 2010 Linux Magazine review called Geeqie an "indispensable tool", "lightning fast". A 2012 Libre Graphics World review noted that Geeqie seems to be "the only up-to-date JPS and MPO viewer on Linux right now". A negative review in 2010 from Tom's Hardware said it "doesn't offer much more than system default apps". GQview GQview is the predecessor to Geeqie. It had been developed from 1998 to 2006 by John Ellis, the last release being in December 2006. 
Efforts to contact Ellis since then proved unsuccessful, so a group of interested developers forked the GQview code, adopted the name Geeqie, and set about enhancing it. In some Linux distributions (such as Debian and its derivatives), a gqview package was provided as a shortcut to Geeqie for easier upgrade. See also Comparison of image viewers References External links Free image organizers Free image viewers Free photo software Free software programmed in C Graphics software that uses GTK Linux image viewers
21213425
https://en.wikipedia.org/wiki/Dialogic%20Corp
Dialogic Corp
Dialogic is an American multinational technology company headquartered in Parsippany, New Jersey, United States, with operations in over 25 countries. Dialogic provides a cloud-optimized communications technology for real-time communications media, applications, and infrastructure to service providers, enterprises, and developers. Dialogic products are sold directly, as well as through a network of distributors, resellers, and partners, and supplied to OEMs. History Dialogic was founded in 1983 in Parsippany, New Jersey by three engineers, Nicholas Zwick and James Shinn, both of whom had worked for Advanced Micro Devices, and Kenneth Burkhardt, formerly with Unisys. In 1999, at the height of the dot-com era, the company was acquired by Intel for $780 million and was run as the Media & Signaling Division. In October 2006, Eicon bought the Media & Signaling Division of Intel and then changed its name back to Dialogic. Since the formation of the new Dialogic, other competitors in the same industry have been acquired and merged into the company, including NMS Communications and Cantata. The latter was itself a collection of companies including Brooktrout and the Excel Switching Corporation. In May 2010, Dialogic announced a merger with San Jose, California-based Veraz Networks, a company specializing in softswitch and compression technology. The resulting company would retain the name Dialogic. That same year, Dialogic moved its headquarters to Milpitas, California. On November 24, 2014, Dialogic was acquired by Novacap TMT IV, L.P., a Canadian private equity firm. They moved the headquarters back to Parsippany, New Jersey. On February 9, 2016, Dialogic Corp announced the completed acquisition of Apex Communications, a supplier of global real-time communications applications for service providers and enterprise networks. The company launched an application development practice for networked, premises and cloud-based applications. The line of applications include WebRTC, unified communications, IVR, on-demand voicemail and cloud-based PBX. On January 9, 2018, Sangoma Technologies Corporation acquired all key assets of the Converged Communication Division (CCD) from Dialogic Corporation. By divesting its CCD hardware business, Dialogic became a software-centric company. The remaining company, under the name Dialogic Group, was acquired on January 2, 2020, by Enghouse Systems Ltd of Markham, Ontario. Products Dialogic designs, sells and supports real-time communications media, applications, and infrastructure solutions to communications service providers, enterprises, and developers. Its products include media servers, gateways and boards; load balancers, communications applications, softswitches, diameter signaling controllers, session border controllers, signaling stacks and software, fax boards and FoIP software, and bandwidth optimization solutions. References Companies established in 1983 Software companies based in New Jersey Telecommunications companies of the United States VoIP companies of the United States Privately held companies of the United States Telecommunications equipment vendors Software companies of Canada Telecommunications companies of Canada Privately held companies of Canada Parsippany-Troy Hills, New Jersey Companies based in Morris County, New Jersey Software companies of the United States
43555064
https://en.wikipedia.org/wiki/Dirty%20Politics
Dirty Politics
Dirty Politics: How attack politics is poisoning New Zealand’s political environment is a book by Nicky Hager published in August 2014. The book is based on emails hacked from Cameron Slater's Gmail account and on Facebook chats. These communications occurred around the same time that a denial-of-service (DOS) attack took down Slater's website – the right-wing blog Whale Oil Beef Hooked – and feature hundreds of items of correspondence in which prominent New Zealanders are criticised and vilified. Hager's book describes a lengthy history of correspondence between Slater and Justice Minister Judith Collins that eventually contributed to her resignation as a Minister. In response to the allegations in the book, Prime Minister John Key said that he talked to Slater on a regular basis. Hager claimed that using bloggers rather than journalists allowed Key to maintain a friendly public persona, while using right-wing blogs as a vehicle to attack opponents. Other bloggers mentioned in the book include Matthew Hooten, Cathy Odgers ( who goes by the name of Cactus Kate) and David Farrar. However, Hager reserves his strongest criticism for Cameron Slater, who he says received payment to write attack articles on public figures who opposed or criticised National Party policy – and Dirty Politics identifies those who paid him. Background Whale Oil Beef Hooked (generally referred to as WhaleOil) is the personal blog of Cameron Slater which he started in 2005. Slater, the son of former National Party president John Slater, claims WhaleOil is a "super-blog" attracting over one million unique visits every month. He admitted at that time to suffering from depression and has a reputation for vitriol and anger. He said in 2010 that he didn't care if people regarded him as 'stupid, a bully, or vicious' but would resent being called a hypocrite. In January 2014, WhaleOil was hacked some time after he posted a blog post with the headline "Feral dies in Greymouth, did world a favour." Three other children in this family had already been killed in accidents and the post provoked a 'furious public reaction'. Some weeks later, Hager received an 8 gigabyte USB stick in the mail containing thousands of pages of emails hacked from Slater's website. Hager contacted the hacker, known as Rawshark, and persuaded him that putting the information in a book would provide "something more lasting and of bigger value" than releasing it on Twitter. Hager says Rawshark was motivated by his dislike of Slater, and that he secretly met the hacker in public parks to discuss how the information should be released. He then spent a significant amount of time analysing the content and only published material which he believed was in the public interest. He was careful to avoid releasing sensitive material about Slater and others mentioned in the hacked emails which did not involve a significant public interest. Production The production, printing, and distribution of the book was done in secret by Craig Potton Publishing. Co-owner of the publishing company, Robbie Burton, said: "It was one of the great and surreal and scary moments of my publishing career." Burton said the secrecy was necessary because they were wary that the book might annoy some people with "deep pockets" who "might be inclined to revenge". The book was produced very quickly by Craig Potton Publishing, where staff worked on it from Thursday evening to the following Tuesday. 
Once the book was finished, first it was printed in secret in Wellington, before being shipped back to Nelson and stored in a warehouse, and then finally being sent unannounced to stores around the country. Release Dirty Politics was released at Unity Books in Wellington on 13 August 2014, with a crowd of approximately 150. Prior to the release of the book, details of what it would be about were the subject of substantial speculation, as the topic of the book was kept secret in order to avoid it being blocked from release by a court injunction. Four days after release, it had sold about 10,000 copies, in addition to 1,000 e-book sales. Robbie Burton, executive director of Craig Potton Publishing, said: "I've never seen anything like this before, in terms of the speed at which it's happening." A week after release, sales had reached approximately 15,000, and by mid September, sales were at almost 20,000. The first two chapters are available free on-line. Two weeks after the release of the book, Hager spoke at the four-day biennial Writers & Readers Festival in Christchurch. About 5000 people attended 57 events, one of which was a public discussion called Secrets, Spies and Free Speech featuring Dirty Politics author Nicky Hager, Guardian journalist Luke Harding and Australian journalist Richard King. The discussion was mainly about the freedom of speech in Western democracies. Main features Hager provides numerous incidents of dirty politics in the book. He describes multiple email exchanges between Cameron Slater and National Party spin doctor Jason Ede who works for Prime Minister John Key. Up till August 2011 Mr Ede was employed by Ministerial Services and, in effect, was paid by the taxpayer. Hager says Ede tried to avoid scrutiny by using a 'dynamic' IP address, one that changed frequently making it difficult for him to be identified. When the book was published, Mr Ede was employed by the National Party, rather than by John Key, but was still working from the Prime Minister's office. He resigned the day before the election. Dirty Politics also describes numerous email contacts between Cameron Slater and Judith Collins who are long-time friends. The email records span from 2009 to 2014 covering the entire time she was in Cabinet. One particular email shows that Collins asked Slater to edit her Wikipedia page and sent him a new photo with the message: "Cam, any chance of a better photo going up?? Eek it looks really bad." Slater appears to have obliged using the pseudonym 'Jc press sec', which was subsequently changed to Polkad0t. In return, Slater also asked Collins for favours. The emails indicate Slater asked Ms Collins to get a prisoner moved to a different prison, while she was Minister of Corrections. Slater asked Collins to get him moved because he was friends with the prisoner's ex-wife, who wanted the man moved further away from their daughter who had been visiting him in prison. The more serious allegations in Dirty Politics are described below. Radio New Zealand political editor Brent Edwards says that while the book is not always definitive, The Labour Party website One of the main allegations is that during the 2011 election campaign, Cameron Slater and Jason Ede took advantage of a badly-configured website to enter the database of the Labour Party website. This enabled them to look at emails and personal details which were then used to attack the Labour Party. 18,000 emails, lists of the party's donors and supporters and their credit-card transactions were downloaded. 
The book reveals Mr Slater and Mr Ede both accessed the site more than once and then "plotted how best to reveal the downloaded material on Mr Slater's blog". Slater did not deny he had accessed the website in May 2011, and said: "I was certainly into the backend of the Labour Party's website, if they couldn't manage security of their website, their credit card details and financial information about their membership, well then they weren't really fit for government at the time." Hager alleges that material from the website breach was selectively released by Slater, starting from 12 June. In response to being accused of illegally accessing the Labour website, Slater published a blog post titled "My confession: How I accessed the Labour Party web site". In the post, he shows how he claims to have accessed information on four Labour Party websites, and denies allegations of "hacking". Using easily accessible links, he could view all the files on the web server, including sensitive membership and credit-card information, which was unencrypted. Mr Key also claimed there was nothing wrong with Jason Ede accessing the Labour Party website because "they left their website open." He justified the action to Radio New Zealand by saying: "If the Wallabies on Tuesday night had left their starting line-up up on their website, on their private website, would the All Blacks go and have a look? The answer is yes. The reason I know that is it's happened."

The Security Intelligence Service
Another allegation is that in 2011, the Prime Minister's office facilitated the release of confidential Security Intelligence Service (SIS) documents to Cameron Slater so he could use them to embarrass Phil Goff. Slater made an Official Information Act (OIA) request to the SIS for confidential documents which confirmed the SIS had briefed Phil Goff about suspected Israeli spies who left the country after the second Christchurch earthquake – after Goff had said publicly he was not briefed. Slater was given access to the documents within 24 hours – long before other media outlets that had also asked for them. Slater's emails described in Dirty Politics suggest he knew what was in the documents before they were released and that the OIA request would be processed in unusually quick time. Goff believes this inside information could only have come from the SIS director Warren Tucker or from staff in the Prime Minister's office.

Attacks on a public servant
The book also presents evidence which suggests that Justice Minister Judith Collins leaked the name of a public servant, Simon Pleasants, to Cameron Slater. Pleasants worked in the ministerial property department at Internal Affairs, and Collins assumed he was responsible for leaking details of Bill English's accommodation payments to Labour in 2009. Slater then published a series of blog attacks on Pleasants – who subsequently received a torrent of abuse, including death threats. Pleasants denies ever leaking any information about English. Nick Bryant, Gerry Brownlee's press secretary, is identified in Dirty Politics as the user of the pseudonym "Former Hack" to post anonymous comments on Whaleoil encouraging Cameron Slater's campaign against Pleasants. On 23 August, The New Zealand Herald reported it was able to confirm the use of Mr Bryant's ministerial computer and subsequently revealed that hundreds of messages were sent from ministerial or government computers to the comments section of Slater's blog.
The messages are linked to email addresses on government servers and IP addresses from which they were posted.

Commercial 'hit' jobs
Dirty Politics also refers to hacked emails which suggest Slater allowed his website to be used for commercial "hit" jobs supplied by PR consultant Carrick Graham, whose clients include tobacco and alcohol companies. For instance, Slater has frequently used his WhaleOil blog to attack Professor Doug Sellman, a leading authority on addiction and an outspoken critic of the alcohol industry, posting blogs with pejorative titles such as "Confirmed: Doug Sellman Gone Mad". These were based on information supplied by Carrick Graham, son of former National Party member Sir Douglas Graham. Hager says Carrick Graham's company, Facilitate Communications, paid Slater $6,555 a month and speculates that the payments were for writing attack blogs.

Katherine Rich
Former National MP Katherine Rich, the chief executive of the NZ Food and Grocery Council, is implicated in the book. Emails show that information she passed to Carrick Graham was passed on to Slater. Slater then used the information to attack health campaigners such as Professor Doug Sellman, who campaigns about liquor legislation, and New Zealand Herald columnist Wendyl Nissen, who writes about additives and healthy food. New Zealand allows 14 additives that have been banned or never allowed in the European Union or the United States; in her role at the New Zealand Food and Grocery Council, Katherine Rich has claimed all additives used in New Zealand are safe. Ms Nissen said she was attacked because Katherine Rich objected to her "highlighting the use of artificial colours commonly used in soft drinks and processed foods and the fact many had been banned in other countries." Confirming his connection with Katherine Rich, Slater publicly acknowledged that she had paid him twice, but claimed it was for speaking at conferences, not for writing blog articles. In 2012, the National Government had appointed Rich to the board of the Health Promotion Agency, which runs campaigns to promote healthier lifestyles. In this capacity, Rich's primary role is to promote public health, and her involvement with the Food and Grocery Council therefore suggests a conflict of interest. Once the allegations in Dirty Politics became public, a group of 33 scientists and health practitioners were so concerned they wrote to John Key asking him to investigate Rich's apparent conflict of interest. Green Party health spokesperson Kevin Hague said Rich should resign, arguing that "Katherine Rich, Carrick Graham and Cameron Slater have all been involved in a systematic undermining of health promotion in New Zealand. She cannot tenably remain on the board of that organisation." Rich responded by saying she had never been involved in undermining public health and found the claim very offensive. She said she put her heart and soul into the Health Promotion Agency and adhered to all parts of the Crown Entities Act. Nevertheless, the Auditor-General was asked to investigate and conducted a review of the HPA's board minutes. It declared "we have not identified any particular matters before, or decisions by, the HPA board that might raise serious concerns about its management of conflicts of interest". However, the Auditor-General acknowledged they did not interview Ms Rich, nor did they investigate "allegations in Nicky Hager's 2014 book Dirty Politics about things Mrs Rich is said to have done in her private capacity".
Rich said she felt vindicated by the finding, but the Green Party called for an additional investigation by the State Services Commission.

Further revelations

Attacks on the Serious Fraud Office
On 30 August, John Key released an email written by Slater in 2011 suggesting Judith Collins was also 'gunning' for former Serious Fraud Office (SFO) boss Adam Feeley. Collins was the Minister responsible for the SFO at the time, and the email said: "Any information that we can provide her [Collins] on his [Feeley's] background is appreciated." It was addressed to 'Mark' (with the surname redacted) and Carrick Graham. Other emails, released to Fairfax Media by Rawshark, appear to suggest that 'Mark' is former Hanover Finance director Mark Hotchin. Hotchin secretly paid Slater and another right-wing blogger, Cathy Odgers, to write attack posts undermining Feeley, the SFO and the Financial Markets Authority while they were investigating the collapse of Hanover Finance in 2011. Slater wrote a series of highly critical blogs about Feeley in late 2011.

Release of emails on @WhaleDump
Prime Minister John Key characterised Dirty Politics as a "cynically timed attack" from "a well known left wing conspiracy theorist". In response to Key's criticism that the book's publication so close to the 2014 general election was designed to be a political bombshell, Hager says he published the material that was provided to him in March 2014 as soon as he could. Rather than being timed for maximum political damage, Hager says "it was actually just the practicalities of how fast I could get it out." Mr Key went on to say the book "makes all sorts of unfounded allegations and voters will see it for what it is." He also accused Nicky Hager of joining "a whole lot of dots that can't be connected." Key said: "[Hager] didn't do what a true journalist would do. He didn't go and check out the facts, he didn't try and get that side of the story." In response, Hager spoke to his source to try to get back some of the emails that the book was based on in order to release them online. The hacker, known as Rawshark, apparently said: "Leave this to me" and began releasing the emails through an account on Twitter, @WhaleDump. The emails seem to back up many of the claims Hager makes in the book. The hacker also communicated with journalists using encryption to protect his identity. He denied any political motivation but indicated a desire to "take down this corrupt network". After a series of email releases on the @WhaleDump account, Cameron Slater tried to take out a court injunction to prevent the media from reporting hacked information obtained from his Whale Oil blog. Media lawyer Julian Miles QC argued on behalf of the media outlets that the information was of huge public interest. High Court Judge John Fogarty agreed but granted an order stopping the hacker, who was an unnamed defendant, from publishing any more "WhaleDumps", although media were allowed to publish material they had already received. In his judgement Justice Fogarty said that Slater's case against the hacker was overwhelming and that taking information from his computer was akin to entering someone's private property and taking their private papers. He wrote: "There is no doubt at all that (Slater) has a serious argument that the unknown hacker has committed a wrong or tort against him, to which the common law will grant such remedies as can be practicably imposed." On 4 September, the original @WhaleDump account was suspended by Twitter.
Soon afterwards, the hacker Rawshark reopened a Twitter account with the username @Whaledump2. On 5 September, after the release of a number of other documents online, Rawshark announced in a series of tweets that he would no longer be releasing any information.

Impact
The New Zealand Herald commentator John Armstrong wrote: "Hager's allegations are many and varied. They are extremely serious." Armstrong points to the allegation of the involvement of National Party personnel in hacking into the Labour Party database as the most serious, and draws parallels with the Watergate scandal. David Cunliffe, leader of the Labour Party, also describes the events portrayed in the book as akin to Watergate. The revelations in Dirty Politics have also been reported in international media. Under the headline "New Zealand's own House of Cards is collapsing", The Guardian newspaper said "New Zealanders are currently witnesses to an exposé of unprecedented proportions." Also in The Guardian, New Zealand journalist Toby Manhire wrote: "Dirty Politics has blown the National party (election) strategy dramatically off course, propelling the campaign into uncharted territory. Its allegations have dominated news bulletins for the 10 days since its publication, as accusations of dirty tricks, smear campaigns and conspiracy sally in every direction." In addition to the calls for Judith Collins to resign, Manhire suggested that John Key's "sparkling reputation" may be "contaminated by association with Slater's toxic style." Cameron Slater subsequently laid a complaint with the police about the illegal hacking of his emails, and made a separate complaint with the Privacy Commissioner over the release of the email by the Prime Minister.

Resignations

Judith Collins
Collins had been under pressure to resign earlier in the year over her involvement with Chinese officials at Oravida. Once Dirty Politics was published and Collins' alleged treatment of public servant Simon Pleasants came to light, there were renewed calls for her resignation. The Public Service Association said her actions breached an important constitutional principle and "Minister Collins must take responsibility for her actions and resign." Labour leader David Cunliffe said Mr Key should fire Ms Collins, "because she has acted in a way that is unbecoming and unfit" for a minister. Deputy Prime Minister Bill English distanced himself from her behaviour by saying "it's not a style I like". John Key called her "unwise" but refused to take any action. As the pressure on Collins mounted, she avoided the media entirely for a few days. Then on 23 August she fronted up but refused to answer questions about her friendship with Mr Slater. She declared she had no plans to resign and refused to acknowledge her behaviour was damaging the National Party in any way. A week later, right-wing blogger Cathy Odgers apparently sent the damning email – in which Slater said Collins was 'gunning' for Adam Feeley – to John Key's office. Fairfax Media journalist Matt Nippert, who was investigating the hacked emails, believes that Odgers thought Nippert was about to expose the connection between Slater, Collins and Feeley. He says Odgers panicked and "alerted the Beehive to what may have been coming." John Key called a press conference the next day, Saturday 30 August 2014, at which Collins resigned as a minister.
In October, the Prime Minister announced that Collins would not be allowed to use the title 'Honourable' before her name unless she was cleared by the inquiry into her behaviour. Collins was reported to be 'appalled' and 'seething' that she heard about this decision through the media rather than being told personally by Mr Key.

Jason Ede
Jason Ede was a central figure in Dirty Politics. Hager said Ede drafted Official Information Act requests on Slater's behalf and was frequently in contact with the blogger about ways to discredit the Labour Party. After the book's release, Ede entirely avoided the media. Slater subsequently described him as "squeamish" and "gutless" for going to ground rather than talking to the media about their relationship. Ede resigned from his position as a media researcher for the National Party the day before the election. Announcing the resignation, John Key thanked Ede for his 11 years of "loyal service to the National Party". Blogger David Farrar said he thought Ede had resigned voluntarily and not due to the book's publication. However, Hager said he was sceptical about the reasons behind Ede's departure.

Inspector-General inquiry into the release of SIS information
On 1 September, the Inspector-General of Intelligence and Security, Cheryl Gwyn, announced she would conduct an "in-depth and robust inquiry" in response to claims that details of a briefing between the head of the SIS and former Labour leader Phil Goff were given to Cameron Slater. Gwyn promised a comprehensive investigation saying: "The scale of the inquiry is significant – it involves access to a variety of IT systems, documents, phone records and so forth. I then need to analyse that information, discard what is irrelevant, and then take into account the evidence from those summoned." She said the inquiry was in "recognition of public interest" that the SIS may have been used for political purposes. The results of the inquiry were released three months later, on 25 November 2014. Ms Gwyn found that the SIS gave information to John Key's office, from where his deputy chief of staff Phil de Joux passed it on to another John Key employee, Jason Ede. Mr Ede contacted Cameron Slater and they then discussed how an Official Information Act request for the information should be worded. Slater emailed the request while they were still talking on the phone, and Ede also provided Slater with draft blog posts attacking Mr Goff. Cameron Slater subsequently used the information to attack former Labour leader Phil Goff on his blog. Ms Gwyn concluded that former SIS director Warren Tucker provided "incomplete, inaccurate and misleading information" to Cameron Slater and to Prime Minister John Key and failed to take adequate steps to maintain the agency's political neutrality. In the process, Ms Gwyn said, "the SIS had effectively delivered Mr Slater an exclusive story while denying other media the same information." The report went on to say that, "having released inaccurate information that was predictably misinterpreted, the then director of the service had a responsibility to take positive steps to correct the interpretation", but that Tucker failed to do so. Gwyn further said that "These errors resulted in misplaced criticism of the then-Leader of the Opposition, Hon Phil Goff MP". Ms Gwyn recommended that the current SIS director, Rebecca Kitteridge, should apologise to Mr Goff on behalf of the SIS.
After reading the report, Ms Kitteridge went a step further and apologised to the new Labour leader, Andrew Little, and Prime Minister John Key as well. In a blog post responding to the release of the inquiry's findings, Hager noted that many of the other components of Dirty Politics had yet to be investigated fully.

Inquiry into Collins & SFO
There were also calls for a wide-ranging inquiry into the National Party's dealings with Cameron Slater. Labour's finance spokesman, David Parker, described the revelations in Dirty Politics as "just the tip of the iceberg" and laid a formal complaint with the Police into allegations that Judith Collins perverted the course of justice. David Cunliffe said there should be a commission of inquiry into all National Ministers' dealings with Slater. Green Party co-leader Metiria Turei said there should be a Royal Commission. Winston Peters said a full Royal Commission of Inquiry would be a bottom line in any negotiations by New Zealand First in the formation of a government after the election. The Prime Minister agreed to an inquiry but limited its scope to only one issue – the allegations that Collins undermined the Director of the Serious Fraud Office, Adam Feeley, in 2011. He appointed former High Court judge Lester Chisholm to conduct the inquiry. David Cunliffe felt this was an inadequate response and said Labour should be consulted on the terms of reference for the inquiry: "There is a well embedded constitutional convention that requires the Prime Minister to consult with the leader of the opposition this close to an election on a matter this serious. I would expect to be consulted on the terms of reference." Key refused to consult Labour over the terms, saying "We're not going on a witch hunt." On 25 November 2014, the inquiry released its findings that although there were two independent efforts to undermine former Serious Fraud Office chief executive Adam Feeley, there was no evidence that Collins participated. However, Judge Chisholm had quite limited access to Slater's accomplices or to relevant information. He only interviewed two of the five people who were party to the email that led to Ms Collins' resignation – Cameron Slater and Carrick Graham. He didn't talk to Hanover boss Mark Hotchin or Hong Kong-based blogger Cathy Odgers. Chisholm also had no access to Ms Collins' Facebook communications, which she deleted in 2013. He also did not have access to her parliamentary landline calls or to cell phone calls from Slater to her. Justice Chisholm noted that "The absence of telephone records for Mr Slater's calls is surprising given that both Ms Collins and Mr Slater confirmed that they phoned each other often." Nevertheless, Collins said she felt vindicated by the findings. In regard to her relationship with Cameron Slater, she said he remained a "family friend", although she felt "incredibly let down" by him.

On opinion polls prior to the 2014 election
Although it was difficult to gauge the New Zealand public's views on the contents of Dirty Politics, Television New Zealand's Vote Compass tool provided a rough indication. Based on over 13,000 responses up to 25 August 2014, Vote Compass suggests that 33% thought there was "none" or "little" truth to the claims in the book, 49% said there was "some" or "a lot" of truth, and 18% didn't know.
In terms of the effect of the book on voting intentions, 56% said it wouldn't have an effect, 15% said it would have a "little" effect, 11% "some", 10% "a lot", and 8% said they didn't know. There was also some evidence to suggest that the number of people who believed the book's claims grew over time. A number of polls were held during the intense publicity surrounding Dirty Politics and showed the National Party losing support. Towards the end of August 2014, two weeks after the release of the book, National was still well ahead at 50.8%, but down 4.3% on previous figures. John Key also lost ground as preferred prime minister. The Radio New Zealand Poll of Polls (an arithmetical average of the four most recent major polls) showed National had slid below 50% to 49.1% as at 29 August. The Prime Minister frequently tried to deflect media interest from Dirty Politics, arguing that most people were more interested in other election issues. Once Judith Collins resigned, three polls, including the Herald DigiPoll survey, all showed National support had gone back up to 50% while Labour continued to drop. At his first campaign appearance in the Epsom electorate on 6 September, Key said the polls were showing the public had moved on. New Zealand Herald columnist John Armstrong noted that Collins was pushed into resigning and said: "When Key sacked Judith Collins, the headlines screamed 'National in chaos'. In fact, removing Collins was the vital first move in getting National's campaign back on track. Hager's book looks like having no material impact on the outcome of the election."

On the 2014 election
John Key's National Party was re-elected to government after it received 47.04% of the party vote in the September 2014 general election. Judith Collins retained her seat in Papakura, but her winning margin over the Labour candidate was roughly halved – from 9,890 in the 2011 election to 5,119 in 2014. After the election, Collins was not reinstated to a ministerial position, and was relegated to the backbenches. Political journalist Andrea Vance commented that "Dirty Politics didn't hurt National – in fact, all the evidence points to a lift. But it did harm Labour." Labour leader David Cunliffe commented that the focus on Dirty Politics and spying on New Zealanders had actually hurt the left: "You would think the impact would be on the Government party responsible, but the reality is that if one is not able to get the airtime because others are dominating it, that impacts the opposition as well". Cameron Slater blogged that Dirty Politics had no impact on the election "because New Zealanders simply don't care".

On John Key's integrity
Throughout the weeks that the allegations of Dirty Politics swirled in the media, Prime Minister John Key tried to swat them away, claiming Nicky Hager was little more than "a screaming left-wing conspiracy theorist". After National won the 2014 election, Key continued with this dismissive approach once MPs returned to Parliament. When asked by Green Party co-leader Russel Norman how many times he had spoken to or texted Cameron Slater, Key replied "Never in my capacity as Prime Minister". An editorial in the New Zealand Herald said this "revealed a cavalier disregard for the accountability and integrity of his office." After Key denied Judith Collins the right to be called the 'Honourable', despite other Cabinet members resigning in disgrace and keeping the title, even Cameron Slater thought this reflected badly on Mr Key.
Describing Mr Key's decision as a 'silly mistake', he said: "Starting off the term with an arrogant and petulant act like that is unbecoming and foreshadows ill-will dogging him in this term." Key's integrity took another hit after the release of the report by Inspector-General of Intelligence and Security Cheryl Gwyn into the release of information by SIS director Warren Tucker. Despite the revelations in Gwyn's report, Key claimed no one in his office had done anything wrong and refused to apologise to Mr Goff. New Zealand Herald columnist John Armstrong described the report as "hugely embarrassing" and said: "The Key administration has plumbed new depths of arrogance and contempt for the notion of politicians being accountable for their actions". Fairfax political journalist Andrea Vance wrote in response to the inquiries' findings that Key takes "hands-off to a disturbing extreme". She further wrote: "It stretches credibility to suggest Key – and his chief of staff Wayne Eagleson – bear no responsibility for the conduct of de Joux and Ede." A few days later, Mr Key had to admit in Parliament that he had been communicating with Cameron Slater by text about Gwyn's report prior to its release. Given that Slater is perceived, in John Armstrong's words, as "the second most-despised figure in New Zealand politics" (after Kim Dotcom), parliamentarians and media commentators were surprised that Key was still in touch with him. Armstrong said by acknowledging his links with Slater, Key "compromised his assurance that he had no knowledge of the dirty tricks operation". Bryce Edwards said that Key could not afford to cut ties with Slater "out of fear of what could happen if Slater became his enemy". Matthew Hooton said this could lead to John Key's 'downfall' as he "risks Cameron Slater going off the deep-end and revealing the full extent of his relationship with the Prime Minister himself, the Prime Minister's Office, with a number of John Key's ministers, with a fair bunch of the backbenchers, and with some senior party officials".

On democracy & perceptions of corruption
In his column in the New Zealand Herald, Dr Bryce Edwards, lecturer in political studies at Otago University, expressed concern about the impact of Dirty Politics on the public's confidence in public institutions and the integrity of governance. He said the official reports in response to the inquiries generated by the book should serve as a "wake-up call" about the health of New Zealand's democratic process. In the National Business Review, Rob Hosking supported this view, writing: "The plain fact is, the country's security service got caught up in the political games of election 2011. Willfully or accidentally, it became a political arm of the party in power and not, as it should be, a neutral government agency." In his column, Vernon Small warns that spin doctors are acquiring too much political sway and see their main job as being to protect the Government from the Opposition.

On science and public health
Obesity expert Dr Boyd Swinburn, Professor of Population Nutrition and Global Health at the University of Auckland, says dirty politics and conflicts of interest in government appointments are having a detrimental impact on public health.
Swinburn said when Tony Ryall appointed former National MP Katherine Rich to the Health Promotion Agency (HPA) in 2012, he ignored an 'irreconcilable conflict of interest' – Rich is chief executive of the Food and Grocery Council, which lobbies for the alcohol, tobacco and grocery-food industries, whereas the primary responsibility of the HPA is to improve the health of New Zealanders. Dr Swinburn also believes the views of scientists are being suppressed by attack campaigns driven by large commercial interests in New Zealand. He points to Tony Ryall's dismissal of a letter from more than 30 senior public-health experts calling for an investigation of Rich's role on the HPA board.

Police investigation into the identity of the hacker

Raid on Hager's home
In October 2014, five police officers went to Nicky Hager's home with a search warrant and spent ten hours looking for information. The alleged purpose of the search was to try to find out the identity of the hacker Rawshark, who hacked into Cameron Slater's computer to obtain the emails and other material on which Dirty Politics was based. They took away computers, hard drives, phones, CDs, an iPod and a camera. Police told Hager he was a witness, rather than a suspect, in the inquiry. Hager said: "It seems like a very strange level of resources just for a witness." Stuff noted that similar raids had not been made on National Party staffer Jason Ede, who allegedly hacked into the Labour Party website.

Successful legal challenge of raid
In response to the police search, a legal defence fundraiser for Hager was set up on the website Give A Little by former Young Labour leader Meg Bates, who described Hager as a "hero" who "exposed a twisted web of power and influence in New Zealand politics". As at April 2016, over $65,000 had been donated to the fund, despite attempts to shut it down and criticism from Slater's sympathisers. Another legal defence fundraiser was set up by the Freedom of the Press Foundation. In an article publicising the fundraiser at The Intercept and detailing the police raid on Hager's house, Glenn Greenwald and Ryan Gallagher wrote that the raid could have been partially motivated by a desire to investigate whether Hager had received any information from the NSA leaks. As at 18 November, the Foundation fundraiser had raised over $21,000. A legal team led by Julian Miles QC, with media lawyer Steven Price and barrister Felix Geiringer, gathered round Nicky Hager, who was seeking a judicial review of the police action. Steven Price said "Our case will be that the police didn't properly consider Nicky Hager's right to protect his confidential sources". Commenting on the case, law professor Ursula Cheer said: "We've never really had a case like this before and we've never really had any of these (public interest) issues tested." On 30 October 2014, it was reported that an updated edition of John Roughan's biography John Key: Portrait of a Prime Minister had a new chapter on the election which mentioned that John Key claimed to have been told the identity of the hacker Rawshark. He did not pass the name on to the author of the biography. Asked by reporters about the comment and whether he had given the information to the police, a spokesperson for Key said that while he thought he knew the identity of the hacker, "he cannot be certain". The spokesperson also said that Key had no involvement in the police inquiry. In December 2015, the High Court in Wellington declared the warrant used to search Hager's house "fundamentally unlawful".
The judge criticised the police, saying the way detectives went about obtaining the warrant was "not the type of facilitation that I consider the Search and Surveillance Act anticipates". In March 2016, Hager travelled to the High Court in Auckland to collect belongings taken by police during the search in October 2014. His belongings were in a sealed container at the High Court awaiting the outcome of the legal challenge. Police also destroyed a hard drive which contained copies of Hager's data. On 12 June 2018, Hager accepted an apology and compensation for "substantial damages" from the New Zealand Police for the 2014 raid on his home during the investigation into the hacking that led to the Dirty Politics book. The settlement also included the police acknowledging that they had accessed Hager's personal banking data.

On the media
After the police raid on Nicky Hager's house, the Labour Party's acting leader, David Parker, pointed out that journalists have a right to protect their sources, and said there was a growing number of cases in which the police were being called in to investigate the media. He said the police had searched media offices after Mr Key's "teapot tapes" conversation with John Banks, and that the Serious Fraud Office had demanded the National Business Review hand over documents related to the collapse of South Canterbury Finance. Parker argued that the search of Nicky Hager's house amounted to intimidation and said: "If the media are not free of undue intrusion by state agencies, or have too cosy a relationship with political parties; they cannot do what the fourth estate is meant to do." In an editorial, the New Zealand Herald agreed and said the police raid over such a minor crime was "not a good look for the police". The editorial went on to say: "The effect of such raids is to intimidate such people from approaching media to disclose uncomfortable truths." In reviews of political events in 2014, numerous commentators emphasised the impact of Dirty Politics. The New Zealand Herald's David Fisher named Nicky Hager as a finalist for New Zealander of the Year. Russell Brown said #dirtypolitics was "Word of the Year 2014". Martin Bradbury criticised John Key for his involvement with Cameron Slater but also took aim at the media for their part in dirty politics, arguing that "many of these media outlets also worked hand in glove with National's black ops team." Andrew Geddis challenged the Sunday Star Times for giving Judith Collins a bi-weekly column (once she resigned from cabinet), arguing the paper was essentially choosing money over 'ethical concerns'. Reviewing these various commentaries, Bryce Edwards said: "There is a growing call for the so-called 'Dirty Politics cast' of politicians and activists to be shunned by the media."

Footnotes

References

External links
Official website – DirtyPoliticsNZ.com
Nicky Hager's website containing articles, books, speeches and interviews

2014 non-fiction books New Zealand books Books about politics of New Zealand 2014 New Zealand general election Investigative journalism Potton & Burton books
306913
https://en.wikipedia.org/wiki/Modding
Modding
Modding is a slang expression derived from the verb "modify". Modding refers to the act of modifying hardware, software, or virtually anything else, to perform a function not originally conceived or intended by the designer, or to achieve a bespoke specification. The term modding is used within the computer game community, particularly in regard to creating new or altered content and sharing that via the web. It may be applied to the overclocking of computers in order to increase the frequency at which the CPU operates. Case modding is a popular activity amongst many computer enthusiasts which involves the customization of a computer case or the installation of water cooling technology. In connection with automobiles, modding can mean engine tuning, remapping of a vehicle's engine control unit or customization of the bodywork. Some people modify small electronic handheld toys by circuit bending, changing the circuits and adding a knob or switch that will change the produced pitch up or down.

Computers and digital equipment

Legal issues
Modding may sometimes infringe the legal rights of the copyright owner. Some nations have laws prohibiting modding and accuse modders of attempting to overcome copy protection schemes. In the United States, the DMCA has set up stiff penalties for mods that violate the rights of intellectual property owners. In the European Union, member states have agreed the EU Copyright Directive and are transposing it into national law. A 22-year-old man was convicted by Caerphilly Magistrates' Court in the United Kingdom in July 2005 for selling a modded Xbox with built-in software and games. However, other European countries have not interpreted the legal issues in the same way. In Italy, a judge threw out a case brought by Sony, saying it was up to the owners of a console what they did with it. Similarly, in Spain, mod chips have been ruled legal despite the EU copyright legislation. Modding may also involve unauthorized changes to the software or hardware of a gaming platform. Case mods are modifications that alter a device's appearance; for example, people who mod a Microsoft Xbox 360 can alter the LED lights on the controller to glow different colors. On August 5, 2009, Matthew Crippen, a 27-year-old student at California State University, Fullerton, was arrested for modifying game consoles including the Xbox 360, PlayStation 3, and Nintendo Wii for profit. Crippen testified that it was so owners could play backup discs of DRM-laden gaming software that they legally owned. However, the DMCA states that it is illegal to circumvent copyright protection software, even for non-infringing uses such as backing up legally owned games. In December 2010, prosecutors dropped all charges against Crippen because of inadmissible evidence obtained through an audio-less video recording deemed illegal by California law.

Multi-user licensing
Computer systems, hardware, and software are often sold or licensed for one home machine, or to a business that has many computers, a government agency, or a non-profit organization. When the software license is for a specific person, it is not legal for that software to be used by another person on the same computer, even a member of the same family or another employee of the same company. But this strict licensing is only one approach.
In this form of licensing, for more than one person to use the software or hardware, a multi-user license is needed, which usually dictates how many different people may use it.

Derivative software
Some software is licensed with a copy of the program source code supplied along with the executable code, and the license specifically authorizes changes to the supplied software. This is a common standard in business software packages. Hundreds of thousands of computer programmers in some nations have jobs because businesses want purchased software tailored to the specific needs of the individual business. Almost every major city has newspaper want ads with job openings for people to modify a company's computer systems, specifying which programming languages or operating systems the applicant needs to know. Derivative software is licensed copyrighted software used in whole or part in new programs written by programmers at businesses using the licensed software. Programmers copy the copyright notices into the source code where the code was copied, and track all such places, because if the license is permitted to expire, the business loses software use rights, including for any place to which the code was copied. An annual fee is typically paid to keep the license in effect, and over time the software supplier can increase the fees to the point that the business chooses to convert to some other commercial software that seems to be more cost-effective. It is not unusual in business software to find programs that have many different copyright notices, each referring to different sources of the derived source code.

Video game consoles
A common example of one kind of modding is video game console mod chips, which can allow users to play homemade games, games legitimately purchased in other regions, or legal backup copies, but can also allow illegal unauthorized copies by allowing the player to play personally recorded CD or DVD copies of video games. Modchips, in their current form, were first available for the Sony PlayStation (and later the PlayStation 2). Various other types of copyright circumvention systems also existed for the Nintendo 64 and the older Game Boy consoles (though these were backup devices rather than actual modding).

Types of modding
There are two different ways of running unsigned code on a game console. One is soft modding (modifying software, normally using a softmod) to allow the user to change data contained on the console's hard drive, as in the case of the Xbox. The other, known as hard modding, is done by modifying the hardware, such as components connected to the hypervisor, in order to run exploits against the console's BIOS or to run unsigned code and games. This form of 'modding' (more correctly termed hacking), while not as popular as softmodding, is mostly done because it can run many different types of software. Soft modding is more popular because of its ease of installation and its relatively low price (it can even be done for free with the right tools). Another type of console modding concerns appearance, much like computer case modification. This includes adding lights (typically LEDs, cold cathodes or other electroluminescent lighting) and cutting the game system's case to fit hardware and/or expose the internal systems.
Cooling is a large part of console hard modding, including heat sink upgrades and more powerful or quieter fans; some modders even abandon conventional heat exchange to air altogether by liquid cooling a console (most notably the Xbox 360, which initially had some heat problems).

Game software
On the other hand, some companies actively encourage modding of their products. In cases such as TiVo and Google, there has been an informal agreement between the modders and the company in which the modders agree not to do anything that destroys the company's business model and the company agrees to support the modding community by providing technical specifications and information. Some commercial video games thrive through a modding community. In the case of Half-Life and Warcraft III, mods called Counter-Strike and Defense of the Ancients (DotA), respectively, drove sales of the original software for years.

Device drivers
Modded drivers are made to provide performance that official versions of drivers do not offer, or for cases where there are no official drivers for new hardware on older operating systems such as Windows 98.

Computer hardware
Case modding may range from simple case (or chassis) painting to extensive case modifications with custom-fabricated pieces. The terms modding and modder have expanded to encompass a broad range of customization by personal-computer enthusiasts, especially in gaming PCs, including: custom and homemade hard- and soft-line liquid cooling loops; installation of LED strips and other lighting effects; replacement of stock cooling fans with high-performance ones (on CPU coolers, power supplies, cases, drive bays, and other components); "delidding" of CPUs and GPUs to replace their stock thermal paste and pads with high-thermal-conductivity ones; addition of individually sleeved (and often color-coordinated) cabling; replacement of various heat sinks with custom liquid-cooling water blocks on components normally air-cooled (e.g. RAM, GPUs, and SSDs); addition of new components not usually found in PCs, such as electronic timers or temperature and humidity alarms; custom soldering to replace or change the behavior of components; expansion of the motherboard's capabilities with PCIe add-in cards and risers (the latter also used to mount graphics cards sideways to show them off better); addition of server hardware into a consumer-grade system; and extensive overclocking through detailed benchmarking that seeks to squeeze out every last percentage of performance improvement without the system becoming unstable.

Cars and vehicles

Orthopedic
Ortho-modding is the adaptation of a car (seats, pedals, etc.) to help drivers prevent, correct and diminish minor orthopedic and spinal problems.

Eco-modding
Eco-modding is the reduction of drag and the adaptation of petrol cars to use renewable energy, generally hydrogen or electricity, usually by changing or adding an engine or motor. See hybrid car. Occasionally, diesel engines have been run on plant and animal oils. See biodiesel.

Performance tuning
Car and engine tuning are the modification of a car for enhanced performance in engine power output, handling or aerodynamics, or for cosmetic purposes. See also vehicle modification.

Industrial machines
Factories buy rather expensive machines that are used to mass-produce specialized parts. These machines can be altered to make parts other than those the manufacturers designed or intended them to make.
The legality of doing this depends on who owns the machines, whether the agreement under which the machines were supplied to the factory says anything about it, and the laws of the nation where it is done. For example, the machines might be leased from their manufacturer; if they are ever to be returned, they need to go back in the same kind of condition and engineering shape as when they were first delivered. There is an annual physical inventory to make sure the factory has everything that it is leasing. This audit might be done by representatives of the leasing company, who look for recognizable machines that match their models and safety rules.

Pens
Pen-modding is the act of combining many pen parts either to help with pen spinning, in which a perfect balance is desired to create an ideal spinning pen, or simply for decoration. These pen mods can be made by combining parts from different pens and/or mechanical pencils, or by buying modded pens online. In some cases, pen mods can cost over US$30–40 per pen.

See also
Body modification
Custom house
DIY audio – audio equipment modifications and construction
Hackerspace
Holga – an inexpensive brand of medium-format cameras that are often modded in numerous ways by their owners
Video game art
Modular design

References

Computer hardware tuning Computing culture
6879145
https://en.wikipedia.org/wiki/Bernard%20Galler
Bernard Galler
Bernard A. Galler (1928, Chicago – 2006, Ann Arbor, Michigan) was an American mathematician and computer scientist at the University of Michigan who was involved in the development of large-scale operating systems and computer languages, including the MAD programming language and the Michigan Terminal System operating system. He earned a B.Sc. in mathematics from the University of Chicago (1947), followed by an M.Sc. from UCLA and a Ph.D. from the University of Chicago (1955), advised by Paul Halmos and Marshall Stone. He joined the mathematics department at the University of Michigan (1955), where he taught the university's first programming course (1956) using an IBM 704. Galler helped to develop the Michigan Algorithm Decoder (MAD) programming language (from 1959), which was in use at several universities. He formed the Communication Sciences department (1965), which was renamed the Computer Science department in the 1970s and became the Computer and Communication Sciences (CCS) department in 1984; he retired from the university in 1994. His class developed the real-time course-scheduling program called Computer Registration Involving Student Participation (CRISP), which allowed students to register for courses without waiting in long lines. The University used the CRISP application for over fifteen years. From 1968 to 1970, Galler was the President of the Association for Computing Machinery (ACM). In 1994 he was inducted as a Fellow of the Association for Computing Machinery. He was the founding editor of the journal IEEE Annals of the History of Computing (1979–87). He was also the President of the Software Patent Institute (1992). For fifteen years, he served as an expert witness in numerous important legal cases around the country involving computer software issues. He was married to Enid Harris. He played violin in several orchestras and chamber groups and co-founded the Ypsilanti Youth Orchestra (2001) for children whose schools did not have string music education. He was president of the Orchestra Board at the University of Michigan and a member of the Ann Arbor chapter of Rotary International. He died of a pulmonary embolism. The Bernard A. Galler Fellowship Fund has been established at the University of Michigan Department of Electrical Engineering and Computer Science to "attract and support outstanding graduate students pursuing an advanced degree in computer science."

References

External links
Oral history interview with Bernard A. Galler, Charles Babbage Institute, University of Minnesota, Minneapolis. Galler describes the development of computer science at the University of Michigan from the 1950s through the 1980s and discusses his own work in computer science. Galler also discusses Michigan's relationship with ARPANET, CSNET, and BITNET. He describes the atmosphere on campus in the 1960s and early 1970s and his various administrative roles at the university. Galler discusses his involvement with the Association for Computing Machinery, the American Federation of Information Processing Societies, the founding of the Charles Babbage Institute, and his work with the Annals of the History of Computing.
A Day in the Life of Bernard Galler, ACM Crossroads (no date)
The Mathematics Genealogy Project: Bernard Galler, Department of Mathematics, North Dakota State University
Bernard A. Galler, DBLP Computer Science Bibliography
Bernard Galler's obituary in the University Record, University of Michigan, Ann Arbor, November 8, 2006.
1928 births 2006 deaths American computer scientists 20th-century American educators 20th-century American mathematicians 21st-century American mathematicians Deaths from pulmonary embolism Scientists from Chicago People from Ann Arbor, Michigan University of Chicago alumni University of Michigan faculty Fellows of the Association for Computing Machinery Presidents of the Association for Computing Machinery University of California, Los Angeles alumni Engineers from Illinois Mathematicians from Illinois
1059095
https://en.wikipedia.org/wiki/Arab%20diaspora
Arab diaspora
Arab diaspora refers to descendants of the Arab emigrants who, voluntarily or as refugees, emigrated from their native lands to non-Arab countries, primarily in Central America, South America, Europe, North America, and parts of Southeast Asia, the Caribbean, and West Africa. In a more specific view, emigrants from Arab countries, such as Sudan or the Palestinian territories, also make up important national groups of their countries' diaspora in other Arab states, such as the Gulf states or Saudi Arabia.

Overview
Arab expatriates contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance in-flows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. Large numbers of Arabs migrated to West Africa, particularly Côte d'Ivoire, Senegal, Sierra Leone, Liberia, and Nigeria. Since the end of the civil war in 2002, Lebanese traders have become re-established in Sierra Leone. According to Saudi Aramco World, the largest concentration of Arabs outside the Arab World is in Brazil, which has 9 million Brazilians of Arab ancestry. Of these 9 million Arabs, 6 million are of Lebanese ancestry, making Brazil's population of Lebanese equivalent to that of Lebanon itself. However, these figures are contradicted by the Brazilian Institute of Geography and Statistics (IBGE), which is the agency responsible for official collection of statistical information in Brazil. According to the 2010 Brazilian census conducted by IBGE, there were only 12,336 Lebanese nationals living in Brazil, and other Arab nationalities were so small that they were not even listed. The Brazilian census does not ask about ancestry or family origin. There is a question about nationality and, according to Brazilian law, any person born in Brazil is a Brazilian national by birth and right for any purpose, nationally or internationally – not an Arab. The last Brazilian census to ask about family origin was conducted in 1940. At that time, 107,074 Brazilians said they had a Syrian, Lebanese, Palestinian, Iraqi or Arab father. Native Arabs were 46,105 and naturalized Brazilians were 5,447. In 1940, Brazil had 41,169,321 inhabitants, hence Arabs and their children made up 0.38% of Brazil's population. There are also sizeable Arab communities in Venezuela, Colombia, Mexico and Chile. Palestinians cluster in Chile and Central America, particularly El Salvador and Honduras. The Palestinian community in Chile is the fourth largest in the world after those in Israel, Lebanon, and Jordan. Arab Haitians (a large number of whom live in the capital) are more often than not concentrated in financial areas, where the majority of them establish businesses. In the United States, there are around 3.5 million people of Arab ancestry. It has been estimated that there are as many as four million Indonesians with at least partial Arab ancestry. They are generally well integrated into Indonesian society and identify as Indonesians. In Indonesia's 2010 census, 118,886 people, amounting to 0.05% of the population, identified themselves as being of Arab ethnicity.

Notable people
Prominent members of the Arab diaspora include:

Sciences
Michael E.
DeBakey (Lebanese origin), American cardiac surgeon Ahmed Zewail (Egyptian origin), known as the "father of femtochemistry", winner of the 1999 Nobel Prize in Chemistry Riad Barmada (Syrian origin) Orthopedic surgeon, served as the president of the Illinois Orthopedic Society. Elias J. Corey (Lebanese origin), Organic chemistry professor at Harvard University, winner of the 1990 Nobel Prize in Chemistry Michael E. DeBakey (Lebanese origin), cardiovascular surgeon and researcher, 1963 Lasker Award laureate Huda Akil (Syrian origin), Neuroscientist and medical researcher Elias Zerhouni (Algerian origin), American National Institutes of Health Director Shadia Habbal, (Syrian origin), astronomer and physicist specialized in Space physics Miriam Merad, French (Algerian origin) immunologist Hunein Maassab (Syrian origin), professor of epidemiology and the inventor of the live attenuated influenza vaccine Joanne Chory (Lebanese origin), plant biologist and geneticist Anthony Atala (Lebanese origin), Director of the Wake Forest Institute for Regenerative Medicine Noureddine Melikechi (Algerian origin), Atomic, Molecular, and Optical Physicist, member of the Mars Science Laboratory Michel T. Halbouty (Lebanese origin), geologist and geophysicist; pioneer in oil field research Adah al-Mutairi (Saudi origin),inventor and scholar in nanotechnology and nanomedicine Sami J Barmada (Syrian origin) neuroscientist and medical researcher M. Amin Arnaout (Lebanese origin), nephrologist and biologist Essam Heggy (Egyptian origin), Planetary scientist Engineering/Computer Science Zaha Hadid (Iraqi origin), British architect Mohamed Atalla (Egyptian origin), engineer, inventor of MOSFET (metal-oxide-semiconductor field-effect transistor), pioneer in silicon semiconductors and security systems, founder of Atalla Corporation Charles Elachi (Lebanese origin), professor of electrical engineering and planetary science at Caltech and the former director of the Jet Propulsion Laboratory Fawwaz T. Ulaby (Syrian origin), professor of Electrical Engineering and Computer Science, former Vice President of Research for the University of Michigan; first Arab-American winner of the IEEE Edison Medal Taher ElGamal (Egyptian origin), cryptographer, inventor of the ElGamal discrete log cryptosystem and the ElGamal signature scheme Ali H. Nayfeh (Palestinian origin), mechanical engineer, the 2014 recipient of Benjamin Franklin Medal in mechanical engineering Dina Katabi,(Syrian origin), Professor of Electrical Engineering and Computer Science at MIT and the director of the MIT Wireless Center. Abbas El Gamal (Egyptian origin), electrical engineer, educator and entrepreneur, the recipient of the 2012 Claude E. Shannon Award John Makhoul (Lebanese origin), computer scientist who works in the field of speech and language processing. Jerrier A. Haddad (Syrian origin),computer engineer who worked with IBM. Oussama Khatib (Syrian origin), roboticist and a professor of computer science Elias Zerhouni (Algerian origin),former director of the National Institutes of Health Business Mohamed Al-Fayed (Egyptian origin), businessman, former owner of London's Harrods and the Ritz Palace in Paris Mohed Altrad (Syrian origin), French billionaire businessman André Apaid (Lebanese origin), high-profile Haitian businessman Nadhmi Auchi (Iraqi origin), businessman, founder and Chairman of General Mediterranean Holdings Nadiem Makarim (Yemeni origin), Arab-Indonesian businessman, co-Founder of Gojek. Mohamed A. 
El-Erian (Egyptian origin), Egyptian-French-American businessman and asset manager; former CEO of PIMCO and current chief economic adviser at Allianz Miguel Facussé Barjum (Palestinian origin), Honduran businessman and landowner Carlos Ghosn (Lebanese origin), Brazilian-French-Lebanese businessman, former CEO of Renault-Nissan Mohamed Hadid (Palestinian origin), Jordanian-American luxury real estate developer and businessman Nicolas Hayek (Lebanese origin), Swiss-Lebanese entrepreneur, co-founder, CEO and Chairman of the Board of the Swatch Group Antoine Izméry (Palestinian origin), wealthy Haitian businessman and pro-democracy activist Steve Jobs (Syrian origin), American businessman and inventor; Apple Inc. co-founder, chairman and CEO George J. Maloof, Sr. (Lebanese origin), American businessman, owner of the Houston Rockets Fredy Nasser (Palestinian origin), Honduran businessman Awadh Saleh Sherman (Yemeni origin), Kenyan businessman Carlos Slim Helú (Lebanese origin), Mexican businessman; at one time listed by Forbes as the richest man in the world Politics Alberto Abdala (Lebanese origin), Vice President of Uruguay Washington Abdala (Lebanese origin), Speaker of the Chamber of Representatives of Uruguay Ali Alatas (Yemeni origin), former Indonesian Foreign Minister Luis Abinader (Lebanese origin), President of the Dominican Republic Geraldo Alckmin (Lebanese origin), politician, Governor of São Paulo state Faris Al-Rawi (Iraqi origin), Attorney General of Trinidad and Tobago Khadija Arib (Moroccan origin), politician, Speaker of the House of Representatives (Netherlands) Jacobo Majluta Azar (Lebanese origin), former President of the Dominican Republic Najib Balala (Yemeni origin), member of parliament in Kenya Anies Baswedan (Yemeni origin), politician, Governor of Jakarta Yamina Benguigui (Algerian origin), French socialist politician Mohamed Bennouna (Moroccan origin), diplomat, jurist and judge of the International Court of Justice in The Hague since 2006 Salvador Jorge Blanco (Syrian origin), former President of the Dominican Republic Abdalá Bucaram (Lebanese origin), former President of Ecuador Alberto Dahik (Lebanese origin), former Vice President of Ecuador Rachida Dati (Moroccan-Algerian origin), former French Minister of Justice, current Member of the European Parliament and Mayor of the 7th arrondissement of Paris Tareck El Aissami (father is Syrian Druze and mother is Lebanese), Vice President of Venezuela since January 2017 Myriam El Khomri (Moroccan origin), French Minister of Labor Carlos Roberto Flores Facussé (Palestinian origin), former President of Honduras Schafik Handal (Palestinian origin), Salvadoran-born FMLN leader Gilberto Kassab (Lebanese origin), former mayor of São Paulo Hasan Kleib (Yemeni origin), politician, current Permanent Representative of the Republic of Indonesia to the United Nations, the World Trade Organization, and other international organizations in Geneva Paulo Maluf (Lebanese origin), politician, former mayor of São Paulo city and former governor of São Paulo state in Brazil Carlos Menem (Syrian origin), former President of Argentina Arnaud Montebourg (Algerian origin), politician, former French Minister of the Economy, and candidate for the Socialist Party's presidential nomination Said Musa (Palestinian origin), former Prime Minister of Belize Ralph Nader (Lebanese origin), 2004 US presidential candidate, activist for consumer rights Antonio Saca (Palestinian 
origin), former President of El Salvador Edward Seaga (Lebanese origin), former Prime Minister of Jamaica Donna Shalala (Lebanese origin), former American Secretary of Health and Human Services Michel Temer (Lebanese origin), politician, 37th President of Brazil Najat Vallaud-Belkacem (Moroccan origin), French Minister of Education, Higher Education, and Research Fashion, beauty Azzedine Alaia (Tunisian origin), fashion designer Valerie Domínguez (Lebanese origin), Miss Colombia 2005 winner and Top 10 finalist at Miss Universe 2006 Rima Fakih (Lebanese origin), Miss USA 2010 winner from Michigan Bella Hadid (Palestinian origin), American model Gigi Hadid (Palestinian origin), American model Lisa Hanna (Lebanese origin), Miss World 1993 winner from Jamaica Sabrina Houssami (Lebanese origin), Miss World 2006 (2nd runner-up) from Australia Jessica Kahawaty (Lebanese origin), Miss World 2012 (2nd runner-up) from Australia Huda Kattan (Iraqi origin), American make-up artist and cosmetic entrepreneur Hedi Slimane (Tunisian origin), fashion designer and former creative director of Yves Saint-Laurent Paola Turbay (Lebanese origin), Miss Universe 1992 (1st runner-up) from Colombia Paulina Vega (Lebanese origin), Miss Colombia 2013 Miss Universe 2014 Film, television Hiba Abouk (Libyan and Tunisian origin), Spanish actress Tige Andrews (Syrian origin), an American character actor Sofia Boutella (Algerian origin), actress, model, and dancer Abdel Raouf Dafri (Algerian origin), award-winning French director Nora Fatehi (Canadian of Moroccan origin), Indian actress and belly dancer Gad Elmaleh (Moroccan origin), Moroccan-French-Canadian humorist Mina El Hammani (Moroccan origin), Spanish actress Salma Hayek (Lebanese origin), Mexican actress Bader Ben Hirsi (Yemeni origin), award-winning British-Yemeni director Abdellatif Kechiche (Tunisian origin), French film director, actor and screenwriter; Palme d'Or laureate Hoda Kotb (Egyptian origin), American television personality Nawell Madani (Algerian origin), Belgian humorist, presenter and producer Rami Malek (Egyptian origin), American actor Mena Massoud (Egyptian origin), Egyptian-Canadian actor Elyas M'Barek (Tunisian origin), Austrian actor born in Germany Kad Merad (Algerian origin), French comedian, actor, and filmmaker Tahar Rahim (Algerian origin), award-winning French actor Omar Sharif (Egyptian origin), award-winning international actor Literature / theatre Kaouther Adimi, Algerian writer William Peter Blatty (Lebanese origin), American writer Assia Djebar, Algerian writer Kahlil Gibran (Lebanese origin), American poet, writer, artist Amin Maalouf (Lebanese origin), French author David Malouf (Lebanese origin), Australian writer Wajdi Mouawad (Lebanese origin), Canadian writer, actor and director Media and intellectuals Magdi Allam (Egyptian origin), Italian journalist Khadija Benguenna (Algerian origin), Arab TV personality Hala Gorani (Syrian origin), journalist and anchor of CNN's International Desk Casey Kasem (Lebanese origin), American radio personality. 
Tariq Ramadan (Egyptian origin), Swiss intellectual Edward Saïd (Palestinian origin), US intellectual Léa Salamé (Lebanese origin), French journalist and political commentator Music Salem Al Fakir (Syrian origin), Swedish singer Ali B (Moroccan origin), Dutch rapper Amir (Egyptian origin), Italian rapper Tarak Ben Ammar (Tunisian origin), international movie producer and distributor Paul Anka (Syrian/Lebanese origin), Canadian singer Nasri Tony Atweh (Palestinian origin), Canadian lead singer of Magic! Malika Ayane (Moroccan origin), Italian singer Bushido (Tunisian origin), German rapper DJ Snake (Algerian origin), French DJ David Guetta (Moroccan origin), French DJ R3hab (Moroccan origin), Dutch DJ Fredwreck (Palestinian origin), American record producer Ghali (Tunisian origin), Italian rapper Indila (Algerian origin), French singer DJ Khaled (Palestinian origin), American DJ L'Algérino (Algerian origin), French rapper La Fouine (Moroccan origin), French rapper Lowkey (Iraqi origin), British rapper and political activist Fady Maalouf (Lebanese origin), German singer Ibrahim Maalouf (Lebanese origin), French trumpeter Massari (Lebanese origin), Canadian singer RedOne (Moroccan origin), Swedish record producer French Montana (Moroccan origin), American rapper Natasja Saad (Sudanese origin), Danish rapper and reggae singer Eric Saade (Palestinian-Lebanese origin), Swedish singer Kareem Salama (Egyptian origin), American country singer Faouzia (Moroccan origin), Canadian singer Sarbel (Lebanese origin), Greek singer Shakira (Lebanese origin), Colombian singer Madison Beer (Moroccan origin), American singer Adel Tawil (Egyptian / Tunisian origin), German singer, songwriter and producer Tunisiano (Tunisian origin), French rapper Karl Wolf (Lebanese origin), Canadian pop star Rami Yacoub (Palestinian origin), Swedish record producer Zaho (Algerian origin), Canadian singer Maher Zain (Lebanese origin), Swedish singer Lacrim (Algerian origin), French rapper Heuss l'Enfoiré (Algerian origin), French rapper Niro (Moroccan origin), French rapper Sport Justin Abdelkader (Jordanian origin), American ice hockey player Brahim Asloum (Algerian origin), French professional boxer, Olympic champion Hatem Ben Arfa (Tunisian origin), French professional football player of Paris Saint-Germain F.C. 
Wissam Ben Yedder (Tunisian origin), French professional football player of Sevilla FC Karim Benzema (Algerian origin), French professional football player Migidio Bourifa (Moroccan origin), Italian long-distance runner Robert Farah (Lebanese origin), Colombian tennis player Naseem Hamed (Yemeni origin), also known as Prince Naseem, English professional boxer Nazem Kadri (Lebanese origin), Canadian ice hockey player Sami Khedira (Tunisian origin), midfielder in the German national team Mahiedine Mekhissi (Algerian origin), French runner and Olympic medallist Brandon Saad (Syrian origin), American ice hockey player Mário Zagallo (Lebanese origin), Brazilian football coach and former player Badr Hari Dutch-Moroccan kickboxer Yousri Belgaroui Dutch-Tunisian kickboxer and MMA fighter Chalid Arrab German-Moroccan kickboxer and MMA fighter Mohamed Mezouari Dutch-Moroccan kickboxer Nordine Mahieddine (Algerian origin) French kickboxer Billy Dib (Lebanese origin) Australian professional boxer Khalid Yafai (Yemeni origin) British professional boxer Kid Galahad (Yemeni origin), real name Abdul-Bari Awad, British professional boxer Stephan El Shaarawy (Egyptian origin) Italian football player Adil Rami (Moroccan origin) French football player, World Cup 2018 winner Nabil Fekir (Algerian origin) French football player, World Cup 2018 winner Marouane Chamakh French-Moroccan football player Oday Aboushi Palestinian-American NFL player Ryan Kalil (Lebanese origin) American NFL player Frank Kalil (Lebanese origin) American NFL player Matt Kalil (Lebanese origin) American NFL player Belal Muhammad Palestinian-American UFC fighter Wallid Ismail (Lebanese origin) Brazilian MMA fighter and founder of Jungle Fight Tarec Saffiedine (Lebanese father), Belgian UFC fighter and ex-Strikeforce champion Hakim Ziyech Dutch-Moroccan football player Sadam Ali (Yemeni origin) American professional boxer Mahmoud Charr (Syrian-Lebanese origin) German professional boxer Karim Bellarabi German-Moroccan professional football player Medhi Benatia French-Moroccan professional football player Riyad Mahrez French-Algerian professional football player Yacine Brahimi French-Algerian professional football player Dida Diafat French-Algerian kickboxer Morgan Adrar French-Moroccan Muay-Thai kickboxer Samy Sana French-Algerian Muay-Thai kickboxer Hakim Hamech French-Algerian Muay-Thai kickboxer Samir Nasri (Algerian origin) French football player Zakaria Zouggary Dutch-Moroccan kickboxer Ismaël Bennacer French-Algerian soccer player Houssem Aouar French-Algerian soccer player Djamel Bouras (Algerian origin) French judoka, Olympic champion Zinedine Zidane (Algerian origin) French soccer player, winner of the 1998 World Cup and the 1998 Ballon d'Or, and winner of four UEFA Champions League titles as a player and manager Hesdy Gerges Dutch-Egyptian kickboxer and MMA fighter Mourad Bouzidi Dutch-Tunisian kickboxer Amir Zeyada Dutch-Egyptian kickboxer Karo Murat (Iraqi origin) German professional boxer Fadi Merza (Syrian origin) Austrian Muay Thai kickboxer Zinedine Hameur-Lain French-Algerian kickboxer and Muay Thai fighter Ibrahim Afellay (Moroccan origin) Dutch professional soccer player Noussair Mazraoui Dutch-Moroccan professional soccer player Nordine Oubaali (Moroccan origin) French professional boxer Hicham El Gaoui Dutch-Moroccan kickboxer and K-1 fighter Tyjani Beztati Dutch-Moroccan kickboxer Ilias Ennahachi Dutch-Moroccan kickboxer Ottman Azaitar German-Moroccan UFC fighter and ex-Brave Combat Federation champion Rani Yahya (Syrian father) Brazilian UFC fighter 
Hacine Cherifi (Algerian origin) professional boxer Youssef Boughanem Belgian-Moroccan Muay Thai kickboxer Yassine Boughanem Belgian-Moroccan Muay Thai kickboxer Jamal Ben Saddik Belgian-Moroccan kickboxer Tarik Khbabez Dutch-Moroccan kickboxer Mustapha Haida Italian-Moroccan kickboxer Ilias Bulaid Dutch-Moroccan kickboxer and Bellator MMA fighter Mosab Amrani Dutch-Moroccan Muay Thai kickboxer Matteo Guendouzi (Moroccan origin) French soccer player Aïssa Mandi French-Algerian soccer player Oussama Idrissi Dutch-Moroccan soccer player Karim El Ahmadi Dutch-Moroccan soccer player L'houcine Ouzgni Dutch-Moroccan Kickboxer Morad Sari French-Algerian Kickboxer Religion Sheikh Ali Jaber (Saudi Arabian origin), cleric Habib Munzir (Hadrami descent), leader of Majelis Rasulullah Habib Rizieq (Hadrami descent), leader of FPI See also Algerian British Arab Americans Arab Argentines Arab Australians Arab Brazilians Arab Canadians Arab Chileans Arab Colombians Arab Haitians Arab Indonesians Arab Mexicans Arab New Zealanders Arab Singaporeans Arab Venezuelans Arabs in Austria Arabs in Bulgaria Arabs in Europe Arabs in Finland Arabs in France Arabs in Germany Arabs in Greece Arabs in India Arabs in Italy Arabs in the Netherlands Arabs in Pakistan Arabs in Romania Arabs in Serbia Arabs in Spain Arabs in Sweden Arabs in Switzerland Arabs in Turkey Arma people (Saharan Arab and Spanish) British Arabs British Iraqis Chaush (Yemenis in South India) Egyptians in the United Kingdom Emirati diaspora Hadhrami diaspora History of Arabs in Afghanistan Iranian Arabs Iraqi Biradari (Iraqis residing in India and Pakistan) Iraqi diaspora Lebanese Americans Lebanese Argentines Lebanese Australians Lebanese Brazilians Lebanese Canadians Lebanese diaspora Lebanese people in Ecuador List of Arab Americans Magyarab people (Egyptian Arab and Hungarian) Palestinian diaspora Refugees of Iraq Sri Lankan Moors Syrian Americans Yemeni Americans Yemenis in the United Kingdom References Notes Citations Further reading Niger's Arabs to fight expulsion Out of the Hadhramaut Arab Immigrants in Latin American Politics Descendants of Arabs thriving in S. America The Arrival Of The Lebanese to Jamaica "Arab roots grow deep in Brazil's rich melting pot", The Washington Times. External links International Organization for Migration - Regional Office for the Middle East The Lebanese of South Africa The Arabs of Honduras The Arabs of Brazil Lebanese Social and Cultural Community in Ireland Diaspora North African diaspora
571334
https://en.wikipedia.org/wiki/SGI%20Octane
SGI Octane
The Octane series of IRIX workstations was developed and sold by SGI from 1997 to 2004. Octane and Octane2 are two-way multiprocessing-capable workstations, originally based on the MIPS Technologies R10000 microprocessor. Newer Octanes are based on the R12000 and R14000. The Octane2 incorporates several improvements, including a revised power supply, system board, and Xbow ASIC. The Octane2 has VPro graphics and supports all the VPro cards. Later revisions of the Octane include some of the improvements introduced in the Octane2. The codenames for the Octane and Octane2 are "Racer" and "Speedracer" respectively. The Octane is the direct successor to the Indigo2 and was succeeded by the Tezro; its immediate sibling is the O2. SGI withdrew the Octane2 from the price book on May 26, 2004, and ceased Octane2 production on June 25, 2004. Support for the Octane2 ceased in June 2009. Octane III was introduced in early 2010 after SGI's bankruptcy reorganization. It is a series of Intel-based deskside systems, available as a Xeon-based workstation with one or two 3U EATX trays, or as cluster servers with 10 system trays configured with up to 10 Twin Blade nodes or 20 Intel Atom Mini-ITX nodes. Hardware The Octane's system board is designated IP30 and is based on SGI's Xtalk architecture. Xtalk does not use a system bus but rather a Crossbow application-specific integrated circuit (ASIC), referred to as Xbow, a dynamic crossbar switch that connects the XIO ports to the hub. One of the ports is used for the processor and memory subsystem, one is available for PCI-X expansion, and four are XIO slots (a packet-based high-bandwidth bus, somewhat similar to HyperTransport). This makes it very similar to a single node of the Origin 200 system. The XIO can be bridged to PCI-X using a chip named BRIDGE. This bridging includes the system board (for the IOC3 multi-I/O chip, two ISP1040B SCSI controllers and RAD1 audio), MENET cards (four IOC3s) and the PCI cage (used for PCI cards in the Octane). The Octane uses ARCS boot firmware, like all contemporary SGI computer systems. CPUs The Octane series has single- and dual-CPU modules. A second CPU cannot be added; the only option is to replace the whole CPU module. Memory The Octane supports 64 MB to 8 GB of system memory, using up to eight proprietary 200-pin DIMMs. There are two system board revisions. The first revision (part number 030-0887-003) supports 2 GB of RAM, while the second (part number 030-1467-001) allows up to 8 GB. The memory subsystem has ample bandwidth that can be served directly by the Xbow router to any XIO card. The Octane's memory controller is named HEART. It acts as a bridge between the processor, the memory (SDRAM) and the XIO bus. Graphics Graphics on the Octane are provided by a series of cards: SI, SI+T, SSI and MXI. These are updated XIO versions of the Solid Impact (SI), High Impact (SI+T) and Maximum Impact (MXI) boards from the SGI Indigo2, internally designated by SGI as MARDIGRAS. The boards were accelerated and reengineered with faster geometry engines and texture modules to create their new versions: SE, SE+T, SSE and MXE. The SI/SE provides 13.5 MB of framebuffer memory, while the SSE and MXE have a 27 MB framebuffer. The '+T' indicates an additional high-speed Rambus RDRAM-based texture board which provides 4 MB of texture memory; this board is practically indispensable, though quite expensive and fragile. 
The SI/SE+T has one texture board while the MXI/MXE has two texture boards; however, the two boards in the MXI/MXE do not double the texture memory available to the system, they only double the texture performance. Later Octanes and Octane2s support the SGI VPro graphics board series, designated 'ODYSSEY'. The first VPro series cards were the V6 and V8, the main differentiator being that the V6 has 32 MB of RAM while the V8 has 128 MB (unlike the MARDIGRAS options, framebuffer memory and texture memory come from the same pool). Later, the V10 (32 MB) and V12 (128 MB) were introduced. The main difference with the new VPro V10/V12 series is that they had double the geometry performance of the older V6/V8. V6 and V10 can have up to 8 MB of RAM allocated to textures (twice that of the texture-enabled MARDIGRAS options), while V8 and V12 can have up to 108 MB of RAM used for textures. The VPro graphics subsystem consists of an SGI proprietary chip set and associated software. The chip set consists of the buzz ASIC, the pixel blaster and jammer (PB&J) ASIC, and associated SDRAM. The buzz ASIC is a single-chip graphics pipeline. It operates at 251 MHz and contains on-chip SRAM. The buzz ASIC has three interfaces: a host interface (16-bit, 400 MHz peer-to-peer XIO link), an SDRAM interface (32 MB on the V6 or V10, or 128 MB on the V8 or V12; the memory bus operates at half the speed of the buzz ASIC) and an interface to the PB&J ASIC. As with the MARDIGRAS boards, all VPro boards support OpenGL (MARDIGRAS supports OpenGL 1.1 plus SGI extensions, while VPro upgraded support to OpenGL 1.2) and the OpenGL ARB imaging extensions, allowing for hardware acceleration of numerous imaging operations at real-time rates. Audio Audio hardware is standard; even without extensions the Octane supports low-latency (3 ms input-to-output) audio streams. Alesis ADAT 8-channel, 24-bit optical ports are built in, along with S/PDIF or AES/EBU optical and coaxial ports. Case and expandability The Octane cases are large (30x40x35 cm) and heavy (25 kg), yet there are no internal 5.25" drive bays, so external CD-ROM drives must be connected if desired. Extensions include video I/O, audio I/O, networking, real-time video compression boards, and external storage options (through SCSI, Fibre Channel or FireWire). Octanes can use standard PCI cards with the optional PCI cardcage (which provides one half-length and two full-length 5V 64-bit PCI slots), or a PCI-to-XIO adapter (known as a 'shoehorn', which provides a single 3.3/5V 64-bit PCI slot). Older Octanes can be upgraded with VPro graphics; however, the V10 and V12 graphics boards require Xbow revision 1.4 and a Cherokee power supply, while the VPro V6 and V8 require Xbow revision 1.3 and a Cherokee power supply. There is a distinct difference between the Cherokee (747 watt) and older Lucent (623 watt) power supplies: the mounting handle is silver on the Cherokee International-manufactured power supplies, while the original Lucent-manufactured ones have black handles. Octane skins come in three types. The original Octane has green skins with the original SGI 'cube' logo. The later-model Octanes have skins of the same colour as the original, but with Octane2-style lettering and logos. Octane2 systems have blue skins with the modern lowercase letter-only SGI logo. I/O subsystem The Octane series has two SCSI controllers that support Ultra Wide SCSI devices. Systems can have a maximum of three internal 3.5" SCSI SCA devices, using proprietary mounting sleds that are also compatible with the Origin 2000, Origin 200 and Onyx2. 
An external Ultra Wide SCSI port is provided for connecting additional devices. Operating systems The SGI Octane with IMPACT-class graphics was first supported by IRIX version 6.4. VPro-class graphics have been supported since IRIX version 6.5.10 for the V6 and V8, with V10 and V12 graphics supported as of 6.5.11 (or 6.5.10 with a special driver patch). Linux and OpenBSD have also been ported to the Octane. References External links SGI Octane series General Information SGI Octane series Technical Information SGI Octane series pictures Linux-MIPS IP30 page "Speedracer" Linux port status page OpenBSD/sgi port page Octane Advanced RISC Computing Computer-related introductions in 1997 64-bit computers
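The VPro upgrade requirements quoted in the hardware description above reduce to a simple compatibility check: V6 and V8 boards need Xbow revision 1.3 and a Cherokee power supply, while V10 and V12 boards need Xbow revision 1.4 and a Cherokee power supply. The following Python sketch encodes those rules, treating the quoted revisions as minimums; the table and function are illustrative only and are not part of any SGI tool.

```python
# Minimal sketch of the VPro upgrade rules quoted in the article; not an SGI utility.
# Assumption: the stated Xbow revisions are minimum revisions.

VPRO_REQUIREMENTS = {
    # board: (minimum Xbow revision, required power supply family)
    "V6":  (1.3, "Cherokee"),
    "V8":  (1.3, "Cherokee"),
    "V10": (1.4, "Cherokee"),
    "V12": (1.4, "Cherokee"),
}

def can_upgrade(board: str, xbow_revision: float, power_supply: str) -> bool:
    """Return True if an older Octane meets the stated requirements for a VPro board."""
    min_rev, required_psu = VPRO_REQUIREMENTS[board]
    return xbow_revision >= min_rev and power_supply == required_psu

# An Octane with Xbow 1.3 and a Cherokee (747 W) supply can take a V8 but not a V12.
print(can_upgrade("V8", 1.3, "Cherokee"))   # True
print(can_upgrade("V12", 1.3, "Cherokee"))  # False
```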
25981697
https://en.wikipedia.org/wiki/Cognitive%20password
Cognitive password
A cognitive password is a form of knowledge-based authentication that requires a user to answer a question, presumably something they intrinsically know, to verify their identity. Cognitive password systems have been researched for many years and are currently commonly used as a form of secondary authentication. They were developed to overcome the common memorability-versus-strength problem that exists with traditional passwords. The effectiveness of cognitive passwords, compared with other password systems, can be measured by a memorability-to-guessability ratio. History Research on passwords as an authentication method has struggled to balance memorability and strong security. Passwords that are easily remembered are easily cracked by attackers. On the other hand, strong passwords are difficult to crack but also difficult to remember. When passwords are difficult to remember, users may write them down, and the secrecy of the password is compromised. Early research into this trade-off between security and usability aimed to develop a password system that utilized easily remembered personal facts and encouraged user participation. This line of research resulted in the concept of the associative password, a password system based on user-selected cues and responses. This concept of associative passwords was extended to a pre-specified set of questions and answers that users would be expected to know and could easily recall. Empirical analysis of passwords and human cognition resulted in a recommendation that people should not be expected to remember more than four complex passwords. Building upon the idea of questions, later researchers developed a series of innovations for cognitive passwords. Passfaces used the ability to identify individuals in a social network and the particular cognitive strength of recognizing faces. Later work evaluating these cues reinforced the recommendation of four passwords as a reasonable cognitive expectation. A historical overview of the use of various cues found that the specific design and layout of the page affect memorability and strength. Later work illustrated that the inclusion of a visual cue enabled significant improvements in the trade-off between memorability and security. Cognitive questions At the core of a cognitive password system lie the cues. These can be photos of faces, newspapers, images, or other graphical or textual cues. One early method of assisting recall recommended what are now the familiar security questions. These questions were designed to be more memorable than the standard username/password authentication method. As such, a measure of the strength of a cognitive password is the memorability/guessability ratio. Question development Questions developed for cognitive password systems are classified as being either fact-based or opinion-based. Fact-based systems have questions with answers that are considered independent of an individual's feelings, such as "What is the name of the high school you attended?". Opinion-based questions are the opposite and, as the name implies, have answers based on personal opinions, such as "What is your favorite color?" Later research developed a set of criteria for question selection, which included generalized answerability, number of potential answers, and generalized lack of ambiguity. The first criterion suggested that questions should be answerable by all (i.e. not asking "When did you purchase your first home?" because not all users may have purchased homes). 
The second criterion recommended selecting questions with a sufficiently large set of potential answers (i.e. not asking "How many children do you have?" because a majority of people would answer 0, 1 or 2). One design goal is to have questions that are as unambiguous as possible (i.e. not asking "How many family members do you have?" as there may be some confusion as to who would be included in that count). For creating usable questions, one effective criterion is that they be persuasive and engaging. Older people dealing with the normal cognitive decline of aging may respond well to visual cues. Tactile interactions can make technology more accessible. Memorability vs. guessability A user's ability to correctly recall their password is expected to decrease as time progresses. However, the memorability of cognitive passwords remains relatively stable over time, with recall rates significantly higher than those of traditional passwords. When fact-based and opinion-based questions are compared, fact-based questions are more likely to be correctly remembered than opinion-based questions, and both are far more likely to be remembered than traditional passwords. Averaged as a group, cognitive questions show relatively high guessability, much higher than traditional passwords, but when analyzed individually, certain questions have been shown to have acceptable memorability/guessability ratios. Examples The following are some typical cognitive password questions: What is your mother’s maiden name? Who is your favorite superhero? What is your dog’s name? What is your car's name? What is your favorite movie? What city were you born in? What is your favorite color? What number is written on one of your pencils? Use different pencils for different passwords. References Works cited External links Visual and Cognitive Password Authentication Password authentication
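As a rough illustration of the memorability/guessability ratio discussed above, the sketch below scores a few candidate questions using hypothetical recall and guess rates. The article does not prescribe an exact formula, so a simple quotient is assumed, and the numbers attached to each question are invented for illustration only.

```python
# Hypothetical scoring of cognitive questions by a memorability/guessability ratio.
# The recall and guess rates below are made-up illustrative values, not study data.

questions = {
    # question: (recall_rate, guess_rate), both as fractions between 0 and 1
    "What city were you born in?":     (0.95, 0.30),  # fact-based, fairly guessable
    "What is your favorite color?":    (0.75, 0.25),  # opinion-based, small answer space
    "Who is your favorite superhero?": (0.70, 0.10),  # opinion-based, larger answer space
}

def memorability_guessability(recall_rate: float, guess_rate: float) -> float:
    """Higher is better: easy for the legitimate user to recall, hard for an attacker to guess."""
    return recall_rate / guess_rate

for question, (recall, guess) in questions.items():
    print(f"{question:34s} ratio = {memorability_guessability(recall, guess):.1f}")
```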
12866752
https://en.wikipedia.org/wiki/Multiven
Multiven
Multiven Group BV is a Netherlands-based provider of independent and decentralised software integrity maintenance and cyber-defence services for multivendor Internet Protocol network hardware and blockchain nodes. Its customers include large enterprises, Internet service providers, small and medium businesses, telecommunications companies, Fortune 500 companies, academia and government agencies. History Origins Multiven was founded in 2005 in Palo Alto, California by British-Nigerian technology entrepreneur Peter Alfred-Adekeye. His aim was to build a company able to maintain the integrity of the world's Internet infrastructure without a hardware, software or political agenda. Headquartered in Rotterdam, Netherlands, Multiven describes itself as the world's first and only independent provider of software management, maintenance and cyber-defence services for Internet networks. Multiven has regional sales offices in Paris, France; London, UK; and Dubai, UAE. History of Market The Internet, a short form of the word "internetwork", is an international, ubiquitous network of networks. Given the amount of data traversing it, its criticality to personal, corporate and national security cannot be over-emphasized. The Internet infrastructure comprises software-driven networked switches, routers, firewalls, servers and storage hardware that switch, route, protect and store all voice, video and text data intelligently across the world. Internet equipment manufacturers such as Cisco Systems and Hewlett-Packard have a quasi-monopoly on the multi-billion-dollar software maintenance services market for networking equipment through their own maintenance contracts and authorized third-party maintenance. Multiven positions itself as the only provider offering a one-stop alternative for businesses of all sizes. The Multiven Open Marketplace In March 2018, Multiven launched the development of the Multiven Open Marketplace, a blockchain project aiming to transition its existing business into a decentralized organisation. Multiven is developing a blockchain-based open marketplace to allow customers to buy and sell new and pre-owned information technology hardware, software and services on a peer-to-peer basis, without intermediaries, powered solely by smart contracts and one global cryptocurrency, the Multicoin. Multiven's maintenance and cybersecurity services will be accessible on the platform and can be bound to any hardware device. A portion of the Multicoin transaction fees from the Multiven Open Marketplace funds Multiven B-Fence, a program that proactively defends Bitcoin nodes, and the routers that connect them to other nodes, against centralisation cyberattacks. Controversies Antitrust lawsuit against Cisco Systems On December 1, 2008, Multiven filed a United States federal antitrust lawsuit against Cisco Systems, Inc. in an effort to open up the network maintenance services marketplace for Cisco equipment, promote competition and ensure consumer choice and value. Multiven's complaint alleges that Cisco harmed Multiven and consumers by bundling and tying bug fixes/patches and updates for its operating system software to its maintenance services (“SMARTnet”) and through a series of other illegal exclusionary and anticompetitive acts designed to maintain Cisco's alleged monopoly in the network maintenance services market for Cisco networking equipment. 
In April 2009, during a Cisco-initiated mediation, Cisco offered Multiven a few million dollars as settlement and limited software updates for its customers, but insisted that Multiven could not advertise its access to Cisco software updates. Multiven declined this offer as it still limited free and fair competition. The following month, in May 2009, Cisco filed a set of false counterclaims against Multiven and alleged that in 2006 “Multiven’s CEO or someone under his control” viewed Cisco's ‘protected’ public website 99 times and downloaded Cisco IOS five times, with a total value of $14,000, using credentials provided by a Cisco employee. Cisco Fabricated Criminal Charges against Multiven CEO Cisco went on to communicate exactly the same false allegations in its civil counterclaims to a local US prosecutor in San Jose. After several months in which US authorities prevented Peter Alfred-Adekeye's re-entry into the country to participate in Multiven v. Cisco, a special hearing in the case took place at a Canadian hotel from 18 to 20 May 2010, involving a US special master and four Cisco lawyers. On 20 May 2010, Cisco accused Alfred-Adekeye, who had filed the antitrust suit, of hacking and orchestrated his arrest by Canadian police, on the basis of a misleading US arrest warrant, while he was in the middle of testifying against Cisco in Multiven's antitrust hearing. Alfred-Adekeye was released after 28 days in custody but was forced to remain in Canada for another year before he was allowed to return to his home in Zurich, Switzerland. On May 31, 2011, Mr Justice Ronald McKinnon of the Supreme Court of British Columbia stayed the extradition, ruling that the strict standard of "extraordinary misconduct" was met by the circumstances and speaking of the "audacity of it all", of "Cisco's duplicity", and the "shocking" act of preventing someone's participation in a judicial proceeding by arresting them. False material in the US attorney's letter had misled the judge who signed the Canadian arrest warrant. "Grotesquely inflated" charges and the unjustified portrayal of Alfred-Adekeye as a Nigerian-born scam artist and flight risk had misled the Canadian judicial system further. The underlying civil case by Multiven against Cisco had been withheld from the Canadian authorities. Cisco Settles Multiven v. Cisco On July 19, 2010, Cisco settled Multiven's antitrust lawsuit by making its operating system and application software updates, bug fixes and patches available to all Multiven customers worldwide. This settlement agreement transformed Multiven into Cisco's first and only independent competitor in the multi-billion-dollar-per-year software maintenance services business. Libel Lawsuit Against Cisco On July 20, 2015, Multiven CEO Peter Alfred-Adekeye initiated a libel lawsuit against Cisco for (1) falsely claiming that ‘he or someone under his control at Multiven’ downloaded and ‘stole’ Cisco software five times in 2006 from cisco.com, (2) using this lie (a) to orchestrate his illegal arrest in Vancouver, Canada in 2010 during Multiven v. Cisco and (b) to fabricate an ‘indictment’ in August 2011 after the Supreme Court of British Columbia threw out Cisco/US Government's extradition request, and (3) continuing to knowingly propagate this falsehood to this day, with the core purpose of defaming Mr Alfred-Adekeye's character and that of Multiven, Cisco's only competitor in the global multi-billion-dollar-per-year services business. 
References External links Companies established in 2005 Companies based in Redwood City, California Online companies of the Netherlands International information technology consulting firms Blockchain entities Peer-to-peer
149766
https://en.wikipedia.org/wiki/Microsoft%20BASIC
Microsoft BASIC
Microsoft BASIC is the foundation software product of the Microsoft company and evolved into a line of BASIC interpreters adapted for many different microcomputers. It first appeared in 1975 as Altair BASIC, which was the first version of BASIC published by Microsoft as well as the first high-level programming language available for the Altair 8800 microcomputer. During the home computer craze of the late 1970s and early 1980s, Microsoft BASIC was ported to and supplied with practically every computer design. Slight variations to add support for machine-specific functions, especially graphics, led to a profusion of related designs like Commodore BASIC and Atari Microsoft BASIC. As the early home computers gave way to newer designs like the IBM Personal Computer and Apple Macintosh, BASIC was no longer as widely used, although it retained a strong following. The release of Visual Basic revived its popularity, and it remains in wide use on Microsoft Windows platforms in its most recent incarnation, Visual Basic .NET. Altair BASIC and early microcomputers The Altair BASIC interpreter was developed by Microsoft founders Paul Allen and Bill Gates using a self-made Intel 8080 emulator running on a PDP-10 minicomputer. The MS dialect is patterned on Digital Equipment Corporation's BASIC-PLUS on the PDP-11, which Gates had used in high school. The first versions supported integer math only, but Monte Davidoff convinced them that floating-point arithmetic was possible, and wrote a library which became the Microsoft Binary Format. Altair BASIC was delivered on paper tape and in its original version took 4 KB of memory. The following functions and statements were available: LIST, NEW, PRINT, INPUT, IF...THEN, FOR...NEXT, SQR, RND, SIN, LET, USR, DATA, READ, REM, CLEAR, STOP, TAB, RESTORE, ABS, END, INT, RETURN, STEP, GOTO, and GOSUB. There were no string variables in 4K BASIC, and single-precision 32-bit floating point was the only numeric type supported. Variable names consisted of one letter (A–Z) or one letter followed by one digit (0–9), thus allowing up to 286 numeric variables. For machines with more memory, the 8 KB version added 31 additional statements and support for string variables and their related operations like MID$ and string concatenation. String variables were denoted with a $ suffix, which remained in later versions of the language. Later on, Microsoft released the 12K Extended BASIC, which included double-precision 64-bit variables, IF...THEN...ELSE structures, user-defined functions, more advanced program editing commands, and descriptive error messages as opposed to error numbers. Numeric variables now had three basic types: % denoted 16-bit integers, # denoted 64-bit doubles, and ! denoted 32-bit singles; single precision was the default format, so the ! suffix is rarely seen in programs. The extended 8 KB version was then generalized into BASIC-80 (8080/85, Z80), and ported into BASIC-68 (6800), BASIC-69 (6809), and 6502-BASIC. The 6502 had somewhat less dense assembler code, and the interpreter expanded in size to just under 8K for the single-precision version, or 9K for a version using an intermediate 40-bit floating point format in place of the original 32-bit version. This new 40-bit format became the most common, as it was used on most 6502-based machines of the era. The language was also ported to the 16-bit BASIC-86 (8086/88). 
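The naming and type-suffix rules described above translate directly into a small classifier. The following Python sketch checks names against the 4K BASIC rule (one letter optionally followed by one digit, giving 26 + 26×10 = 286 numeric variables) and maps the later suffix conventions; it paraphrases the rules stated in this article and is not code from any Microsoft interpreter.

```python
import re

# Rules paraphrased from the article: 4K BASIC variable names are one letter (A-Z)
# optionally followed by one digit (0-9); Extended BASIC added type suffixes.
SUFFIX_TYPES = {"$": "string", "%": "16-bit integer",
                "#": "64-bit double", "!": "32-bit single"}

def classify(name: str) -> str:
    """Return the type implied by a variable's suffix (single precision is the default)."""
    return SUFFIX_TYPES.get(name[-1], "32-bit single (default)")

def valid_4k_name(name: str) -> bool:
    """Check the 4K BASIC rule: one letter, optionally followed by one digit."""
    return re.fullmatch(r"[A-Z][0-9]?", name) is not None

# 26 single-letter names plus 26*10 letter-digit names = 286 numeric variables.
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
print(sum(valid_4k_name(c + d) for c in letters for d in [""] + list("0123456789")))  # 286
print(classify("A$"), classify("N%"), classify("X#"), classify("Y")) 
```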
The final major release of BASIC-80 was version 5.x, which appeared in 1981 and added support for 40-character variable names, WHILE...WEND loops, dynamic string allocation, and several other features. BASIC 5.x removed the ability to crunch program lines. The core command set and syntax are the same in all implementations of Microsoft BASIC and, generally speaking, a program can be run on any version if it does not use hardware-specific features or double precision numbers (not supported in some implementations). Licenses to home computer makers After the initial success of Altair BASIC, Microsoft BASIC became the basis for a lucrative software licensing business, being ported to the majority of the numerous home and other personal computers of the 1970s and especially the 1980s, and extended along the way. Unlike the original Altair BASIC, most home computer BASICs are resident in ROM, and thus are available on the machines at power-on in the form of the characteristic "READY." prompt. Hence, Microsoft's and other variants of BASIC constitute a significant and visible part of the user interface of many home computers' rudimentary operating systems. By 1981, Microsoft BASIC was so popular that even companies that already had a BASIC licensed the language, such as IBM for its Personal Computer, and Atari, which sold both Atari Microsoft BASIC and its own Atari BASIC. IBM's Don Estridge said, "Microsoft BASIC had hundreds of thousands of users around the world. How are you going to argue with that?" Microsoft licensed similar versions to companies that competed with each other. After licensing IBM Advanced BASIC (BASICA) to IBM, for example, Microsoft licensed the compatible GW-BASIC to makers of PC clones, and also sold copies to retail customers. The company similarly licensed an Applesoft-compatible BASIC to VTech for its Laser 128 clone. Extended BASIC-80 Extended BASIC-80 was used on the Tangerine Microtan 65 and the Spectravideo SV-318 and SV-328. Known variants: NCR Basic Plus 6, released in the first quarter of 1977 for the NCR 7200 model VI data-entry terminal. The adaptation of Microsoft's Extended BASIC-80 was carried out by Marc McDonald in 1976/1977. Disk BASIC-80 MBASIC is available for CP/M-80 and ISIS-II. Also available for TEKDOS. MBASIC is a stripped-down BASIC-80 with only hardware-neutral functions. However, due to the popularity of CP/M, the great majority of Z80 machines ran MBASIC, rather than a version customized for specific hardware (TRS-80 BASIC was one of the few exceptions). Microsoft's CP/M card for the Apple II included a modified version of MBASIC that incorporated some of the graphics commands from Applesoft BASIC, such as HPLOT, but the full command set was not supported. Standalone Disk BASIC-80 The first implementation to use an 8-bit variant of the File Allocation Table was a BASIC adaptation for an Intel 8080-based NCR 7200, 7520 or 7530 data-entry terminal with 8-inch floppy disks in 1977/1978. TRS-80 Level II/III BASIC The TRS-80 computer was offered initially with an adaptation of Li-Chen Wang's Tiny BASIC (Level I BASIC); within a few months this was replaced by a port of BASIC-80 which incorporated some of Level I BASIC's command set, particularly the commands for setting graphics characters. Level II BASIC contained some of the features of Extended BASIC, although due to the need to include Level I commands such as SET and PSET, other features such as descriptive error messages still had to be left out; these were subsequently added into TRS-80 Disk BASIC. 
The TRS-80 Model 4 had a newer disk-based BASIC that utilized the BASIC-80 5.x core, which included support for 40-character variable names. Thus it was no longer possible to crunch program lines (omitting the spaces between keywords and arguments) as it had been in Level II. It was no longer necessary to reserve string space. New features included user-defined functions (DEF FN) and access to TRSDOS 6 system functions via a SYSTEM keyword. A modified version published later by OS provider Logical Systems, in the LS-DOS Version 6.3 update, added single-letter access to BASIC control functions (like LIST and EDIT) and direct access to LS-DOS supervisor calls. The program edit environment was still line-oriented. The facility available in Level II to sort arrays (CMD"O") was not available; programmers and users had to devise their own workarounds. BASIC-86 The first implementation as a standalone disk-based language system was for the Seattle Computer Products S-100 bus 8086 CPU card in 1979. It used an 8-bit FAT file system. Microsoft also offered a version of Standalone BASIC-86 for the SBC-86/12, Intel's 8086 single-board computer platform, in 1980. Texas Instruments BASIC This is the version of BASIC used on Texas Instruments' TI-99/4A computer line. Although very similar to Microsoft BASIC, TI-99/4 BASIC was not written by Microsoft, as was widely rumored. According to TI engineer H. Schuurman: 'They (in the form of Bob Greenberg of Microsoft) were contracted to develop BASIC for the SR-70 (which is also sometimes referred to as the 99/7), but the BASIC for the 99/4 was developed in-house.' TI-99/4 BASIC was based on Dartmouth BASIC and complies with the American National Standard for Minimal BASIC (ANSI X3.60-1978). 6502 BASIC Microsoft ported BASIC-80 to the 6502 during the summer of 1976; it was mostly a straight port of the 8K version of BASIC-80 and included the same prompts asking for memory size and whether the user wanted floating-point functions enabled (having them active used an extra 135 bytes of memory). The earliest machines to use 6502 BASIC were the OSI Model 500 and KIM-1 in 1977. 6502 BASIC included certain features from Extended BASIC such as user-defined functions and descriptive error messages, but omitted other features like double-precision variables and the PRINT USING statement. As compensation for not having double-precision variables, Microsoft included 40-bit floating point support instead of BASIC-80's 32-bit floating point, and string allocation was dynamic (thus the user did not have to reserve string space as in BASIC-80). However, vendors could still request BASIC with 32-bit floating point for a slightly smaller memory footprint; as one example, Disk BASIC for the Atari 8-bits used 32-bit floating point rather than 40-bit. Standard features of the 9K version of Microsoft 6502 BASIC included: a GET statement to detect a key press; line crunching, meaning program lines do not require any spaces except between the line number and the statement; support for only string, single-precision, and integer (arrays only) variable types; long variable names accepted but with only the first two characters recognized (illustrated in the short sketch at the end of this article); and dynamic string allocation. 6502 BASIC lacked a standardized set of commands for disk and printer output; these were up to the vendor to add and varied widely with each implementation. Later implementations of 6502 BASIC (1983–) were improved in many aspects. 
While early Commodore machines (VIC-20, C64) had a BASIC very close to 6502 MS BASIC, later Commodore 8-bit machines (the C=264 series, PET and C=128, with BASIC V3.5, V4.0 and V7.0 respectively) had numerous improvements to make BASIC more useful and user friendly: Disk commands (DIRECTORY, DSAVE, DLOAD, BACKUP, HEADER, SCRATCH, COLLECT, DVERIFY, COPY, DELETE, RENAME, etc.) Graphics commands (CIRCLE, DRAW, BOX, COLOR (of background, border, etc.), PAINT, SCALE) Graphics block copy and logical operation with the existing graphical screen (SSHAPE and GSHAPE with OR, AND, XOR, etc.) Sprite definition, displaying and animation commands on C128, even saving sprites to binaries Sound commands (VOL, SOUND), later on at C=128 Music commands (ADSR and SID filter programming (ENVELOPE and FILTER), PLAY, TEMPO commands) Signs of more structured programming: IF–THEN–ELSE, DO–LOOP–WHILE/UNTIL–EXIT, ON–GOSUB Extended I/O commands for special features: JOY, Function keys Debugging commands: STOP, CONT, TRON, TROFF, RESUME Extended handling of character screen: WINDOW Support easier program development: RENUMBER, NEW, MONITOR, RREG 6502 BASIC also appeared in 1983 on the Spectravideo CompuMate, which ran on the Atari 2600's MOS Technology 6507 CPU. BASIC-68 and BASIC-69 Microsoft catalogs from the 1980s also showed the availability of BASIC-68 and BASIC-69 for the Motorola 6800 and 6809 microprocessors respectively, running the FLEX operating system, and also mentioned OEM versions for Perkin-Elmer, Ohio Nuclear, Pertec and Societe Occitane d'Electronique systems. It seems likely that this was also the basis for the Microsoft/Epson BASIC in the Epson HX-20 portable computer, which has two Hitachi 6301 CPUs, essentially a "souped up" 6801. Most of the core features in BASIC-68 and BASIC-69 were copied directly from BASIC-80. BASIC-69 was notably also licensed to Tandy, where it formed the nucleus of Color BASIC on the TRS-80 Color Computer. It is not to be confused with BASIC09, a very different BASIC created by Microware as the main language for its OS-9, the other OS available on the Color Computer (Microware also wrote version 2.0 of Extended Color BASIC when Microsoft refused to do it). Microsoft BASIC was also included in the Dragon 32 / 64 computers that were built in Wales and enjoyed some limited success in the UK home computer market in the early 1980s. Dragon computers were somewhat compatible with the Tandy TRS-80 Color Computer, as they were built on very similar hardware. MSX Microsoft produced a ROM-based MSX BASIC for use in MSX home computers, which used a Z80 processor. This version supported the graphics and sound systems of the MSX computers; some variants also had support for disk drives. Modern descendants No variety of Microsoft BASIC (BASICA, GW-BASIC, QuickBasic, QBasic) is currently distributed with Microsoft Windows or DOS. However, versions that will still run on modern machines can be downloaded from various Internet sites or be found on old DOS disks. The latest incarnation of Microsoft BASIC is Visual Basic .NET, which incorporates some features from C++ and C# and can be used to develop Web forms, Windows forms, console applications and server-based applications. Most .NET code samples are presented in VB.NET as well as C#, and VB.NET continues to be favored by former Visual Basic programmers. In October 2008, Microsoft released Small Basic. The language has only 14 keywords. 
Small Basic Version 1.0 (12 June 2011) was released with an updated Microsoft MSDN Web site that included a full teacher curriculum, a Getting Started Guide, and several e-books. Small Basic exists to help students as young as age eight learn the foundations of computer programming and then graduate to Visual Basic via the downloadable software, Visual Studio Express, where they can continue to build on the foundation by learning Visual C#, VB.NET, and Visual C++. Variants and derivatives of Microsoft BASIC Altair BASIC (MITS Altair and other S-100 computers) Amiga BASIC (Commodore Amiga family) Applesoft BASIC (Apple II family) Atari Microsoft BASIC I and II (Atari 8-bit family) BASICA ("BASIC Advanced") (PC DOS, on IBM PC) Color BASIC (TRS-80 Color Computer) Commodore BASIC (Commodore 8-bit family, incl C64) Oric Extended Basic (Oric 8-bit family) Color BASIC and Disk Extended Color BASIC (TRS-80 Color Computer and Dragon 32/64) IBM Cassette BASIC (Original IBM PC, built into ROM) Galaksija BASIC (Galaksija home computer) GW-BASIC (BASICA for MS-DOS, on PC compatibles) Microsoft Level III BASIC (Tandy/Radio-Shack TRS-80) Basic 1.0 (Thomson computer family) MBASIC (CP/M, on 8080/85 and Z80 based computers) MS BASIC for Macintosh (Mac OS on Apple Macintosh) MSX BASIC (MSX standard home computers) N88-BASIC (NEC PC8801/9801) N82-BASIC (NEC PC-8201/8201A, TRS-80 Model 100) QBasic (PC DOS/MS-DOS on IBM PC and compatibles) QuickBASIC (PC MS-DOS on IBM PC and compatibles) Small Basic (MS Windows on IBM PC and compatibles) TRS-80 Level II BASIC (Tandy/Radio-Shack TRS-80) T-BASIC (Toshiba Pasopia) and T-BASIC7 (Toshiba Pasopia 7) Visual Basic (classic and .NET) (PC DOS/MS-DOS/MS Windows on IBM PC and compatibles) Video Technology Basic (Laser 350/500/700) WordBasic (pre-VBA) (MS Windows) HP2640 HP2647 Programmable Terminal with AGL graphics extensions FreeBASIC a free clone of the QuickBasic system Gambas free implementation inspired by Visual Basic See also Locomotive BASIC Atari BASIC Integer BASIC Tiny BASIC BBC BASIC Open Letter to Hobbyists Notes References External links Bill Gates’ Personal Easter Eggs in 8 Bit BASIC BASIC BASIC BASIC programming language family Computer-related introductions in 1975 Programming languages created in 1975
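As noted in the 6502 BASIC section above, long variable names were accepted but only the first two characters were significant. The short Python sketch below shows how, under that rule, distinct-looking names can collide; it is a conceptual illustration of the behaviour described in this article, not code from any Microsoft interpreter.

```python
# Conceptual illustration of two-character variable-name significance in 6502 BASIC:
# names are accepted at any length, but only the first two characters identify the slot.

def storage_key(name: str) -> str:
    """Reduce a variable name to the two characters the interpreter actually compares."""
    return name.upper()[:2]

variables = {}
variables[storage_key("TOTAL")] = 100    # stored under "TO"
variables[storage_key("TORQUE")] = 7     # also "TO" -- silently overwrites TOTAL

print(storage_key("TOTAL"), storage_key("TORQUE"))  # TO TO
print(variables)                                    # {'TO': 7}
```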
59337407
https://en.wikipedia.org/wiki/List%20of%20International%20Organization%20for%20Standardization%20standards%2C%2016000-17999
List of International Organization for Standardization standards, 16000-17999
This is a list of published International Organization for Standardization (ISO) standards and other deliverables. For a complete and up-to-date list of all the ISO standards, see the ISO catalogue. The standards are protected by copyright and most of them must be purchased. However, about 300 of the standards produced by ISO and IEC's Joint Technical Committee 1 (JTC1) have been made freely and publicly available. ISO 16000 – ISO 16999 ISO/TR 16015:2003 Geometrical product specifications (GPS) - Systematic errors and contributions to measurement uncertainty of length measurement due to thermal influences ISO 16016:2016 Technical product documentation – Protection notices for restricting the use of documents and products ISO 16020:2005 Steel for the reinforcement and prestressing of concrete – Vocabulary ISO/IEC 16022:2006 Information technology - Automatic identification and data capture techniques - Data Matrix bar code symbology specification ISO/IEC 16023:2000 Information technology - International symbology specification - MaxiCode ISO 16032:2004 Acoustics – Measurement of sound pressure level from service equipment in buildings – Engineering method ISO 16034:2002 Ophthalmic optics – Specifications for single-vision ready-to-wear near- vision spectacles ISO 16039:2004 Road construction and maintenance equipment – Slipform pavers – Definitions and commercial specifications ISO 16054:2000 Implants for surgery – Minimum data sets for surgical implants ISO/TR 16056 Health informatics - Interoperability of telehealth systems and networks ISO/TR 16056-1:2004 Part 1: Introduction and definitions ISO/TR 16056-2:2004 Part 2: Real-time systems ISO/TS 16058:2004 Health informatics – Interoperability of telelearning systems ISO 16059:2007 Dentistry - Required elements for codification used in data exchange ISO 16061:2015 Instrumentation for use in association with non-active surgical implants – General requirements ISO 16063 Methods for the calibration of vibration and shock transducers ISO 16063-1:1998 Part 1: Basic concepts ISO 16063-11:1999 Part 11: Primary vibration calibration by laser interferometry ISO 16063-12:2002 Part 12: Primary vibration calibration by the reciprocity method ISO 16063-13:2001 Part 13: Primary shock calibration using laser interferometry ISO 16063-15:2006 Part 15: Primary angular vibration calibration by laser interferometry ISO 16063-16:2014 Part 16: Calibration by Earth's gravitation ISO 16063-17:2016 Part 17: Primary calibration by centrifuge ISO 16063-21:2003 Part 21: Vibration calibration by comparison to a reference transducer ISO 16063-22:2005 Part 22: Shock calibration by comparison to a reference transducer ISO 16063-31:2009 Part 31: Testing of transverse vibration sensitivity ISO 16063-32:2016 Part 32: Resonance testing – Testing the frequency and the phase response of accelerometers by means of shock excitation ISO 16063-33:2017 Part 33: Testing of magnetic field sensitivity ISO 16063-41:2011 Part 41: Calibration of laser vibrometers ISO 16063-42:2014 Part 42: Calibration of seismometers with high accuracy using acceleration of gravity ISO 16063-43:2015 Part 43: Calibration of accelerometers by model-based parameter identification ISO 16063-45:2017 Part 45: In-situ calibration of transducers with built in calibration coil ISO 16069:2004 Graphical symbols - Safety signs - Safety way guidance systems (SWGS) ISO/IEC 16085:2006 Systems and software engineering - Life cycle processes - Risk management ISO 16087:2013 Implants for surgery – Roentgen 
stereophotogrammetric analysis for the assessment of migration of orthopaedic implants ISO 16103:2005 Packaging - Transport packaging for dangerous goods - Recycled plastics material ISO 16106:2006 Packaging - Transport packages for dangerous goods - Dangerous goods packagings, intermediate bulk containers (IBCs) and large packagings—Guidelines for the application of ISO 9001 ISO 16134 Earthquake- and subsidence-resistant design of ductile iron pipelines ISO 16140 Microbiology of the food chain – Method validation ISO 16140-1:2016 Part 1: Vocabulary ISO 16140-2:2016 Part 2: Protocol for the validation of alternative (proprietary) methods against a reference method ISO 16142 Medical devices – Recognized essential principles of safety and performance of medical devices ISO 16142-1:2016 Part 1: General essential principles and additional specific essential principles for all non-IVD medical devices and guidance on the selection of standards ISO 16142-2:2017 Part 2: General essential principles and additional specific essential principles for all IVD medical devices and guidance on the selection of standards ISO/TR 16153:2004 Piston-operated volumetric instruments – Determination of uncertainty for volume measurements made using the photometric method ISO 16155:2006 Ships and marine technology – Computer applications – Shipboard loading instruments ISO 16165:2013 Ships and marine technology - Marine environment protection - Terminology relating to oil spill response ISO/IEC TR 16166:2010 Information technology – Telecommunications and information exchange between systems – Next Generation Corporate Networks (NGCN) – Security of session-based communications ISO/IEC TR 16167:2011 Information technology – Telecommunications and information exchange between systems – Next Generation Corporate Networks (NGCN) – Emergency calls ISO 16175 Information and documentation - Principles and functional requirements for records in electronic office environments ISO 16175-1:2010 Part 1: Overview and statement of principles ISO 16175-2:2011 Part 2: Guidelines and functional requirements for digital records management systems ISO 16175-3:2010 Part 3: Guidelines and functional requirements for records in business systems ISO 16192:2017 Space systems - Experience gained in space projects (lessons learned) - Principles and guidelines ISO/TS 16195:2013 Nanotechnologies – Guidance for developing representative test materials consisting of nano-objects in dry powder form ISO/TR 16196:2016 Nanotechnologies – Compilation and description of sample preparation and dosing methods for engineered and manufactured nanomaterials ISO/TR 16197:2014 Nanotechnologies – Compilation and description of toxicological screening methods for manufactured nanomaterials ISO 16212:2017 Cosmetics – Microbiology – Enumeration of yeast and mould ISO/TR 16218:2013 Packaging and the environment - Processes for chemical recovery ISO 16232 Road vehicles – Cleanliness of components of fluid circuits ISO 16232-1:2007 Part 1: Vocabulary ISO 16232-2:2007 Part 2: Method of extraction of contaminants by agitation ISO 16232-3:2007 Part 3: Method of extraction of contaminants by pressure rinsing ISO 16232-4:2007 Part 4: Method of extraction of contaminants by ultrasonic techniques ISO 16232-5:2007 Part 5: Method of extraction of contaminants on functional test bench ISO 16232-6:2007 Part 6: Particle mass determination by gravimetric analysis ISO 16232-7:2007 Part 7: Particle sizing and counting by microscopic analysis ISO 16232-8:2007 Part 8: Particle 
nature determination by microscopic analysis ISO 16232-9:2007 Part 9: Particle sizing and counting by automatic light extinction particle counter ISO 16232-10:2007 Part 10: Expression of results ISO 16239:2013 Metric series wires for measuring screw threads ISO 16245:2009 Information and documentation - Boxes, file covers and other enclosures, made from cellulosic materials, for storage of paper and parchment documents ISO 16249:2013 Springs – Symbols ISO 16254:2016 Acoustics – Measurement of sound emitted by road vehicles of category M and N at standstill and low speed operation – Engineering method ISO/IEC 16262:2011 Information technology – Programming languages, their environments and system software interfaces – ECMAScript language specification ISO 16269 Statistical interpretation of data ISO 16269-4:2010 Part 4: Detection and treatment of outliers ISO 16269-6:2014 Part 6: Determination of statistical tolerance intervals ISO 16269-7:2001 Part 7: Median - Estimation and confidence intervals ISO 16269-8:2004 Part 8: Determination of prediction intervals ISO/TS 16277 Health informatics – Categorial structures of clinical findings in traditional medicine ISO/TS 16277-1:2015 Part 1: Traditional Chinese, Japanese and Korean medicine ISO 16278:2016 Health informatics – Categorial structure for terminological systems of human anatomy ISO 16284:2006 Ophthalmic optics – Information interchange for ophthalmic optical equipment ISO/TR 16310:2014 Symbol libraries for construction and facilities management ISO/IEC 16317:2011 Information technology – Telecommunications and information exchange between systems – proxZzzy for sleeping hosts ISO/IEC/IEEE 16326:2009 Systems and software engineering - Life cycle processes - Project management ISO 16331 Optics and optical instruments – Laboratory procedures for testing surveying and construction instruments ISO 16331-1:2017 Part 1: Performance of handheld laser distance meters ISO 16336:2014 Applications of statistical and related methods to new technology and product development process - Robust parameter design (RPD) ISO/IEC 16350:2015 Information technology - Systems and software engineering - Application management ISO/IEC 16353:2011 Information technology – Telecommunications and information exchange between systems – Front-end configuration command for NFC-WI (NFC-FEC) ISO 16355 Applications of statistical and related methods to new technology and product development process ISO 16355-1:2015 Part 1: General principles and perspectives of Quality Function Deployment (QFD) ISO 16355-2:2017 Part 2: Non-quantitative approaches for the acquisition of voice of customer and voice of stakeholder ISO 16355-4:2017 Part 4: Analysis of non-quantitative and quantitative Voice of Customer and Voice of Stakeholder ISO 16355-5:2017 Part 5: Solution strategy ISO/TR 16355-8:2017 Part 8: Guidelines for commercialization and life cycle ISO/TR 16379:2014 Tissue-engineered medical products – Evaluation of anisotropic structure of articular cartilage using DT (Diffusion Tensor)-MR Imaging ISO/IEC 16382:2000 Information technology – Data interchange on 12,7 mm 208-track magnetic tape cartridges – DLT 6 format ISO/IEC 16388:2007 Information technology - Automatic identification and data capture techniques - Code 39 bar code symbology specification ISO/IEC 16390:2007 Information technology - Automatic identification and data capture techniques - Interleaved 2 of 5 bar code symbology specification ISO/TS 16401 Electronic fee collection – Evaluation of equipment for
conformity to ISO/TS 17575-2 ISO/TS 16401-1:2012 Part 1: Test suite structure and test purposes ISO/TS 16401-2:2012 Part 2: Abstract test suite ISO 16402:2008 Implants for surgery – Acrylic resin cement – Flexural fatigue testing of acrylic resin cements used in orthopaedics ISO/TS 16407 Electronic fee collection – Evaluation of equipment for conformity to ISO/TS 17575-1 ISO/TS 16407-1:2011 Part 1: Test suite structure and test purposes ISO/TS 16407-2:2012 Part 2: Abstract test suite ISO/TS 16410 Electronic fee collection – Evaluation of equipment for conformity to ISO/TS 17575-3 ISO/TS 16410-1:2011 Part 1: Test suite structure and test purposes ISO/TS 16410-2:2012 Part 2: Abstract test suite ISO 16413:2013 Evaluation of thickness, density and interface width of thin films by X-ray reflectometry – Instrumental requirements, alignment and positioning, data collection, data analysis and reporting ISO 16428:2005 Implants for surgery – Test solutions and environmental conditions for static and dynamic corrosion tests on implantable materials and medical devices ISO 16429:2004 Implants for surgery – Measurements of open-circuit potential to assess corrosion behaviour of metallic implantable materials and medical devices over extended time periods ISO 16437:2012 Ships and marine technology - Lifesaving and fire protection - Atmospheric oil mist detectors for ships ISO 16439:2014 Information and documentation - Methods and procedures for assessing the impact of libraries ISO 16443:2014 Dentistry - Vocabulary for dental implants systems and related procedure ISO/IEC 16448:2002 Information technology - 120 mm DVD - Read-only disk ISO/IEC 16449:2002 Information technology - 80 mm DVD - Read-only disk ISO 16457:2014 Space systems – Space environment (natural and artificial) – The Earth's ionosphere model: international reference ionosphere (IRI) model and extensions to the plasmasphere ISO/TS 16460:2016 Intelligent transport systems – Communications access for land mobiles (CALM) – Communication protocol messages for global usage ISO/IEC 16480:2015 Information technology - Automatic identification and data capture techniques - Reading and display of ORM by mobile devices ISO 16484 Building automation and control systems (BACS) ISO 16484-1:2010 Part 1: Project specification and implementation ISO 16484-2:2004 Part 2: Hardware ISO 16484-3:2005 Part 3: Functions ISO 16484-5:2017 Part 5: Data communication protocol ISO 16484-6:2014 Part 6: Data communication conformance testing ISO/IEC 16485:2000 Information technology - Mixed Raster Content (MRC) ISO 16495:2013 Packaging - Transport packaging for dangerous goods - Test methods ISO/IEC 16500 Information technology – Generic digital audio-visual systems ISO/IEC 16500-1:1999 Part 1: System reference models and scenarios ISO/IEC 16500-2:1999 Part 2: System dynamics, scenarios and protocol requirements ISO/IEC 16500-3:1999 Part 3: Contours: Technology domain ISO/IEC 16500-4:1999 Part 4: Lower-layer protocols and physical interfaces ISO/IEC 16500-5:1999 Part 5: High and mid-layer protocols ISO/IEC 16500-6:1999 Part 6: Information representation ISO/IEC 16500-7:1999 Part 7: Basic security tools ISO/IEC 16500-8:1999 Part 8: Management architecture and protocols ISO/IEC 16500-9:1999 Part 9: Usage information protocols ISO/IEC TR 16501:1999 Information technology - Generic digital audio-visual systems - Technical Report on ISO/IEC 16500 - Description of digital audio-visual functionalities ISO/IEC 16504:2011 Information technology – Telecommunications and 
information exchange between systems – MAC and PHY for operation in TV white space ISO/IEC 16509:1999 Information technology - Year 2000 terminology ISO/IEC 16512 Information technology – Relayed multicast protocol ISO/IEC 16512-1:2016 Information technology – Relayed Multicast Control Protocol (RMCP) – Framework ISO/IEC 16512-2:2016 Information technology – Relayed multicast protocol: Specification for simplex group applications ISO/IEC 16513:2005 Information technology – Group management protocol ISO/HL7 16527:2016 Health informatics – HL7 Personal Health Record System Functional Model, Release 1 (PHRS FM) ISO 16548:2012 Ships and marine technology - Ship design - General guidance on emergency towing procedures ISO 16549 Textiles – Unevenness of textile strands – Capacitance method ISO/TS 16550:2014 Nanotechnologies – Determination of silver nanoparticles potency by release of muramic acid from Staphylococcus aureus ISO 16559:2014 Solid biofuels – Terminology, definitions and descriptions ISO 16571:2014 Systems for evacuation of plume generated by medical devices ISO 16587:2004 Mechanical vibration and shock – Performance parameters for condition monitoring of structures ISO 16589 Rotary shaft lip-type seals incorporating thermoplastic sealing elements ISO 16589-2:2011 Part 2: Vocabulary ISO 16609:2012 Financial services – Requirements for message authentication using symmetric techniques ISO 16610 Geometrical product specifications (GPS) – Filtration ISO 16612 Graphic technology - Variable printing data exchange ISO 16612-1:2005 Part 1: Using PPML 2.1 and PDF 1.4 (PPML/VDX-2005) ISO 16612-2:2010 Part 2: Using PDF/X-4 and PDF/X-5 (PDF/VT-1 and PDF/VT-2) ISO 16613 Graphic technology - Variable content replacement ISO 16613-1:2017 Part 1: Using PDF/X for variable content replacement (PDF/VCR-1) ISO 16622:2002 Meteorology – Sonic anemometers/thermometers – Acceptance test methods for mean wind measurements ISO 16628:2008 Tracheobronchial tubes – Sizing and marking ISO 16638 Radiological protection - Monitoring and internal dosimetry for specific materials ISO 16638-1:2015 Part 1: Inhalation of uranium compounds ISO 16641:2014 Measurement of radioactivity in the environment - Air - Radon 220: Integrated measurement methods for the determination of the average activity concentration using passive solid-state nuclear track detectors ISO 16642:2017 Computer applications in terminology — Terminological markup framework ISO 16649 Microbiology of the food chain – Horizontal method for the enumeration of beta-glucuronidase-positive Escherichia coli ISO 16649-1:2001 Part 1: Colony-count technique at 44 degrees C using membranes and 5-bromo-4-chloro-3-indolyl beta-D-glucuronide ISO 16649-2:2001 Part 2: Colony-count technique at 44 degrees C using 5-bromo-4-chloro-3-indolyl beta-D-glucuronide ISO 16649-3:2015 Part 3: Detection and most probable number technique using 5-bromo-4-chloro-3-indolyl-ß-D-glucuronide ISO 16654:2001 Microbiology of food and animal feeding stuffs – Horizontal method for the detection of Escherichia coli O157 ISO 16671:2015 Ophthalmic implants – Irrigating solutions for ophthalmic surgery ISO 16672:2015 Ophthalmic implants – Ocular endotamponades ISO 16682:2015 Aerospace series – Terminology for clamping devices ISO 16684 Graphic technology - Extensible metadata platform (XMP) specification ISO 16684-1:2012 Part 1: Data model, serialization and core properties ISO 16684-2:2014 Part 2: Description of XMP schemas using RELAX NG ISO/TR 16705:2016 Statistical methods for 
implementation of Six Sigma - Selected illustrations of contingency table analysis ISO 16706:2016 Ships and marine technology - Marine evacuation systems - Load calculations and testing ISO 16707:2016 Ships and marine technology - Marine evacuation systems - Determination of capacity ISO 16744:2003 Dentistry — Base metal materials for fixed dental restorations [Withdrawn: replaced with ISO 22674] ISO 16750 Road vehicles – Environmental conditions and testing for electrical and electronic equipment ISO 16760:2014 Graphic technology - Prepress data exchange - Preparation and visualization of RGB images to be used in RGB-based graphics arts workflows ISO/TR 16764:2003 Lifts, escalators and passenger conveyors – Comparison of worldwide standards on electromagnetic interference/electromagnetic compatibility ISO/TS 16785:2014 Electronic Fee Collection (EFC) – Interface definition between DSRC-OBE and external in-vehicle devices ISO/TR 16786:2015 Intelligent transport systems – The use of simulation models for evaluation of traffic management systems – Input parameters and reporting template for simulation of traffic signal control systems ISO 16787:2016 Intelligent transport systems – Assisted Parking System (APS) – Performance requirements and test procedures ISO/TS 16791:2014 Health informatics – Requirements for international machine-readable coding of medicinal product package identifiers ISO 16792:2015 Technical product documentation - Digital product definition data practices ISO 16818:2008 Building environment design – Energy efficiency – Terminology ISO/IEC 16824:1999 Information technology - 120 mm DVD rewritable disk (DVD-RAM) ISO/IEC 16825:1999 Information technology - Case for 120 mm DVD-RAM disks ISO 16832:2006 Acoustics – Loudness scaling by means of categories ISO 16840 Wheelchair seating ISO 16840-1:2006 Part 1: Vocabulary, reference axis convention and measures for body segments, posture and postural support surfaces ISO/TS 16843 Health informatics – Categorial structures for representation of acupuncture ISO/TS 16843-1:2016 Part 1: Acupuncture points ISO/TS 16843-2:2015 Part 2: Needling ISO 16894:2009 Wood-based panels – Oriented strand board (OSB) – Definitions, classification and specifications ISO 16902 Hydraulic fluid power – Test code for the determination of sound power levels of pumps using sound intensity techniques: Engineering method ISO 16902-1:2003 Part 1: Pumps ISO 16919:2014 Space data and information transfer systems – Requirements for bodies providing audit and certification of candidate trustworthy digital repositories ISO/TS 16949 Quality management systems – Particular requirements for the application of ISO 9001:2008 for automotive production and relevant service part organizations [Withdrawn: replaced by IATF 16949:2016] ISO/TS 16951:2004 Road vehicles – Ergonomic aspects of transport information and control systems (TICS) – Procedures for determining priority of on-board messages presented to drivers ISO/TS 16955:2016 Prosthetics – Quantification of physical parameters of ankle foot devices and foot units ISO/IEC 16963:2017 Information technology - Digitally recorded media for information interchange and storage - Test method for the estimation of lifetime of optical disks for long-term data storage ISO/IEC 16969:1999 Information technology - Data interchange on 120 mm optical disk cartridges using +RW format - Capacity: 3,0 Gbytes and 6,0 Gbytes ISO 16971:2015 Ophthalmic instruments – Optical coherence tomograph for the posterior segment of the human eye 
ISO/TR 16982:2002 Ergonomics of human-system interaction – Usability methods supporting human-centred design ISO 17000 – ISO 17999 ISO/IEC 17000:2004 Conformity assessment - Vocabulary and general principles ISO/IEC 17007:2009 Conformity assessment – Guidance for drafting normative documents suitable for use for conformity assessment ISO/IEC 17011:2017 Conformity assessment – Requirements for accreditation bodies accrediting conformity assessment bodies ISO/IEC 17020:2012 Conformity assessment—Requirements for the operation of various types of bodies performing inspection ISO/IEC 17021:2011 Conformity assessment—Requirements for bodies providing audit and certification of management systems ISO/IEC 17021-1:2015 Part 1: Requirements ISO/IEC 17021-2:2016 Part 2: Competence requirements for auditing and certification of environmental management systems ISO/IEC 17021-3:2017 Part 3: Competence requirements for auditing and certification of quality management systems ISO/IEC TS 17021-4:2013 Part 4: Competence requirements for auditing and certification of event sustainability management systems ISO/IEC TS 17021-5:2014 Part 5: Competence requirements for auditing and certification of asset management systems ISO/IEC TS 17021-6:2014 Part 6: Competence requirements for auditing and certification of business continuity management systems ISO/IEC TS 17021-7:2014 Part 7: Competence requirements for auditing and certification of road traffic safety management systems ISO/IEC TS 17021-9:2016 Part 9: Competence requirements for auditing and certification of anti-bribery management systems ISO/IEC TS 17022:2012 Conformity assessment—Requirements and recommendations for content of a third-party audit report on management systems ISO/IEC TS 17023:2013 Conformity assessment—Guidelines for determining the duration of management system certification audits ISO/IEC 17024:2012 Conformity assessment – General requirements for bodies operating certification of persons ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories ISO/IEC TR 17026:2015 Conformity assessment – Example of a certification scheme for tangible products ISO/IEC TS 17027:2014 Conformity assessment – Vocabulary related to competence of persons used for certification of persons ISO/IEC TR 17028:2017 Conformity assessment – Guidelines and examples of a certification scheme for services ISO/IEC 17029:2019 Conformity assessment — General principles and requirements for validation and verification bodies ISO/IEC 17030:2003 Conformity assessment – General requirements for third-party marks of conformity ISO/IEC 17033:2019 Ethical claims and supporting information - Principles and requirements ISO 17034:2016 General requirements for the competence of reference material producers ISO/IEC 17040:2005 Conformity assessment – General requirements for peer assessment of conformity assessment bodies and accreditation bodies ISO/IEC 17043:2010 Conformity assessment – General requirements for proficiency testing ISO 17049:2013 Accessible design - Application of braille on signage, equipment and appliances ISO/IEC 17050:2004 Conformity assessment — Supplier's declaration of conformity ISO/IEC 17050-1:2004 Part 1: General requirements (reviewed and confirmed in 2020) ISO/IEC 17050-2:2004 Part 2: Supporting documentation (reviewed and confirmed in 2020) ISO/IEC 17065:2012 Conformity assessment—Requirements for bodies certifying products, processes and services ISO 17066:2007 Hydraulic tools - Vocabulary ISO/IEC 17067:2013 
Conformity assessment – Fundamentals of product certification and guidelines for product certification schemes ISO/TR 17068:2012 Information and documentation - Trusted third party repository for digital records ISO 17075 Leather - Chemical determination of chromium(VI) content in leather ISO 17075-1:2017 Part 1: Colorimetric method ISO 17075-2:2017 Part 2: Chromatographic method ISO 17080:2005 Manually portable agricultural and forestry machines and powered lawn and garden equipment - Design principles for single-panel product safety labels ISO 17088:2012 Specifications for compostable plastics ISO 17089 Measurement of fluid flow in closed conduits – Ultrasonic meters for gas ISO 17089-1:2010 Part 1: Meters for custody transfer and allocation measurement ISO 17089-2:2012 Part 2: Meters for industrial applications ISO 17090 Health informatics – Public key infrastructure ISO 17090-1:2013 Part 1: Overview of digital certificate services ISO 17090-2:2015 Part 2: Certificate profile ISO 17090-3:2008 Part 3: Policy management of certification authority ISO 17090-4:2014 Part 4: Digital Signatures for healthcare documents ISO 17090-5:2017 Part 5: Authentication using Healthcare PKI credentials ISO/TR 17098:2013 Packaging material recycling - Report on substances and materials which may impede recycling ISO 17100:2015 Translation services—Requirements for translation services ISO 17115:2007 Health informatics - Vocabulary of compositional terminological systems ISO/TS 17117:2002 Health informatics - Controlled health terminology - Structure and high-level indicators ISO 17117 Health informatics — Terminological resources ISO 17117-1:2018 Part 1: Characteristics ISO/TR 17119:2005 Health informatics – Health informatics profiling framework ISO 17123 Optics and optical instruments – Field procedures for testing geodetic and surveying instruments ISO 17123-1:2014 Part 1: Theory ISO 17123-2:2001 Part 2: Levels ISO 17123-3:2001 Part 3: Theodolites ISO 17123-4:2012 Part 4: Electro-optical distance meters (EDM measurements to reflectors) ISO 17123-5:2012 Part 5: Total stations ISO 17123-6:2012 Part 6: Rotating lasers ISO 17123-7:2005 Part 7: Optical plumbing instruments ISO 17123-8:2015 Part 8: GNSS field measurement systems in real-time kinematic (RTK) ISO/TS 17137:2014 Cardiovascular implants and extracorporeal systems – Cardiovascular absorbable implants ISO 17166:1999 Erythema reference action spectrum and standard erythema dose ISO 17185 Intelligent transport systems – Public transport user information ISO 17185-1:2014 Part 1: Standards framework for public information systems ISO/TR 17185-2:2015 Part 2: Public transport data and interface standards catalogue and cross references ISO/TR 17185-3:2015 Part 3: Use cases for journey planning systems and their interoperation ISO/TS 17187:2013 Intelligent transport systems – Electronic information exchange to facilitate the movement of freight and its intermodal transfer – Governance rules to sustain electronic information exchange methods ISO/TS 17200:2013 Nanotechnology – Nanoparticles in powder form – Characteristics and measurements ISO 17201 Acoustics – Noise from shooting ranges ISO 17201-1:2005 Part 1: Determination of muzzle blast by measurement ISO 17201-2:2006 Part 2: Estimation of muzzle blast and projectile sound by calculation ISO 17201-3:2010 Part 3: Guidelines for sound propagation calculations ISO 17201-4:2006 Part 4: Prediction of projectile sound ISO 17201-5:2010 Part 5: Noise management ISO 17202 Textiles – Determination of twist in 
single spun yarns – Untwist/retwist method ISO/IEC 17203:2017 Information technology – Open Virtualization Format (OVF) specification ISO 17208 Underwater acoustics – Quantities and procedures for description and measurement of underwater sound from ships ISO 17208-1:2016 Part 1: Requirements for precision measurements in deep water used for comparison purposes ISO 17218:2014 Sterile acupuncture needles for single use ISO/TS 17251:2016 Health informatics – Business requirements for a syntax to exchange structured dose information for medicinal products ISO 17258:2015 Statistical methods - Six Sigma - Basic criteria underlying benchmarking for Six Sigma in organisations ISO 17261:2012 Intelligent transport systems – Automatic vehicle and equipment identification – Intermodal goods transport architecture and terminology ISO 17262:2012 Intelligent transport systems – Automatic vehicle and equipment identification – Numbering and data structures ISO 17263:2012 Intelligent transport systems – Automatic vehicle and equipment identification – System parameters ISO 17264:2009 Intelligent transport systems – Automatic vehicle and equipment identification – Interfaces ISO 17267:2009 Intelligent transport systems – Navigation systems – Application programming interface (API) ISO/TR 17302:2015 Nanotechnologies – Framework for identifying vocabulary development for nanotechnology applications in human healthcare ISO/IEC 17309:2000 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Mapping functions for the employment of a circuit mode basic service and the supplementary service user-to-user signalling as a pair of on-demand inter-PINX connections ISO/IEC 17310:2000 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Mapping functions for the employment of 64 kbit/s circuit mode connections with 16 kbit/s sub-multiplexing ISO/IEC 17311:2000 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Mapping functions for the employment of 64 kbit/s circuit mode connections with 8 kbit/s sub-multiplexing ISO 17316:2015 Information and documentation - International standard link identifier (ISLI) ISO 17338:2009 Ships and marine technology - Drawings for fire protection - Indications of fire rating by divisions for ships and high-speed craft ISO/IEC 17341:2009 Information technology - Data interchange on 120 mm and 80 mm optical disk using +RW format - Capacity: 4,7 Gbytes and 1,46 Gbytes per side (recording speed up to 4X) ISO/IEC 17342:2004 Information technology - 80 mm (1,46 Gbytes per side) and 120 mm (4,70 Gbytes per side) DVD re-recordable disk (DVD-RW) ISO/IEC 17343:2007 Information technology – Telecommunications and information exchange between systems – Corporate telecommunication networks – Signalling interworking between QSIG and SIP – Basic services ISO/IEC 17344:2009 Information technology - Data interchange on 120 mm and 80 mm optical disk using +R format - Capacity: 4,7 Gbytes and 1,46 Gbytes per side (recording speed up to 16X) ISO/IEC 17345:2006 Information technology - Data Interchange on 130 mm Rewritable and Write Once Read Many Ultra Density Optical (UDO) Disk Cartridges - Capacity: 30 Gbytes per Cartridge - First Generation ISO/IEC 17346:2005 Information technology - Data interchange on 90 mm optical disk cartridges - Capacity: 1,3 Gbytes per cartridge ISO/TR 17350:2013 
Direct Marking on Plastic Returnable Transport Items (RTIs) ISO 17351:2013 Packaging - Braille on packaging for medicinal products ISO 17359:2011 Condition monitoring and diagnostics of machines – General guidelines ISO 17361:2017 Intelligent transport systems – Lane departure warning systems – Performance requirements and test procedures ISO 17363:2013 Supply chain applications of RFID - Freight containers ISO 17364:2013 Supply chain applications of RFID - Returnable transport items (RTIs) and returnable packaging items (RPIs) ISO 17365:2013 Supply chain applications of RFID - Transport units ISO 17366:2013 Supply chain applications of RFID - Product packaging ISO 17367:2013 Supply chain applications of RFID - Product tagging ISO 17369:2013 Statistical data and metadata exchange (SDMX) ISO/TR 17370:2013 Application Guideline on Data Carriers for Supply Chain Management ISO/TR 17384:2008 Intelligent transport systems – Interactive centrally determined route guidance (CDRG) – Air interface message set, contents and format ISO 17387:2008 Intelligent transport systems – Lane change decision aid systems (LCDAS) – Performance requirements and test procedures ISO 17398:2004 Safety colours and safety signs - Classification, performance and durability of safety signs ISO 17410:2001 Microbiology of food and animal feeding stuffs – Horizontal method for the enumeration of psychrotrophic microorganisms ISO/IEC 17417:2011 Information technology – Telecommunications and information exchange between systems – Short Distance Visible Light Communication (SDVLC) ISO/TS 17419:2014 Intelligent transport systems – Cooperative systems – Classification and management of ITS applications in a global context ISO/TS 17423:2014 Intelligent transport systems – Cooperative systems – ITS application requirements and objectives for selection of communication profiles ISO/TR 17424:2015 Intelligent transport systems – Cooperative systems – State of the art of Local Dynamic Maps concepts ISO/TS 17425:2016 Intelligent transport systems – Cooperative systems – Data exchange specification for in-vehicle presentation of external road and traffic related data ISO/TS 17426:2016 Intelligent transport systems – Cooperative systems – Contextual speeds ISO 17427 Intelligent transport systems – Cooperative ITS ISO/TS 17427:2014 Roles and responsibilities in the context of cooperative ITS based on architecture(s) for cooperative systems ISO/TR 17427-2:2015 Part 2: Framework overview ISO/TR 17427-3:2015 Part 3: Concept of operations (ConOps) for 'core' systems ISO/TR 17427-4:2015 Part 4: Minimum system requirements and behaviour for core systems ISO/TR 17427-6:2015 Part 6: 'Core system' risk assessment methodology ISO/TR 17427-7:2015 Part 7: Privacy aspects ISO/TR 17427-8:2015 Part 8: Liability aspects ISO/TR 17427-9:2015 Part 9: Compliance and enforcement aspects ISO/TR 17427-10:2015 Part 10: Driver distraction and information display ISO/TS 17429:2017 Intelligent transport systems – Cooperative ITS – ITS station facilities for the transfer of information between ITS stations ISO 17432:2004 Health informatics – Messages and communication – Web access to DICOM persistent objects ISO 17438 Intelligent transport systems – Indoor navigation for personal and vehicle ITS station ISO 17438-1:2016 Part 1: General information and use case definition ISO/TS 17439:2014 Health informatics – Development of terms and definitions for health informatics glossaries ISO 17442:2012 Financial services - Legal Entity Identifier (LEI) ISO/TS 17444 
Electronic fee collection – Charging performance ISO/TS 17444-1:2017 Part 1: Metrics ISO/TS 17444-2:2017 Part 2: Examination framework ISO 17450 Geometrical product specifications (GPS) - Basic concepts ISO 17450-1:2011 Part 1: Model for geometrical specification and verification ISO 17450-2:2012 Part 2: Basic tenets, specifications, operators, uncertainties and ambiguities ISO 17450-3:2016 Part 3: Toleranced features ISO 17450-4:2017 Part 4: Geometrical characteristics for quantifying GPS deviations ISO 17451 Packaging - Codification of contents for inventories for shipments of household goods and personal effects ISO 17451-1:2016 Part 1: Numeric codification of inventories ISO/TS 17451-2:2017 Part 2: XML messaging structure for electronic transmission of inventory data ISO/TR 17452:2007 Intelligent transport systems – Using UML for defining and documenting ITS/TICS interfaces ISO/IEC 17462:2000 Information technology – 3,81 mm wide magnetic tape cartridge for information interchange – Helical scan recording – DDS-4 format ISO/TR 17465 Intelligent transport systems - Cooperative ITS ISO/TR 17465-1:2014 Part 1: Terms and definitions ISO/TR 17465-2:2015 Part 2: Guidelines for standards documents ISO/TR 17465-3:2015 Part 3: Release procedures for standards documents ISO/TS 17466:2015 Use of UV-Vis absorption spectroscopy in the characterization of cadmium chalcogenide colloidal quantum dots ISO 17468:2016 Microbiology of the food chain – Technical requirements and guidance on establishment or revision of a standardized reference method ISO 17469 Document management - Strategy markup language (StratML) ISO 17469-1:2015 Part 1: StratML core elements ISO 17480:2015 Packaging - Accessible design - Ease of opening ISO 17488:2016 Road vehicles – Transport information and control systems – Detection-response task (DRT) for assessing attentional effects of cognitive load in driving ISO/TS 17503:2015 Statistical methods of uncertainty evaluation - Guidance on evaluation of uncertainty using two-factor crossed designs ISO/PAS 17506 Industrial automation systems and integration – COLLADA digital asset schema specification for 3D visualization of industrial data ISO 17510:2015 Medical devices – Sleep apnoea breathing therapy – Masks and application accessories ISO 17515 Intelligent transport systems – Communications access for land mobiles (CALM) – Evolved universal terrestrial radio access network (E-UTRAN) ISO 17515-1:2015 Part 1: General usage ISO 17516:2014 Cosmetics – Microbiology – Microbiological limits ISO/TR 17522:2015 Health informatics – Provisions for health applications on mobile/smart devices ISO 17523:2016 Health informatics – Requirements for electronic prescriptions ISO 17526:2003 Optics and optical instruments – Lasers and laser-related equipment – Lifetime of lasers ISO 17532:2007 Stationary equipment for agriculture – Data communications network for livestock farming ISO 17534 Acoustics – Software for the calculation of sound outdoors ISO 17534-1:2015 Part 1: Quality requirements and quality assurance ISO/TR 17534-2:2014 Part 2: General recommendations for test cases and quality assurance interface ISO/TR 17534-3:2015 Part 3: Recommendations for quality assured implementation of ISO 9613-2 in software according to ISO 17534-1 ISO/IEC 17549 Information technology – User interface guidelines on menu navigation ISO/IEC 17549-2:2015 Part 2: Navigation with 4-direction devices ISO/IEC 17568:2013 Information technology – Telecommunications and information exchange between systems – Close 
proximity electric induction wireless communications ISO 17572 Intelligent transport systems (ITS) – Location referencing for geographic databases ISO 17572-1:2015 Part 1: General requirements and conceptual model ISO 17572-2:2015 Part 2: Pre-coded location references (pre-coded profile) ISO 17572-3:2015 Part 3: Dynamic location references (dynamic profile) ISO 17573:2010 Electronic fee collection – Systems architecture for vehicle-related tolling ISO/TS 17574:2017 Electronic fee collection – Guidelines for security protection profiles ISO 17575 Electronic fee collection – Application interface definition for autonomous systems ISO 17575-1:2016 Part 1: Charging ISO 17575-2:2016 Part 2: Communication and connection to the lower layers ISO 17575-3:2016 Part 3: Context data ISO/TS 17582:2014 Quality management systems – Particular requirements for the application of ISO 9001:2008 for electoral organizations at all levels of government ISO/IEC 17592:2004 Information technology - 120 mm (4,7 Gbytes per side) and 80 mm (1,46 Gbytes per side) DVD rewritable disk (DVD-RAM) ISO/IEC 17594:2004 Information technology - Cases for 120 mm and 80 mm DVD-RAM disks ISO 17599:2015 Technical product documentation (TPD) – General requirements of digital mock-up for mechanical products ISO 17604:2015 Microbiology of the food chain – Carcass sampling for microbiological analysis ISO 17631:2002 Ships and marine technology - Shipboard plans for fire protection, life-saving appliances and means of escape ISO 17658:2002 Welding - Imperfections in oxyfuel flame cuts, laser beam cuts and plasma cuts - Terminology ISO 17659:2002 Welding - Multilingual terms for welded joints with illustrations ISO 17677 Resistance welding - Vocabulary ISO 17677-1:2009 Part 1: Spot, projection and seam welding ISO 17679:2016 Tourism and related services – Wellness spa – Service requirements ISO 17680:2015 Tourism and related services – Thalassotherapy – Service requirements ISO 17687:2007 Transport Information and Control Systems (TICS) – General fleet management and commercial freight operations – Data dictionary and message sets for electronic identification and monitoring of hazardous materials/dangerous goods transportation ISO 17712:2013 Freight containers — Mechanical seals - updates the 2010 version and offers "a single source of information on mechanical seals which are acceptable for securing freight containers in international commerce". Supported by C-TPAT (which requires the use of "H" class seals).
ISO 17713 Meteorology – Wind measurements ISO 17713-1:2007 Part 1: Wind tunnel test methods for rotating anemometer performance ISO 17714:2007 Meteorology – Air temperature measurements – Test methods for comparing the performance of thermometer shields/screens and defining important characteristics ISO 17717:2017 Meteorological balloons – Specification ISO 17724:2003 Graphical symbols - Vocabulary ISO/TS 17728:2015 Microbiology of the food chain – Sampling techniques for microbiological analysis of food and feed samples ISO 17740 Building construction machinery and equipment – Concrete placing systems for stationary equipment ISO 17740-1:2015 Part 1: Terminology and commercial specifications ISO/IEC 17760 Information technology - AT Attachment ISO/IEC 17760-101:2015 Part 101: ATA/ATAPI Command Set (ATA8-ACS) ISO/IEC 17760-102:2016 Part 102: ATA/ATAPI Command set - 2 (ACS-2) ISO 17772 Energy performance of buildings - Indoor environmental quality ISO 17772-1:2017 Part 1: Indoor environmental input parameters for the design and assessment of energy performance of buildings ISO 17776 Petroleum and natural gas industries – Offshore production installations – Guidelines on tools and techniques for hazard identification and risk assessment ISO/IEC 17788:2014 Information technology - Cloud computing - Overview and vocabulary ISO/IEC 17789:2014 Information technology - Cloud computing - Reference architecture ISO/TR 17791:2013 Health informatics – Guidance on standards for enabling safety in health software ISO 17799 Information technology - Security techniques - Code of Practice for Information Security Management (superseded by ISO/IEC 27002) ISO 17800 Building environment design – Facility Smart Grid Information Model ISO/IEC 17811 Information technology – Device control and management ISO/IEC 17811-1:2014 Part 1: Architecture ISO/IEC 17811-2:2015 Part 2: Specification of Device Control and Management Protocol ISO/IEC 17811-3:2014 Part 3: Specification of Reliable Message Delivery Protocol ISO/IEC 17821:2015 Information technology – Specification of low power wireless mesh network over channel-hopped TDMA links ISO/TS 17822 In vitro diagnostic test systems - Qualitative nucleic acid-based in vitro examination procedures for detection and identification of microbial pathogens ISO/TS 17822-1:2014 Part 1: General requirements, terms and definitions ISO/IEC 17825:2016 Information technology - Security techniques - Testing methods for the mitigation of non-invasive attack classes against cryptographic modules ISO/IEC 17826:2016 Information technology - Cloud Data Management Interface (CDMI) ISO/IEC 17839 Information technology – Identification cards – Biometric System-on-Card ISO/IEC 17839-1:2014 Part 1: Core requirements ISO/IEC 17839-2:2015 Part 2: Physical characteristics ISO/IEC 17839-3:2016 Part 3: Logical information interchange mechanism ISO 17853:2011 Wear of implant materials – Polymer and metal wear particles – Isolation and characterization ISO/TS 17863:2013 Geometrical product specification (GPS) - Tolerancing of moveable assemblies ISO/TS 17865:2016 Geometrical product specifications (GPS) - Guidelines for the evaluation of coordinate measuring machine (CMM) test uncertainty for CMMs using single and multiple stylus contacting probing systems ISO/IEC 17875:2000 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Private User Mobility (PUM) – Registration 
supplementary service ISO/IEC 17876:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol—Private User Mobility (PUM) – Registration supplementary service ISO/IEC 17877:2000 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Private User Mobility (PUM) – Call handling additional network features ISO/IEC 17878:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol – Private User Mobility (PUM) – Call handling additional network features ISO 17893:2004 Steel wire ropes - Vocabulary, designation and classification ISO 17894:2005 Ships and marine technology – Computer applications – General principles for the development and use of programmable electronic systems in marine applications ISO 17901 Optics and photonics – Holography ISO 17901-1:2015 Part 1: Methods of measuring diffraction efficiency and associated optical characteristics of holograms ISO 17901-2:2015 Part 2: Methods for measurement of hologram recording characteristics ISO 17907:2014 Ships and marine technology - Single point mooring arrangements for conventional tankers ISO/IEC 17913:2000 Information technology – 12,7mm 128-track magnetic tape cartridge for information interchange – Parallel serpentine format ISO/TS 17915:2013 Optics and photonics – Measurement method of semiconductor lasers for sensing ISO/TS 17919:2013 Microbiology of the food chain – Polymerase chain reaction (PCR) for the detection of food-borne pathogens – Detection of botulinum type A, B, E and F neurotoxin-producing clostridia ISO 17933:2000 GEDI - Generic Electronic Document Interchange ISO/TS 17938:2014 Health informatics – Semantic network framework of traditional Chinese medicine language system ISO 17937:2015 Dentistry - Osteotome ISO/TS 17948:2014 Health informatics – Traditional Chinese medicine literature metadata ISO/IEC 17960:2015 Information technology - Programming languages, their environments and system software interfaces - Code signing for source code ISO/IEC TS 17961:2013 Information technology - Programming languages, their environments and system software interfaces - C secure coding rules ISO/IEC 17963:2013 Web Services for Management (WS-Management) Specification ISO/TS 17969:2015 Petroleum, petrochemical and natural gas industries – Guidelines on competency for personnel ISO 17972 Graphic technology - Colour data exchange format (CxF/X) ISO 17972-1:2015 Part 1: Relationship to CxF3 (CxF/X) ISO 17972-2:2016 Part 2: Scanner target data (CxF/X-2) ISO 17972-3:2017 Part 3: Output target data (CxF/X-3) ISO 17972-4:2015 Part 4: Spot colour characterisation data (CxF/X-4) ISO/TS 17975:2015 Health informatics – Principles and data requirements for consent in the Collection, Use or Disclosure of personal health information ISO/IEC 17982:2012 Information technology – Telecommunications and information exchange between systems – Close Capacitive Coupling Communication Physical Layer (CCCC PHY) ISO 17987 Road vehicles – Local Interconnect Network (LIN) ISO 17987-2:2016 Part 2: Transport protocol and network layer services ISO 17994:2014 Water quality – Requirements for the comparison of the relative recovery of microorganisms by two quantitative methods ISO 17995:2005 Water quality – Detection and 
enumeration of thermotolerant Campylobacter species ISO/IEC 17998:2012 Information technology - SOA Governance Framework Notes References External links International Organization for Standardization
45420727
https://en.wikipedia.org/wiki/Equation%20Group
Equation Group
The Equation Group, classified as an advanced persistent threat, is a highly sophisticated threat actor suspected of being tied to the Tailored Access Operations (TAO) unit of the United States National Security Agency (NSA). Kaspersky Labs describes them as one of the most sophisticated cyber attack groups in the world and "the most advanced ... we have seen", operating alongside the creators of Stuxnet and Flame. Most of their targets have been in Iran, Russia, Pakistan, Afghanistan, India, Syria, and Mali. The name originated from the group's extensive use of encryption. By 2015, Kaspersky documented 500 malware infections by the group in at least 42 countries, while acknowledging that the actual number could be in the tens of thousands due to its self-terminating protocol. In 2017, WikiLeaks published a discussion held within the CIA on how it had been possible to identify the group. One commenter wrote that "the Equation Group as labeled in the report does not relate to a specific group but rather a collection of tools" used for hacking. Discovery At the Kaspersky Security Analysts Summit held in Mexico on February 16, 2015, Kaspersky Lab announced its discovery of the Equation Group. According to Kaspersky Lab's report, the group has been active since at least 2001, with more than 60 actors. The malware used in their operations, dubbed EquationDrug and GrayFish, is found to be capable of reprogramming hard disk drive firmware. Because of the advanced techniques involved and high degree of covertness, the group is suspected of ties to the NSA, but Kaspersky Lab has not identified the actors behind the group. Probable links to Stuxnet and the NSA In 2015 Kaspersky's research findings on the Equation Group noted that its loader, "Grayfish", had similarities to a previously discovered loader, "Gauss", from another attack series, and separately noted that the Equation Group used two zero-day attacks later used in Stuxnet; the researchers concluded that "the similar type of usage of both exploits together in different computer worms, at around the same time, indicates that the EQUATION group and the Stuxnet developers are either the same or working closely together". Firmware They also identified that the platform had at times been spread by interdiction (interception of legitimate CDs sent by a scientific conference organizer by mail), and that the platform had the "unprecedented" ability to infect and be transmitted through the hard drive firmware of several major hard drive manufacturers, and create and use hidden disk areas and virtual disk systems for its purposes, a feat which would require access to the manufacturer's source code to achieve, and that the tool was designed for surgical precision, going so far as to exclude specific countries by IP and allow targeting of specific usernames on discussion forums. Codewords and timestamps The NSA codewords "STRAITACID" and "STRAITSHOOTER" have been found inside the malware. In addition, timestamps in the malware seem to indicate that the programmers worked overwhelmingly Monday–Friday in what would correspond to a 08:00–17:00 (8:00 AM - 5:00 PM) workday in an Eastern United States time zone. The LNK exploit Kaspersky's global research and analysis team, otherwise known as GReAT, claimed to have found a piece of malware that contained Stuxnet's "privLib" in 2008. Specifically it contained the LNK exploit found in Stuxnet in 2010. 
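The working-hours inference described above (and the compile-time dating of Fanny discussed below) can be illustrated with a minimal, hypothetical sketch. This is not code from the Kaspersky report: the timestamp values, the "America/New_York" time zone choice, and the 80% threshold are illustrative assumptions only, and the sketch assumes Python 3.9 or later for the zoneinfo module.
from collections import Counter
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

def workday_profile(compile_times, tz_name="America/New_York"):
    """Bucket compile timestamps (Unix epoch seconds, UTC) by local weekday and hour."""
    tz = ZoneInfo(tz_name)
    weekdays, hours = Counter(), Counter()
    for ts in compile_times:
        local = datetime.fromtimestamp(ts, tz)  # convert to the candidate time zone
        weekdays[local.strftime("%A")] += 1
        hours[local.hour] += 1
    return weekdays, hours

def looks_like_office_hours(weekdays, hours, threshold=0.8):
    """Heuristic: do most samples fall on Monday-Friday, between 08:00 and 17:00?"""
    total = sum(weekdays.values()) or 1
    on_weekdays = sum(n for day, n in weekdays.items() if day not in ("Saturday", "Sunday"))
    in_hours = sum(n for hour, n in hours.items() if 8 <= hour < 17)
    return on_weekdays / total >= threshold and in_hours / total >= threshold

# Hypothetical placeholder timestamps, not samples from the actual malware.
sample_compile_times = [1159272000, 1159358700, 1159449300, 1159704000]
wd, hr = workday_profile(sample_compile_times)
print(wd, hr, looks_like_office_hours(wd, hr))
In practice compile timestamps can be forged or stripped, so a pattern like this is only one weak attribution signal among several.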
Fanny is classified as a worm that affects certain Windows operating systems and attempts to spread laterally via network connection or USB storage. Kaspersky stated that they suspect that the Equation Group has been around longer than Stuxnet, based on the recorded compile time of Fanny. Link to IRATEMONK F-Secure claims that the Equation Group's malicious hard drive firmware is TAO program "IRATEMONK", one of the items from the NSA ANT catalog exposed in a 2013 Der Spiegel article. IRATEMONK provides the attacker with an ability to have their software application persistently installed on desktop and laptop computers, despite the disk being formatted, its data erased or the operating system re-installed. It infects the hard drive firmware, which in turn adds instructions to the disk's master boot record that causes the software to install each time the computer is booted up. It is capable of infecting certain hard drives from Seagate, Maxtor, Western Digital, Samsung, IBM, Micron Technology and Toshiba. 2016 breach of the Equation Group In August 2016, a hacking group calling itself "The Shadow Brokers" announced that it had stolen malware code from the Equation Group. Kaspersky Lab noticed similarities between the stolen code and earlier known code from the Equation Group malware samples it had in its possession including quirks unique to the Equation Group's way of implementing the RC6 encryption algorithm, and therefore concluded that this announcement is legitimate. The most recent dates of the stolen files are from June 2013, thus prompting Edward Snowden to speculate that a likely lockdown resulting from his leak of the NSA's global and domestic surveillance efforts stopped The Shadow Brokers' breach of the Equation Group. Exploits against Cisco Adaptive Security Appliances and Fortinet's firewalls were featured in some malware samples released by The Shadow Brokers. EXTRABACON, a Simple Network Management Protocol exploit against Cisco's ASA software, was a zero-day exploit as of the time of the announcement. Juniper also confirmed that its NetScreen firewalls were affected. The EternalBlue exploit was used to conduct the damaging worldwide WannaCry ransomware attack. See also Global surveillance disclosures (2013–present) United States intelligence operations abroad Firmware hacking References External links Equation Group: Questions and Answers by Kaspersky Lab, Version: 1.5, February 2015 A Fanny Equation: "I am your father, Stuxnet" by Kaspersky Lab, February 2015 fanny.bmp source - at GitHub, November 30, 2020 Technical Write-up - at GitHub, February 10, 2021 Cyberwarfare in the United States National Security Agency operations Rootkits American advanced persistent threat groups
383827
https://en.wikipedia.org/wiki/VEB%20Robotron
VEB Robotron
VEB Kombinat Robotron (or simply Robotron) was the biggest East German electronics manufacturer. It was based in Dresden and employed 68,000 people (1989). It produced personal computers, SM EVM minicomputers, the ESER mainframe computers, several computer peripherals as well as home computers, radios, television sets and other items including a kleingebäckpresse (cookie press). Divisions Robotron managed several different divisions: VEB Robotron-Elektronik Dresden (headquarters) — typewriters, personal computers, minicomputers, mainframes VEB Robotron-Meßelektronik Dresden — measurement and testing devices, home computers VEB Robotron-Projekt Dresden — software department VEB Robotron-Buchungsmaschinenwerk Karl-Marx-Stadt — personal computers, floppy disk drives VEB Robotron-Elektronik Hoyerswerda — monitors, power supply units VEB Robotron-Elektronik Radeberg — mainframes, radio receivers, portable television receivers, directional radio systems VEB Robotron Vertrieb Dresden, Berlin and Erfurt — sales departments VEB Robotron-Elektronik Zella-Mehlis — computer terminals, hard disk drives VEB Robotron-Büromaschinenwerk Sömmerda — personal computers, printers, electronic calculators (Soemtron 220, 222, 224), invoicing machines (EFA 380), punched card indexers and sorters (Soemtron 432). VEB Robotron Elektronik Riesa — printed circuit boards VEB Robotron-Anlagenbau Leipzig — general contractor, design and assembly for computer and process calculation systems in the GDR and export, training center On 30 June 1990 the Kombinat Robotron was liquidated and the divisions were converted into corporations. In the 1990s these companies were sold, e.g., to Siemens Nixdorf, or also liquidated. Robotron Datenbank-Software GmbH is a company which emerged from one of the former divisions of Kombinat Robotron. It was newly founded on 23 August 1990, just before German reunification. Rebranding activities Robotron printers were sold in Western Germany as "Soemtron" or "Präsident", and the West German branch of Commodore used some Robotron parts for their printers. In East Germany, Epson printers were sold under the Robotron brand (but the technical description on the back side was with the "Epson" logo). Robotron computers and typewriters Robotron product series: Midrange computer EDVA Robotron 300 (based on IBM 1401), R 4000 and R 4200 computers (based on Honeywell Series 16), ES EVM systems EC 1040, EC 1055, EC 1056, EC 1057 (based on IBM System/360, IBM System/370), Minicomputer and Superminicomputer K 1600 (DEC PDP-11), K 1840 (VAX 11/780), K 1820 (MicroVAX II), Office and personal computers A 5120, PC 1715, A 7100, A 7150, BIC A 5105, EC 1834 (IBM XT), EC 1835 (IBM AT), OEM microcomputer systems K 1510, K 1520, K 1700 Operating systems such as JAMB, Disk Control Program or KOBRA See also Rolanet Single User Control Program (SCP) Disk Control Program (DCP) Electronics industry in East Germany References External links Robotron.foerderverein-tsd.de — The History of Robotron narrated by former staff members. German language site. Computer-archiv.de — On-line generalised list of computers including Robotron. German language site. Robotrontechnik.de — On-line museum of Robotron companies and products. German language site with some English translations. Soemtron.org — Technical data for the Soemtron 22x series (220, 222, 224) desktop calculators manufactured by VEB Robotron Büromaschinenwerk Sömmerda. Typewriter "Erika" — DDR Museum, Berlin Company History — Company History Robotron Radeberg. 
Home computer hardware companies Volkseigene Betriebe Defunct companies of Germany Science and technology in East Germany
204043
https://en.wikipedia.org/wiki/C-ration
C-ration
The C-Ration, or Field Ration, Type C, was a prepared and canned wet combat ration intended to be issued to U.S. military land forces when fresh food (A-ration) or packaged unprepared food (B-ration) prepared in mess halls or field kitchens could not be provided or was not available, and when a survival ration (K-ration or D-ration) was insufficient. Development began in 1938, with the first rations field-tested in 1940 and wide-scale adoption following soon after. Operational conditions often caused the C-ration to be standardized for field issue regardless of environmental suitability or weight limitations. The C-Ration was replaced in 1958 by the Meal, Combat, Individual (MCI). Although officially a new ration, the MCI was derived from and very similar to the original C-Ration, and in fact continued to be called "C-Rations" by American troops throughout its production life as a combat ration (1958–1980). Although the replacement for the MCI, the MRE, was formally adopted as the Department of Defense combat ration in 1975, the first large-scale production test did not occur until 1978, with the first MRE I rations packed and delivered in 1981 (MRE History, MREInfo.Com, retrieved 4 August 2011). While the MRE officially replaced the MCI in 1981, previously packed MCI rations continued to be issued until stocks were depleted. Background and development "Iron ration" (1907–1922) The first U.S. attempt to make an individual ration for issue to soldiers in the field was the "iron ration", introduced in 1907. It contained three 3-ounce cakes (made from a concoction of beef bouillon powder and parched and cooked wheat), three 1-ounce bars of sweetened chocolate, and packets of salt and pepper. The ration was issued in a sealed tin packet that weighed one pound, to be carried in an infantryman's top tunic pocket, and was designed for emergency use when the troops could not otherwise be supplied with food. It was discontinued after the adoption of the "Reserve Ration", but findings from the development and use of the Iron Ration went into the development of the emergency D-ration. "Reserve ration" (1917–1937) The Reserve Ration was issued during the later part of World War I to feed troops who were away from a garrison or field kitchen. It originally consisted of bacon or meat (usually canned corned beef), two cans of hard bread or hardtack biscuits, a packet of pre-ground coffee, a packet of granulated sugar, and a packet of salt. There was also a separate "tobacco ration" of tobacco and 10 cigarette rolling papers, later replaced by brand-name machine-rolled cigarettes. After the war, there were attempts to improve the ration based on input from the field. In 1922, the ration was reorganized to consist of meat (usually beef jerky), canned corned beef or chocolate, hard bread or hardtack biscuits, coffee, and sugar. In 1925, the meat ration was replaced with canned pork and beans. In 1936, there was an attempt at variety by offering an "A"-menu of corned beef and a "B"-menu of pork and beans. This was cancelled upon introduction of the new Field Ration, Type C, in 1938. Field ration, Type C (1938–1945) The original Type C ration, commonly known as the C-Ration, was intended to replace the Reserve Ration as an individual ration designed for short-term use, to be supplemented by the D-Ration emergency ration.
Development of a replacement for the Reserve Ration was undertaken by the newly formed Quartermaster Subsistence Research and Development Laboratory in Chicago in 1938, with the aim of producing a ration that was more palatable, nutritionally balanced, and had better keeping qualities. The first Type C ration consisted of a 'meat' unit (M-unit), which was reduced in size after being field-tested during the 1940 Louisiana maneuvers. In the initial Type C ration, there were only three variations of the main course: meat and beans, meat and potato hash, or meat and vegetable stew. Also issued was one bread-and-dessert can, or B-unit. Each daily ration (i.e. enough food for one soldier for one day) consisted of six cans (three M-units and three B-units), while an individual meal consisted of one M-unit and one B-unit. The original oblong can was replaced with the more common cylindrical design in June 1939 because of mass-production problems with the former shape. The cylindrical C-Ration can was made of non-corrugated tinplate, had a visible tin solder seam, and incorporated an opening strip. A key for use on the opening strip was soldered to the base of every B-unit can. The first C-Ration cans had an aluminized finish, but in late 1940 this was changed to a gold lacquer finish to improve corrosion resistance. There was noticeable variation in the depth of gold color in World War II vintage cans because of the large number of suppliers involved. Late in the war this was changed to drab green paint, which remained standard through the remainder of the C-ration's service life, as well as that of its (very similar) successor, the Meal, Combat, Individual (MCI). During the war, soldiers frequently requested that the cylindrical cans be replaced with flat, rectangular ones (similar to a sardine can), comparable to those used in the earliest versions of contemporary K rations, because of their compactness and packability, but this was deemed impractical because of the shortage of commercial machinery available to produce rectangular cans. After 1942, the K ration, too, reverted to the use of small round cans. Initially, C-Ration cans were marked only with paper labels, which soon fell off and made a guessing game out of evening meals; U.S. soldiers and Marines receiving an unpopular menu item several nights in a row often found themselves powerless to bargain for a more palatable one. The C-Ration was, in general, not well liked by U.S. Army or Marine forces in World War II, who found the cans heavy and cumbersome, and the menu monotonous after a short period of time (Koehler, Franz A., Special Rations for the Armed Forces: Army Operational Rations – A Historical Background, QMC Historical Studies, Historical Branch, Office of the Quartermaster General, Washington, D.C., 1958). There were also inevitable problems with product consistency, given the large number of suppliers involved and the pressures of wartime production. When issued to British or other Commonwealth forces, who had formerly been issued hardtack and bully beef-type rations, the C ration was initially accepted, but monotony also became a chief complaint after a few days of consumption. Australian forces tended to dislike the C ration, finding the canned food items generally bland, overly soft in texture, and unappealing. Originally intended only for infrequent use, the exigencies of combat sometimes forced supply authorities to make the C ration the only source of sustenance for several weeks in succession.
In 1943, a ration board reviewing medical examinations of soldiers after long-term use of Type C rations recommended that they be restricted to a maximum of five continuous days in the absence of supplementation with other rations. While the initial specification was officially declared obsolete in 1945, and production of all Type C rations ended in 1958, existing stockpiles of both original and revised Type C rations continued to be issued to troops serving in Korea and even as late as the Vietnam War. A Marine tank commander serving in Vietnam in 1968 noted his unit was frequently supplied with older stocks of C rations, complete with early 1950s dates on the cans. "M" Unit The M-unit contained a canned entrée originally made of stew meat (a mixture of beef and pork) seasoned with salt, various spices, and chopped onions. They initially came in three varieties: Meat Stew with Beans, Meat with Vegetable Hash, and Meat Stew with Vegetables (carrots and potatoes). The commonplace nature of the menu was intentional, and designed to duplicate the menu items (hash, stews, etc.) soldiers were normally served as A- or B-rations in Army mess halls. Another new menu item, "Meat & Spaghetti in Tomato Sauce", was added in 1943. In late 1944 "Chopped Ham, Egg, and Potato", "Meat and Noodles", "Pork and Rice", "Frankfurters and Beans", "Pork and Beans", "Ham and Lima Beans", and "Chicken and Vegetables" were introduced in an attempt to increase the C rations' period of continuous use. The unpopular Meat Hash and equally unpopular experimental "Mutton Stew with Vegetables" meal were dropped. In the final revision, "Beef Stew with Vegetables" was added in 1945. By all accounts, after the meat hash and mutton stew, the Ham and Lima Beans entrée was the most unpopular; despite continued negative field reports, it inexplicably remained a standard entrée item not only during World War II, but also during the Korean War and Vietnam War. "B" Unit The B-unit (bread and dessert portion) contained several calorie-dense crackers, 3 sugar tablets, 3 dextrose energy tablets, and a packet or small can of beverage mix (instant coffee, powdered synthetic lemon drink, containing the rations' main source of vitamin C; or bouillon soup powder). Later revisions added orange drink powder (1944), sweetened cocoa powder (1944), and grape drink powder (1945), all enriched with Vitamin C, to the list of beverages. In 1941, the energy tablets were replaced with loose candy, such as candy-coated peanuts or raisins, Charms hard candy, or Brachs chocolate or vanilla caramels. Due to spoilage, the loose candy was replaced in 1944 with a chocolate disk (e.g. Brachs fudge disk) or a cookie sandwich (e.g., Jim Dandee), and the number of biscuits was reduced to 4. Another B-unit, consisting of pre-mixed oatmeal cereal, was introduced in 1944 as a breakfast ration that was usually paired with the "Ham, Egg, and Potato" Meal. Accessory pack The inclusion of additional accessories and condiments led to the development of an accessory package. The brown butcher paper accessory pack contained sugar tablets, halazone water purification tablets (for a brief period in 1945), a flat wooden spoon, a piece of candy-coated chewing gum, 3 "short" sample 3-packs or one "long" sample 9-pack of commercial-grade cigarettes and a book of 20 cardboard moisture-resistant matches, a paper-wrapped P-38 can opener printed with instructions for its proper use, and typically 22.5 sheets of toilet paper (compared to 3 sheets for the British Army). 
The P-38 can openers were generally worn on the GI's "dog tag" chain to facilitate opening the next meal's cans. In 1945, the accessory pack was modified. Per the order of the Surgeon General, the halazone tablets were removed and salt tablets were added. Also, feedback from the field revealed that some soldiers opened up accessory packs just to get the cigarettes and threw away the rest of the items. To reduce waste, the accessory pack was now divided into the "short" pack with cigarettes and matches, and the "long" pack containing the other accessories. Cigarette brands issued included Camel, Chelsea, Chesterfield, Craven A, Lucky Strike, Old Gold, Philip Morris, Player's, Raleigh, and Wings. Crates The rations came packed in a small rectangular wooden crate that weighed 40 lbs. and had a volume of 1.12 cubic feet. Each crate contained 8 daily rations of 3 meals each for a total of 24 "M" units, 24 "B" units, and 24 accessory packs. Early rations came with a variety of 8 Meat and Beans, 8 Meat & Vegetable Hash, and 8 Meat & Vegetable Stew "M" units and 24 "B" units. Later rations (c.1944–1945) added a breakfast meal of 8 Chopped Ham, Egg, & Potato "M"-units and 8 Compressed Cereal "B"-units in place of the Meat and Vegetable Hash''. Alternate "M"-unit menu items came packed in cases of 24 "M" units (and 24 "B" units) rather than a mixed 8-8-8 menu like the main items. Field ration, Type E (1946–1948) After World War II there was an attempt to combine the best features of the C-ration and the K-ration into a new individual ration. Called the E-ration, it was for all intents and purposes the same canned C-ration, with the addition of some new components. In field testing, the bread component of the E-ration was found to be so unpalatable that the E-ration was quickly dropped from classification and inventory. Ration, Individual, Combat, Type C (Revised) (1948–1958) After the failure of the E-Ration, ration planners decided to save costs by returning to the basic C-ration designation, intermittently revised with new menus and item specifications. Type C-2 ration (1948–1951) The C-2 ration was described in TB-QM-53, Department of the Army, dated March 1948, as an individual ration which consisted of packaged pre-cooked foods which could be eaten hot or cold. It replaced the World War II C-Ration, and later, the short-lived E-Ration. It could be carried and prepared by the individual soldier. The revised C-Ration was now intended for feeding combat troops continuously, up to three weeks (21 days). Due to the required individual portability of this ration, maximum nourishment had to be provided in the smallest physical unit. The components of this ration were prepared in five different menus. Each menu included an accessory packet which consisted of essential toilet articles, tobacco, and confections. Type C-3 ration (1951–1953) In 1951, a new C-3 menu for the Type C ration was introduced. The C-3 ration was composed of the same five menus of the C-2, but offered greater variety. In addition to new and improved “B” (bread) and “M” (meat) units, each menu contained an accessory packet, fruit, and cigarettes. The ration was very heavy, weighing 5 lbs. 8.5 oz. [2.5 kg.], and was packed in 8 small cans in a cardboard box. There were 6 daily ration boxes per cardboard case. Three “M” (meat) components, which offered 10 different varieties of meat entrées. 
Chopped Eggs and Ham
Pork and Beans
Meat Chunks and Beans
Three “B” (bread) components consisting of:
B-1: a unit of 5 crackers, a packet of soluble coffee, a packet of powdered milk, a packet of granulated sugar, a cocoa disc, and a tin of jam.
B-2: a unit of 5 crackers, a packet of soluble coffee, a packet of powdered milk, a packet of granulated sugar, 1 cookie sandwich, and 1 chocolate fudge disc.
B-3: a unit of 5 crackers, a packet of soluble coffee, a packet of powdered milk, a packet of granulated sugar, 2 cookie sandwiches, and a tin of jam.
B-4: a unit of pre-mixed and compressed cereal.
One can of fruit.
One sundries can containing the accessory packet (chewing gum, toilet paper, a P-38 can opener, granulated salt, and a flat wooden spoon) and the cigarette packet (one 9-pack of cigarettes and a book of matches).
Field cooking equipment was not required for the preparation of this ration. The C-3 ration was nutritionally more adequate than the original C ration.

Type C-4 ration (1954–1958)
In 1954, the C-4 ration was developed as a modification of the C-3 ration, and was called Ration, Combat, Individual. It included the issue of two cans of fruit for two meals to replace the one can issued for one meal in the C-3 ration. A sample C-4 ration (stamped March 1954) contained:
1 instruction sheet
2 cheese bars (1.5 net ounces/43 g net)
2 cereal class 5 bars (1.5 net ounces/43 g net)
3 type XII style 1 enriched chocolate bars (1 ounce/28 g)
1 jelly bar (2 ounces/56 g)
2 Fruit Cake Bars (2 ounces/56 g)
3 sticks Topps peppermint chewing gum
3 Domino sugar packets
3 Nestea "soluble tea product" packets
1 packet of pure soluble sugar
1 packet of soluble cream product
1 bottle water purification tablets (iodine)
1 plastic bag

End of the C-Ration
At its introduction, the QMC stated that the Type C ration was intended for short-term use for periods not to exceed three days. After the war, in light of field evaluation reports of monotony, the QMC Food Services Branch used this limitation as a defense against the largely negative response to the C ration during the war, while at the same time advocating standardization on the C-Ration as the sole individual packaged ration for U.S. troops. Not only did the QMC decide not to develop or introduce new alternative lightweight individual rations, it successfully campaigned for the elimination of alternatives, including the K-ration, Mountain ration, Jungle ration, and even the 10-in-1 group ration (which had proven somewhat useful in boosting nourishment and alleviating complaints of monotony for men living for extended periods on C-Rations or K-Rations). Instead, the C-Ration, still designated as a packaged ration intended for infrequent or short-term use, went through a series of largely unsuccessful minor revisions. This decision limited troops in the field to a single class of packaged ration that, despite meal variances, was suited neither to varied field environments nor to long-term use. Troops continued to complain of the monotony of a single class of field ration with one or more unpalatable menu items, especially where A and B rations were not available for extended periods. Primarily implemented due to cost concerns, the selection of a heavy canned wet ration resulted in a severe weight penalty for troops marching on foot and forced to carry a multi-day supply of rations.
The overuse of the canned wet ration reached an extreme during the Vietnam War, where American troops resorted to placing stacked ration cans in socks to save bulk and reduce noise on patrol, while their enemy increased their mobility by carrying lightweight rations of dry rice. The Quartermaster Branch's insistence on canned wet rations for all postwar field issue, and the failure to develop a suitable lightweight dehydrated or other dry ration for jungle and other extreme environments led directly to the hurried development of the LRP ration or Long Range Patrol ration in 1966. Starting in 1958, C-Rations were slowly replaced by the nearly identical canned Meal, Combat, Individual ration. These rations were issued for most of the next two plus decades, until they were replaced by Meal Ready to Eat or MREs in the early 1980s. Influence The C-Rations during the Korean War led later to the introduction of instant coffee to South Korea. See also 5-in-1 ration Meal, Combat, Individual ration K-ration United States military ration Notes Further reading Arnold, Bruce Makoto. Your Money Ain't No Good O'er There': Food as Real and Social Currency in the Pacific Theater of World War II". Special Issue: Food on the Home Front, Food on the Warfront: World War II and the American Diet, Food and Foodways. 25, No. 2 (2017). External links C-Rations Through WW II C-Rations After WW II Military food of the United States
655457
https://en.wikipedia.org/wiki/EICAR%20test%20file
EICAR test file
The EICAR Anti-Virus Test File or EICAR test file is a computer file that was developed by the European Institute for Computer Antivirus Research (EICAR) and the Computer Antivirus Research Organization (CARO) to test the response of computer antivirus (AV) programs. Instead of using real malware, which could cause real damage, this test file allows people to test anti-virus software without having to use a real computer virus. Anti-virus programmers set the EICAR string as a verified virus, similar to other identified signatures. A compliant virus scanner, when detecting the file, will respond in more or less the same manner as if it had found a harmful virus. Not all virus scanners are compliant, and some may not detect the file even when they are correctly configured. Neither the way in which the file is detected nor the wording with which it is flagged is standardized, and may differ from the way in which real malware is flagged, but a compliant scanner should prevent the file from executing as long as it meets the strict specification set by the European Institute for Computer Antivirus Research. The use of the EICAR test string can be more versatile than straightforward detection: a file containing the EICAR test string can be compressed or archived, and then the antivirus software can be run to see whether it can detect the test string in the compressed file. Many of the AMTSO Feature Settings Checks are based on the EICAR test string.

Design
The file is a text file of between 68 and 128 bytes that is a legitimate .com executable file (plain x86 machine code) that can be run by MS-DOS, some work-alikes, and its successors OS/2 and Windows (except for 64-bit versions of Windows, which cannot run 16-bit executables). When executed, the EICAR test file will print "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!" and then stop. The test string was written by noted anti-virus researchers Padgett Peterson and Paul Ducklin and engineered to consist of ASCII human-readable characters, easily created using a standard computer keyboard. It makes use of self-modifying code to work around technical issues that this constraint imposes on the execution of the test string. The EICAR test string reads:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
The third character is the capital letter 'O', not the digit zero. The string's widely published hash values are computed over these 68 bytes, without any trailing newline character.

Adoption
The developers of one anti-virus product, Malwarebytes, have said that they did not add the EICAR test file to their database, because "adding fake malware and test files like EICAR to the database takes time away from malware research, and proves nothing in the long run." According to EICAR's specification, the antivirus detects the test file only if it starts with the 68-byte test string and is not more than 128 bytes long. As a result, antiviruses are not expected to raise an alarm on some other document containing the test string. The test file can still be used for some malicious purposes, exploiting the reaction from the antivirus software: a race condition involving symlinks can cause antiviruses to delete themselves, and a QR-encoded EICAR test file crashes some CCTV systems.

See also
GTUBE – a similar test for unsolicited bulk email (email spam)

References

External links
(also known as the European Expert Group for IT-Security)
An Examination of the EICAR's Standard A-V Test Program
Assembly-language analysis of the EICAR test file
VirusTotal Antivirus results from scanning the EICAR file

Computer security software Test items
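As a worked illustration of the design and specification rules described above, the following Python sketch is appended here. It is illustrative only, not an official EICAR tool: the file names eicar.com and eicar.zip are arbitrary choices, the string is assembled from two halves so that the script's own source is less likely to be matched by scanners looking for the contiguous 68-byte signature, and on a machine with a resident antivirus the generated files will normally be quarantined or deleted almost immediately.

# Minimal sketch: write the standard test file and a zipped copy, then apply the
# two rules quoted above from EICAR's specification.
import zipfile

# Assembled from two halves so this source file does not itself contain the
# contiguous 68-byte signature that scanners match on.
EICAR_STRING = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR"
    r"-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def write_test_files(txt_path="eicar.com", zip_path="eicar.zip"):
    """Write the plain test file and an archived copy (for testing archive scanning)."""
    data = EICAR_STRING.encode("ascii")  # 68 bytes, no trailing newline
    with open(txt_path, "wb") as fh:
        fh.write(data)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("eicar.com", data)

def is_valid_eicar(path):
    """The file must start with the 68-byte test string and be at most 128 bytes long."""
    with open(path, "rb") as fh:
        content = fh.read()
    return content.startswith(EICAR_STRING.encode("ascii")) and len(content) <= 128

if __name__ == "__main__":
    write_test_files()
    print("eicar.com conforms to the specification:", is_valid_eicar("eicar.com"))

Running the script and then triggering an on-demand scan of the two files is the usual way the test file is exercised; whether the archived copy is flagged depends on whether the scanner's archive-scanning feature is enabled.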
3464430
https://en.wikipedia.org/wiki/Chaonians
Chaonians
The Chaonians () were an ancient Greek tribe that inhabited the region of Epirus currently part of north-western Greece and southern Albania. Together with the Molossians and the Thesprotians, they formed the main tribes of the northwestern Greek group. On their southern frontier lay the Epirote kingdom of the Molossians, to their southwest stood the kingdom of the Thesprotians, and to their north were various Illyrian tribes, as well as the polis of Apollonia. By the 5th century BC, they had conquered and combined to a large degree with the neighboring Thesprotians and Molossians. The Chaonians were part of the Epirote League until 170 BC when their territory was annexed by the Roman Republic. Name Due to phonetic similarity, the Athenian comic playwright Aristophanes, in his play The Knights, punningly associated the ethnonym of the Chaonians with the verb χάσκω, chásko 'to yawn', while in his play The Acharnians, with χάος, cháos 'chaos'; implying the situation that prevailed in Athenian foreign policy, and the indolent nature of Athenian politicians, respectively. According to linguist Vladimir I. Georgiev (1981), both the region of Χαονία 'Chaonia', and the name of its inhabitants Χάονες 'Chaones, Chaonians', derive from Χάων 'Chaon', which in turn derives from the Greek *χαϝ-ών 'place with gorges'; cf. Χάον ὄρος 'Chaon mountain' in Argolis, χάος 'chaos, space, abyss', χάσκω 'to yawn', χάσμα 'chasm, gorge'. According to linguist Carlo De Simone (1985), there is limited availability on the name's etymology and as such no convincing etymology of Chaonia and Chaones (Χᾶονες, Χαῦνοι) can be given. New studies such as of linguist Panagiotis Filos (2017) assert that the toponym and tribal name have a Greek etymology, citing Radoslav Katičić, who in turn cites examples from Georgiev. History The Chaonians were mentioned for the first time by Thucydides in the 5th century BC. According to Strabo, the Chaonians (along with the Molossians) were the most famous among the fourteen tribes of Epirus, because they once ruled over the whole of Epirus. The Periplus of Pseudo-Scylax makes a clear distinction between the Chaonians and their northern neighbours, the Illyrian tribes that occupied the coastal and hinterland regions further north. Epirotic tribes whether Chaonian or Molossian inhabited the plain of Korce and the valley of the Erigon. Some of the tribes that belonged to the Chaonians had practiced tumulus burial during or at the end of the Bronze Age; this feature continued into the Iron Age. After c. 650 BC, their rule extended to the southern part of the lakeland (Prespa-Ohrid), while the Dassaretai were their northernmost tribe. During this period, the Chaonians were the most powerful tribal state in Epirus. They were later succeeded by the Molossians. Though they lost some pastures at that time, they managed to retain control of an area stretching from Grammos mountain (ancient Boion) to the southwest of Ohrid-Prespa lakes. They also lost some regions in their north, in particular the southern part of Malakastra plain to the Greek colony of Apollonia, as well as to the Illyrian Taulantians. The Illyrians and Chaonians appear to have had, at least at times, a confrontational relationship; Polybius recounts a devastating raid mounted in 230 BC by the Illyrians against Phoenice, the chief city of the Chaonians. 
The incident had major political ramifications: many Italian traders who were in the town at the time of the sacking were killed or enslaved by the Illyrians, prompting the Roman Republic to launch the first of the two Illyrian Wars the following year. During the 2nd century, the Prasaebi replaced the Chaones in their control of Buthrotum, as attested in inscriptions from that period. The Prasaebi, themselves members of the larger association of 'the Chaones', consisted of at least five subsidiary tribes, since the officials and members of their community had as ethnics: Aixonios, Tharios, Kotulaios, O...-atas and Prochtheios.

Political structure
The Chaonians were settled kata komas, that is, in a collection of villages rather than in an organized polis (despite the fact that they called their community a polis), and were a tribal state in the 5th century BC. As reported by the most common ancient account, Epirus was first ruled by the Chaonians, while the rule of the Molossians started only later. According to Thucydides, their leaders were chosen on an annual basis; he names two such leaders, Photius and Nikanor, "from the ruling lineage". In the 4th century BC, the Chaonians adopted the term prostates (Greek: προστάτης, "ruler") to describe their leaders, like most Greek tribal states at the time. Other terms for office were grammateus (Greek: Γραμματεύς, "secretary"), demiourgoi (Greek: δημιουργοί, "creators"), hieromnemones (Greek: ἱερομνήμονες, "of the sacred memory") and synarchontes (Greek: συνάρχοντες, "co-rulers"). They were loosely associated with the rest of the Epirote tribes, including the Thesprotians and Molossians. They joined the Epirote League, founded in 325/320 BC, uniting their territories with those of the rest of the Epirotes in a loosely federated state that became a major power in the region until it was conquered by Rome in 170 BC.

Economy
Chaonia was an area where pastoral life was prominent, as is also evidenced by the local numismatic iconography. The pastoral character of the area was particularly renowned during Roman times. In Virgil's Georgics, a poem that described and idealized rural life, the author praised the goods of agriculture, contrasting them with the pre-eminent pastoral model, that of Chaonia. In general, Roman poets praised Chaonia as a model of bucolic life, similar to that of Arcadia. Rich Roman businessmen settled in Chaonia, where they established large villas with agricultural and livestock units. The most prominent example is that of Titus Pomponius Atticus, who built the Amaltheion (or Amaltheia) near Bouthrotos, and bred horses and kept 120 herds of oxen. However, it is clear that the contrast between the agricultural and pastoral communities was very subtle and, in part, largely artificial. The local society was generally rural, and its inhabitants tried to achieve self-sufficiency through a number of means that complemented each other. The character of the economy was a consequence of the mountainous environment. The descriptions of ancient authors indicate that the region was densely populated during the 4th century BC. This made the adoption of a new way of life imperative, because the nomadic or semi-nomadic economy could not sustain the ever-increasing population, while constant raids and military operations made widespread settlement in unfortified villages problematic, especially given the transhumant seasonal movement of men.
This required a shift to a more permanent and organized settlement, which ensured better defense and the parallel exercise of various economic activities, such as agriculture, hunting, fishing and animal husbandry; the latter, however, on a more limited scale than under the nomadic or semi-nomadic way of life. Thereafter, cereals were imported to and exported from the region, which also served as a supply base for troops. The reorganization of the economy also created new sources of wealth through artisanship, which improved transactions and prompted the Chaonians to mint their first coins in the 4th century BC. The excavation of Phoenice – the capital of the Chaonians – unearthed 800 coins, of which only 20 were issued by the Chaonians; this is evidence of their economic prosperity, extroversion, and independent development. Women had rights over family property (including the slaves), and they could manage it, sell it, or even give it up, without the mediation of a male guardian. Also, it was common for slaves, after their liberation acts, to remain at the side of their former masters for as long as the latter lived. This reveals the strong bond between the slaves and their owners, possibly due to the late adoption of slavery in the region and the preservation of tribal ties, the core of which was the house, so that all the members attached to it were largely treated as part of it.

Geography
Chaonia or Chaon was the name of the northwestern part of Epirus. It was one of the three main ethnic divisions of Epirus; the other two were Thesprotia and Molossia. Strabo in his Geography places Chaonia between the Ceraunian Mountains in the north and the River Thyamis in the south. Phoenice (Phoinike) was the capital and most important city of the Chaonians. The strength of the Chaonian tribes prevented the organized Greek city-states from establishing any colonies on the coast of Chaonia. Their region lay on the edge of the Greek world and was far from peaceful; for many centuries, it remained a frontier area contested with the Illyrian peoples to the north. According to Šašel Kos, at the time of Pseudo-Scylax, the Chaonians as well as all the peoples to the north and to the south of Chaonia were living in villages, while Greece began at the Greek polis of Ambracia. However, Šašel Kos also stated that northern Epirus was part of the Greek world and that the line stretching from the Acroceraunian mountains as far as Damastium inland (in Dassaretis) is generally accepted among modern scholars as its border. As already known by Hecataeus in the 6th century BC, Pseudo-Scylax notes that Oricum marked the northernmost border of Chaonia (and therefore of Epirus); as such, Illyria began north of Oricum. According to Stocker, Chaonia in the 5th century BC was not considered part of "Greece", but was considered part of it by the beginning of the Hellenistic period. Stocker also states that there was a major cultural border between Chaonia and Illyria. On this border, the polis of Apollonia was founded by Greek colonists.

Language
Because Greek toponyms preserving archaic features are found very densely in the wider area, it appears, according to Vladimir I. Georgiev, that speakers of the Proto-Greek language inhabited a region which included Chaonia before the Late Bronze Age migrations (ca. 2500 BC).
There is today an overall consensus that the Chaonians were among the Greek-speaking population of Epirus, which spoke the North-West Doric dialect of Ancient Greek, akin to that of Aetolia, Phocis, and certain other regions; this is also attested by the available epigraphic evidence in Epirus. Eugene Borza argues that the Molossians originated from a pool of Proto-Greek tribes inhabiting northwestern Greece. Hammond argues that the Chaonians and other Epirote tribes spoke Greek at least from the Dark Ages (1100–800 BC). Hammond further argues that Pseudo-Scylax's description of the situation about 380–360 BC indicates that they did not speak Illyrian, and that their acceptance in 330 BC into the Epirote League is a strong indication that they spoke Greek; Chaonian inscriptions, all in Greek, began around 329 BC. In the northern part of the region of Epirus, contact with Illyrian-speakers increased sub-dialectal variation within North-West Doric, although concrete evidence outside of onomastics is lacking. On the other hand, penetration of Greek speech, including Epirote, was much more evident among the adjacent Illyrian tribes. Filos asserts that Epirus was a largely Greek-speaking region, as indicated in the epigraphic material, although a uniform picture in the sense of dialect use is far from certain, at least regarding the northern parts bordering Southern Illyria, namely Chaonia, where interaction with Southern Illyria and a certain degree of bilingualism must have been a reality, especially in later times. Hernandez (2017) says that there is strong evidence that the Molossians, Thesprotians and Cassopaei spoke West Greek, while the language of the Chaonians is less certain; he reports that, according to one scholar he cites, the Chaonians spoke Illyrian and that throughout the region bilingualism in West Greek and Illyrian was predominant. Hernandez finally concludes that it is more likely that the Chaonians also spoke West Greek. Pseudo-Scylax, describing the geographical and ethnographical situation (c. 380–360 BC), distinguished the Illyrians from the Chaonians, making it clear that the Chaonians did not speak Illyrian.

Religion
The sanctuary of Dodona was a religious site frequented by all the Epirote peoples, including the Chaonians. The Chaonians constructed their own treasury there, likely as a way to highlight their participation and to enhance their identity. The region of Chaonia appears as a place visited by the theoroi in an Epidaurian list of theorodokoi. Theoroi were sent by the major Panhellenic sanctuaries throughout the Greek world, and the theorodokoi's duty was to host and assist them in preparation for the Panhellenic games and festivals. The list was compiled in 360 or 356 BC. The worship of many Greek gods is attested in Chaonia; these include Athena, Artemis, Asclepius, Zeus, Pan and Poseidon. In particular, at Bouthrotos, the worship of Athena, Asclepius and Zeus Soter is attested; the combined worship of the three is associated with coastal areas and may have been part of a common cult that is also attested in other parts of the Greek world. Zeus, the central figure of the Greek Pantheon, was a popular deity among the Chaonians, as well as in the rest of northern Greece. Zeus Chaonius was also associated with the Acroceraunian mountains. In Bouthrotos, he was one of the main deities with the epithet "Soter" (Zeus Soter, 'Zeus the Saviour'); the epithet "Soter" is associated with the protection of sailors. In the same city, Zeus was also invoked with the epithet "Kassios".
Heracles – who was a son of Zeus – was another deity that was worshipped in Bouthrotos. The worship of Poseidon was popular in the region, and Taurian Poseidon is associated with chthonic features, being a protector of serpents and flocks. This kind of worship was quite archaic in the region and was preserved from prehistoric Mycenaean Greece (1600–1100 BC). A temple dedicated to Athena Polias was erected in Chaonian territory, as attested by an oracular lamella from Dodona dated to ca. 330–320 BC. The inscription on the lamella states: "Good fortune. The polis of the Chaonians asks Zeus Naos and Dione if it is desirable, better, and more advantageous to move and rebuild the Temple of Athena Polias." There is evidence that this temple was initially located at Bouthrotos, such as a number of Corinthian ostraca from the citadel that bore the inscription "ΑΘΑ" (ATHA), which has been rendered as "ΑΘΑΝΑΣ" (ATHANAS) and indicates that the archaic temple of Bouthrotos was probably dedicated to Athena. According to Hadeli and Gjongecaj, the new temple of Athena Polias was moved to the capital of the Chaonians, Phoenice. Regardless of whether the temple of Athena Polias – mentioned on the lamella from Dodona – was located in Phoenice, Bouthrotos, or elsewhere, the importance and prominent position of the goddess in Chaonia is also confirmed by the coins of the ethnos, where she is depicted with a Corinthian helmet. The cult of Athena was introduced to Epirus for the first time by the Corinthian colonists. Between the end of the 4th century and the beginning of the 3rd century BC, a temenos was assigned to the god of medicine and healing, Asclepius. The establishment of his worship probably took place under the reign of Pyrrhus; however, Pyrrhus might only be responsible for the monumental construction of the asclepeion, while the worship itself may already have been established, owing to the geophysical wealth of the site. During the excavations, hydrocarbon deposits and sulfur gas were found; the latter is one of the main components of thermal springs, whose main deity was Asclepius. Liberation acts of the 2nd–1st century BC were found at the city's theater (western parodos and diazoma), most of which were dedicated to Asclepius. Among the recorded officials, the priest of the god is also included. The priests of Asclepius came from the local ruling class, since the inscriptions show that some people had served both as generals and as priests of the god; this indicates that the cult of Asclepius was intertwined with the public affairs of the city. The publication of the liberation acts at the temene is characteristic not only of Bouthrotos, but also of the other asclepieia, such as those of Nafpaktos, Orchomenos, Chaeronia, Epidaurus, etc. With the development of Bouthrotos' urban center, Asclepius seems to have become the patron god of the city, replacing Athena, who is not mentioned in any inscription thereafter.

In ancient sources
Modern scholarship rarely denies that the Chaonians belonged to Greek culture and ethnicity, though classical-era literature preferred a more 'peripheral' connotation and described them as "barbaroi". The Greek historian Thucydides describes them as barbaroi, while their "ruling family" (τὸ άρχικὸν γένος) apparently had Greek names.
Crossland argues that Thucydides' writings indicate that during his lifetime, the tribes from Amphilochia northward were not Greek-speaking, though already under strong Greek influence, and they would later adopt the Greek language in the fourth century BC. On the other hand, Hammond argues that they spoke Greek during and before the time of Thucydides considering both the Greek language and Greek names of their inscriptions were not suddenly adopted. Moreover, Filos finds Crossland's view of a pre-classical semi-Hellenized Epirus untenable given the earliest Greek epigraphical evidence from the region. According to Filos, there were some local peculiarities among the Greek-speaking tribes of Epirus. Irad Malkin asserts that from references in Thucydides' work it is clear that the ancient historian was able to distinguish between various groups of Greeks, semi-Greeks, bilinguals, and non-Greeks, yet he distinguished very explicitly, even within the same coalitions, Epirote barbarians from Greeks, reporting the Epirotes as barbarians. However, Thucydides had similar views of the neighboring Aetolians and Acarnians, even though the evidence leaves no doubt that they were Greek. The term "barbarian" denoted not only clearly non-Greek populations, but also Greek populations on the fringe of the Greek world with peculiar dialects and preserving an archaic way of life that would have been incomprehensible to the inhabitants of the more developed Greek city-states to the south. Pseudo-Scymnus considered the Chaonians barbarians, while Polybius considered the Epirotes, and the Chaonians specifically, to be Greek. According to Plutarch, the Molossian king Tharrhypas was the first to introduce Greek letters and customs in Epirus, and to administer the cities with humane laws, when he was sent to Athens to be educated in the 5th century BC. A much more reliable source about the actual views of contemporary Greeks concerning Epirus is the epigraphic material which contains the list of theorodokoi (; sacred envoy-receivers whose duty was to host and assist the theoroi (θεωροί, "viewers") before the Panhellenic games and festivals), listing Greek cities and tribes, to which the major Panhellenic sanctuaries sent theoroi in Epidaurus, which includes the Epirotes. The list which was compiled in 360 or 356 BC includes the sacred envoys (members of the ruling family of each tribe or subtribe) of the Molossians, Kassopeans, Chaonians and Thesprotians. The weight of this evidence is decisive because only Greeks (Hellenes) were permitted to participate in the Panhellenic games and festivals. Mythological origins The Epirotic tribes sought their progenitor in the Homeric epics, similar to other tribes in Greece. The Chaonians chose the Trojan prince Helenus. Though this mythical tradition is commonly considered to have been created during the time of Alcetas, or even Pyrrhus (along with later additions that date to the Roman times), it seems that the core of the mythological genealogy of the Chaonians was already established by the 6th century BC, and its roots can be traced at the end of the 8th century BC, through the lost epic poem of Nostoi. In Virgil's Aeneid, the eponymous ancestor of the Chaonians was the Trojan hero Chaon. According to Malkin, their founding myth may have arisen as a response to the self-definitions of the Molossians and Thesprotians. 
According to Hernandez, among the Epirote tribes, the Molossians viewed their descent as a mixture of Greek and Trojan, from Neoptolemus and Helenus respectively, whereas the Chaonians viewed their descent as strictly Trojan, from Helenus and Andromache, perhaps in opposition to the Greek ethnicity of the colonizers and/or the mixed origins of the southern Epirote tribes. According to Hadeli, Helenus had a rather inconspicuous role compared to other heroes in the Homeric epics, but he did possess all the qualities which could serve the aspirations of the Chaonians. Namely, he was the son of a king and a brave warrior, but above all, he was the best seer of the Trojans, a quality that pertains directly to Dodona and can thus explain their choice, which served their claim to ideological control over the oracle. Hadeli adds that the presence of Trojans in Epirus may not have been an ex nihilo invention of the Chaonians, but might stem from the now lost epics of the Nostoi and the Little Iliad; an approach which refutes the argument that the Homeric myths were subsequently adopted by the locals in order to be integrated and legitimized in the Greek world, since these traditions are attested from the Archaic period and it is highly probable that they initially pertained to the aspirations of the three main Epirotic tribes regarding the ideological establishment over their lands, natural resources, and the oracle of Dodona. She concludes that the same trend that was followed separately by the whole Greek world of the early historical period was also embraced by Epirus, a fact that implies that the Homeric poems and the rest of the Epic Cycle would have been known in the region from around the time of their main composition, through wandering aoidoi, who traveled and recited poems throughout the Greek-speaking world. In Virgil's Aeneid, Chaon is described as a Trojan hero and the eponymous ancestor of the Chaonians. The story is unclear as to whether he was the friend or the brother of Helenus, but in either case, he accompanied him to the court of Neoptolemus. The story concerning Chaon's death is as unclear as that of his relationship to Helenus. Chaon was either killed in a hunting accident or offered himself as a sacrifice to the gods during an epidemic, thus saving the lives of his countrymen. In either case, when Helenus became the ruler of the country, he named a part of the kingdom after Chaon. According to Hadeli, even though the Aeneid was written between 29 and 19 BC, this myth originated after the Macedonian Wars and has its roots at least in the 2nd century BC. She explains it as an attempt of the Romans, first, to better associate themselves with the Chaonians through a common Trojan origin (as the former had through Aeneas), and second, to disassociate them ideologically from the Molossians (and consequently the pro-Macedonian faction), since Helenus – being the Chaonian mythical progenitor – is presented as a mere slave of Neoptolemus (progenitor of the Molossians) who only came to power after his death.

List of Chaonians
Photius and Nicanor, leaders of the Chaonians in the Peloponnesian War (circa 431–421 BC).
Doropsos, theorodokos in Epidauros (circa 365 BC).
Antanor (son of Euthymides), proxenos in Delphi (325–275 BC).
Peukestos, proxenos in Thyrrheion, Acarnania (3rd century BC).
Myrtilos, officer who gave a proxeny decree to the Boeotian Kallimelos (late 3rd century BC).
Boiskos (son of Messaneos), prostates (late 3rd century BC).
Lykidas (son of Hellinos), prostates (circa 232–168 BC).
-tos (son of Lysias), winner in Pale (wrestling) at the Panathenaics (194/193 BC).
Charops, father of Machatas, father of Charops the Younger – philo-Roman politicians (2nd century BC).

See also
Tribes of Epirus
Dexari

References
Citations
Sources

Further reading

Ancient tribes in Albania Ancient tribes in Epirus Greek tribes
1085448
https://en.wikipedia.org/wiki/Wind%20River%20Systems
Wind River Systems
Wind River Systems, also known as Wind River, is an Alameda, California–based wholly owned subsidiary of Aptiv. The company develops embedded system and cloud software consisting of real-time operating systems software, industry-specific software, simulation technology, development tools and middleware. History Wind River Systems was formed by a partnership of Jerry Fiddler and Dave Wilner. Until 1981, Fiddler had worked at Berkeley Lab writing software for control systems, and wanted to pursue a career in computer generated music, which he funded through a consultancy business focused on real-time operating systems. His early clients included the National Football League and film director Francis Ford Coppola, for whom he designed a unique film editing system. Wilner, a former colleague at Berkeley Lab, joined Fiddler to form Wind River Systems in 1983. In 2009, Wind River was acquired by Intel. In 2018, Intel spun out its Wind River division, which was then acquired by TPG Capital. On January 11, 2022, Wind River announced that it was acquired by Aptiv, an auto parts company, for $4.3 billion in cash. The company's key milestones include: 1983: Wind River is incorporated in 1983 with each partner contributing $3,000 and a desk to the business. The company was named for Wind River, Wyoming, where Fiddler had vacationed that year 1987: Wind River introduces VxWorks, a leading real-time operating system for embedded devices. 1995: VxWorks launches into space on the NASA Clementine moon probe. Also, the Tornado integrated development environment is launched and wins EDN's Embedded Development Software Innovation of the Year award as the first graphically oriented development environment for embedded 1997: VxWorks, the real-time operating system for NASA's Mars Pathfinder mission, lands on Mars 1999: Acquisition of one of their major competitors, Integrated Systems Inc., makers of pSOS. Wind River has since discontinued the pSOS product line and has recommended existing pSOS customers move to VxWorks. 2001: Wind River Systems acquired Belgian software company Eonic Systems, the developer of Virtuoso RTOS for DSPs. In November 2015, Wind River Systems renamed the operating system to Rocket, made it open-source and royalty-free. In 2016, Rocket was incorporated into Zephyr RTOS hosted by Linux Foundation. 2004: Wind River officially enters the embedded Linux market, with a Carrier Grade Linux platform targeting the networking & communications infrastructure industry. Also, NASA's Mars Exploration Rovers, Spirit and Opportunity, powered by VxWorks, land on Mars. Wind River helped in manufacturing the IntelliStar for The Weather Channel. The IntelliStar is used at Cable Headends to insert Local Weather into The Weather Channel's national programming. 2007: Wind River joins Google's Open Handset Alliance as an original Linux commercialization partner. 2008: Wind River establishes the embedded Linux market share lead with greater than 30 percent of total market revenue, four years after entering the market. 2009: Intel acquires Wind River for approximately $884 million and it becomes a wholly owned subsidiary of Intel. Wind River launches a commercial Android software platform. Wind River becomes a founding member of the GENIVI Alliance. 2010: Wind River adds Simics, a full system simulator, to its product portfolio. VxWorks becomes the first RTOS to be certified under Wurldtech's Achilles certification program, a standard for industrial cyber security. 
Wind River partners with Intel and the Linux Foundation to create the Yocto Project, an open source collaboration project providing templates, tools and methods to help developers create embedded Linux-based systems. 2012: NASA Jet Propulsion Laboratory (JPL) successfully lands Mars Science Laboratory rover Curiosity, powered by Wind River technology. Wind River debuts software platform targeted at gateways and hubs for the Internet of things. 2013: Wind River becomes part of Intel's Internet of Things Group (IOTG), but remains a wholly owned subsidiary. Barry Mainz assumes the position of President. 2014: Wind River introduces its software for network functions virtualization (NFV) applications, as well as its next-generation VxWorks platform reinvented for the Internet of Things. 2014: Wind River fined $750,000 by Bureau of Industry and Security for exporting encryption technology to countries including Israel and South Korea. 2015: the company was accused of repeated trademark and licensing violations of the Grsecurity project, which as response has restricted its code to commercial partners only. 2016: Intel announced that it intended to fully integrate Wind River into one of its divisions (thus ending Wind River's status as a wholly owned subsidiary,) although the scheduled completion date of this action has not been made public. Barry Mainz left the company to become President and CEO of MobileIron and Jim Douglas assumes the position of President. 2018: Intel divested Wind River Systems to alternative asset fund manager TPG under undisclosed terms. 2018: Ford selects Wind River Over-the-Air Update Technology. 2018: NASA's InSight lands on Mars with VxWorks operating system. 2019: Wind River became the first OpenChain 2.0 Conformant. 2020: Kevin Dallas named to role of CEO and member of the Board of Directors. 2020: Verizon uses Wind River's software infrastructure for its deployment of virtualized 5G RAN. 2020: Wind River becomes first and only to achieve The Open Group FACE™ Conformance for Linux. 2021: Perseverance Mars becomes fourth Mars rover running VxWorks operating system. 2021: Vodafone selects Wind River as a partner to build Europe's first commercial open RAN network. 2022: Wind River was acquired by Aptiv from TPG Capital for $4.3 billion in cash. Products Among the company's products are the VxWorks real-time operating system, the Wind River Linux operating system, and the Eclipse-based Wind River Workbench IDE. VxWorks began as an add-on to the VRTX operating system in the early 1980s. Wind River Workbench superseded the previous Tornado environment. VxWorks VxWorks is the original flagship product of Wind River. It is a real-time operating system (RTOS) intended for embedded and critical infrastructure devices and systems. It supports multicore processors, 32-bit and 64-bit, for several architectures including ARM, Intel, and Power and has over one hundred board support packages (BSPs) for different hardware systems. VxWorks is a real time and deterministic operating system. Wind River Linux Wind River's Linux product is source code and a build system that generate runtime images suitable for embedded devices. It supports a variety of architectures, including ARM, MIPS, PowerPC, IA32 and SPARC. In 2004, Wind River announced a partnership with Red Hat to create a new Linux-based distribution for embedded devices. Wind River has since ended its partnership with Red Hat and now ships its own Linux distribution optimized for embedded Linux development. 
Wind River released the first version of its embedded Linux distribution, Platform for Network Equipment - Linux Edition (PNE-LE) 1.0 in 2005. It was registered against the Carrier Grade Linux 2.0 specification and supported IA32 and PPC architectures. They added other platforms in subsequent releases, General Purpose Platform - Linux Edition (GPP-LE) and Platform for Consumer Devices - Linux Edition PCD-LE) starting in version 1.4. In 2013 Wind River announced Wind River Linux 6.0. Wind River Systems acquired FSMLabs embedded technology in February 2007 and made a version available as Wind River Real-Time Core for Wind River Linux. As of August 2011, Wind River has discontinued the Wind River Real-Time Core product line, effectively ending commercial support for the RTLinux product. On August 7, 2007, Palm Inc. announced it would use Wind River Systems Linux for its (later aborted) Palm Foleo. In 2008, Wind River announced cooperation with BMW, Intel and Magneti Marelli for development of a Linux-based open-source platform to control in-car electronics, which was extended in the GENIVI Alliance in 2009. In 2012, Wind River introduced a version of Linux that was developed from the Yocto Project open source development infrastructure and achieved Yocto project compatible registration. Simics Simics is a full-system simulator used by software developers to simulate the hardware of complex electronic systems. Wind River Studio Wind River Studio is a cloud-native platform for the deployment and servicing of mission-critical intelligent edge systems. Acquisitions 1991: Assets of ITRA (Vannes, France) 1997: DSP Foundry (WiSP RTOS for Motorola DSP563xx family) 1999: Integrated Systems Inc. (pSOS+) 2000: Merge staff of Dragonfly Software Consulting 2000: Embedded Support Tools Corp. (ESTC) 2000: ICEsoft (Bergen, Norway) 2000: AudeSi Technologies Inc. (Calgary, Alberta, Canada) 2001: Eonic Systems (Virtuoso RTOS) 2001: Berkeley Software Design Inc. (BSDI) 2005: ScopeTools business unit from Real-Time Innovations 2006: Interpeak AB (Stockholm, Sweden) 2007: Assets of FSMLabs (Socorro, New Mexico, United States) 2008: MIZI (Seoul, South Korea) 2009: Tilcon Software Limited (Ottawa, Ontario, Canada) 2010: Virtutech (Stockholm, Sweden) 2011: Switch++ (Santa Clara, United States) 2016 Arynga 2020 Star Labs References Further reading Lord of the Toasters an article from Wired magazine an article from CNET External links Wind River Systems company website Software companies of the United States Intel acquisitions Linux companies Software companies based in the San Francisco Bay Area Companies based in Alameda, California Companies based in Berkeley, California American companies established in 1981 Software companies established in 1981 1981 establishments in California 2009 mergers and acquisitions
17125914
https://en.wikipedia.org/wiki/EAST-ADL
EAST-ADL
EAST-ADL is an Architecture Description Language (ADL) for automotive embedded systems, developed in several European research projects. It is designed to complement AUTOSAR with descriptions at higher levels of abstraction. Aspects covered by EAST-ADL include vehicle features, functions, requirements, variability, software components, hardware components and communication. Currently, it is maintained by the EAST-ADL Association in cooperation with the European FP7 MAENAD project.

Overview
EAST-ADL is a domain-specific language using meta-modeling constructs such as classes, attributes, and relationships. It is based on concepts from UML, SysML and AADL, but adapted for automotive needs and compliance with AUTOSAR. There is an EAST-ADL UML2 profile which is used in UML2 tools for user modeling. The EAST-ADL definition also serves as the specification for implementation in domain-specific tools. EAST-ADL contains several abstraction levels. The software- and electronics-based functionality of the vehicle is described at different levels of abstraction. The proposed abstraction levels and the contained elements provide a separation of concerns and an implicit style for using the modeling elements. The embedded system is complete on each abstraction level, and parts of the model are linked with various traceability relations. This makes it possible to trace an entity from feature down to components in hardware and software. EAST-ADL is defined with the development of safety-related embedded control systems as a benchmark. The EAST-ADL scope comprises support for the main phases of software development, from early analysis via functional design to the implementation and back to integration and validation at vehicle level. The main role of EAST-ADL is that of providing an integrated system model. On this basis, several concerns are addressed:
Documentation, in terms of an integrated system model.
Communication between engineers, by providing predefined views as well as related information.
Analysis, through the description of system structure and properties.
Behavioural models for simulation or code generation are supported as references from EAST-ADL functions to external models, such as a subsystem in MATLAB/Simulink.

Organisation of EAST-ADL Meta-Model
The EAST-ADL meta-model is organized according to four abstraction levels:
Vehicle level contains modeling elements to represent intended functionality in a solution-independent way.
Analysis level represents the abstract functional decomposition of the vehicle with the principal internal and external interfaces.
Design level has the detailed functional definition, a hardware architecture and allocations of functions to hardware.
Implementation level relies on AUTOSAR elements and does not have EAST-ADL-specific constructs for the core structure.
For all abstraction levels, relevant extension elements for requirements, behavior, variability and dependability are associated with the core structure.

Relation between EAST-ADL and AUTOSAR
Instead of providing modeling entities for the lowest abstraction level, i.e. the implementation level, EAST-ADL uses unmodified AUTOSAR entities for this purpose and provides means to link EAST-ADL elements on higher abstraction levels to AUTOSAR elements. Thus, EAST-ADL and AUTOSAR in concert provide means for efficient development and management of the complexity of automotive embedded systems from early analysis right down to implementation.
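This layered structure, and the traceability from a vehicle-level feature down to an implementation-level AUTOSAR component, can be illustrated with a small sketch. The sketch below is purely hypothetical: the class, level, and link names are invented for illustration and are not actual EAST-ADL or AUTOSAR metaclasses; a real model would be authored in a UML tool or exchanged as EAXML rather than written as code.

# Illustrative toy model of EAST-ADL-style abstraction levels; all names are
# invented for illustration and are not real EAST-ADL metaclasses.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Element:
    name: str
    level: str                               # "Vehicle", "Analysis", "Design" or "Implementation"
    realized_by: Optional["Element"] = None  # traceability link to the next level down

def trace(feature: Element) -> List[Element]:
    """Follow realization links from a vehicle-level feature down to implementation."""
    chain = [feature]
    while chain[-1].realized_by is not None:
        chain.append(chain[-1].realized_by)
    return chain

# Hypothetical wiper-control feature traced down to an AUTOSAR software component.
swc = Element("WiperControl_SWC", "Implementation")
design_function = Element("WiperControlFunction", "Design", realized_by=swc)
analysis_function = Element("WiperArbitration", "Analysis", realized_by=design_function)
feature = Element("AutomaticWiping", "Vehicle", realized_by=analysis_function)

if __name__ == "__main__":
    for element in trace(feature):
        print(f"{element.level:>14}: {element.name}")

In the language itself, such links correspond to the realization and traceability relations between abstraction levels described above, with the implementation level represented by unmodified AUTOSAR entities.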
Concepts from model-based development and component-based development reinforce one another. An early, high-level representation of the system can evolve seamlessly into the detailed specifications of the AUTOSAR language. In addition, EAST-ADL incorporates the following system development concerns:
Modeling of requirements and verification/validation information
Feature modeling and support for software system product lines
Modeling of variability of the system design
Structural and behavioral modeling of functions and hardware entities in the context of distributed systems
Environment, i.e., plant model and adjacent systems
Non-functional operational properties such as a definition of function timing and failure modes, supporting system-level analysis
The EAST-ADL metamodel is specified according to the same rules as the AUTOSAR metamodel, which means that the two sets of elements can co-exist in the same model. The dependency is unidirectional from EAST-ADL to AUTOSAR, such that AUTOSAR is independent of EAST-ADL. However, relevant EAST-ADL elements can reference AUTOSAR elements to provide EAST-ADL support for requirements, variability, safety, etc. to the AUTOSAR domain. A model may thus be defined where AUTOSAR elements represent the software architecture and EAST-ADL elements extend the AUTOSAR model with orthogonal aspects and represent abstract system information through e.g. function and feature models. Such a model can be defined in UML, by applying both an EAST-ADL profile and an AUTOSAR profile, or in a domain-specific tool based on a merged AUTOSAR and EAST-ADL metamodel.

History and Specification of EAST-ADL
The EAST-ADL language has been defined in several steps within European research projects. EAST-ADL is governed by the EAST-ADL Association, founded in September 2011. The EAST-ADL UML2 profile is represented in the EAST-ADL annex to the OMG MARTE profile.

Discussion
While interest from automotive companies in EAST-ADL has been increasing in recent years, EAST-ADL is still to be seen as a research effort (as of 2012). The practical acceptance of EAST-ADL in the automotive industry is still very low, even though EAST-ADL addresses many important aspects of vehicle development. EAST-ADL is used as a reference model in other research projects, e.g. CESAR and TIMMO-2-USE.

Modeling Tools and File Format
EAST-ADL tool support is still limited, although a UML profile is available and domain-specific tools such as Mentor Graphics VSA, MetaCase MetaEdit+ and Systemite SystemWeaver have been tailored for EAST-ADL in the context of research projects and with customers. Papyrus UML, extended within the ATESST project as a concept demonstrator, has EAST-ADL support, and MagicDraw can also provide EAST-ADL palettes, diagrams, etc. In the case of UML, developers also need to have knowledge of UML (classes, stereotypes, arrow types, etc.) for modeling with EAST-ADL. Many automotive engineers (in particular mechanical engineers, hardware developers, and process experts) do not have this knowledge and prefer other approaches. EATOP is an upcoming initiative to make an Eclipse-based implementation of the EAST-ADL meta-model. An XML-based exchange format, EAXML, allows tools to exchange EAST-ADL models. The EAXML schema is autogenerated from the EAST-ADL metamodel according to the same principles as the AUTOSAR ARXML schema. Currently, the exchange format is supported by the EAST-ADL prototype of Mentor Graphics VSA, MetaEdit+ and SystemWeaver.
For UML tooling, it is possible to exchange models using XMI, subject to the XMI compatibility between tools. Similar approaches Unified Modeling Language (UML) Systems Modeling Language (SysML) Architecture analysis and design language (AADL) AUTOSAR SystemDesk References External links www.east-adl.info EAST-ADL Association www.maenad.eu MAENAD project, current (2012) main contributing project to EAST-ADL. www.atesst.org Home of ATESST and ATESST2, former main EAST-ADL projects. Data modeling languages Software architecture Systems architecture Architecture description language
59769074
https://en.wikipedia.org/wiki/Zero%205%20%28Atari%20Jaguar%20game%29
Zero 5 (Atari Jaguar game)
Zero 5 is a shooter video game developed by Caspian Software and published by Telegames exclusively for the Atari Jaguar on September 29, 1997. It is a remake of the 1994 Atari STe title of the same name and one of the last licensed releases published for the Jaguar after the console was discontinued in 1996 by Atari Corporation, which merged with JT Storage in a reverse takeover prior to the game's eventual launch. Set in the year 2050 and featuring a completely different gameplay structure compared to the original title, the game has the player assume the role of a Space Hound recruited by DEFCON to take command of both the BamBam space fighter craft and the Hit-Pak space cruiser in an attempt to defend Earth and counterattack against the invading forces of the Morphon alien race before they reach the planet and eradicate the human race. Conceived as a commission by Atari Corp.'s UK division in an effort to incite independent developers to work with the Jaguar, development of Zero 5 began in January 1995, and the game went almost unreleased after Atari discontinued the system until it was eventually picked up by Telegames. Zero 5 was met with mostly positive reception from the video game magazines and dedicated outlets that reviewed it, despite its very late release on the platform. Critics praised the graphics, sound design and gameplay, but criticized the high level of difficulty and steep learning curve. The remake was the last project developed by Caspian Software, which was disbanded prior to the game's release, and plans for a PlayStation port were ultimately scrapped after an unsuccessful pitch to a publisher. Gameplay Zero 5 is a shooter game in which the player assumes the role of a Space Hound recruited by DEFCON to take command of both the BamBam space fighter craft and the Hit-Pak space cruiser in a battle to defend Earth from the invading forces of the Morphon alien race, who are set to eradicate the human race after their signal was picked up by the now-destroyed Europa III satellite, across 15 missions, each featuring a different scenario and gameplay style, with later missions increasing in difficulty and introducing new enemies as the player progresses further. By pressing the Option button at the title screen, the player has access to the setup menu where various settings can be adjusted, such as control and sound configurations, in addition to selecting the last mission reached in the main game to resume progress and choosing any of the three levels of difficulty, each applying a different stipulation during gameplay, with the highest one featuring missions that are longer than on the other two difficulty levels. The game is based around three gameplay styles, depending on the mission. The first mode is a shoot 'em up involving the BamBam space fighter, in which the player is given full control of their ship in a three-dimensional environment featuring six degrees of freedom in zero gravity. The main objective is to survive against incoming enemies from all directions to finish this mode, while collecting energy pods left behind after destroying a wave of enemy craft to increase either the ship's firepower, shield or the score by pressing 1, 2 or 3 on the keypad; pressing 6 activates a homing laser beam that destroys enemies but can only be used up to three times. The ship can also be rotated left and right with the C or A buttons.
The second mode is an on-rails shooter segment that takes place from a first-person perspective, in which the player controls the Hit-Pak space cruiser and its gun turrets in a 360° environment. Like the first mode, the objective is to survive multiple waves of enemies that appear from all directions and destroy space debris to protect the ship; however, there are no energy pods to collect to regenerate shields, and the player must keep focused on the radar, which displays the ship's shields and enemies. The missions involving Hit-Pak are broken into different segments, with some alternating between different playstyles. The third and last mode revolves around flying BamBam at high speed through confined spaces inside an alien ship and destroying its reactors, while collecting power-ups by passing through a flashing panel. Sound plays a key role during this mode, as the player's shots can be used to determine which walls are indestructible and avoid crashing into them. Development and release Zero 5 is a remake of the 1994 Atari STe title of the same name, which was one of the last official releases for the Atari ST platform after it was discontinued in 1993 by Atari Corporation, which would later commission Caspian Software to work alongside it as part of Atari Corp.'s European center of development, established in January 1995 with the aim of working alongside small game developers around the region to create titles for the Atari Jaguar. Development of the project started in the same month that Atari established its European center of development, and the game was created by a small team at Caspian Software consisting of producer Chris Dillon and lead programmer Matthew Gosling, with assistance from programmers Dave Pratt and Sean Baggaley (who previously wrote Rock 'n' Roll Clams on the Atari ST), while David Philbedge and Mark Bloomfield were involved as artists. Andrew T. Gisby, who was one of the authors of the original game, worked as a consultant during development. Dave Newman and James Veal, who also worked as composers on the original game, returned to compose music for the remake using a proprietary sound driver called "Zik Tracker", in addition to creating sound effects. Atari UK Software Development Manager Alistair Bodin and marketing manager Darryl Still were also involved during the production. In both an October 1997 post at the Jaguar Interactive forum and an interview with online magazine Jaguar Explorer Online, Matthew Gosling recounted the development process of the game. Gosling claimed that the game runs at 30 frames per second and displays around 20,000 polygons on-screen with minimal texture mapping, among other techniques. He also stated that the game was originally going to feature a fourth gameplay style reminiscent of the missions in the original game that took place on the surface of a planet or an asteroid, in addition to more enemy types and mission objectives as well as a two-player mode and support for the ProController, but these plans were scrapped due to time and cartridge space constraints. He and Chris Dillon stated that, outside of sharing the same title, parts of the original plot and other key elements, the remake was a completely different title that took a more arcade-style approach than the original game. Gosling also stated that the game had "the lowest development budget of any Jaguar product at the time".
Zero 5 was first showcased at Autumn ECTS '95 and was originally scheduled for a November 1995 release, with internal documents from the company listing the game as still in development as of August 1995. It was planned to be published by Atari and was also covered by members of the magazine press who were invited to Atari's UK division. The game continued to be mentioned in catalogs and magazines before Atari discontinued the Jaguar and merged with JT Storage in a reverse takeover in April 1996, until Telegames became involved and released it worldwide on September 29, 1997. Being a late release after the discontinuation of the platform, the game could be purchased through direct order from Telegames' US and UK websites, or from retailers and distributors such as Electronics Boutique in North America and Terre du Milieu in France. However, Gosling claimed that Atari sold the game to Telegames in an unfinished state, with none of the team members from Caspian Software receiving payment for the game's development. The remake became the last project developed by Caspian Software, as the company was disbanded due to financial difficulties. Gosling stated that a port for the PlayStation was also planned but never entered full development after an unsuccessful pitch to a publisher. Reception Zero 5 garnered a mostly positive reception from critics despite its very late release on the Jaguar. GameFan's three reviewers were mixed regarding the visual presentation but unanimously praised the techno soundtrack, which they regarded as better than most Nintendo 64 releases, as well as the intense gameplay and smooth pacing. However, criticism was directed at the high difficulty and the constant change of playstyles between missions. GamePro's Dan Elektro commended the fast-paced polygonal visuals, controls, fun factor, sound design and techno music, though he also regarded the latter as repetitive. Elektro stated that "Zero 5 is a dexterity test of the highest order, though some players will no doubt find it merely frustrating." Atari Gaming Headquarters' Keita Iida regarded the audiovisual presentation as some of the best on the Jaguar but criticized the gameplay, specifically the three playstyles, for design shortcomings in each and a steep learning curve. The Atari Times' Chris Engineer also commended the visuals, audio, controls and fast-paced arcade-style gameplay across each playstyle but noted the high difficulty level.
27456823
https://en.wikipedia.org/wiki/Srini%20Raju
Srini Raju
Chintalapati Srinivasa Raju is an Indian entrepreneur and private equity investor. Srini Raju was the founding chief executive officer (CEO) and managing director (MD) of Dun & Bradstreet Satyam Software, the in-house technology unit of Dun & Bradstreet, established in 1994, which focused on implementing large-scale IT projects for Dun & Bradstreet businesses. DBSS was later renamed Cognizant. Srini later became the co-founder and chairman of iLabs Venture Capital Fund, a private equity firm based in Hyderabad. Besides funding and mentoring next-generation entrepreneurs, he plays an active role in building educational institutions of higher learning. Early life and education Srini Raju was born in 1961 at Khajipalem village, Guntur District, Andhra Pradesh. His father, Anjiraju Chintalapati, was a farmer. After completing his school education, he went to the National Institute of Technology, Kurukshetra, and graduated in 1983 with an Honours Degree, BE (Civil Engineering). In 1986 he received his master's degree in Civil & Environmental Engineering from Utah State University, USA. Career Srini Raju was the founding CEO & MD of Dun & Bradstreet Satyam Software, the in-house technology unit of Dun & Bradstreet, established in 1994, which focused on implementing large-scale IT projects for Dun & Bradstreet businesses. DBSS was later renamed Cognizant. Srini later became the co-founder and chairman of iLabs Venture Capital Fund, a private equity (PE) firm based in Hyderabad. Srini Raju founded iLabs Group in 2000. iLabs Group is one of the pioneers of the investment ecosystem in India. Over the years, it has grown its scope in terms of both investment and industry. Originally started as an angel investor, iLabs Capital has expanded into seed-stage, early-stage and growth-stage investments, as well as restructures and buy-outs. Initially focused on technology and technology-enabled ventures, iLabs Capital has extended into consumer products and services; health care; skills development; media and entertainment; and corporate and industrial parks. Today, iLabs Group is one of the front-runners of the investment landscape in India. Srini has over 35 years of experience in information technology, venture capital and private equity. Prior to iLabs Group, Srini was founding CEO & MD of Dun & Bradstreet Satyam Software (which later became Cognizant Technology Solutions) and Satyam Enterprise Solutions. A number of iLabs investee companies have become leading companies in their respective areas. Sri City, a leading industrial city started by iLabs Group, is home to over 150 companies from 25 countries and two of India's leading universities. Srini is passionate about education and skills development. He is a founding member and member of the Governing Council of the International Institute of Information Technology (IIIT), Hyderabad; Industry Partner (donor) and member of the Board of Governors of the Indian Institute of Information Technology, Sri City; Executive Board member of the Indian School of Business (ISB) and benefactor of the Srini Raju Centre for Information Technology and Networked Economy (SRITNE); co-sponsor (donor) and board member of Krea University; and founding member and board member of T-Hub, Hyderabad. Satyam Computer Services: Corporate Fraud Case Raju was embroiled in one of the biggest corporate fraud scams involving Satyam Computer Services Limited, the company that he left in 2000. The Securities and Exchange Board of India (SEBI), India’s market regulator, held Raju guilty of insider trading and making “unlawful gains”.
India’s Supreme Court, however, acquitted Srinivasa Raju of any wrongdoing. His brother-in-law, Ramalinga Raju, was arrested in connection with the case. Professional affiliations Besides investing and mentoring young professionals and entrepreneurs, he continues to make large contributions to building institutions of excellence in the fields of management and technology. Industry Partner (donor) and member of the Board of Governors of the Indian Institute of Information Technology, Sri City. IIITS was established in 2013 under the Government of India's initiative to set up 20 institutes across India focusing on information technology. The institute was set up by the Government of India (MHRD), the Government of Andhra Pradesh and industry partners represented by the Sri City Foundation as a not-for-profit public-private partnership. Member of the Executive Board of the Indian School of Business. Srini Raju pledged to donate ₹35 crore towards developing a research centre in the areas of information technology and the networked economy. The Indian School of Business (ISB) named the IT research centre the Srini Raju Centre for Information Technology and Networked Economy (SRITNE). Member of the Governing Board and Governing Council, and founding member, of IIIT Hyderabad. In 10 years, IIIT has become India's leading technology research university. IIIT-H is a non-governmental public institution set up on a public-private partnership model by leading tech companies and the Government of Telangana. Co-sponsor (donor) and board member of Krea University. Krea University is a unique effort, specifically conceived and designed to address the unprecedented challenges and opportunities of the 21st century. Member and board member of T-Hub, Hyderabad. T-Hub is a unique public/private partnership between the government of Telangana, three of India's premier academic institutes (IIIT-H, ISB and NALSAR) and key private sector leaders. It stands at the intersection of the start-up, academic, corporate, research and government sectors. Srini Raju is also visiting faculty at various universities such as ISB, IIIT Hyderabad, IIT Madras and IESE Business School-University of Navarre. Srini Raju was also a member of the steering committee for conferences conducted by the Confederation of Indian Industry (CII) jointly with the Government of Andhra Pradesh during the IT mega event "Gitex Hyderabad 2003". Srini Raju is also an advisory member of TiE Hyderabad. TiE Global is a nonprofit venture devoted to entrepreneurs in all industries, at all stages, from incubation throughout the entrepreneurial lifecycle. With a global reach and a local focus, the heart of TiE's efforts lies in its five foundational programs: mentoring, networking, education, funding, and incubation. Srini Raju is an Advisory Board member of Hyderabad Angels. Established in 2012, Hyderabad Angels' investors are leading venture capitalists, entrepreneurs and business leaders who, with their financial and business acumen, are committed to providing support to the Indian and global start-up ecosystem. References Businesspeople from Andhra Pradesh Telugu people Living people 1961 births
25166288
https://en.wikipedia.org/wiki/IPv6%20packet
IPv6 packet
An IPv6 packet is the smallest message entity exchanged using Internet Protocol version 6 (IPv6). Packets consist of control information for addressing and routing and a payload of user data. The control information in IPv6 packets is subdivided into a mandatory fixed header and optional extension headers. The payload of an IPv6 packet is typically a datagram or segment of the higher-level transport layer protocol, but may be data for an internet layer (e.g., ICMPv6) or link layer (e.g., OSPF) instead. IPv6 packets are typically transmitted over the link layer (i.e., over Ethernet or Wi-Fi), which encapsulates each packet in a frame. Packets may also be transported over a higher-layer tunneling protocol, such as IPv4 when using 6to4 or Teredo transition technologies. In contrast to IPv4, routers do not fragment IPv6 packets larger than the maximum transmission unit (MTU); fragmentation is the sole responsibility of the originating node. A minimum MTU of 1,280 octets is mandated by IPv6, but hosts are "strongly recommended" to use Path MTU Discovery to take advantage of MTUs greater than the minimum. Since July 2017, the Internet Assigned Numbers Authority (IANA) has been responsible for registering all IPv6 parameters that are used in IPv6 packet headers. Fixed header The fixed header starts an IPv6 packet and has a size of 40 octets (320 bits). {| class="wikitable" style="text-align: center" |+Fixed header format |- ! style="border-bottom:none; border-right:none;"| Offsets ! style="border-left:none;"| Octet ! colspan="8" | 0 ! colspan="8" | 1 ! colspan="8" | 2 ! colspan="8" | 3 |- ! style="border-top: none" | Octet ! Bit ! style="width:2.6%;"| 0 ! style="width:2.6%;"| 1 ! style="width:2.6%;"| 2 ! style="width:2.6%;"| 3 ! style="width:2.6%;"| 4 ! style="width:2.6%;"| 5 ! style="width:2.6%;"| 6 ! style="width:2.6%;"| 7 ! style="width:2.6%;"| 8 ! style="width:2.6%;"| 9 ! style="width:2.6%;"| 10 ! style="width:2.6%;"| 11 ! style="width:2.6%;"| 12 ! style="width:2.6%;"| 13 ! style="width:2.6%;"| 14 ! style="width:2.6%;"| 15 ! style="width:2.6%;"| 16 ! style="width:2.6%;"| 17 ! style="width:2.6%;"| 18 ! style="width:2.6%;"| 19 ! style="width:2.6%;"| 20 ! style="width:2.6%;"| 21 ! style="width:2.6%;"| 22 ! style="width:2.6%;"| 23 ! style="width:2.6%;"| 24 ! style="width:2.6%;"| 25 ! style="width:2.6%;"| 26 ! style="width:2.6%;"| 27 ! style="width:2.6%;"| 28 ! style="width:2.6%;"| 29 ! style="width:2.6%;"| 30 ! style="width:2.6%;"| 31 |- ! 0 ! 0 | colspan="4"|Version | colspan="8"|Traffic class | colspan="20"|Flow label |- ! 4 ! 32 | colspan="16"|Payload length | colspan="8"|Next header | colspan="8"|Hop limit |- ! 8 ! 64 | colspan="32" rowspan="4"|Source address |- ! 12 ! 96 |- ! 16 ! 128 |- ! 20 ! 160 |- ! 24 ! 192 | colspan="32" rowspan="4"|Destination address |- ! 28 ! 224 |- ! 32 ! 256 |- ! 36 ! 288 |} Version (4 bits) The constant 6 (bit sequence 0110). Traffic Class (6+2 bits) The bits of this field hold two values. The six most-significant bits hold the differentiated services field (DS field), which is used to classify packets. Currently, all standard DS fields end with a '0' bit. Any DS field that ends with two '1' bits is intended for local or experimental use. The remaining two bits are used for Explicit Congestion Notification (ECN); priority values subdivide into ranges: traffic where the source provides congestion control and non-congestion control traffic. Flow Label (20 bits) A high-entropy identifier of a flow of packets between a source and destination.
A flow is a group of packets, e.g., a TCP session or a media stream. The special flow label 0 means the packet does not belong to any flow (using this scheme). An older scheme identifies flow by source address and port, destination address and port, protocol (value of the last Next Header field). It has further been suggested that the flow label be used to help detect spoofed packets. Payload Length (16 bits) The size of the payload in octets, including any extension headers. The length is set to zero when a Hop-by-Hop extension header carries a Jumbo Payload option. Next Header (8 bits) Specifies the type of the next header. This field usually specifies the transport layer protocol used by a packet's payload. When extension headers are present in the packet this field indicates which extension header follows. The values are shared with those used for the IPv4 protocol field, as both fields have the same function (see List of IP protocol numbers). Hop Limit (8 bits) Replaces the time to live field in IPv4. This value is decremented by one at each forwarding node and the packet is discarded if it becomes 0. However, the destination node should process the packet normally even if received with a hop limit of 0. Source Address (128 bits) The unicast IPv6 address of the sending node. Destination Address (128 bits) The IPv6 unicast or multicast address of the destination node(s). In order to increase performance, and since current link layer technology and transport layer protocols are assumed to provide sufficient error detection, the header has no checksum to protect it. Extension headers Extension headers carry optional internet layer information and are placed between the fixed header and the upper-layer protocol header. Extension headers form a chain, using the Next Header fields. The Next Header field in the fixed header indicates the type of the first extension header; the Next Header field of the last extension header indicates the type of the upper-layer protocol header in the payload of the packet. All extension headers are a multiple of 8 octets in size; some extension headers require internal padding to meet this requirement. There are several extension headers defined, and new extension headers may be defined in the future. Most extension headers are examined and processed at the packet's destination. Hop-by-Hop Options can be processed and modified by intermediate nodes and, if present, must be the first extension. All extension headers are optional and should appear at most once, except for the Destination Options header extension, which may appear twice. If a node does not recognize a specific extension header, it should discard the packet and send a Parameter Problem message (ICMPv6 type 4, code 1). The defined extension headers below are listed in the preferred order for the case where there is more than one extension header following the fixed header. {| class="wikitable" |- ! Extension header ! Next Header field value ! 
Description |- | Hop-by-Hop Options | | 0 || Options that need to be examined by all devices on the path |- | Routing | | 43 || Methods to specify the route for a datagram (used with Mobile IPv6) |- | Fragment | | 44 || Contains parameters for fragmentation of datagrams |- | Authentication Header (AH) | | 51 || Contains information used to verify the authenticity of most parts of the packet |- | Encapsulating Security Payload (ESP) | | 50 || Carries encrypted data for secure communication |- | Destination Options (before upper-layer header) | | 60 || Options that need to be examined only by the destination of the packet |- | Mobility (currently without upper-layer header) | | 135 || Parameters used with Mobile IPv6 |- | Host Identity Protocol || 139 || Used for Host Identity Protocol version 2 (HIPv2) |- | Shim6 Protocol || 140 || Used for Shim6 |- | Reserved || 253 || Used for experimentation and testing |- | Reserved || 254 || Used for experimentation and testing |} Value 59 (No Next Header) in the Next Header field indicates that there is no next header whatsoever following this one, not even a header of an upper-layer protocol. It means that, from the header's point of view, the IPv6 packet ends right after it: the payload should be empty. There could, however, still be data in the payload if the payload length in the first header of the packet is greater than the length of all extension headers in the packet. This data should be ignored by hosts, but passed unaltered by routers. Hop-by-hop options and destination options The Hop-by-Hop Options extension header may be examined and altered by all nodes on the packet's path, including sending and receiving nodes. (For authentication, option values that may change along the path are ignored.) The Destination Options extension header needs to be examined by the destination node(s) only. The extension headers are both at least 8 octets in size; if more options are present than will fit in that space, blocks of 8 octets, containing options and padding, are added to the header repeatedly until all options are represented. {| class="wikitable" style="text-align: center" |+Hop-by-Hop Options and Destination Options extension header format |- ! style="border-bottom:none; border-right:none;"| Offsets ! style="border-left:none;"| Octet ! colspan="8" | 0 ! colspan="8" | 1 ! colspan="8" | 2 ! colspan="8" | 3 |- ! style="border-top: none" | Octet ! Bit ! style="width:2.6%;"| 0 ! style="width:2.6%;"| 1 ! style="width:2.6%;"| 2 ! style="width:2.6%;"| 3 ! style="width:2.6%;"| 4 ! style="width:2.6%;"| 5 ! style="width:2.6%;"| 6 ! style="width:2.6%;"| 7 ! style="width:2.6%;"| 8 ! style="width:2.6%;"| 9 ! style="width:2.6%;"| 10 ! style="width:2.6%;"| 11 ! style="width:2.6%;"| 12 ! style="width:2.6%;"| 13 ! style="width:2.6%;"| 14 ! style="width:2.6%;"| 15 ! style="width:2.6%;"| 16 ! style="width:2.6%;"| 17 ! style="width:2.6%;"| 18 ! style="width:2.6%;"| 19 ! style="width:2.6%;"| 20 ! style="width:2.6%;"| 21 ! style="width:2.6%;"| 22 ! style="width:2.6%;"| 23 ! style="width:2.6%;"| 24 ! style="width:2.6%;"| 25 ! style="width:2.6%;"| 26 ! style="width:2.6%;"| 27 ! style="width:2.6%;"| 28 ! style="width:2.6%;"| 29 ! style="width:2.6%;"| 30 ! style="width:2.6%;"| 31 |- ! 0 ! 0 | colspan="8"|Next header | colspan="8"|Header extension length | colspan="16"|Options and padding |- ! 4 ! 32 | colspan="32"|Options and padding |- ! 8 ! 64 | colspan="32" rowspan=2|Optional: more Options and padding |- ! 12 ! 
96 |} Next Header (8 bits) Specifies the type of the next header. Header extension length (8 bits) Length of this header in 8-octet units, not including the first 8 octets. Options and padding (variable) Contains one or more options, and optional padding fields to align options and to make the total header length a multiple of 8 octets. Options are TLV-coded. Routing The Routing extension header is used to direct a packet to one or more intermediate nodes before being sent to its destination. The header is at least 8 octets in size; if more Type-specific Data is needed than will fit in 4 octets, blocks of 8 octets are added to the header repeatedly, until all Type-specific Data is placed. {| class="wikitable" style="text-align: center" |+Routing extension header format |- ! style="border-bottom:none; border-right:none;"| Offsets ! style="border-left:none;"| Octet ! colspan="8" | 0 ! colspan="8" | 1 ! colspan="8" | 2 ! colspan="8" | 3 |- ! style="border-top: none" | Octet ! Bit ! style="width:2.6%;"| 0 ! style="width:2.6%;"| 1 ! style="width:2.6%;"| 2 ! style="width:2.6%;"| 3 ! style="width:2.6%;"| 4 ! style="width:2.6%;"| 5 ! style="width:2.6%;"| 6 ! style="width:2.6%;"| 7 ! style="width:2.6%;"| 8 ! style="width:2.6%;"| 9 ! style="width:2.6%;"| 10 ! style="width:2.6%;"| 11 ! style="width:2.6%;"| 12 ! style="width:2.6%;"| 13 ! style="width:2.6%;"| 14 ! style="width:2.6%;"| 15 ! style="width:2.6%;"| 16 ! style="width:2.6%;"| 17 ! style="width:2.6%;"| 18 ! style="width:2.6%;"| 19 ! style="width:2.6%;"| 20 ! style="width:2.6%;"| 21 ! style="width:2.6%;"| 22 ! style="width:2.6%;"| 23 ! style="width:2.6%;"| 24 ! style="width:2.6%;"| 25 ! style="width:2.6%;"| 26 ! style="width:2.6%;"| 27 ! style="width:2.6%;"| 28 ! style="width:2.6%;"| 29 ! style="width:2.6%;"| 30 ! style="width:2.6%;"| 31 |- ! 0 ! 0 | colspan="8"|Next header | colspan="8"|Header extension length | colspan="8"|Routing type | colspan="8"|Segments left |- ! 4 ! 32 | colspan="32"|Type-specific data |- ! 8 ! 64 | colspan="32" rowspan="2"|Optional: more type-specific data... |- ! 12 ! 96 |} Next header (8 bits) Indicates the type of the next header. Header extension length (8 bits) The length of this header, in multiples of 8 octets, not including the first 8 octets. Routing type (8 bits) A value between 0 and 255, as assigned by IANA. {| class="wikitable" style="text-align: left" !| Type !| Status ! style='width=500px' | Comment |- | 0 | Deprecated | Due to the fact that with Routing Header type 0 a simple but effective denial-of-service attack could be launched, this header was deprecated in 2007 and host and routers are required to ignore these headers. |- | 1 | Deprecated | Used for the Nimrod project funded by DARPA. It was deprecated in 2009. |- | 2 | Allowed | A limited version of type 0 and is used for Mobile IPv6, where it can hold the home address of the mobile node. |- | 3 | Allowed | RPL Source Route Header for low-power and lossy networks. |- | 4 | Allowed | Segment Routing Header (SRH). |- | 253 || Private use | May be used for testing, not for actual implementations. RFC3692-style Experiment 1. |- | 254 || Private use | May be used for testing, not for actual implementations. RFC3692-style Experiment 2. |} Segments Left (8 bits) Number of nodes this packet still has to visit before reaching its final destination. Type-specific Data (variable) Data that belongs to this type of routing header. Fragment In order to send a packet that is larger than the path MTU, the sending node splits the packet into fragments. 
The Fragment extension header carries the information necessary to reassemble the original (unfragmented) packet. {| class="wikitable" style="text-align: center" |+Fragment extension header format |- ! style="border-bottom:none; border-right:none;"| Offsets ! style="border-left:none;"| Octet ! colspan="8" | 0 ! colspan="8" | 1 ! colspan="8" | 2 ! colspan="8" | 3 |- ! style="border-top: none" | Octet ! Bit ! style="width:2.6%;"| 0 ! style="width:2.6%;"| 1 ! style="width:2.6%;"| 2 ! style="width:2.6%;"| 3 ! style="width:2.6%;"| 4 ! style="width:2.6%;"| 5 ! style="width:2.6%;"| 6 ! style="width:2.6%;"| 7 ! style="width:2.6%;"| 8 ! style="width:2.6%;"| 9 ! style="width:2.6%;"| 10 ! style="width:2.6%;"| 11 ! style="width:2.6%;"| 12 ! style="width:2.6%;"| 13 ! style="width:2.6%;"| 14 ! style="width:2.6%;"| 15 ! style="width:2.6%;"| 16 ! style="width:2.6%;"| 17 ! style="width:2.6%;"| 18 ! style="width:2.6%;"| 19 ! style="width:2.6%;"| 20 ! style="width:2.6%;"| 21 ! style="width:2.6%;"| 22 ! style="width:2.6%;"| 23 ! style="width:2.6%;"| 24 ! style="width:2.6%;"| 25 ! style="width:2.6%;"| 26 ! style="width:2.6%;"| 27 ! style="width:2.6%;"| 28 ! style="width:2.6%;"| 29 ! style="width:2.6%;"| 30 ! style="width:2.6%;"| 31 |- ! 0 ! 0 | colspan="8"|Next header | colspan="8"|Reserved | colspan="13"|Fragment offset | colspan="2"|Res | colspan="1"|M |- ! 4 ! 32 | colspan="32"|Identification |} Next header (8 bits) Identifies the type of the next header. Reserved (8 bits) Initialized to all zeroes. Fragment offset (13 bits) Offset, in 8-octet units, relative to the start of the fragmentable part of the original packet. Res (2 bits) Reserved; initialized to zeroes. M Flag (1 bit) 1 means more fragments follow; 0 means last fragment. Identification (32 bits) Packet identification value, generated by the source node. Needed for reassembly of the original packet. Authentication Header (AH) and Encapsulating Security Payload (ESP) The Authentication Header and the Encapsulating Security Payload are part of IPsec and are used identically in IPv6 and in IPv4. Payload The fixed and optional IPv6 headers are followed by the upper-layer payload, the data provided by the transport layer, for example a TCP segment or a UDP datagram. The Next Header field of the last IPv6 header indicates what type of payload is contained in this packet. Standard payload length The payload length field of IPv6 (and IPv4) has a size of 16 bits, capable of specifying a maximum length of 65,535 octets for the payload. In practice, hosts determine the maximum usable payload length using Path MTU Discovery (yielding the minimum MTU along the path from sender to receiver), to avoid having to fragment packets. Most link-layer protocols have MTUs considerably smaller than 65,575 octets. Jumbogram An optional feature of IPv6, the jumbo payload option in a Hop-By-Hop Options extension header, allows the exchange of packets with payloads of up to one octet less than 4 GiB (2^32 − 1 = 4,294,967,295 octets), by making use of a 32-bit length field. Packets with such payloads are called jumbograms. Since both TCP and UDP include fields limited to 16 bits (length, urgent data pointer), support for IPv6 jumbograms requires modifications to the transport layer protocol implementation. Jumbograms are only relevant for links that have an MTU larger than 65,583 octets (more than 65,535 octets for the payload, plus 40 octets for the fixed header, plus 8 octets for the Hop-by-Hop extension header). Only a few link-layer protocols can process packets larger than 65,575 octets.
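As a rough illustration of the structures described above, the following Python sketch unpacks the 40-octet fixed header and then follows the Next Header chain to locate the upper-layer protocol. It is illustrative only and not taken from any IPv6 implementation: the function names are invented for this example, the set of extension header numbers comes from the table above, and the rule that the Authentication Header length is counted in 4-octet units is an IPsec detail (RFC 4302) rather than something stated in this article.
<syntaxhighlight lang="python">
import socket
import struct

# Extension headers a receiver can step over (Next Header values from the table above).
# ESP (50) ends the walk because everything after it is encrypted; 59 means "No Next Header".
SKIPPABLE_EXTENSIONS = {0, 43, 44, 51, 60, 135, 139, 140}
NO_NEXT_HEADER = 59

def parse_fixed_header(packet: bytes) -> dict:
    """Unpack the 40-octet IPv6 fixed header from the start of a packet."""
    first_word, payload_length, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": first_word >> 28,                 # should be 6
        "traffic_class": (first_word >> 20) & 0xFF,  # DS field plus ECN bits
        "flow_label": first_word & 0xFFFFF,
        "payload_length": payload_length,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "source": socket.inet_ntop(socket.AF_INET6, packet[8:24]),
        "destination": socket.inet_ntop(socket.AF_INET6, packet[24:40]),
    }

def find_upper_layer(next_header: int, payload: bytes):
    """Walk the extension header chain; return (protocol number, remaining octets)."""
    offset = 0
    while next_header in SKIPPABLE_EXTENSIONS:
        nh = payload[offset]                          # first octet of every extension header
        if next_header == 44:                         # Fragment header is always 8 octets
            length = 8
        elif next_header == 51:                       # AH: length field in 4-octet units, minus 2 (RFC 4302)
            length = (payload[offset + 1] + 2) * 4
        else:                                         # others: 8-octet units, not counting the first 8
            length = (payload[offset + 1] + 1) * 8
        next_header, offset = nh, offset + length
    if next_header == NO_NEXT_HEADER:
        return None, b""
    return next_header, payload[offset:]              # e.g. 6 = TCP, 17 = UDP, 58 = ICMPv6, 50 = ESP

# Usage sketch:
#   header = parse_fixed_header(packet)
#   proto, data = find_upper_layer(header["next_header"], packet[40:40 + header["payload_length"]])
</syntaxhighlight>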
Fragmentation Unlike in IPv4, IPv6 routers never fragment IPv6 packets. Packets exceeding the size of the maximum transmission unit (MTU) of the destination link are dropped and this condition is signaled by a Packet Too Big ICMPv6 message to the originating node, similarly to the IPv4 method when the Don't Fragment bit is set. End nodes in IPv6 are expected to perform Path MTU Discovery to determine the maximum size of packets to send, and the upper-layer protocol is expected to limit the payload size. If the upper-layer protocol is unable to do so, the sending host may use the Fragment extension header instead. Any data link layer conveying IPv6 data must be capable of transmitting an IP packet containing up to 1,280 bytes, thus the sending endpoint may limit its packets to 1,280 bytes and avoid any need for fragmentation or Path MTU Discovery. Fragmenting A packet containing the first fragment of an original (larger) packet consists of five parts: the per-fragment headers (the crucial original headers that are repeated in each fragment), followed by the Fragment extension header containing a zero Offset, then all the remaining original extension headers, then the original upper-layer header (alternatively the ESP header), and a piece of the original payload. Each subsequent packet consists of three parts: the per-fragment headers, followed by the Fragment extension header, and a part of the original payload as identified by its Fragment Offset. The per-fragment headers are determined by whether the original packet contains a Routing or a Hop-by-Hop extension header. If neither exists, the per-fragment part is just the fixed header. If the Routing extension header exists, the per-fragment headers include the fixed header and all the extension headers up to and including the Routing one. If the Hop-by-Hop extension header exists, the per-fragment headers consist of only the fixed header and the Hop-by-Hop extension header. In any case, the last header of the per-fragment part has its Next Header value set to 44 to indicate that a Fragment extension header follows. Each Fragment extension header has its M flag set to 1 (indicating more fragments follow), except the last, whose flag is set to 0. Each fragment's length is a multiple of 8 octets, except, potentially, the last fragment. The per-fragment headers were historically called the "unfragmentable part", referring to the pre-2014 possibility of fragmenting the rest of the header chain. Now no headers are actually fragmentable. Reassembly The original packet is reassembled by the receiving node by collecting all fragments, placing each fragment at its indicated offset and discarding the Fragment extension headers of the packets that carried them. Packets containing fragments need not arrive in sequence; they will be rearranged by the receiving node. If not all fragments are received within 60 seconds after receiving the first packet with a fragment, reassembly of the original packet is abandoned and all fragments are discarded. If the first fragment was received (which contains the fixed header) and one or more others are missing, a Time Exceeded message (ICMPv6 type 3, code 1) is returned to the node originating the fragmented packet. When the reassembling node detects a fragment that overlaps with another fragment, the reassembly of the original packet is aborted and all fragments are dropped. A node may optionally ignore exact duplicates of a fragment instead of treating them as overlapping.
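The fragmenting rules above (the Fragment Offset is expressed in 8-octet units, every fragment except the last is a multiple of 8 octets, and the M flag is set on all fragments but the last) can be sketched with the short Python example below. This is a simplified illustration rather than a full packet builder; the 1,232-octet figure in the usage example is simply the 1,280-octet minimum MTU minus the 40-octet fixed header and the 8-octet Fragment extension header, assuming no other per-fragment headers.
<syntaxhighlight lang="python">
def fragment_parameters(fragmentable: bytes, max_fragment_payload: int):
    """Yield (fragment offset in 8-octet units, M flag, chunk) for each fragment.

    `fragmentable` is the part of the original packet after the per-fragment
    headers; `max_fragment_payload` is how many of its octets fit into one
    fragment once the per-fragment headers and the 8-octet Fragment header
    are accounted for.
    """
    step = (max_fragment_payload // 8) * 8      # all fragments but the last are multiples of 8 octets
    if step <= 0:
        raise ValueError("not enough room for any fragment data")
    for start in range(0, len(fragmentable), step):
        chunk = fragmentable[start:start + step]
        m_flag = 1 if start + step < len(fragmentable) else 0   # 1 = more fragments follow
        yield start // 8, m_flag, chunk

# Example: 3,000 octets of fragmentable data over the 1,280-octet minimum MTU,
# leaving 1,280 - 40 (fixed header) - 8 (Fragment header) = 1,232 octets per fragment.
for offset_units, m_flag, chunk in fragment_parameters(b"\x00" * 3000, 1232):
    print(offset_units, m_flag, len(chunk))     # -> 0 1 1232, then 154 1 1232, then 308 0 536
</syntaxhighlight>
A real sender would also place the same 32-bit Identification value in every Fragment extension header so that the receiver can associate the fragments of one original packet during reassembly.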
Receiving hosts must make a best-effort attempt to reassemble fragmented IP datagrams that, after reassembly, contain up to 1,500 bytes. Hosts are permitted to attempt to reassemble fragmented datagrams larger than 1,500 bytes, but they are also permitted to silently discard any datagram once it becomes apparent that the reassembled packet would be larger than 1,500 bytes. Therefore, senders should avoid sending fragmented IP datagrams with a total reassembled size larger than 1,500 bytes, unless they have knowledge that the receiver is capable of reassembling such large datagrams. Security Research has shown that the use of fragmentation can be leveraged to evade network security controls. As a result, in 2014 the earlier allowance for extending the IPv6 header chain beyond the first fragment was forbidden in order to avoid some very pathological fragmentation cases. Additionally, as a result of research on the evasion of Router Advertisement Guard, the use of fragmentation with Neighbor Discovery is deprecated, and the use of fragmentation with Secure Neighbor Discovery (SEND) is discouraged. References Packets (information technology) IPv6
30177048
https://en.wikipedia.org/wiki/Classification%20of%20non-silicate%20minerals
Classification of non-silicate minerals
This list gives an overview of the classification of non-silicate minerals and includes mostly International Mineralogical Association (IMA) recognized minerals and their groupings. This list complements the List of minerals recognized by the International Mineralogical Association series of articles and the List of minerals. Rocks, ores, mineral mixtures, minerals not approved by the IMA, and unnamed minerals are mostly excluded. Mostly only major groups, or the groupings used by the New Dana Classification and Mindat, are included. Classification of minerals Introduction The groupings of the New Dana Classification and of mindat.org are only similar, not identical, and so this classification is an overview only. The group name endings (group, subgroup, series) are also not used consistently between the New Dana Classification and mindat.org. Category, class and supergroup name endings are used as layout tools in the list as well. Abbreviations "*" – Mineral not IMA-approved. "?" – IMA discredited mineral name. "REE" – Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) "PGE" – Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt) Category 01 Elements: Metals and Alloys, Carbides, Silicides, Nitrides, Phosphides :Category:Diamond :Category:Gold Class: Native elements :Category:Carbide minerals Osbornite group carbides and nitrides Osbornite TiN, Khamrabaevite (Ti,V,Fe)C, Niobocarbide (Nb,Ta)C, Tantalcarbide TaC :Category:Phosphide minerals Barringerite group phosphides Barringerite (Fe,Ni)2P, Schreibersite (Fe,Ni)3P, Nickelphosphide (Ni,Fe)3P, Allabogdanite (Fe,Ni)2P, Melliniite (Ni,Fe)4P, Monipite MoNiP Copper group/ Gold group Gold Au, Silver Ag, Copper Cu, Lead Pb, Aluminium Al, Maldonite Au2Bi Silver Amalgam Alloys Amalgam* Ag2Hg3, Moschellandsbergite Ag2Hg3, Schachnerite Ag1.1Hg0.9, Paraschachnerite Ag3Hg2, Luanheite Ag3Hg, Eugenite Ag9Hg2, Weishanite (Au,Ag)3Hg2 Iron-Nickel group Iron Fe, Kamacite? alpha-(Fe,Ni), Taenite gamma-(Fe,Ni), Tetrataenite FeNi, Awaruite Ni2Fe to Ni3Fe, Nickel Ni, Wairauite CoFe Suessite group silicides Suessite (Fe,Ni)3Si, Gupeiite Fe3Si, Xifengite Fe5Si3, Hapkeite Fe2Si, Luobusaite Fe0.83Si2, Mavlyanovite Mn5Si3, Brownleeite MnSi Platinum group (Space group Fm3m) Platinum Pt, Iridium (Ir,Os,Ru,Pt), Rhodium (Rh,Pt), Palladium Pd,Pt Osmium group (Space group P63/mmc) Osmium (Os,Ir), Ruthenium (Ru,Ir,Os), Rutheniridosmine (Ir,Os,Ru), Hexaferrum (Fe,Os,Ru,Ir), Hexamolybdenum (Mo,Ru,Fe,Ir,Os), IMA2008-055 (Ni,Fe,Ir) Tetraferroplatinum group (Space group P4/mmm) Tetraferroplatinum PtFe, Tulameenite Pt2FeCu, Ferronickelplatinum Pt2FeNi, Potarite PdHg Isoferroplatinum group (Space group Pm3m) Isoferroplatinum (Pt,Pd)3(Fe,Cu), Rustenburgite (Pt,Pd)3Sn, Atokite (Pd,Pt)3Sn, Zvyagintsevite Pd3Pb, Chengdeite Ir3Fe, Yixunite Pt3In Arsenic group Arsenic As, Antimony Sb, Stibarsen SbAs, Bismuth Bi, Stistaite SnSb Carbon polymorph group (IMA-CNMNC discourages a grouping of diamond and graphite, Mills et al.
(2009)) Graphite C, Chaoite C, Fullerite C60, (Diamond C, Lonsdaleite C) Category 02 Sulfides, Sulfosalts, Sulfarsenates, Sulfantimonates, Selenides, Tellurides Class: Sulfide minerals - including Selenides and Tellurides Chalcocite-Digenite group ([Cu]2−x S] formulae) Chalcocite Cu2S, Djurleite Cu31S16, Digenite Cu9S5, Roxbyite Cu1.78S, Anilite Cu7S4, Geerite Cu8S5, Spionkopite Cu1.4S Joseite group (Trigonal: R-3m) Joseite Bi4(S,Te)3, Joseite-B Bi4(S,Te)3, Ikunolite Bi4(S,Se)3, Laitakarite Bi4(Se,S)3, Pilsenite Bi4Te3, Poubaite PbBi2Se2(Te,S)2, Rucklidgeite (Bi,Pb)3Te4, Babkinite Pb2Bi2(S,Se)3 Pentlandite group (Isometric: Fm3m) Pentlandite (Fe,Ni)9S8, Argentopentlandite Ag(Fe,Ni)8S8, Cobaltpentlandite Co9S8, Shadlunite (Pb,Cd)(Fe,Cu)8S8, Manganoshadlunite (Mn,Pb)(Cu,Fe)8S8, Geffroyite (Ag,Cu,Fe)9(Se,S)8 Galena group (Isometric: Fm3m, IMA-CNMNC discourages the use of this grouping, Mills et al. (2009)) Galena PbS, Clausthalite PbSe, Altaite PbTe, Alabandite MnS, Oldhamite (Calcium sulfide) (Ca,Mg,Fe)S, Niningerite (Mg,Fe2+,Mn)S, Borovskite Pd3SbTe4, Crerarite (Pt,Pb)Bi3(S,Se)4−x (x~0.7), Keilite (Fe,Mn,Mg,Ca,Cr)S Sphalerite group (Isometric: F4-3m) Sphalerite (Zn,Fe)S, Stilleite ZnSe, Metacinnabar HgS, Tiemannite HgSe, Coloradoite HgTe, Hawleyite CdS, Rudashevskyite (Fe,Zn)S Wurtzite group (Hexagonal: P63mc) Wurtzite (Zn,Fe)S, Greenockite CdS, Cadmoselite CdSe, Rambergite MnS Nickeline group (Hexagonal: P63/mmc) Nickeline NiAs, Breithauptite NiSb, Sederholmite NiSe, Hexatestibiopanickelite (Ni,Pd)(Te,Sb), Sudburyite (Pd,Ni)Sb, Kotulskite Pd(Te,Bi), Sobolevskite PdBi, Stumpflite Pt(Sb,Bi), Langisite (Co,Ni)As, Freboldite CoSe, Achavalite FeSe, Sorosite Cu(Sn,Sb), Vavrinite Ni2SbTe2 Chalcopyrite group (Tetragonal: I-42d) Chalcopyrite CuFeS2, Eskebornite CuFeSe2, Gallite CuGaS2, Roquesite CuInS2, Lenaite AgFeS2, Laforetite AgInS2 Stannite group (Tetragonal: I-42m) A2BCS type Stannite Cu2FeSnS4, Cernyite Cu2CdSnS4, Briartite Cu2(Zn,Fe)GeS4, Kuramite Cu3SnS4, Sakuraiite (Cu,Zn,Fe,In,Sn)4S4, Hocartite Ag2FeSnS4, Pirquitasite Ag2ZnSnS4, Velikite Cu2HgSnS4, Kesterite Cu2(Zn,Fe)SnS4, Ferrokesterite Cu2(Fe,Zn)SnS4, Barquillite Cu2CdGeS4 Thiospinel group, AB2X4 (Isometric: Fd3m) Bornhardtite Co2+(Co3+)2Se4, Cadmoindite CdIn2S4, Carrollite Cu(Co,Ni)2S4, Cuproiridsite CuIr2S4, Cuprorhodsite CuRh2S4, Daubréelite Fe2+Cr2S4, Ferrorhodsite (Fe,Cu)(Rh,Ir,Pt)2S4, Fletcherite (mineral) Cu(Ni,Co)2S4, Florensovite Cu(Cr1.5Sb0.5)S4, Greigite Fe2+(Fe3+)2S4, Indite Fe2+In2S4, Kalininite ZnCr2S4, Linnaeite Co2+(Co3+)2S4, Malanite Cu(Pt,Ir)2S4, Polydymite NiNi2S4, Siegenite (Ni,Co)3S4, Violarite Fe2+(Ni3+)2S4, Trustedtite Ni3Se4, Tyrrellite (Cu,Co,Ni)3Se4 Tetradymite group (Trigonal: R-3m) Tetradymite Bi2Te2S, Tellurobismuthite Bi2Te3, Tellurantimony Sb2Te3, Paraguanajuatite Bi2(Se,S)3, Kawazulite Bi2(Te,Se,S)3, Skippenite Bi2Se2(Te,S), Vihorlatite Bi24Se17Te4 Pyrite group (Isometric: Pa3) Pyrite FeS2, Vaesite NiS2, Cattierite CoS2, Penroseite (Ni,Co,Cu)Se2, Trogtalite CoSe2, Villamaninite (Cu,Ni,Co,Fe)S2, Fukuchilite Cu3FeS8, Krutaite CuSe2, Hauerite MnS2, Laurite RuS2, Aurostibite AuSb2, Krutovite NiAs2, Sperrylite PtAs2, Geversite Pt(Sb,Bi)2, Insizwaite Pt(Bi,Sb)2, Erlichmanite OsS2, Dzharkenite FeSe2, Gaotaiite Ir3Te8, Mayingite IrBiTe Marcasite group (Orthorhombic: Pnnm) Marcasite FeS2, Ferroselite FeSe2, Frohbergite FeTe2, Hastite? 
CoSe2, Mattagamite CoTe2, Kullerudite NiSe2, Omeiite (Os,Ru)As2, Anduoite (Ru,Os)As2, Lollingite FeAs2, Seinajokite (Fe,Ni)(Sb,As)2, Safflorite (Co,Fe)As2, Rammelsbergite NiAs2, Nisbite NiSb2 Cobaltite group (Cubic or pseudocubic crystals) Cobaltite CoAsS, Gersdorffite NiAsS, Ullmannite NiSbS, Willyamite (Co,Ni)SbS, Tolovkite IrSbS, Platarsite (Pt,Rh,Ru)AsS, Irarsite (Ir,Ru,Rh,Pt)AsS, Hollingworthite (Rh,Pt,Pd)AsS, Jolliffeite (Ni,Co)AsSe, Padmaite PdBiSe, Michenerite (Pd,Pt)BiTe, Maslovite PtBiTe, Testibiopalladite PdTe(Sb,Te), Changchengite IrBiS, Milotaite PdSbSe, Kalungaite PdAsSe Arsenopyrite group (Monoclinic: P21/c (Pseudo-orthorhombic)) Arsenopyrite FeAsS, Gudmundite FeSbS, Osarsite (Os,Ru)AsS, Ruarsite RuAsS, Iridarsenite (Ir,Ru)As2, Clinosafflorite (Co,Fe,Ni)As2 Molybdenite group Drysdallite Mo(Se,S)2, Molybdenite MoS2, Tungstenite WS2 Skutterudite group Ferroskutterudite (Fe,Co)As3; Nickelskutterudite NiAs2-3; Skutterudite (Co,Fe,Ni)As2-3; Kieftite CoSb3 Class: Sulfosalt minerals Colusite group Colusite Cu12-13V(As,Sb,Sn,Ge)3S16, Germanocolusite Cu13V(Ge,As)3S16, Nekrasovite Cu+26V2(Sn,As,Sb)6S32, Stibiocolusite Cu13V(Sb,As,Sn)3S16 Cylindrite group Cylindrite Pb3Sn4FeSb2S14, Franckeite (Pb,Sn)6Fe2+Sn2Sb2S14, Incaite Pb4Sn4FeSb2S15, Potosiite Pb6Sn2FeSb2S14, Abramovite Pb2SnInBiS7, Coiraite (Pb,Sn)12.5As3Sn5FeS28 Hauchecornite group (Tetragonal: P4/nnn or I4/mmm) Hauchecornite Ni9Bi(Sb,Bi)S8, Bismutohauchecornite Ni9Bi2S8, Tellurohauchecornite Ni9BiTeS8, Arsenohauchecornite Ni18Bi3AsS16, Tucekite Ni9Sb2S8 Tetrahedrite group (Isometric: I-43m) Tetrahedrite (Cu,Fe)12Sb4S13, Tennantite (Cu,Fe)12As4S13, Freibergite (Ag,Cu,Fe)12(Sb,As)4S13, Hakite (Cu,Hg)3(Sb,As)(Se,S)3, Giraudite (Cu,Zn,Ag)12(As,Sb)4(Se,S)13, Goldfieldite Cu12(Te,Sb,As)4S13, Argentotennantite (Ag,Cu)10(Zn,Fe)2(As,Sb)4S13 Proustite group Proustite Ag3AsS3, Pyrargyrite Ag3SbS3 Aikinite group (Orthorhombic containing Pb, Cu, Bi, and S) Aikinite PbCuBiS3, Krupkaite PbCuBi3S6, Gladite PbCuBi5S9, Hammarite Pb2Cu2Bi4S9 (?), Friedrichite Pb5Cu5Bi7S18, Pekoite PbCuBi11(S,Se)18, Lindstromite Pb3Cu3Bi7S15, Salzburgite Cu1.6Pb1.6Bi6.4S12 Lillianite group (Orthorhombic, AmBnS6 where A=Pb, Ag, Mn and B=Sb, Bi) Lillianite Pb3Bi2S6, Bursaite? Pb5Bi4S11, Gustavite PbAgBi3S6 (?), Andorite PbAgSb3S6, Uchucchacuaite AgPb3MnSb5S12, Ramdohrite Ag3Pb6Sb11S24, Roshchinite Ag19Pb10Sb51S96 or Pb(Ag,Cu)2(Sb,As)5S10, Fizelyite Pb14Ag5Sb21S48 Matildite group Matildite AgBiS2, Bohdanowiczite AgBiSe2, Volynskite AgBiTe2, Zlatogorite CuNiSb2 Sartorite group Sartorite PbAs2S4, Guettardite Pb(Sb,As)2S4, Twinnite Pb(Sb,As)2S4, Marumoite Pb32As40S92 Pavonite group (Monoclinic: C/2c bismuth sulfosalts) Pavonite (Ag,Cu)(Bi,Pb)3S5, Makovickyite Ag1.5Bi5.5S9, Benjaminite (Ag,Cu)3(Bi,Pb)7S12, Mummeite Cu0.58Ag3.11Pb1.10Bi6.65S13, Borodaevite Ag5(Bi,Sb)9S16, Cupropavonite AgPbCu2Bi5S10, Cupromakovickyite Cu4AgPb2Bi9S18, Kudriavite (Cd,Pb)Bi2S4, IMA2008-058 Ag5Bi13S22, IMA2005-036 Cu8Pb4Ag3Bi19S38 Category 03 Halogenides, Oxyhalides, Hydroxyhalides Atacamite group Polymorths of Cu2[(OH)3|Cl]: Atacamite, Botallackite, Clinoatacamite, Paratacamite Gillardite Cu3Ni(OH)6Cl2, Haydeeite Cu3Mg(OH)6Cl2, Herbertsmithite Cu3Zn[(OH)3|Cl]2, Kapellasite Cu3Zn[(OH)3|Cl]2 Fluorite group Fluorite CaF2, Fluorocronite PbF2, Frankdicksonite BaF2, Tveitite-(Y) Ca1−xYxF2+x (x~0.3), IMA2009-014 SrF2 Halite group (IMA-CNMNC discourages the use of this grouping, Mills et al. 
(2009)) Halite NaCl, Sylvite KCl, Villiaumite NaF, Carobbiite KF, Griceite LiF Chlorargyrite group Bromargyrite AgBr, Chlorargyrite AgCl, Marshite CuI, Miersite (Ag,Cu)I, Nantokite CuCl Lawrencite group Chloromagnesite MgCl2, Lawrencite (Fe2+,Ni)Cl2, Scacchite MnCl2, Tolbachite CuCl2 Matlockite group Bismoclite (BiO)Cl, Daubréeite (BiO)(OH,Cl), Laurionite PbCl(OH), Paralaurionite PbCl(OH), Rorisite CaFCl, Zavaritskite (BiO)F, Matlockite PbFCl Challacolloite group Challacolloite KPb2Cl5, Hephaistosite TlPb2Cl5, Steropesite Tl3BiCl6, Panichiite (NH4)2SnCl6 Chukhrovite group Chukhrovite-(Y) Ca3(Y,Ce)Al2(SO4)F13•10H2O, Chukhrovite-(Ce) Ca3(Ce,Y)Al2(SO4)F13•10H2O, Meniaylovite Ca4AlSi(SO4)F13•12H2O, Chukhrovite-(Nd) Ca3(Nd,Y)Al2(SO4)F13•12H2O Category 04 Oxides and Hydroxides, Vanadates, Arsenites, Antimonites, Bismuthites, Sulfites, Iodates :Category:Vanadate minerals Periclase group (Isometric: Fm3m, IMA-CNMNC discourages the use of this grouping, Mills et al. (2009)) Periclase MgO, Bunsenite NiO, Manganosite MnO, Monteponite CdO, Lime CaO, Wustite FeO, Hongquiite* TiO Hematite group/ Corundum group (Rhombohedral: R-3c) Corundum Al2O3 (Sapphire, Ruby), Eskolaite Cr2O3, Hematite Fe2O3, Karelianite V2O3, Tistarite Ti2O3 Perovskite group Perovskite CaTiO3, Latrappite (Ca,Na)(Nb,Ti,Fe)O3, Loparite-(Ce) (Ce,Na,Ca)2(Ti,Nb)2O6, Lueshite NaNbO3, Tausonite SrTiO3, Isolueshite (Na,La,Ca)(Nb,Ti)O3, Barioperovskite BaTiO3, Lakargiite CaZrO3 Ilmenite group Ilmenite Fe2+TiO3, Geikielite MgTiO3, Pyrophanite MnTiO3, Ecandrewsite (Zn,Fe2+,Mn2+)TiO3, Melanostibite Mn(Sb5+,Fe3+)O3, Brizziite-III NaSb5+O3, Akimotoite (Mg,Fe)SiO3 Rutile group (Tetragonal: P4/mnm) Rutile TiO2, Ilmenorutile (Ti,Nb,Fe3+)O2, Struverite? (Ti,Ta,Fe3+)O2, Pyrolusite MnO2, Cassiterite SnO2, Plattnerite PbO2, Argutite GeO2, Squawcreekite? 
(Fe3+,Sb 5+,W6+)O4•H2O Multiple Oxides with O19 groups/ Magnetoplumbite group Hibonite (Ca,Ce)(Al,Ti,Mg)12O19, Yimengite K(Cr,Ti,Fe,Mg)12O19, Hawthorneite Ba[Ti3Cr4Fe4Mg]O19, Magnetoplumbite Pb(Fe3+,Mn3+)12O19, Haggertyite Ba[(Fe2+)6Ti5Mg]O19, Nezilovite PbZn2(Mn4+,Ti4+)2(Fe3+)8O19, Batiferrite Ba(Ti2(Fe3+)8(Fe2+)2)O19, Barioferrite Ba(Fe3+)12O19, Plumboferrite Pb2(Fe3+)(11-x)(Mn2+)xO(19-2x) x = 1/3, IMA2009-027 (Fe,Mg)Al12O19 Cryptomelane group (Hard black, fine-grained) Hollandite Ba(Mn4+,Mn2+)8O16, Cryptomelane K(Mn4+,Mn2+)8O16, Manjiroite (Na,K)(Mn4+,Mn2+)8O16•nH2O, Coronadite Pb(Mn4+,Mn2+)8O16, Strontiomelane Sr(Mn4+)6Mn3+2O16, Henrymeyerite BaFe2+Ti7O16 Aeschynite group Aeschynite-(Ce) (Ce,Ca,Fe)(Ti,Nb)2(O,OH)6, Nioboaeschynite-(Ce) (Ce,Ca)(Nb,Ti)2(O,OH)6, Aeschynite-(Y) (Y,Ca,Fe)(Ti,Nb)2(O,OH)6, Tantalaeschynite-(Y) (Y,Ce,Ca)(Ta,Ti,Nb)2O6, Aeschynite-(Nd) (Nd,Ce)(Ti,Nb)2(O,OH)6, Nioboaeschynite-(Nd) (Nd,Ce)(Nb,Ti)2(O,OH)6, Nioboaeschynite-(Y) [(Y,REE),Ca,Th,Fe](Nb,Ti,Ta)2(O,OH)6 Crichtonite group (ABC18 T2 O38) Landauite NaMnZn2(Ti,Fe3+)6Ti12O38, Loveringite (Ca,Ce)(Ti,Fe3+,Cr,Mg)21O38, Crichtonite (Sr,La,Ce,Y)(Ti,Fe3+,Mn)21O38, Senaite Pb(Ti,Fe,Mn)21O38, Davidite-(La) (La,Ce,Ca)(Y,U)(Ti,Fe3+)20O38, Davidite-(Ce) (Ce,La)(Y,U)(Ti,Fe3+)20O38, Mathiasite (K,Ca,Sr)(Ti,Cr,Fe,Mg)21O38, Lindsleyite (Ba,Sr)(Ti,Cr,Fe,Mg)21O38, Dessauite (Sr,Pb)(Y,U)(Ti,Fe3+)20O38, Cleusonite Pb(U4+,U6+)(Ti,Fe2+,Fe3+)20(O,OH)38, Gramaccioliite-(Y) (Pb,Sr)(Y,Mn)Fe2(Ti,Fe)18O38 Spinel group AB2O4 Aluminum subgroup Spinel MgAl2O4, Galaxite (Mn,Mg)(Al,Fe3+)2O4, Hercynite Fe2+Al2O4, Gahnite ZnAl2O4 Iron subgroup Magnesioferrite MgFe3+2O4, Jacobsite (Mn2+,Fe2+,Mg)(Fe3+,Mn3+)2O4, Magnetite Fe2+(Fe3+)2O4, Franklinite (Zn,Mn2+,Fe2+)(Fe3+,Mn3+)2O4, Trevorite Ni(Fe3+)2O4, Cuprospinel (Cu,Mg)(Fe3+)2O4, Brunogeierite (Ge2+,Fe2+)(Fe3+)2O4 Chromium subgroup Magnesiochromite MgCr2O4, Manganochromite (Mn,Fe2+)(Cr,V)2O4, Chromite Fe2+Cr2O4, Nichromite (Ni,Co,Fe2+)(Cr,Fe3+,Al)2O4, Cochromite (Co,Ni,Fe2+)(Cr,Al)2O4, Zincochromite ZnCr2O4 Vanadium subgroup Vuorelainenite (Mn2+,Fe2+)(V3+,Cr3+)2O4, Coulsonite Fe2+(V3+)2O4, Magnesiocoulsonite Mg(V3+)2O4 Titanium subgroup Qandilite (Mg,Fe2+)2(Ti,Fe3+,Al)O4, Ulvospinel Ti(Fe2+)2O4 Taaffeite group Magnesiotaaffeite-2N2S Mg3Al8BeO16, Magnesiotaaffeite-6N3S (Mg,Fe2+,Zn)2Al6BeO12, Ferrotaaffeite-6N3S (Fe2+,Zn,Mg)2Al6BeO12 Kusachiite CuBi2O4, Iwakiite Mn2+(Fe3+,Mn3+)2O4, Hausmannite Mn2+(Mn3+)2O4, Hetaerolite Zn(Mn3+)2O4, Hydrohetaerolite Zn2(Mn3+)4O8•H2O, Minium (Pb2+)2Pb4+O4, Chrysoberyl BeAl2O4, Marokite Ca(Mn3+)2O4, Filipstadite (Mn2+,Mg)4Sb5+Fe 3+O8, Tegengrenite (Mg,Mn2+)2(Sb5+)0.5(Mn3+,Si,Ti)0.5O4, Yafsoanite Ca3Te2Zn3O12, Xieite FeCr2O4 Nickel-Strunz 04.DH mineral family IMA/CMNMC revised the Pyrochlore supergroup 2010. 
Pyrochlore supergroup Pyrochlore group (D atom is Nb) Fluorcalciopyrochlore (Ca,[ ])2Nb2(O,OH)6F, Fluorkenopyrochlore ([ ],Na,Ce,Ca)2(Nb,Ti)2O6F, Fluornatropyrochlore (Na,REE,Ca)2Nb2(O,OH)6F, Fluorstrontiopyrochlore (Sr,[ ])2Nb2(O,OH)6F, Hydropyrochlore (H2O,[ ])2Nb2(O,OH)6(H2O), Hydroxycalciopyrochlore (Ca,[ ])2Nb2(O,OH)6(OH), Kenoplumbopyrochlore (Pb,[ ])Nb2O6([ ],O), Oxycalciopyrochlore Ca2Nb2O6O, Oxynatropyrochlore (Na,Ca,U)2Nb2O6(O,OH), Oxyplumbopyrochlore Pb2Nb2O6O, Oxyyttropyrochlore-(Y) (Y,[ ])2Nb2O6O Microlite group (D atom is Ta) Fluorcalciomicrolite (Ca,Na)2Ta2O6F, Fluornatromicrolite (Na,Ca,Bi)2Ta2O6F, Hydrokenomicrolite ([ ],H2O)2Ta2(O,OH)6H2O, Hydromicrolite (H2O,[ ])2Ta2(O,OH)6H2O, Hydroxykenomicrolite ([ ],Na,Sb3+)2Ta2O6(OH), Kenoplumbomicrolite (Pb,[ ])2Ta2O6([ ],O,OH), Oxycalciomicrolite Ca2Ta2O6O, Oxystannomicrolite Sn2Ta2O6O, Oxystibiomicrolite (Sb3+,Ca)2Ta2O6O Romeite group (D atom is Sb) Cuproromeite Cu2Sb2(O,OH)7, Fluorcalcioromeite (Ca,Sb3+)2(Sb5+,Ti)2O6F, Fluornatroromeite (Na,Ca)2Sb2(O,OH)6F, Hydroxycalcioromeite (Ca,Sb3+)2(Sb5+,Ti)2O6(OH), Oxycalcioromeite Ca2Sb2O6O, Oxyplumboromeite Pb2Sb2O6O, Stibiconite Sb3+Sb+62O6(OH) Betafite group (D atom is Ti): Calciobetafite Ca2(Ti,Nb)2O6O, Oxyuranobetafite (U,Ca,[ ])2(Ti,Nb)2O6O Elsmoreite group (D atom is W): Hydrokenoelsmoreite [ ]2W2O6(H2O) Cesstibtantite group Cesstibtantite (Cs,Na)SbTa4O12, Natrobistantite (Na,Cs)Bi(Ta,Nb,Sb)4O12 Brannerite-Thorutite series, Orthobrannerite-Thorutite series: Brannerite (U4+,REE,Th,Ca)(Ti,Fe3+,Nb)2(O,OH)6, Orthobrannerite U4+U6+Ti4O12(OH)2, Thorutite (Th,U,Ca)Ti2(O,OH)6 Class: Hydroxides and oxides containing hydroxyl Diaspore group (Orthorhombic, Pnma or Pnmd) Diaspore AlO(OH), Goethite Fe3+O(OH), Groutite Mn3+O(OH), Montroseite (V3+,Fe3+,V4+)O(OH), Bracewellite Cr3+O(OH), Tsumgallite GaO(OH) Brucite group (Rhombohedral: P-3m1) Brucite Mg(OH)2, Amakinite (Fe2+,Mg)(OH)2, Pyrochroite Mn(OH)2, Portlandite Ca(OH)2, Theophrastite Ni(OH)2 Wickmanite group (Cubic or Trigonal, 2+ cations containing Sn) Wickmanite Mn2+Sn4+(OH)6, Schoenfliesite MgSn4+(OH)6, Natanite Fe2+Sn4+(OH)6, Vismirnovite ZnSn4+(OH)6, Burtite CaSn(OH)6, Mushistonite (Cu,Zn,Fe)Sn4+(OH)6 (Tetragonal: P42/n) Stottite Fe2+Ge(OH)6, Tetrawickmanite Mn2+Sn4+(OH)6, Jeanbandyite (Fe3+,Mn2+)Sn4+(OH)6, Mopungite NaSb(OH)6 Category 05 Carbonates and Nitrates Calcite group (Trigonal: R-3c) Calcite CaCO3, Magnesite MgCO3, Siderite Fe2+CO3, Rhodochrosite MnCO3, Spherocobaltite CoCO3, Smithsonite ZnCO3, Otavite CdCO3, Gaspeite (Ni,Mg,Fe2+)CO3 Aragonite group (Orthorhombic: Pmcn) Aragonite CaCO3, Witherite BaCO3, Strontianite SrCO3, Cerussite PbCO3 Dolomite group Ankerite Ca(Fe2+,Mg,Mn2+)(CO3)2, Dolomite CaMg(CO3)2, Kutnohorite Ca(Mn,Mg,Fe)(CO3)2, Minrecordite CaZn(CO3)2 Burbankite group Hexagonal Burbankite (Na,Ca)3(Sr,Ba,Ce)3(CO3)5, Khanneshite (NaCa)3(Ba,Sr,Ce,Ca)3(CO3)5, Calcioburbankite Na3(Ca,REE,Sr)3(CO3)5, Sanromanite Na2CaPb3(CO3)5 Monoclinic Rémondite-(Ce) Na3(Ce,La,Ca,Na,Sr)3(CO3)5, Petersenite-(Ce) (Na,Ca)4(Ce,La,Nd)2(CO3)5, Rémondite-(La) Na3(La,Ce,Ca)3(CO3)5 Rosasite group Rosasite (Cu,Zn)2(CO3)(OH)2, Glaukosphaerite (Cu,Ni)2(CO3)(OH)2, Kolwezite (Cu,Co)2(CO3)(OH)2, Zincrosasite (Zn,Cu)2(CO3)(OH)2, Mcguinnessite (Mg,Cu)2(CO3)(OH)2 Malachite group Malachite Cu2(CO3)(OH)2, Nullaginite Ni2(CO3)(OH)2, Pokrovskite Mg2(CO3)(OH)2•0.5H2O, Chukanovite Fe2(CO3)(OH)2 Ancylite group Ancylite-(Ce) SrCe(CO3)2(OH)•H2O, Calcioancylite-(Ce) CaCe(CO3)2(OH)•H2O, Calcioancylite-(Nd) CaNd(CO3)2(OH)•H2O, Gysinite-(Nd) Pb(Nd,La)(CO3)2(OH)•H2O, 
Ancylite-(La) Sr(La,Ce)(CO3)2(OH)•H2O, Kozoite-(Nd) (Nd,La,Sm,Pr)(CO3)(OH), Kozoite-(La) La(CO3)(OH) Sjogrenite-Hydrotalcite group Sjogrenite subgroup: Hexagonal Manasseite Mg6Al2[(OH)16|CO3]·4H2O, Barbertonite Mg6(Cr,Al)2[(OH)16|CO3]·4H2O, Sjogrenite Mg6(Fe3+)2[(OH)16|CO3]·4H2O, Zaccagnaite Zn4Al2(OH)12(CO3)•3H2O, Fougerite (Fe2+,Mg)6(Fe3+)2(OH)18·4H2O Hydrotalcite subgroup: Rhombohedral I, Mg6(R3+)2(OH)16CO3·4H2O, where R3+ = Al, Cr, or Fe Hydrotalcite Mg6Al2[(OH)16CO3]·4H2O, Stichtite Mg6Cr2[(OH)16|CO3]·4H2O, Pyroaurite Mg6Fe3+2[(OH)16|CO3]·4H2O, Desautelsite Mg6(Mn3+)2[(OH)16|CO3]·4H2O, Droninoite Ni3Fe3+Cl(OH)8•2H2O, Hydrowoodwardite Cu1−xAlx[(OH)2|(SO4)x/2]·nH2O, Iowaite Mg4Fe(OH)8OCl·4H2O Hydrotalcite subgroup: Rhombohedral II Reevesite Ni6(Fe3+)2(CO3)(OH)16•4H2O, Takovite Ni6Al2(OH)16(CO3,OH)•4H2O, Comblainite (Ni2+)6(Co3+)2(CO3)(OH)16•4H2O Tundrite group Tundrite-(Ce) Na2Ce2TiO2(SiO4)(CO3)2, Tundrite-(Nd) Na3(Nd,La)4(Ti,Nb)2(SiO4)2(CO3)3O4(OH)•2H2O :Category:Nitrate minerals Category 06 Borates Ludwigite group (Space group: Pbam) Ludwigite Mg2Fe3+BO5, Vonsenite Fe2+2Fe3+BO5, Azoproite (Mg,Fe2+)2(Fe3+,Ti,Mg)BO5, Bonaccordite Ni2Fe3+BO5, Chestermanite Mg2(Fe3+,Mg,Al,Sb5+)BO3O2, Fredrikssonite Mg2(Mn3+,Fe3+)O2(BO3) Boracite group (Tecto-heptaborates) (Orthorhombic: Rca21) Boracite Mg3B7O13Cl, Ericaite (Fe2+,Mg,Mn)3B7O13Cl, Chambersite Mn3B7O13Cl (Trigonal: R3c) Congolite (Fe2+,Mg,Mn)3B7O13Cl, Trembathite (Mg,Fe2+)3B7O13Cl Inderite group (Neso-triborates) Inyoite Ca2B6O6(OH)10•8H2O, Inderborite CaMg[B3O3(OH)5]2•6H2O, Inderite MgB3O3(OH)5•5H2O, Kurnakovite Mg(H4B3O7)(OH)·5H2O, Meyerhofferite Ca2(H3B3O7)2·4H2O, Solongoite Ca2 (H3B3O7)(OH)Cl Santite group (Neso-pentaborates) Santite KB5O6(OH)4•2(H2O), Ramanite-(Rb) Rb[B5O6(OH)4]•2H2O, Ramanite-(Cs) Cs[B5O6(OH)4]•2H2O Hilgardite group (Tecto-pentaborates) Hilgardite Ca2B5O9Cl•H2O, Kurgantaite CaSr[B5O9]Cl•H2O, IMA2007-047 Pb2[B5O9]Cl•0.5H2O Pringleite group Pringleite Ca9B26O34(OH)24Cl4•13H2O, Ruitenbergite Ca9B26O34(OH)24Cl4•13H2O, Brianroulstonite Ca3[B5O6(OH)6](OH)Cl2•8H2O, Penobsquisite Ca2Fe2+[B9O13(OH)6]Cl•4H2O, Walkerite Ca16(Mg,Li,[ ])2[B13O17(OH)12]4Cl6•28H2O Category 07 Sulfates, Selenates, Chromates, Molybdates, Wolframates, Niobates Barite group Barite BaSO4, Celestine SrSO4, Anglesite PbSO4 Blodite group Blodite Na2Mg(SO4)2•4H2O, Nickelblodite Na2(Ni,Mg)(SO4)2•4H2O, Leonite K2Mg(SO4)2•4H2O, Mereiterite K2Fe2+(SO4)2•4H2O, Changoite Na2Zn(SO4)2•4H2O Alum group, XAl(SO4)2·12H2O Alum-(K) KAl[SO4]2·12H2O, Alum-(Na) NaAl[SO4]2·12H2O, Tschermigite (NH4)Al(SO4)2•12H2O, Lonecreekite (NH4)(Fe3+,Al)(SO4)2•12H2O, Lanmuchangite TlAl(SO4)2•12H2O Voltaite group Voltaite K2(Fe2+)5(Fe3+)3Al(SO4)12•18H2O, Zincovoltaite K2Zn5(Fe3+)3Al(SO4)12•18H2O, Pertlikite K2(Fe2+,Mg)2(Mg,Fe3+)4(Fe3+)2Al(SO4)12•18H2O Aluminite group Aluminite Al2(SO4)(OH)4•7(H2O), Mangazeite Al2(SO4)(OH)4•3H2O Zippeite group Zippeite K4(UO2)6(SO4)3(OH)10•4H2O, Natrozippeite Na4(UO2)6(SO4)3(OH)10•4H2O, Magnesiozippeite Mg(H2O)3.5(UO2)2(SO4)O2, Nickelzippeite (Ni2+)2(UO2)6(SO4)3(OH)10•16H2O, Zinc-zippeite (Zn2+)2(UO2)6(SO4)3(OH)10•16H2O, Cobaltzippeite (Co2+)2(UO2)6(SO4)3(OH)10•16H2O, Marecottite Mg3(H2O)18[(UO2)4O3(SO4)2]2•10H2O, Pseudojohannite Cu6.5[(UO2)4O4(SO4)2]2(OH)5•25H2O, IMA2009-008 Y2[(UO2)8O6(SO4)4(OH)2]•26H2O Copiapite group Copiapite Fe2+(Fe3+)4(SO4)6(OH)2•20H2O, Magnesiocopiapite Mg(Fe3+)4(SO4)6(OH)2•20H2O, Cuprocopiapite Cu(Fe3+)4(SO4)6(OH)2•20H2O, Ferricopiapite (Fe3+)2/3(Fe3+)4(SO4)6(OH)2•20H2O, Calciocopiapite Ca(Fe3+)4(SO4)6(OH)2•19H2O, Zincocopiapite 
Zn(Fe3+)4(SO4)6(OH)2•18H2O, Aluminocopiapite Al2/3(Fe3+)4(SO4)6O(OH)2•20H2O Pb, Zn tellurates Cheremnykhite Zn3Pb3Te4+O6(VO4)2, Kuksite Pb3Zn3Te6+O6(PO4)2, Dugganite Pb3Zn3Te(As,V,Si)2(O,OH)14, Joelbruggerite Pb3Zn3Sb5+As2O13(OH) "Halotrichite" supergroup Hydrated acid and sulfates where A(B)2(XO4)4·H2O Halotrichite group Pickeringite MgAl2(SO4)4•22H2O, Halotrichite Fe2+Al2(SO4)4•22H2O, Apjohnite MnAl2(SO4)4•22H2O, Dietrichite (Zn,Fe2+,Mn)Al2(SO4)4•22H2O, Bilinite Fe2+(Fe3+)2(SO4)4•22H2O, Redingtonite (Fe2+,Mg,Ni)(Cr,Al)2(SO4)4•22H2O, Wupatkiite (Co,Mg,Ni)Al2(SO4)4•22H2O Ransomite Cu(Fe3+)2(SO4)4•6H2O, Romerite Fe2+(Fe3+)2(SO4)4•14H2O, Lishizhenite Zn(Fe3+)2(SO4)4•14H2O "Kieserite" supergroup Hydrated acid and sulfates where AXO4·H2O Kieserite group Kieserite MgSO4•H2O, Szomolnokite Fe2+SO4•H2O, Szmikite MnSO4•H2O, Poitevinite (Cu,Fe2+,Zn)SO4•H2O, Gunningite (Zn,Mn)SO4•H2O, Dwornikite (Ni,Fe2+)SO4•H2O, Cobaltkieserite CoSO4•H2O Rozenite group (Monoclinic) Rozenite Fe2+SO4•4H2O, Starkeyite MgSO4•4H2O, Ilesite (Mn,Zn,Fe2+)SO4•4H2O, Aplowite (Co,Mn,Ni)SO4•4H2O, Boyleite (Zn,Mg)SO4•4H2O, IMA2002-034 CdSO4•4H2O Chalchanthite group (Triclinic: P-1) Chalcanthite CuSO4•5H2O, Siderotil Fe2+SO4•5H2O, Pentahydrite MgSO4•5H2O, Jokokuite MnSO4•5H2O Hexahydrite group (Space group: C2/c) Hexahydrite MgSO4•6H2O, Bianchite (Zn,Fe2+)(SO4)•6H2O, Ferrohexahydrite Fe2+SO4•6H2O, Nickelhexahydrite (Ni,Mg,Fe2+)(SO4)•6H2O, Moorhouseite (Co,Ni,Mn)SO4•6H2O, Chvaleticeite (Mn2+,Mg)SO4•6H2O Melanterite group (Heptahydrates, Monoclinic: P21/c) Melanterite Fe2+SO4•7H2O, Boothite CuSO4•7H2O, Zincmelanterite (Zn,Cu,Fe2+)SO4•7H2O, Bieberite CoSO4•7H2O, Mallardite Mn2+SO4•7H2O, Alpersite (Mg,Cu)SO4•7H2O Epsomite group Epsomite MgSO4·7H2O, Goslarite ZnSO4·7H2O, Morenosite NiSO4·7H2O Minasragrite group (Monclinic and Triclinic) Minasragrite VO(SO4)•5H2O, Bobjonesite VO(SO4)H2O3, Anorthominasragrite V4+O(SO4)H2O5 (Orthorhombic) Stanleyite (V4+O)SO4•6(H2O), Orthominasragrite VO(SO4)•5(H2O) Bassanite 2CaSO4•H2O, Gypsum CaSO4•2H2O, Sanderite MgSO4•2H2O, Bonattite CuSO4•3H2O, Retgersite NiSO4•6H2O, Meridianiite MgSO4•11H2O Alunite supergroup - Part I :Category:Alunite group, A1+(B[3])3(SO4)2(OH)6 Alunite KAl3[(OH)3|SO4]2, Ammonioalunite (NH4)Al3[(OH)3|SO4]2, Ammoniojarosite (NH4)(Fe3+)3(SO4)2(OH)6, Argentojarosite Ag(Fe3+)3(SO4)2(OH)6, Beaverite-Cu Pb(Fe3+,Cu)3(SO4)2(OH)6 (Fe3+:Cu ≈ 2:1), Beaverite-Zn Pb((Fe3+)2Zn)(SO4)2(OH)6, Dorallcharite (Tl,K)(Fe3+)3(SO4)2(OH)6, Huangite Ca0.5Al3(SO4)2(OH)6, Hydroniumjarosite (H3O)(Fe3+)3(SO4)2(OH)6, Jarosite K(Fe3+)3[(OH)3|SO4]2, Natroalunite (Na,K)Al3[(OH)3|SO4], Natrojarosite Na(Fe3+)3(SO4)2(OH)6, Osarizawaite PbCuAl2(SO4)2(OH)6, Plumbojarosite Pb(Fe3+)6(SO4)4(OH)12, Schlossmacherite (H3O,Ca)Al3(AsO4,SO4)2(OH)6, Walthierite Ba0.5Al3(SO4)2(OH)6, Mills et al. (2009) Category 08 Phosphates, Arsenates, Polyvanadates :Category:Arsenate minerals Class: Anhydrous phosphates Triphylite group Triphylite LiFe2+PO4, Lithiophilite LiMnPO4, Natrophilite NaMnPO4 Retzian series Retzian-(Ce) Mn2Ce(AsO4)(OH)4, Retzian-(Nd) Mn2(Nd,Ce,La)(AsO4)(OH)4, Retzian-(La) (Mn,Mg)2(La,Ce,Nd)(AsO4)(OH)4 "Alluaudite-Wyllieite" supergroup Anhydrous phosphates, etc. 
(A+ B2+)5 (XO4)3 Berzeliite group Berzeliite (Ca,Na)3(Mg,Mn)2(AsO4)3, Manganberzeliite (Ca,Na)3(Mn,Mg)2(AsO4)3, Palenzonaite (Ca,Na)3Mn2+(V5+,As5+,Si)3O12, Schaferite NaCa2Mg2(VO4)3 Alluaudite-Wyllieite group (Alluaudite subgroup I) Caryinite (Na,Pb)(Ca,Na)(Ca,Mn2+)(Mn2+,Mg)2(AsO4)3, Arseniopleite (Ca,Na)(Na,Pb)Mn2+(Mn2+,Mg,Fe2+)2(AsO4)3 Alluaudite-Wyllieite group (Alluaudite subgroup II/ Hagendorfite subgroup) Ferrohagendorfite* (Na,Ca)2Fe2+(Fe2+,Fe3+)2(PO4)3, Hagendorfite NaCaMn(Fe2+,Fe3+,Mg)2(PO4)3, Varulite NaCaMn(Mn,Fe2+,Fe3+)2(PO4)3, Maghagendorfite NaMgMn(Fe2+,Fe3+)2(PO4)3, Ferroalluaudite NaCaFe2+(Fe2+,Mn,Fe3+,Mg)2(PO4)3, Alluaudite NaCaFe2+(Mn,Fe2+,Fe3+,Mg)2(PO4)3, Odanielite Na(Zn,Mg)3H2(AsO4)3, Johillerite Na(Mg,Zn)3Cu(AsO4)3, Nickenichite Na0.8Ca0.4(Mg,Fe3+,Al)3Cu0.4(AsO4)3, Yazganite Na(Fe3+)2(Mg,Mn)(AsO4)3•H2O, IMA2008-054 NaCaMn2(PO4)[PO3(OH)]2, IMA2008-064 Na16(Mn2+)25Al8(PO4)30 Alluaudite-Wyllieite group (Wyllieite subgroup) Ferrowyllieite (Na,Ca,Mn)(Fe2+,Mn)(Fe2+,Fe3+,Mg)Al(PO4)3, Wyllieite (Na,Ca,Mn2+)(Mn2+,Fe2+)(Fe 2+, Fe3+,Mg)Al(PO4)3, Rosemaryite (Na,Ca,Mn2+)(Mn2+,Fe2+)(Fe3+,Fe2+,Mg)Al(PO4)3, Qingheiite Na2(Mn2+,Mg,Fe2+)(Al,Fe3+)(PO4)3, Bobfergusonite Na2(Mn2+)5Fe3+Al(PO4)6, Bradaczekite NaCu4(AsO4)3, Ferrorosemaryite [ ]NaFe2+Fe3+Al(PO4)3 Fillowite group Fillowite Na2Ca(Mn,Fe2+)7(PO4)6, Johnsomervilleite Na2Ca(Mg,Fe2+,Mn)7(PO4)6, Chladniite Na2Ca(Mg,Fe2+)7(PO4)6, Galileiite Na(Fe2+)4(PO4)3, Xenophyllite Na4Fe7(PO4)6, Stornesite-(Y) (Y, Ca)[ ]2Na6(Ca,Na)8(Mg,Fe)43(PO4)36 Nabiasite BaMn9[(V,As)O4]6(OH)2 "Whitlockite" supergroup Anhydrous phosphates, etc. (A+ B2+)3 (XO4)2 Sarcopside group Sarcopside (Fe2+,Mn,Mg)3(PO4)2, Farringtonite Mg3(PO4)2, Chopinite (Mg,Fe)3(PO4)2 Whitlockite group Whitlockite Ca9(Mg,Fe2+)(PO4)6(PO3OH), Strontiowhitlockite Sr7(Mg,Ca)3(PO4)6[PO3(OH)], Merrillite-(Ca)* (Ca,[ ])19Mg2(PO4)14, Merrillite Ca18Na2Mg2(PO4)14, Merrillite-(Y)* Ca16Y2Mg2(PO4)14, Ferromerrillite Ca9NaFe(PO4)7, Tuite Ca3(PO4)2, Bobdownsite Ca9Mg(PO3F)(PO4)6 Xanthiosite Ni3(AsO4)2, Graftonite (Fe2+,Mn,Ca)3(PO4)2, Beusite (Mn2+,Fe2+,Ca,Mg)3(PO4)2, Stanfieldite Ca4(Mg,Fe2+,Mn)5(PO4)6, Hurlbutite CaBe2(PO4)2, Stranskiite Zn2Cu2+(AsO4)2, Keyite (Cu2+)3(Zn,Cu)4Cd2(AsO4)6•2H2O, Lammerite Cu3[(As,P)O4]2, Mcbirneyite Cu3(VO4)2, Tillmannsite (Ag3Hg)(V,As)O4, IMA2009-002 Cu3(AsO4)2 "Monazite" supergroup Anhydrous phosphates, etc. A+ XO4 Berlinite group Berlinite AlPO4, Alarsite AlAsO4, Rodolicoite Fe3+PO4 Monazite group (Monoclinic: P21/n) Monazite-(Ce) (Ce,La,Nd,Th)PO4, Monazite-(La) (La,Ce,Nd)PO4, Cheralite-(Ce)? (Ce,Ca,Th)(P,Si)O4, Brabantite? CaTh(PO4)2, Monazite-(Nd) (Nd,Ce,La)(P,Si)O4, Gasparite-(Ce) CeAsO4, Monazite-(Sm) SmPO4 Lithiophosphate group Lithiophosphate Li3PO4, Olympite LiNa5(PO4)2, Nalipoite NaLi2PO4 Zenotime group (Tetragonal: I41/amd) Xenotime-(Y) YPO4, Chernovite-(Y) YAsO4, Wakefieldite-(Y) YVO4, Wakefieldite-(Ce) (Ce3+,Pb2+,Pb4+)VO4, Pretulite ScPO4, Xenotime-(Yb) YbPO4, Wakefieldite-(La) LaVO4, Wakefieldite-(Nd) NdVO4 Heterosite Fe3+PO4, Purpurite Mn3+PO4, Rooseveltite BiAsO4, Tetrarooseveltite BiAsO4, Pucherite BiVO4, Clinobisvanite BiVO4, Dreyerite BiVO4, Ximengite BiPO4, Kosnarite K(Zr4+)2(PO4)3, Petewilliamsite (Ni,Co,Cu)30(As2O7)15 "Adelite" supergroup Anhydrous phosphates, etc. containing hydroxyl or halogen where (A B)2 (XO4) Zq Adelite group Adelite CaMg(AsO4)(OH), Conichalcite CaCu(AsO4)(OH), Austinite CaZn(AsO4)(OH), Duftite-beta? 
PbCu(AsO4)(OH), Gabrielsonite PbFe2+(AsO4)(OH), Tangeite CaCu(VO4)(OH), Nickelaustinite Ca(Ni,Zn)(AsO4)(OH), Cobaltaustinite CaCo(AsO4)(OH), Arsendescloizite PbZn(AsO4)(OH), Gottlobite CaMg(VO4,AsO4)(OH) Descloizite group Descloizite PbZn(VO4)(OH), Mottramite PbCu(VO4)(OH), Pyrobelonite PbMn(VO4)(OH), Cechite Pb(Fe2+,Mn)(VO4)(OH), Duftite-alpha PbCu(AsO4)(OH) Herderite group Herderite CaBe(PO4)F, Hydroxylherderite CaBe(PO4)(OH), Vayrynenite MnBe(PO4)(OH,F), Bergslagite CaBe(AsO4)(OH) Lacroixite group Lacroixite NaAl(PO4)F, Durangite NaAl(AsO4)F, Maxwellite NaFe3+(AsO4)F Tilasite group Tilasite CaMg(AsO4)F, Isokite CaMg(PO4)F, Panasqueiraite CaMg(PO4)(OH,F) Amblygonite group Amblygonite (Li,Na)Al(PO4)(F,OH), Montebrasite? LiAl(PO4)(OH,F), Natromontebrasite? (Na,Li)Al(PO4)(OH,F) Dussertite group Dussertite Ba(Fe3+)3(AsO4)2(OH)5, Florencite-(Ce) CeAl3(PO4)2(OH)6, Florencite-(La) (La,Ce)Al3(PO4)2(OH)6, Florencite-(Nd) (Nd,Ce)Al3(PO4)2(OH)6 Arsenoflorencite group Arsenoflorencite-(Ce) (Ce,La)Al3(AsO4)2(OH)6, Arsenoflorencite-(Nd)* (Nd,La,Ce,Ba)(Al,Fe3+)3(AsO4,PO4)2(OH)6, Arsenoflorencite-(La)* (La,Sr)Al3(AsO4,SO4,PO4)2(OH)6, Graulichite-(Ce) Ce(Fe3+)3(AsO4)2(OH)6 Waylandite group Waylandite BiAl3(PO4)2(OH)6, Eylettersite (Th,Pb)(1-x)Al3(PO4,SiO4)2(OH)6 (?), Zairite Bi(Fe3+,Al)3(PO4)2(OH)6, Arsenogorceixite BaAl3AsO3(OH)(AsO4,PO4)(OH,F)6 Babefphite BaBe(PO4)(F,O), Brazilianite NaAl3(PO4)2(OH)4, Tavorite LiFe3+(PO4)(OH), Vesignieite Cu3Ba(VO4)2(OH)2, Bayldonite (Cu,Zn)3Pb(AsO3OH)2(OH)2, Curetonite Ba4Al3Ti(PO4)4(O,OH)6, Thadeuite (Ca,Mn2+)(Mg,Fe2+,Mn3+)3(PO4)2(OH,F)2, Leningradite Pb(Cu2+)3(VO4)2Cl2, Arctite Na2Ca4(PO4)3F, Wilhelmkleinite Zn(Fe3+)3(AsO4)2(OH)2, Artsmithite Hg+4Al(PO4)1.74(OH)1.78 "Olivenite" supergroup Anhydrous phosphates, etc. containing hydroxyl or halogen where (A)2 (XO4) Zq Zwieselite group Zwieselite (Fe2+,Mn)2(PO4)F, Triplite (Mn,Fe2+,Mg,Ca)2(PO4)(F,OH), Magniotriplite? (Mg,Fe2+,Mn)2(PO4)F Wagnerite group Wagnerite (Mg,Fe2+)2(PO4)F, Hydroxylwagnerite Mg2(PO4)(OH) Wolfeite group Wolfeite (Fe2+,Mn2+)2(PO4)(OH), Triploidite (Mn,Fe2+)2(PO4)(OH), Sarkinite (Mn2+)2(AsO4)(OH), Stanekite Fe3+(Mn,Fe2+,Mg)(PO4)O, Joosteite (Mn2+,Mn3+,Fe3+)2(PO4)O Satterlyite group Satterlyite (Fe2+,Mg)2(PO4)(OH), Holtedahlite Mg12(PO3OH,CO3)(PO4)5(OH,O)6 Olivenite group Olivenite subgroup Adamite Zn2(AsO4)(OH), Eveite (Mn2+)2[OH|AsO4], Libethenite Cu2PO4OH, Olivenite Cu2[OH|AsO4], Zincolivenite CuZn(AsO4)(OH), Zincolibethenite CuZn(PO4)OH Tarbuttite subgroup Tarbuttite Zn2PO4OH, Paradamite Zn2[OH|AsO4] Althausite Mg2(PO4)(OH,F,O), Augelite Al2(PO4)(OH)3, Arsenobismite? Bi2(AsO4)(OH)3, Angelellite (Fe3+)4(AsO4)2O3, Spodiosite? Ca2(PO4)F "Arrojadite" supergroup Anhydrous phosphates, etc. 
containing hydroxyl or halogen where (A B)m (XO4)4 Zq Palermoite group Palermoite (Sr,Ca)(Li,Na)2Al4(PO4)4(OH)4, Bertossaite Li2CaAl4(PO4)4(OH)4 Arrojadite group (Arrojadite subgroup) (Al in Al site, OH in W site, Fe in M site) Arrojadite KNa4Ca(Mn2+)4(Fe2+)10Al(PO4)12(OH,F)2, Arrojadite-(KNa) KNa4Ca(Fe,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(KFe) KNa2CaNa2(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(NaFe) NaNa2CaNa2(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(BaNa) BaFe2+Na2Ca(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(BaFe) (Ba,K,Pb)Na3(Ca,Sr)(Fe2+,Mg,Mn)14Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(SrFe) SrFe2+Na2Ca(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2, Arrojadite-(PbFe) PbFe2+Na2Ca(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(OH,F)2 Arrojadite group (Fluorarrojadite subgroup) (Al in Al site, F in W site, Fe in M site) Fluorarrojadite-(KNa) KNa4Ca(Fe,Mn,Mg)13Al(PO4)11(PO3OH)(F,OH)2, Fluorarrojadite-(BaNa) BaFe2+Na2Ca(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(F,OH)2, Fluorarrojadite-(BaFe) (Ba,K,Pb)Na3(Ca,Sr)(Fe2+,Mg,Mn)14Al(PO4)11(PO3OH)(F,OH)2 Arrojadite group (Dickinsonite subgroup) (Fe in Al site, OH in W site, Fe in M site) Dickinsonite? KNa4Ca(Mn2+,Fe2+)14Al(PO4)12(OH)2, Dickinsonite-(KMnNa) KNaMnNa3Ca(Mn,Fe,Mg)13Al(PO4)11(PO4)(OH,F)2, Dickinsonite-(KNaNa) KNaNa4Ca(Mn,Fe,Mg)13Al(PO4)11(PO4)(OH,F)2, Dickinsonite-(KNa) KNa4Ca(Mn,Fe,Mg)13Al(PO4)11(PO4)(OH,F)2, Dickinsonite-(NaNa) Na2Na4Ca(Mn,Fe,Mg)13Al(PO4)11(PO4)(OH,F)2 Ferri-arrojadite-(BaNa) BaFe2+Na2Ca(Fe2+,Mn,Mg)13Al(PO4)11(PO3OH)(F,OH)2 Lulzacite Sr2Fe2+(Fe2+,Mg)2Al4(PO4)4(OH)10 "Apatite" supergroup Anhydrous phosphates, etc. containing hydroxyl or halogen where (A)5 (XO4)3 Zq Morelandite group Morelandite (Ba,Ca,Pb)5(AsO4,PO4)3Cl, Alforsite Ba5(PO4)3Cl Clinomimetite group Clinomimetite Pb5(AsO4)3Cl, Apatite-(CaOH)-M (Ca,Na)5[(P,S)O4]3(OH,Cl) Apatite group Apatite* Ca5(PO4)3(OH,F,Cl), Apatite-(CaF) Ca5(PO4)3F, Apatite-(CaCl) Ca5(PO4)3Cl, Apatite-(CaOH) Ca5(PO4)3(OH), Carbonate-fluorapatite? Ca5(PO4,CO3)3F, Carbonate-hydroxylapatite? Ca5(PO4,CO3)3(OH), Belovite-(Ce) (Sr,Ce,Na,Ca)5(PO4)3(OH), Belovite-(La) (Sr,La,Ce,Ca)5(PO4)3(F,OH), Kuannersuite-(Ce) Ba6Na2REE2(PO4)6FCl, Apatite-(SrOH) (Sr,Ca)5(PO4)3(F,OH), Fluorcaphite (Ca,Sr,Ce,Na)5(PO4)3F, Deloneite-(Ce) NaCa2SrCe(PO4)3F, Phosphohedyphane Ca2Pb3(PO4)3Cl, IMA2008-009 Sr5(PO4)3F, IMA2008-068 Ca2Pb3(PO4)3F Svabite group Svabite Ca5(AsO4)3F, Turneaureite Ca5[(As,P)O4]3Cl, Johnbaumite Ca5(AsO4)3(OH), Fermorite (Ca,Sr)5(AsO4,PO4)3(OH) Hedyphane Ca2Pb3(AsO4)3Cl, Phosphohedyphane Ca2Pb3(PO4)3Cl Pyromorphite group Pyromorphite Pb5(PO4)3Cl, Mimetite Pb5(AsO4)3Cl, Vanadinite Pb5(VO4)3Cl, Hydroxylpyromorphite Pb5(PO4)3OH "Rockbridgeite" supergroup Anhydrous phosphates, etc. containing hydroxyl or halogen where (A B)5 (XO4)3 Zq Kulanite group Kulanite Ba(Fe2+,Mn,Mg)2Al2(PO4)3(OH)3, Penikisite BaMg2Al2(PO4)3(OH)3, Bjarebyite (Ba,Sr)(Mn2+,Fe2+,Mg)2Al2(PO4)3(OH)3, Perloffite Ba(Mn,Fe2+)2(Fe3+)2(PO4)3(OH)3, Johntomaite Ba(Fe2+,Ca,Mn2+)2(Fe3+)2(PO4)3(OH)3 Rockbridgeite group Rockbridgeite (Fe2+,Mn)(Fe3+)4(PO4)3(OH)5, Frondelite Mn2+(Fe3+)4(PO4)3(OH)5, Plimerite Zn(Fe3+)4(PO4)3(OH)5 Griphite Ca(Mn,Na,Li)6Fe2+Al2(PO4)6(F,OH)2 "Lazulite" supergroup Anhydrous phosphates, etc. 
containing hydroxyl or halogen where (A2+ B2+)3 (XO4)2 Zq Lazulite group Lazulite MgAl2(PO4)2(OH)2, Scorzalite (Fe2+,Mg)Al2(PO4)2(OH)2, Hentschelite Cu2+(Fe3+)2(PO4)2(OH)2, Barbosalite Fe2+(Fe3+)2(PO4)2(OH)2 Lipscombite group Lipscombite (Fe2+,Mn2+)(Fe3+)2(PO4)2(OH)2, Zinclipscombite Zn(Fe3+)2(PO4)2(OH)2 Goedkenite group Goedkenite (Sr,Ca)2Al(PO4)2(OH), Bearthite Ca2Al(PO4)2(OH), Gamagarite Ba2(Fe3+,Mn3+)(VO4)2(OH), Tokyoite Ba2Mn(VO4)2(OH) Carminite group Carminite Pb(Fe3+)2(AsO4)2(OH)2, Sewardite Ca(Fe3+)2(AsO4)2(OH)2 Mounanaite group Mounanaite Pb(Fe3+)2(VO4)2(OH)2, Krettnichite Pb(Mn3+)2(VO4)2(OH)2 Preisingerite group Preisingerite Bi3(AsO4)2O(OH), Schumacherite Bi3[(V,As,P)O4]2O(OH) Jagowerite BaAl2(PO4)2(OH)2, Melonjosephite CaFe2+Fe3+(PO4)2(OH), Samuelsonite (Ca,Ba)Ca8(Fe2+,Mn)4Al2(PO4)10(OH)2, Petitjeanite (Bi3+)3(PO4)2O(OH), Drugmanite Pb2(Fe3+,Al)H(PO4)2(OH)2 Class: Hydrated phosphates Hureaulite group Hureaulite Mn5(PO3OH)2(PO4)2•4H2O, Sainfeldite Ca5(AsO3OH)2(AsO4)2•4H2O, Villyaellenite (Mn2+)5(AsO3OH)2(AsO4)2•4H2O, IMA2008-047 Cd3Zn2(AsO3OH)2(AsO4)2•4H2O, IMA2008-066 Mn5(H2O)4(AsO3OH)2(AsO4)2 Lindackerite group Lindackerite CuCu4(AsO4)2(AsO3OH)2•~9H2O, Braithwaiteite NaCu5(Ti,Sb)2O2(AsO4)4[AsO3(OH)]2•8H2O, Veselovskyite (Zn,Cu,Co)Cu4(AsO4)2(AsO3OH)2•9H2O, IMA2008-010 CaCu4(AsO4)2(AsO3OH)2•10H2O Struvite group Struvite (NH4)MgPO4•6H2O, Struvite-(K) KMg(PO4)•6H2O, Hazenite KNaMg2(PO4)2•14H2O Autunite group Formula: A(UO2)2(XO4)2·(10-12)H2O A = Cu, Ca, Ba, or Mg; X = P or As. Autunite Ca(UO2)2(PO4)2•(10-12)H2O, Heinrichite Ba(UO2)2(AsO4)2•(10-12)H2O, Kahlerite Fe2+(UO2)2(AsO4)2•(10-12)H2O, Novacekite-I Mg(UO2)2(AsO4)2•12H2O, Sabugalite HAl(UO2)4(PO4)4•16H2O, Saleeite Mg(UO2)2(PO4)2•10H2O, Torbernite Cu(UO2)2(PO4)2•(8-12)H2O, Uranocircite Ba(UO2)2(PO4)2•12H2O, Uranospinite Ca(UO2)2(AsO4)2•10H2O, Zeunerite Cu(UO2)2(AsO4)2•(10-16)H2O Meta-autunite group Formula: A(UO2)2(XO4)2·nH2O (n = 6, 7 or 8) A = Cu, Ca, Ba, or Mg and X = P or As. 
Abernathyite K2(UO2)2(AsO4)2·6H2O, Bassetite Fe2+(UO2)2(PO4)2·8H2O, Chernikovite (H3O)2(UO2)2(PO4)2·6H2O, Lehnerite Mn2+(UO2)2(PO4)2·8H2O, Meta-ankoleite K2(UO2)2(PO4)2·6H2O, Meta-autunite Ca(UO2)2(PO4)2·(6-8)H2O, Metakahlerite Fe2+(UO2)2(AsO4)2•8H2O, Metakirchheimerite Co(UO2)2(AsO4)2•8H2O, Metalodevite Zn(UO2)2(AsO4)2•10H2O, Metanovacekite Mg(UO2)2(AsO4)2•(4-8)H2O, Metatorbernite Cu(UO2)2(PO4)2•8H2O, Metauranocircite Ba(UO2)2(PO4)2•(6-8)H2O, Metauranospinite Ca(UO2)2(AsO4)2•8H2O, Metazeunerite Cu(UO2)2(AsO4)2•8H2O, Natrouranospinite (Na2,Ca)(UO2)2(AsO4)2•5H2O, Sodium Meta-autunite Na2(UO2)2(PO4)2•(6-8)H2O, Uramarsite (NH4,H3O)2(UO2)2(AsO4,PO4)2•6H2O, Uramphite (NH4)(UO2)(PO4)•3H2O Vivianite group Vivianite (Fe2+)3(PO4)2•8H2O, Baricite (Mg,Fe2+)3(PO4)2•8H2O, Erythrite Co3(AsO4)2•8H2O, Annabergite Ni3(AsO4)2•8H2O, Köttigite Zn3(AsO4)2•8H2O, Parasymplesite (Fe2+)3(AsO4)2•8H2O, Hornesite Mg3(AsO4)2•8H2O, Arupite (Ni,Fe2+)3(PO4)2•8H2O, Pakhomovskyite Co3(PO4)2•8H2O Walpurgite group Walpurgite Bi4(UO2)(AsO4)2O4•2H2O, Orthowalpurgite (UO2)Bi4O4(AsO4)2•2H2O, Phosphowalpurgite (UO2)Bi4(PO4)O4•2H2O Roscherite group Roscherite Ca(Mn2+,Fe2+)5Be4(PO4)6(OH)4•6H2O, Zanazziite Ca2(Mg,Fe2+)(Mg,Fe2+,Al,Mn,Fe3+)4Be4(PO4)6(OH)4•6H2O, Greifensteinite Ca2Be4(Fe2+,Mn)5(PO4)6(OH)4•6H2O, Atencioite Ca2Fe2+[ ]Mg2(Fe2+)2Be4(PO4)6(OH)4•6H2O, Guimaraesite Ca2(Zn,Mg,Fe)5Be4(PO4)6(OH)4•6H2O, Footemineite Ca2Mn2+(Mn2+)2(Mn2+)2Be4(PO4)6(OH)4•6H2O, Ruifrancoite Ca2([ ],Mn)2(Fe3+,Mn,Mg)4Be4(PO4)6(OH)4•6H2O Pharmacosiderite group Pharmacosiderite K(Fe3+)4(AsO4)3(OH)4•(6-7H)2O, Alumopharmacosiderite KAl4(AsO4)3(OH)4•6•5H2O, Bariopharmacosiderite Ba(Fe3+)4(AsO4)3(OH)5•5H2O, Barium-alumopharmacosiderite? BaAl4(AsO4)3(OH)5•5H2O, Natropharmacosiderite (Na,K)2(Fe3+)4(AsO4)3(OH)5•7H2O 08.CE.75 group Malhmoodite FeZr(PO4)2·4H2O, Zigrasite ZnZr(PO4)2·4H2O, Unnamed (Ca-analogue of zigrasite) CaZr(PO4)2·4H2O "Variscite" supergroup Hydrated phosphates, etc. where A3+ XO4 · H2O Variscite group Variscite AlPO4•2H2O, Strengite Fe3+PO4•2H2O, Scorodite Fe3+AsO4•2H2O, Mansfieldite AlAsO4•2H2O, Yanomamite In(AsO4)•2H2O Metavariscite group Metavariscite AlPO4•2H2O, Phosphosiderite Fe3+PO4•2H2O, Kolbeckite ScPO4•2H2O Rhabdophane group Rhabdophane-(Ce) (Ce,La)PO4•H2O, Rhabdophane-(La) (La,Ce)PO4•H2O, Rhabdophane-(Nd) (Nd,Ce,La)PO4•H2O, Grayite (Th,Pb,Ca)PO4•H2O, Brockite (Ca,Th,Ce)(PO4)•H2O, Tristramite (Ca,U4+,Fe3+)(PO4,SO4)•2H2O Ningyoite group Ningyoite (U,Ca,Ce)2(PO4)2•(1-2)H2O, Lermontovite U4+(PO4)(OH)•H2O (?), Vyacheslavite U4+(PO4)(OH)•2.5H2O Koninckite Fe3+PO4•3H2O (?), Kankite Fe3+AsO4•3.5H2O, Steigerite AlVO4•3H2O, Churchite-(Y) YPO4•2H2O, Churchite-(Nd) Nd(PO4)•2H2O, Parascorodite Fe3+AsO4•2H2O, Serrabrancaite MnPO4•H2O "Mixite" supergroup Hydrated phosphates, etc., containing hydroxyl or halogen where (A)2 (XO4) Zq · H2O Mixite group (Arsenate series) Mixite BiCu6(AsO4)3(OH)6•3H2O, Agardite-(Y) (Y,Ca)Cu6(AsO4)3(OH)6•3H2O, Agardite-(La) (La,Ca)Cu6(AsO4)3(OH)6•3H2O, Agardite-(Nd) (Pb,Nd,Y,La,Ca)Cu6(AsO4)3(OH)6•3H2O, Agardite-(Dy) (Dy,La,Ca)Cu6(AsO4)3(OH)6•3H2O, Agardite-(Ca) CaCu6(AsO4)3(OH)6•3H2O, Agardite-(Ce) (Ce,Ca)Cu6(AsO4)3(OH)6•3H2O, Goudeyite (Al,Y)Cu6(AsO4)3(OH)6•3H2O, Zalesiite (Ca,Y)Cu6[(AsO4)2(AsO3OH)(OH)6]•3H2O, Plumboagardite (Pb,REE,Ca)Cu6(AsO4)3(OH)6•3H2O Mixite group (Phosphate series) Petersite-(Y) (Y,Ce,Nd,Ca)Cu6(PO4)3(OH)6•3H2O, Calciopetersite CaCu6[(PO4)2(PO3OH)(OH)6]•3H2O Zapatalite Cu3Al4(PO4)3(OH)9•4H2O, Juanitaite (Cu,Ca,Fe)10Bi(AsO4)4(OH)11•2H2O "Brackebushite" supergroup Hydrated phosphates, etc. 
where A2+ (B2+)2 (XO4) · H2O Fairfieldite subgroup Fairfieldite Ca2(Mn,Fe2+)(PO4)2•2H2O, Messelite Ca2(Fe2+,Mn)(PO4)2•2H2O, Collinsite Ca2(Mg,Fe2+)(PO4)2•2H2O, Cassidyite Ca2(Ni,Mg)(PO4)2•2H2O, Talmessite Ca2Mg(AsO4)2•2H2O, Gaitite Ca2Zn(AsO4)2•2H2O, Roselite-beta Ca2(Co,Mg)(AsO4)2•2H2O, Parabrandtite Ca2Mn2+(AsO4)•2H2O, Hillite Ca2(Zn, Mg)[PO4]2•2H2O, Nickeltalmessite Ca2Ni(AsO4)2•2H2O Roselite subgroup Roselite Ca2(Co,Mg)(AsO4)2•2H2O, Brandtite Ca2(Mn,Mg)(AsO4)2•2H2O, Zincroselite Ca2Zn(AsO4)2•2H2O, Wendwilsonite Ca2(Mg,Co)(AsO4)2•2H2O, Manganlotharmeyerite Ca(Mn3+,Mg,)2(AsO4)2(OH,H2O)2 Brackebushite group Brackebuschite Pb2(Mn,Fe2+)(VO4)2(OH), Arsenbrackebuschite Pb2(Fe2+,Zn)(AsO4)2•H2O, Feinglosite Pb2(Zn,Fe)[(As,S)O4]2•H2O, Calderonite Pb2Fe3+(VO4)2(OH), Bushmakinite Pb2Al(PO4)(VO4)(OH) Helmutwinklerite subgroup Tsumcorite PbZnFe2+(AsO4)2•H2O, Helmutwinklerite PbZn2(AsO4)2•2H2O, Thometzekite Pb(Cu,Zn)2(AsO4)2•2H2O, Mawbyite Pb(Fe3+Zn)2(AsO4)2(OH,H2O)2, Rappoldite Pb(Co,Ni,Zn,)2(AsO4)2•2H2O, Schneebergite Bi(Co,Ni)2(AsO4)2(OH,H2O)2, Nickelschneebergite Bi(Ni,Co)2(AsO4)2(OH,H2O)2, Cobalttsumcorite Pb(Co,Fe)2(AsO4)2(OH,H2O)2 Unnamed group Wicksite NaCa2(Fe2+,Mn2+)4MgFe3+(PO4)6•2H2O, Bederite ([ ],Na)Ca2(Mn2+,Mg,Fe2+)2(Fe3+,Mg2+,Al)2Mn2+2(PO4)6•2H2O, Tassieite (Na,[ ])Ca2(Mg,Fe2+,Fe3+)2(Fe3+,Mg)2(Fe2+,Mg)2(PO4)6•2H2O Anapaite Ca2Fe2+(PO4)2•4H2O, Prosperite CaZn2(AsO4)2•H2O, Parascholzite CaZn2(PO4)2•2H2O, Scholzite CaZn2(PO4)2•2H2O, Phosphophyllite Zn2(Fe2+,Mn)(PO4)2•4H2O, Cabalzarite Ca(Mg,Al,Fe2+)2(AsO4)2(H2O,OH)2, Grischunite NaCa2(Mn2+)5Fe3+(AsO4)6•2H2O "Turquoise" supergroup Hydrated phosphates, etc., containing hydroxyl or halogen where (A)3 (XO4)2 Zq · H2O Burangaite group Burangaite (Na,Ca)2(Fe2+,Mg)2Al10(PO4)8(OH,O)12•4H2O, Dufrenite Fe2+(Fe3+)4(PO4)3(OH)5•2H2O, Natrodufrenite Na(Fe3+,Fe2+)(Fe3+,Al)5(PO4)4(OH)6•2H2O, Matioliite NaMgAl5(PO4)4(OH)6•2H2O, IMA2008-056 NaMn2+(Fe3+)5(PO4)4(OH)6•2H2O Souzalite group Souzalite (Mg,Fe2+)3(Al,Fe3+)4(PO4)4(OH)6•2H2O, Gormanite Fe2+3Al4(PO4)4(OH)6•2H2O, Andyrobertsite KCdCu5(AsO4)4[As(OH)2O2]•2H2O, Calcioandyrobertsite-1M KCaCu5(AsO4)4[As(OH)2O2]•2H2O, Calcioandyrobertsite-2O KCaCu5(AsO4)4[As(OH)2O2]•2H2O Turquoise group Aheylite (Fe2+,Zn)Al6[(OH)4|(PO4)2]2·4H2O, Chalcosiderite Cu(Fe3+,Al)6[(OH)4|(PO4)2]2·4H2O, Faustite (Zn,Cu)Al6[(OH)4|(PO4)2]2·4H2O, Planerite Al6[(OH)4|HPO4|PO4]2·4H2O, Turquoise Cu(Al,Fe3+)6[(OH)4|(PO4)2]2·4H2O Unnamed group Sampleite NaCaCu5(PO4)4Cl•5H2O, Lavendulan NaCaCu5(AsO4)4Cl•5H2O, Zdenekite NaPb(Cu2+)5(AsO4)4Cl•5H2O, Mahnertite (Na,Ca)(Cu2+)3(AsO4)2Cl•5H2O, Lemanskiite NaCaCu5(AsO4)4Cl•5H2O Duhamelite? 
Pb2Cu4Bi(VO4)4(OH)3•8H2O, Santafeite (Mn,Fe,Al,Mg)2(Mn4+,Mn2+)2(Ca,Sr,Na)3(VO4,AsO4)4(OH)3•2H2O, Ogdensburgite Ca2(Zn,Mn)(Fe3+)4(AsO4)4(OH)6•6H2O, Dewindtite Pb3[H(UO2)3O2(PO4)2]2•12H2O "Overite" supergroup Hydrated phosphates, etc., containing hydroxyl or halogen where (AB)4 (XO4)3 Zq · H2O Overite group Overite CaMgAl(PO4)2(OH)•4H2O, Segelerite CaMgFe3+(PO4)2(OH)•4H2O, Manganosegelerite (Mn,Ca)(Mn,Fe2+,Mg)Fe3+(PO4)2(OH)•4H2O, Lunokite (Mn,Ca)(Mg,Fe2+,Mn)Al(PO4)2(OH)•4H2O, Wilhelmvierlingite CaMn2+Fe3+(PO4)2(OH)•2H2O, Kaluginite* (Mn2+,Ca)MgFe3+(PO4)2(OH)•4H2O, Juonniite CaMgSc(PO4)2(OH)•4H2O Jahnsite group Jahnsite-(CaMnMg) CaMnMg2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(CaMnFe) CaMn2+(Fe2+)2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(CaMnMn) CaMn2+(Mn2+)2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(MnMnMn)* MnMnMn2(Fe3+)2(PO4)4(OH)2•8H2O Whiteite group Whiteite-(CaFeMg) Ca(Fe2+,Mn2+)Mg2Al2(PO4)4(OH)2•8H2O, Whiteite-(MnFeMg) (Mn2+,Ca)(Fe2+,Mn2+)Mg2Al2(PO4)4(OH)2•8H2O, Whiteite-(CaMnMg) CaMn2+Mg2Al2(PO4)4(OH)2•8H2O, Rittmannite Mn2+Mn2+Fe2+Al2(OH)2(PO4)4•8H2O, Jahnsite-(CaFeFe) (Ca,Mn)(Fe2+,Mn2+)(Fe2+)2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(NaFeMg) NaFe3+Mg2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(CaMgMg) CaMgMg2(Fe3+)2(PO4)4(OH)2•8H2O, Jahnsite-(NaMnMg) NaMnMg2(Fe3+)2(PO4)4(OH)2•8H2O Leucophosphite group Leucophosphite K(Fe3+)2(PO4)2(OH)•2H2O, Tinsleyite KAl2(PO4)2(OH)•2H2O, Spheniscidite (NH4,K)(Fe3+,Al)2(PO4)2(OH)•2H2O Montgomeryite group Montgomeryite Ca4MgAl4(PO4)6(OH)4•12H2O, Kingsmountite (Ca,Mn2+)4(Fe2+,Mn2+)Al4(PO4)6(OH)4•12H2O, Calcioferrite Ca4Fe2+(Fe3+,Al)4(PO4)6(OH)4•12H2O, Zodacite Ca4Mn2+(Fe3+)4(PO4)6(OH)4•12H2O, Angastonite CaMgAl2(PO4)2(OH)4•7H2O Strunzite group Strunzite Mn2+(Fe3+)2(PO4)2(OH)2•6H2O, Ferrostrunzite Fe2+(Fe3+)2(PO4)2(OH)2•6H2O, Ferristrunzite Fe3+(Fe3+)2(PO4)2(OH)3•5H2O Laueite group Laueite Mn2+(Fe3+)2(PO4)2(OH)2•8H2O, Stewartite Mn2+(Fe3+)2(PO4)2(OH)2•8H2O, Pseudolaueite Mn2+(Fe3+)2(PO4)2(OH)2•(7-8)H2O, Ushkovite Mg(Fe3+)2(PO4)2(OH)2•8H2O, Ferrolaueite Fe2+(Fe3+)2(PO4)2(OH)2•8H2O Gatumbaite group Gatumbaite CaAl2(PO4)2(OH)2•H2O, Kleemanite ZnAl2(PO4)2(OH)2•3H2O Vanuralite group Vanuralite Al(UO2)2(VO4)2(OH)•11H2O, Metavanuralite Al(UO2)2(VO4)2(OH)•8H2O, Threadgoldite Al(UO2)2(PO4)2(OH)•8H2O, Chistyakovaite Al(UO2)2(AsO4)2(F,OH)•6.5H2O Vauxite group Vauxite Fe2+Al2(PO4)2(OH)2•6H2O, Paravauxite Fe2+Al2(PO4)2(OH)2•8H2O, Sigloite Fe3+Al2(PO4)2(OH)3•5H2O, Gordonite MgAl2(PO4)2(OH)2•8H2O, Mangangordonite (Mn2+,Fe2+,Mg)Al2(PO4)2(OH)2•8H2O, Kastningite (Mn2+,Fe2+,Mg)Al2(PO4)2(OH)2•8H2O, Maghrebite MgAl2(AsO4)2(OH)2•8H2O Bermanite group Bermanite Mn2+(Mn3+)2(PO4)2(OH)2•4H2O, Ercitite Na2(H2O)4[(Mn3+)2(OH)2(PO4)2] Arthurite group/ Whitmoreite group Whitmoreite Fe2+(Fe3+)2(PO4)2(OH)2•4H2O, Arthurite Cu(Fe3+)2(AsO4,PO4,SO4)2(O,OH)2•4H2O, Ojuelaite Zn(Fe3+)2(AsO4)2(OH)2•4H2O, Earlshannonite (Mn,Fe2+)(Fe3+)2(PO4)2(OH)2•4H2O, Gladiusite (Fe2+)2(Fe3+,Mg)4(PO4)(OH)13•H2O, Cobaltarthurite Co(Fe3+)2(AsO4)2(OH)2•4H2O, Kunatite Cu(Fe3+)2(PO4)2(OH)2•4H2O, Bendadaite Fe2+(Fe3+)2(AsO4)2(OH)2•4H2O Sincosite group Sincosite Ca(V4+O)2(PO4)2•5H2O, Phosphovanadylite (Ba,Ca,K,Na)x[(V,Al)4P2(O,OH)16]•12H2O x~0.66, Bariosincosite Ba(V4+O)2(PO4)2•4H2O Paulkerrite group Paulkerrite K(Mg,Mn)2(Fe3+,Al)2Ti(PO4)4(OH)3•15H2O, Mantienneite KMg2Al2Ti(PO4)4(OH)3•15H2O, Matveevite? 
KTiMn2(Fe3+)2(PO4)4(OH)3•15H2O, Benyacarite (H2O,K)2Ti(Mn2+,Fe2+)2(Fe3+,Ti)2Ti(PO4)4(O,F)2•14H2O Keckite Ca(Mn,Zn)2(Fe3+)3(PO4)4(OH)3•2H2O, Minyulite KAl2(PO4)2(OH,F)•4H2O, Giniite Fe2+(Fe3+)4(PO4)4(OH)2•2H2O, Metavauxite Fe2+Al2(PO4)2(OH)2•8H2O, Metavauxite Fe2+Al2(PO4)2(OH)2•8H2O, Xanthoxenite Ca4(Fe3+)2(PO4)4(OH)2•3H2O, Beraunite Fe2+(Fe3+)5(PO4)4(OH)5•4H2O, Furongite Al2(UO2)(PO4)3(OH)2•8H2O, Mcauslanite H(Fe2+)3Al2(PO4)4F•18H2O, Vochtenite (Fe2+,Mg)Fe3+[(UO2)(PO4)]4(OH)•(12-13)H2O Alunite supergroup - Part II Beudantite group, AB3(XO4)(SO4)(OH)6 Beudantite PbFe3[(OH)6|SO4|AsO4], Corkite PbFe3(PO4)(SO4)(OH)6, Gallobeudantite PbGa3(AsO4)(SO4)(OH)6, Hidalgoite PbAl3(AsO4)(SO4)(OH)6, Hinsdalite PbAl3(PO4)(SO4)(OH)6, Kemmlitzite (Sr,Ce)Al3(AsO4)(SO4)(OH)6, Svanbergite SrAl3(PO4)(SO4)(OH)6, Weilerite BaAl3H[(As,P)O4]2(OH)6, Woodhouseite CaAl3(PO4)(SO4)(OH)6 Dussertite group/ Arsenocrandallite group Arsenocrandallite (Ca,Sr)Al3[(As,P)O4]2(OH)5•H2O, Arsenoflorencite-(Ce) (Ce,La)Al3(AsO4)2(OH), Arsenogorceixite BaAl3AsO3(OH)(AsO4,PO4)(OH,F)6, Arsenogoyazite (Sr,Ca,Ba)Al3(AsO4,PO4)2(OH,F)5•H2O, Dussertite Ba(Fe3+)3(AsO4)2(OH)5, Graulichite-(Ce) Ce(Fe3+)3(AsO4)2(OH)6, Philipsbornite PbAl3(AsO4)2(OH)5•H2O, Segnitite Pb(Fe3+)3H(AsO4)2(OH)6 Plumbogummite group/ Crandallite group Benauite HSr(Fe3+)3(PO4)2(OH)6, Crandallite CaAl3[(OH)5|(PO4)2]·H2O, Eylettersite (Th,Pb)(1-x)Al3(PO4,SiO4)2(OH)6(?), Florencite-(Ce) CeAl3(PO4)2(OH)6, Florencite-(La) LaAl3(PO4)2(OH)6, Florencite-(Nd) (Nd,La,Ce)Al3(PO4)2(OH)6, Gorceixite BaAl3(PO4)(PO3OH)(OH)6, Goyazite SrAl3(PO4)2(OH)5•H2O, Kintoreite Pb(Fe3+)3(PO4)2(OH,H2O)6, Plumbogummite PbAl3(PO4)2(OH)5•H2O, Springcreekite BaV3(PO4)2(OH,H2O)6, Waylandite BiAl3(PO4)2(OH)6, Zairite Bi(Fe3+,Al)3(PO4)2(OH)6, Mills et al. (2009) Class: Non simple phosphates Stibiconite group Stibiconite Sb3+(Sb5+)2O6(OH), Bindheimite Pb2Sb2O6(O,OH), Romeite (Ca,Fe2+,Mn,Na)2(Sb,Ti)2O6(O,OH,F), Hydroxycalcioroméite (Lewisite) (Ca,Fe2+,Na)2(Sb,Ti)2O7, Monimolite (Pb,Ca)2Sb2O7, Stetefeldtite Ag2Sb2O6(O,OH), Bismutostibiconite Bi(Sb5+,Fe3+)2O7, Partzite Cu2Sb2(O,OH)7 (?) Rossite group Rossite CaV2O6•4H2O, Metarossite CaV2O6•2H2O, Ansermetite MnV2O6•4H2O Pascoite group Pascoite Ca3V10O28•17H2O, Magnesiopascoite Ca2Mg[V10O28]•16H2O Vanadium oxysalts (Hydrated) Hewettite group Hewettite CaV6O16•9H2O, Metahewettite CaV6O16•3H2O, Barnesite Na2V6O16•3H2O, Hendersonite Ca1.3V6O16•6H2O, Grantsite Na4Cax(V4+)2xV5+(12-2x)O32•8H2O Straczekite group Straczekite (Ca,K,Ba)(V5+,V4+)8O20•3H2O, Corvusite (Na,Ca,K)V8O20•4H2O), Fernandinite CaV8O20•4H2O, Bariandite Al0.6V8O20•9H2O, Bokite (Al,Fe3+)1.3(V4+,Fe)8O20•4.7H2O, Kazakhstanite (Fe3+)5(V4+)3(V5+)12O39(OH)9•9H2O Schubnelite Fe2+(V5+O4)H2O, Fervanite (Fe3+)4(VO4)4•5H2O, Bannermanite (Na,K)0.7(V5+)6O15, Melanovanadite Ca(V4+)2(V5+)2O10•5H2O Anhydrous Molybdates and Tungstates where A XO4 Wolframite series Wolframite* (Fe,Mn)WO4, Hubnerite MnWO4, Ferberite Fe2+WO4, Sanmartinite (Zn,Fe2+)WO4, Heftetjernite ScTaO4 Scheelite series Scheelite CaWO4, Powellite CaMoO4 Wulfenite Series Wulfenite PbMoO4, Stolzite PbWO4 Raspite PbWO4 Category 10 Organic Compounds :Category:Coal :Category:Oil shale Class: Organic minerals :Category:Oxalate minerals Extras Rocks, ores and other mixtures of minerals Lapislazuli*, Psilomelane*, Olivine* (Fayalite-Forsterite Series) Ice Liquids: Water, Mercury Hg, Asphaltum* Amorphous solids: Polycrase, Pyrobitumen*, Amber* Vitreous (melts by heating): Tektite, Obsidian See also References Minerals Mineralogy
13433465
https://en.wikipedia.org/wiki/Handango
Handango
Handango was an online store selling mobile apps for personal digital assistants and smartphones. History Handango was founded in 1999 by CEO Randy Eisenman. Its on-device storefront, Handango InHand, was an app store for finding, installing, and buying software for mobile devices. It was made available in 2003 for Symbian UIQ users, in 2004 for Windows Mobile and Palm OS, in 2005 for BlackBerry, and in 2006 for Symbian S60. Application downloads and purchases were completed directly on the device. Descriptions, ratings and screenshots were available for all applications. Handango InHand was available for free on Palm OS, Windows Mobile, Symbian UIQ & S60, BlackBerry, and Android. In February 2011, Handango's new owner PocketGear Inc. re-branded as Appia, shifting its focus to on-device, OEM-branded store apps. Consequently, both Handango's and PocketGear's websites were shut down in 2013. Acquisition In 2010, PocketGear announced the acquisition of Handango to create the world's largest cross-platform app store. Post-acquisition In 2011, PocketGear decided to shift its model to become a white-label app marketplace platform. Since acquiring Handango, PocketGear reportedly generated more than $400 million in mobile application revenue, reaching consumers in 175 countries. See also List of digital distribution platforms for mobile devices Amazon Appstore App Store (iOS/iPadOS) BlackBerry World References External links Website Personal digital assistant software Pocket PC software Symbian software Windows Mobile Standard software Mobile software distribution platforms
35733875
https://en.wikipedia.org/wiki/Red%20%28programming%20language%29
Red (programming language)
Red is a programming language designed to overcome the limitations of the programming language Rebol. Red was introduced in 2011 by Nenad Rakočević, and is both an imperative and functional programming language. Its syntax and general usage overlap those of the interpreted Rebol language. The implementation choices of Red intend to create a full stack programming language: Red can be used for extremely high-level programming (DSLs and GUIs) as well as low-level programming (operating systems and device drivers). Key to the approach is that the language has two parts: Red/System and Red. Red/System is similar to C, but packaged into a Rebol lexical structure, so its constructs are written with Rebol-style blocks and words rather than C-style syntax. Red is a homoiconic language capable of meta-programming, with Rebol-like semantics. Red's runtime library is written in Red/System, and uses a hybrid approach: it compiles what it can deduce statically and uses an embedded interpreter otherwise. The project roadmap includes a just-in-time compiler for cases in between, but this has not yet been implemented. Red seeks to remain independent of any other toolchain; it does its own code generation. It is therefore possible to cross-compile Red programs from any platform it supports to any other, via a command-line switch. Both Red and Red/System are distributed as open-source software under the modified BSD license. The runtime library is distributed under the more permissive Boost Software License. As of version 0.6.4, Red includes a garbage collector, the "Simple GC". Introduction Red was introduced in the Netherlands in February 2011 at the Rebol & Boron conference by its author Nenad Rakočević. In September 2011, the Red programming language was presented to a larger audience during Software Freedom Day 2011. Rakočević is a long-time Rebol developer known as the creator of the Cheyenne HTTP server. Features Red's syntax and semantics are very close to those of Rebol. Like Rebol, it strongly supports metaprogramming and domain-specific languages (DSLs) and is therefore a highly efficient tool for dialecting (creating embedded DSLs). Red includes a dialect called Red/System, a C-level language which provides system programming facilities. Red is easy to integrate with other tools and languages as a DLL (libRed) and is very lightweight (around 1 MB). It is also able to cross-compile to various platforms (see the Cross compilation section below) and create packages for platforms that require them (e.g., .APK on Android). Red also includes a fully reactive cross-platform GUI system based on an underlying reactive dataflow engine, a 2D drawing dialect comparable to SVG, compile-time and runtime macro support, and more than 40 standard datatypes.
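As a rough illustration of the GUI dialect mentioned above, the following is a minimal sketch of a Red/View program; it is not taken from the Red documentation, the window title, field name and button label are arbitrary, and it deliberately leaves out the reactive-dataflow features rather than asserting their exact syntax.
Red [
    Title: "Minimal GUI sketch"
    Needs: 'View                       ; pull in the View engine when compiled
]
view [                                 ; open a window described by a VID block
    title "Greeter"
    text "Name:" name: field "world"   ; a labelled input field, named for later access
    button "Greet" [                   ; the block runs when the button is clicked
        print ["Hello," name/text]
    ]
]
Under the same caveat, cross-compiling such a script with the command-line switch mentioned earlier would look something like red -r -t Windows greeter.red, where -r requests a standalone executable and -t selects the target platform; the exact flag names and target identifiers are assumptions and should be checked against the compiler's help output.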
Goals The following is the list of Red's Goals as presented on the Software Freedom Day 2011: Simplicity ("An IDE should not be necessary to write code.") Compactness ("Being highly expressive maximizes productivity.") Speed ("If too slow, it cannot be general-purpose enough.") Be "Green", Have a Small Footprint ("Because resources are not limitless.") Ubiquity ("Spread everywhere.") Portability, Write once run everywhere ("That’s the least expected from a programming language.") Flexibility ("Not best but good fit for any task!")
Development Red's development is planned to be done in two phases: Initial phase: Red and Red/System compilers written in Rebol 2 Bootstrap phase: Red and Red/System compilers complemented by a Red JIT-compiler, all written in Red
Cross compilation Red currently supports the following cross-compilation targets: MS-DOS: Windows, x86, console (and GUI) applications Windows: Windows, x86, GUI applications Linux: Linux, x86 Linux-ARM: Linux, ARMv5, armel (soft-float) Raspberry Pi: Linux, ARMv5, armhf (hard-float) FreeBSD: x86 Darwin: OS X Intel, console (and GUI) applications Syllable: Syllable OS, x86 Android: Android, ARMv5 Android-x86: Android, x86 (Note: This list will increase with time and should therefore be considered as incomplete.)
Hello World!
Red [Title: "Simple hello world script"]
print "Hello World!"
Factorial example IMPORTANT: These are intended as syntax examples. Until Red has 64-bit support, the integer example will overflow a 32-bit integer very quickly. Changing that to `float!` will go farther, but these are merely to show the syntax of the language. The following is a factorial example in Red:
Red [Title: "A factorial script"] ; Note: The title is optional.
factorial: func [
    x [integer!] ; Giving the type of an argument in Red is optional
][
    either x = 0 [1][x * factorial x - 1]
]
The following is the same factorial example in Red/System (in this very simple case, the source code is very similar to Red's version):
Red/System [Title: "A factorial script"]
factorial: func [
    x [integer!]       ; This is compulsory in Red/System
    return: [integer!] ; This is compulsory in Red/System
][
    either x = 0 [1][x * factorial x - 1]
]
See also Rebol References External links Red Programming Language on GitHub Redprogramming.com Functional languages Systems programming languages Extensible syntax programming languages Domain-specific programming languages Software using the BSD license Software using the Boost license Programming languages created in 2011 2011 software High-level programming languages Homoiconic programming languages
1220345
https://en.wikipedia.org/wiki/EPSXe
EPSXe
ePSXe (enhanced PSX emulator) is a PlayStation video game console emulator for x86-based PC hardware with Microsoft Windows and Linux, as well as devices running Android. It was written by three authors, using the aliases calb, _Demo_ and Galtor. ePSXe is closed source with the exception of the application programming interface (API) for its plug-ins. Development For half a year, ePSXe was developed in private. When released on October 14, 2000, ePSXe was a revolution in the PSX emulation scene, boasting higher compatibility and performance than other PlayStation emulators of the time. After ePSXe 1.6.0 was released on August 5, 2003, its development seemed to halt, with rumors claiming that the source code had been lost due to a hard disk failure. However, on April 5, 2008, the developers of ePSXe made a public statement revealing that in the summer of 2007, they had decided to continue development of the emulator, due to encouragement from users. On May 24, 2008, ePSXe version 1.7.0 was released. After another hiatus, the developers returned on August 30, 2012, announcing the release of ePSXe for Android, as well as saying that version 1.8.0 of ePSXe for Windows was in testing. This version was released on November 9, 2012. Features As with many modern emulators, ePSXe makes use of plug-ins to emulate GPU, SPU (sound), and CD-ROM drive functions, a model first established in PSEmu Pro. Games can be loaded from the computer's CD drive or from one of many types of CD images directly from the user's hard drive. A patching feature allows the user to apply game patches. Games that do not necessarily run properly, or even start at all, can be fixed and played via the use of ePSXe patch files in .ppf format. Not all games prone to bugs have .ppf patches written for them. Until version 1.9.25, ePSXe could only function with an image of an official Sony PlayStation BIOS. Since the various PlayStation BIOS images are copyrighted by Sony, it is illegal to distribute them. For this reason, ePSXe does not come bundled with any of the PlayStation BIOS images, requiring the user to provide one for the emulator. Version 1.9.25 added HLE BIOS support, allowing it to mimic the effect of the PlayStation's BIOS, although compatibility is currently lower than with an official BIOS. Plug-ins GPU: Most GPU plug-ins run with either Direct3D, OpenGL, or the Glide API, and are available as freeware or open-source. Many GPU plug-ins require game-specific hacks to run certain games. SPU: The SPU plug-ins can emulate everything from music to sound effects, with varying degrees of success depending on the plug-in settings and on the particular plug-in being used CD-ROM: ePSXe comes with a core CD-ROM plug-in, but many others are available for freeware download and many can emulate up to seven different types of read modes Input: The core plug-in is sufficient, but there are others that allow for more functionality. Compatibility ePSXe is able to run most PlayStation games somewhat accurately. Few games run flawlessly without extensive configuration and trial-and-error testing. If a game does not run successfully, patches written for the game in question can be used, though few games have patches available.
System requirements PC version ePSXe's system requirements are: Minimum system requirements: Intel Pentium III CPU 256 MB of RAM 16 MB VRAM x16 CD-ROM Windows XP SP3 DirectX 8 Recommended system configuration: Intel Pentium 4 CPU 512 MB of RAM 32 MB VRAM DVD-ROM Windows XP SP3 DirectX 9 Android version Processor: ARM or x86 (Intel Atom) Operating system: Android 2.3.3 or newer 256 MB of RAM Reception Retro Gamer called ePSXe "the best free PlayStation emulator". Techtree stated "ePSXe is the best free PlayStation emulator". Pcnexus called it "the easiest PS1 emulator for android with downloadable cheat codes and great game compatibility with PS1 roms". See also Mednafen bleem! PCSX List of video game emulators References External links Download version 2.0.5 for Windows Download version 2.0.5 for Linux ePSXe for Android website Android emulation software Linux emulation software PlayStation emulators Windows emulation software Proprietary software
32104138
https://en.wikipedia.org/wiki/SS%20Ben-my-Chree%20%281965%29
SS Ben-my-Chree (1965)
TSS (RMS) Ben-my-Chree (V) was the second of four side-loading car ferries ordered by the Isle of Man Steam Packet Company. Built in 1965, she was the last of their vessels designed with two classes of passenger accommodation and the fifth company vessel to bear the name. She operated until 1984 and was broken up in 1989. History Ben-my-Chree (No.186355) was constructed by Cammell Laird at a cost of £1,400,000. She was the second of four car ferries ordered by the Isle of Man Steam Packet Company and was virtually identical to her sister Manx Maid, built four years earlier. She was launched by Mrs Margaret Brownsdon on Friday, 10 December 1965, and, having gone through her trials and been accepted, entered service with Capt J.E. Quirk in command and J.S. Kennaugh as Chief Officer. Ben-my-Chree made her maiden voyage from Liverpool to Douglas on Thursday, 12 May 1966. This was a happy occasion for the Steam Packet, for as the "Ben" slipped away from the Prince's Landing Stage and proceeded downstream towards the sea, she received a rousing salute from her sister Manx Maid, lying alongside the Liverpool terminal. The IOMSPCo Ltd's attitude towards, and relationship with, the Isle of Man Harbour Board bore significantly on the design of the Manx Maid and Ben-my-Chree, as the two bodies were unable to agree on the building of, and who would pay for, the linkspans required for a new RO-RO vessel. However, it did result in a unique design that served the Isle of Man very successfully until the last of the side-loaders was withdrawn in 2005. Ultimately, what condemned these streamlined but archaic ships was not any lack of ability but their lack of efficiency, as the economics of operating turbine steamers were eclipsed by those of motor vessels. Steam Packet staff nicknamed both the "Maid" and the "Ben" "the money boxes" in the 1960s and 1970s, as they earned the Steam Packet all its profits through the carriage of cars and vans. Layout Ben-my-Chree was the last of the Company's ships designed with two classes of passenger accommodation. She only operated in this configuration until the beginning of 1967, when all the Company's ships, including Ben-my-Chree, were converted to single class. The two-class system had been employed for 136 years. She had a length of 325', a beam of 50' and a depth of 18', with accommodation for 1,400 passengers and a crew of 60. Ben-my-Chree had a gross tonnage of 2,762 and a designed service speed of 21 knots. The machinery lay-out was similar to that of Manx Maid. During her annual overhaul in 1978, she was fitted with a bow thruster, driven by a steam turbine of 500 brake horse power. Also at this time, she was fitted with the whistle from a scrapped fleet-mate, a traditional organ-type whistle whose mighty reverberation was very much appreciated by lovers of Manx ships. Service life "The Ben", as she was always known, entered service in 1966, the year of the seamen's strike. This resulted in her being laid up for the first part of the summer. She serviced the main Douglas-Liverpool route on her own for part of April 1975 while one fleet-mate was having her annual overhaul and another was forced to be idle through an industrial dispute. She was generally one of the winter boats, i.e. out of Douglas at 09:00 and back at 11:00 from Liverpool the following day, a role she filled until another vessel entered service. The winter job, and the boats that did it, were sought after by the ratings, as they earned more when sailing than when laid up.
Disposal During the early 1980s, as a result of the formation of Manxline with its RO-RO vessel, the side-loading car ferries of the Steam Packet were seen as becoming increasingly inefficient, and the decision was made to retire both the Manx Maid and the Ben-my-Chree. Whilst their higher fuel consumption was initially cited as the reason for disposal, the reality was that the steam plants were very expensive to maintain and simply not as efficient. The steamers averaged 9 tons of fuel on a Douglas-Liverpool trip, whilst the motor ships used less than 4. A 30% price increase in heavy-grade fuel oil during 1984 finally rendered the vessels totally uneconomic. Ben-my-Chree made her final voyage under Steam Packet ownership on 19 September 1984, ten days after her older sister. However, this was not quite the end for her, and in June 1985 she was chartered back from her new owners to cover the shortfall in capacity for the busy TT motorcycle races. Ben-my-Chree was finally laid up by Capt. Hall in Vittoria Dock, Birkenhead, on 10 June 1985. In August 1989, she was sold to the Spanish breakers Cantabra Metalurgica S.A. of Santander, who broke her up, still bearing the name Ben-my-Chree, between 18 September and 4 December 1989. Gallery References Bibliography 1965 ships Ships built on the River Mersey Ferries of the Isle of Man Ships of the Isle of Man Steam Packet Company Steamships of the United Kingdom Merchant ships of the United Kingdom Passenger ships of the United Kingdom
1915099
https://en.wikipedia.org/wiki/Pat%20Haden
Pat Haden
Patrick Capper Haden (born January 23, 1953) is a former athletic director of the University of Southern California (USC) in Los Angeles, a post he held from August 2010 to June 2016. He played quarterback for the USC Trojans before playing professionally in the National Football League (NFL) for the Los Angeles Rams from 1976 through 1981. He also played in the World Football League (WFL) for the Southern California Sun in 1975. Haden is a Rhodes Scholar, was a practicing attorney from 1982 to 1987, and was a partner at Riordan, Lewis & Haden, a private equity firm, from 1987 to 2010. He is also known for his work as a sportscaster, beginning with CBS Sports in 1982 and ending his career in that field as a color commentator for NBC Sports' Notre Dame football coverage. Biography Early years Born in Westbury, New York, to working-class Irish American parents, Haden is the fourth of five children. He had a close relationship with his mother, Helen Haden, who told her children to "Live your life so that you have standing room only at your funeral." As a boy, Haden had a paper route, then worked at a shoe store where he also pushed accessories in order to earn an extra commission. He had the same mentality in sports, where he used the smarts and toughness he gained from keeping up with his older brothers to compensate for physical shortcomings. By high school, his parents had moved to Southern California. Playing career High school career Haden played high school football at Bishop Amat Memorial High School in La Puente, California, where he became starting quarterback. He became close friends with teammate J.K. McKay, son of then-USC football coach John McKay; the two were opposites: J.K. was quick-witted and easygoing, while Haden was not. Haden and McKay shared the CIF Southern Section Player of the Year award in 1970. When Haden's parents had to move again, he stayed with the McKays for his senior year of high school. He was highly sought after and was recruited by many schools, including Notre Dame. Haden was inducted into the National High School Hall of Fame in 1995. College career Prior to college football, Haden and McKay had won the CIF championship game in overtime against Lakewood High School. The game was played at the LA Coliseum, where Haden would go on to lead the Trojans to many victories. Haden and J.K. McKay joined the highly regarded USC Trojans under head coach John McKay, living with a group of friends at an apartment building just off campus. At USC, he made three Rose Bowl appearances and won two national championships. In the final game of his college career, the 1975 Rose Bowl, he was named co-Most Valuable Player. Haden also was a recipient of the Today's Top V Award in 1975, which at the time honored five (now ten) senior student-athletes. He was inducted into the GTE Academic All-American Hall of Fame in 1988. He was inducted into the Rose Bowl Hall of Fame in 1995. An athletic and academic standout, he was awarded a Rhodes Scholarship. In 1973, he threw for 1,832 yards with 13 TD vs 11 INT. In 1974, he threw for just 988 yards (in part due to 98 fewer pass attempts) but still threw 13 TD vs 11 INT. Professional career Haden played one season in the World Football League, its last, for the Southern California Sun, which allowed him to attend school in England at Oxford University under his Rhodes Scholarship.
His decision to go to the United Kingdom for schooling hurt his NFL possibilities, as did a lack of height () and arm strength, and he dropped to the seventh round of the NFL Draft. Haden made the Los Angeles Rams' roster in 1976 as the third quarterback, behind James Harris and Ron Jaworski. When both Harris and Jaworski were injured, Haden was pressed into duty in the second game of the season. Haden responded by playing mostly mistake-free football, letting running backs Lawrence McCutcheon and John Cappelletti shoulder the offensive load and passing only occasionally. Harris returned to the lineup as starting quarterback and Haden went back to a backup role. In a Monday night game against the Cincinnati Bengals, Harris played poorly in a 20-12 loss, and Rams head coach Chuck Knox was ordered by team owner Carroll Rosenbloom to bench Harris in favor of Haden. This is documented in Knox's autobiography Hard Knox: The Life of an NFL Coach and William Rhoden's Third and a Mile: The Trials and Triumph of the Black Quarterback. At the time of the quarterback change, Harris was the top-rated passer of the National Football Conference. The NFL records show that Harris finished as the NFC's top-rated passer of 1976. Despite the change, the Rams went on to win the NFC Western Division title and a 14–12 upset of the defending NFC champion Dallas Cowboys in the opening round of the NFC playoffs, but the Rams fell to the Minnesota Vikings in the NFC championship game. The Rams revamped their quarterback position for the 1977 season. Harris and Jaworski were traded, and the Rams acquired veteran QB Joe Namath from the New York Jets. Namath started the first four games, but it was evident his knees couldn't take it anymore, so the Rams went back to Haden. The Rams took eight victories in the last 10 games, won the NFC West and made the playoffs again. Their first-round opponent was the Vikings at home in the rain, but the Rams lost 14–7 in the Mud Bowl. Haden's small hands impaired his ability to grip the wet muddy ball. Haden completed 14 of 32 passes for 130 yards and one touchdown with 3 interceptions while Viking QB Bob Lee was only able to complete 5 of 10 passes for 57 yards and no touchdowns or interceptions. Haden was rewarded with the starting position from day one in 1978. The Rams started fast, winning their first eight games, but tailed off to 12-4, and won their third straight NFC West Division title. Haden threw a pair of touchdown passes and led the Rams to a 34-10 victory against the Vikings in the first round of the playoffs. The defending champion Dallas Cowboys walloped the Rams 28-0 in the 1978 NFC Championship Game on their way to Super Bowl XIII. Haden was voted the Washington D.C. Touchdown Club NFC Player of the Year of the 1978 season. Haden began the 1979 season as the starter, but a broken finger midway through the season sidelined him in favor of Vince Ferragamo, who led the Rams to Super Bowl XIV. Because of Rams' coach Ray Malavasi's policy of giving an injured starter his job back, Haden began the 1980 season as the starter with Ferragamo as the backup. Haden was injured in the Rams season opener against the Detroit Lions. Ferragamo took over as the starter and didn't relinquish the job (despite Haden returning mid-season), passing for a then Rams-record 30 touchdown passes. Ferragamo, however, bolted the Rams for the Canadian Football League. Haden went into the 1981 season as starter, but was injured midway through the season. 
After the season, while recovering from knee surgery and contemplating retirement, he got a call from CBS about a broadcast job and decided to take it. Broadcasting career After spending a few years at CBS, Haden was hired as the color commentator for NBC Sports' coverage of Notre Dame college football, and held similar duties for their Arena Football coverage from 2003 through 2006 and Fox Sports' Bowl Championship Series coverage in 2008. His position as the Notre Dame color commentator is ironic in that he, as USC's quarterback in 1974, helped orchestrate one of Notre Dame's greatest losses (and, conversely, one of USC's greatest wins, known as "The Comeback"). The Trojans won 55–24 despite trailing 24–0 at one point and 24–6 at halftime. Haden admits that his mother wanted him to go to Notre Dame and always lights a candle in her memory at the grotto whenever he is on campus. Haden also was a color man for CBS Sports' college football coverage (being one of a three-man booth with former Notre Dame coach Ara Parseghian and play-by-play man Brent Musburger, and later working with Jim Nantz), and provided color commentary for TNT's Sunday night football coverage and Westwood One's radiocasts, primarily working the Sunday night schedule which immediately followed his TV commitments (at the time, TNT and ESPN split the Sunday night games between them, with TNT broadcasting the first half of the season and ESPN the second half). Haden also called one NFL on CBS game in 1988 and some NFL on CBS games in 1989 during the busier weeks of the 1988 NFL Season and 1989 NFL Season, when the network's seven announcing teams weren't enough to cover the network games. Private equity career In 1987, he joined Riordan, Lewis & Haden, a private equity firm based in Los Angeles that focuses on making investments in growing, profitable businesses with $20 – 200 million in revenue. He has served as a director of a number of RLH portfolio companies including TetraTech, Systems Management Specialists, Data Processing Resources Corporation (formerly NASDAQ: DPRC), The Apothecary Shops, and Adohr Farms. Haden remained a partner at RLH until assuming the position of Athletic Director for the University of Southern California. Athletic director Haden replaced Mike Garrett as the USC Trojans athletic director on August 3, 2010. On September 8, 2014, he and USC football coach Steve Sarkisian were reprimanded by Pac-12 Conference commissioner Larry Scott for attempting "to influence the officiating, and ultimately the outcome of a contest" during the September 6 game with Stanford. Haden was fined $25,000. On October 11, 2015, Haden placed Sarkisian on leave after a series of incidents culminating in the coach missing a practice during the season. The next day, Haden announced that Sarkisian had been fired. On February 5, 2016, Haden announced that he would be stepping down as USC's athletic director effective June 30. College Football Playoff Selection Committee Haden was one of 13 members of the inaugural College Football Playoff selection committee. In September 2014 Haden received criticism and calls to resign from the selection committee by charging onto the field in order to argue with officials regarding a series of penalties during the third quarter of USC's 13-10 victory against Stanford. Education Haden received a B.A., magna cum laude, Phi Beta Kappa from the University of Southern California, a J.D. from Loyola Law School and a B.A. in economics from the University of Oxford as a Rhodes Scholar. 
Community activities Haden has served on numerous nonprofit boards. He sits on the boards of the Rose Hills Foundation and the Fletcher Jones Foundation, and has also served on the boards of non-profit organizations including the University of Southern California, the Good Samaritan Hospital, Boys Town of Southern California, the Multiple Sclerosis Society of Los Angeles, and the Crippled Children's Society of Los Angeles. He is former chair of the March of Dimes Reading Olympics in Los Angeles and the Boys Life National Illiteracy Campaign. Haden is a Board Member for the Lott IMPACT Trophy, which is named after Ronnie Lott and is awarded annually to college football's Defensive IMPACT Player of the Year. Haden was awarded the Ambassador Award of Excellence by the LA Sports & Entertainment Commission in 2003 for his community involvement. References External links American football quarterbacks American Rhodes Scholars 1953 births Living people Arena football announcers ArenaBowl broadcasters College football announcers College Football Playoff Selection Committee members Los Angeles Rams players Loyola Law School alumni National Conference Pro Bowl players National Football League announcers Notre Dame Fighting Irish football announcers People from La Puente, California People from Westbury, New York Players of American football from California Players of American football from New York (state) Private equity and venture capital investors Southern California Sun players Sportspeople from Los Angeles County, California Sportspeople from Nassau County, New York USC Trojans athletic directors USC Trojans football players
27806408
https://en.wikipedia.org/wiki/Hsinchun%20Chen
Hsinchun Chen
Hsinchun Chen is the Regents' Professor and Thomas R. Brown Chair of Management and Technology at the University of Arizona and the Director and founder of the Artificial Intelligence Lab (AI Lab). He also served as lead program director of the Smart and Connected Health program at the National Science Foundation from 2014 to 2015. He received a B.S. degree from National Chiao Tung University in Taiwan, an MBA from SUNY Buffalo and an M.S. and Ph.D. in Information Systems from New York University. Research Chen's research primarily focuses on data/web/text mining and knowledge management techniques. He has applied his work in the fields of web computing, search engines, digital libraries, health informatics, security informatics, biomedical informatics and business intelligence. His most notable contributions are in management information systems, digital libraries, health informatics, and security informatics. He was the founding Editor-in-Chief (EIC) of the ACM Transactions on Management Information Systems (ACM TMIS), 2009–2014, and is also founding editor-in-chief of the Springer journal Security Informatics. He serves or has served on ten editorial boards including: IEEE Intelligent Systems ACM Transactions on Information Systems IEEE Transactions on Systems, Man, and Cybernetics Journal of the American Society for Information Science and Technology Decision Support Systems International Journal on Digital Library Chen has also served as a Scientific Counselor/Advisor to the United States National Library of Medicine (NLM), Academia Sinica in Taiwan, and the National Library of China. Productivity In 2005, Chen was ranked #8 in publication productivity in Information Systems (CAIS 2005), #1 in Digital Library research (IP&M 2005), and #9 in a list of the "Top 32" most productive information systems researchers (EJIS 2007) in three bibliometric studies. In 2008, Chen was recognized as one of the top-ranked authors in the Management Information Systems field. His "H-index" score from Google Scholar is 90 (as of December 2017). Chen has authored or edited more than 40 books and conference proceedings, 30 book chapters, 280 SCI journal articles, and 180 refereed conference articles. His books include: Dark Web (2012; ), Infectious Disease Informatics (2010; ), Sports Data Mining (2010; ), Mapping Nanotechnology Innovations and Knowledge (2009; ), Terrorism Informatics: Knowledge Management and Data Mining for Homeland Security (2008, ), Digital Government: E-Government Research, Case Studies, and Implementation (2008; ); Intelligence and Security Informatics for International Security: Information Sharing and Data Mining (2006; ); and Medical Informatics: Knowledge Management and Data Mining in Biomedicine (2005; ), all published by Springer. He spearheaded the development of the International Conference of Asian Digital Library (ICADL) and the IEEE International Conference on Intelligence and Security Informatics (ISI). Development of COPLINK Software and Commercialization In 1997, Chen began development of the COPLINK software, which supports information sharing, analysis, and visualization of law enforcement data. He founded Knowledge Computing Corporation (KCC) to commercialize COPLINK, which then entered the market in 2002 and is now used in thousands of law enforcement agencies across the United States. COPLINK's functions have been described in numerous local, regional, and national sources including the New York Times, the Boston Globe, and the Denver Post. 
In 2003, COPLINK was named a finalist for the IACP/Motorola Webber Seavey Award: Quality in Law Enforcement. In July 2009, KCC merged with i2 Inc., a market leader in security analytics. In September 2011, the KCC/i2 firm was acquired by IBM for $500 million. In October 2017, IBM sold COPLINK to Forensic Logic. The "Dark Web" Project Chen is also the director of the Artificial Intelligence Lab's project, "Dark Web Terrorism Research," the goal of which is to develop and use automated tools to systematically collect and analyze terrorist content from the Internet. One software program developed as part of the project, "Writeprint," aims to support authorship identification of online postings and other writings. The Dark Web project is supported by grants from the National Science Foundation and other federal agencies, and has been recognized in the national press including Science News, Wired Magazine, Discover Magazine, and the Toronto Star. The "Hacker Web" Project and AZSecure Program Chen was awarded a grant of $1.2M from the National Science Foundation (NSF) for "Securing Cyber Space: Understanding the Cyber Attackers and Attacks via Social Media Analytics" (NSF SES #1314631). Co-PIs are Dr. Salim Hariri (UA; ECE); Dr. Ronald Breiger (UA; Sociology); and Dr. Tom Holt (MSU; Criminology). The Hacker Web project aims to investigate questions regarding hacker behavior, markets, and community structure, among other things. The intended result is a computational framework including algorithms, software, and tools for access and use by other researchers to examine how cyber attacker groups form, develop, and spread their ideas. The project is funded through August 2016. Chen was also awarded $4.2M from NSF (with UA co-PIs Drs. Salim Hariri, Mark Patton, and Paulo Goes) for the AZSecure Cybersecurity Scholarship-for-Service program at The University of Arizona (NSF DUE #1303362). Project funds provide scholarships for undergraduate, graduate, and doctoral students studying cyber security, in exchange for government service. Students are recruited from across Arizona, including from military bases, community colleges, and tribal colleges. The program places a special emphasis on minority recruitment and retention. Program components are described as including student mentoring and development, independent study and research, internship and post-graduation placement assistance, and program assessment and evaluation. The program is aimed at helping to fill expected federal vacancies in cyber security and related positions over the next 5 years. The Smart Health Project Chen has developed the "Smart Health" project in his Lab, which lists the following research objectives: (1) developing technical approaches to support cyber-enabled patient empowerment; (2) developing personalized healthcare and community mapping techniques; and (3) conducting theory-driven assessment and evaluation research. DiabeticLink research includes information extraction, sentiment analysis, and social network analysis to be integrated into the DiabeticLink portal. SilverLink is being developed in the U.S., China, and Taiwan to support senior citizens living independently. SilverLink's capabilities include a home-health system that provides fall detection, monitors daily health status and activity levels, and can alert caregivers in the event of anomalous situations. Chen also founded Caduceus Intelligence Corporation (CIC), a UA spinoff company working in the area of healthcare information systems. 
CIC lists web analytics, smart health, patient support, and knowledge discovery as its areas of expertise. Data Infrastructure Building Blocks for Intelligence and Security Informatics As part of ongoing work in intelligence and security informatics, Chen was awarded over $1.4M from NSF for a three-year pilot demonstration project to make available data and analysis tools to serve information and cyber security researchers. The funds are to support the creation of an archive of open source and other data, such as network data, data from honeypots, hacker forums, phishing data, and security-related tweets, in numerous languages including English, Arabic, French, Russian, and more. The project is also intended to provide tools and support for social network analysis and data visualization. The project is intended to serve individuals working in "academia, government, and industry to share data, information, and expertise relevant to research in intelligence and security informatics." Awards He is a Fellow of ACM, IEEE and AAAS, and received the IEEE Computer Society Technical Achievement Award in 2006 and the INFORMS Design Science Award in 2008. A paper of which he was a co-author was given the MISQ Best Paper Award for 2010 ("Detecting Fake Websites: The Contribution of Statistical Learning Theory", by Ahmed Abbasi, Zhu Zhang, David Zimbra, Hsinchun Chen, and Jay F. Nunamaker Jr., published in volume 34, issue 3 of MIS Quarterly). Also in 2010, he was recognized as "Distinguished Alumnus" of the National Chiao Tung University in Taiwan and named a Distinguished University Chair Professor of National Taiwan University. In 2011, he received the IEEE 2011 Research Achievement and Leadership Award in Intelligence and Security Informatics and was a finalist for the AZ Tech Council's Governor's Innovation of the Year Award. The Arizona Daily Star recognized his accomplishments as a top scientist in its 2012 series, "Centennial Salute to Science". In March 2013 he was named the Thomas R. Brown Chair in Management and Technology at the UA, as well as the UA's 2013 Innovator of the Year. In May he was inducted as a University of Arizona Regents' Professor. He was elected as a China National 1000-Elite Chair Professor at Tsinghua University in 2013. In 2015 he was named a fellow of the Association for Computing Machinery "for contributions to the research and development of security informatics and health informatics systems." Notes and references External links AI Lab website Living people Taiwanese computer scientists National Chiao Tung University alumni New York University alumni University of Arizona faculty Information systems researchers Fellows of the Association for Computing Machinery Year of birth missing (living people) University at Buffalo alumni Fellows of the American Academy of Arts and Sciences Taiwanese expatriates in the United States Fellow Members of the IEEE
3743092
https://en.wikipedia.org/wiki/AppleSingle%20and%20AppleDouble%20formats
AppleSingle and AppleDouble formats
AppleSingle Format and AppleDouble Format are file formats developed by Apple Computer to store Mac OS "dual-forked" files on the Unix filesystem being used in A/UX, the Macintosh platform's first Unix-like operating system. AppleSingle combined both file forks and the related Finder meta-file information into a single file, whereas AppleDouble stored them as two separate files. Support for the formats was later added to Unix software such as NFS and MAE, but they saw little use outside this small market. AppleSingle is similar in concept to the more popular MacBinary format, in that the resource and data forks are combined with a header containing the Finder information. In fact, the format is so similar that it might seem there was no reason why Apple did not simply use MacBinary instead, which by that point was widely known and used. Some not-so-obvious reasons are explained in an Internet Draft. The format was later assigned the MIME type application/applefile. AppleDouble leaves the data fork in its original format, allowing it to be edited by normal Unix utilities. The resource fork and Finder information, both proprietary and lacking editors under Unix, were combined into a second file. A MIME type, multipart/appledouble, was also assigned to AppleDouble. For sending to an AppleDouble-unaware system, the file was generally encoded using Base64, as opposed to being converted to AppleSingle. Usage Before Mac OS X, AppleSingle and Double had little presence in the Mac market, due largely to the small market share of A/UX. Nevertheless, they did force various file compression vendors to add support for the formats, and caused confusion for future MacBinary versions. Mac OS X revived the use of AppleDouble; on file systems such as NFS and WebDAV that don't natively support resource forks, Finder information, or extended attributes, that information is stored in AppleDouble format, with the second file having a name generated by prepending "._" to the name of the first file (thus, this information acts as a hidden file when viewed from a non-Apple Unix-based operating system). The files are sometimes moved to a separate directory called .AppleDouble. Metadata separation is also done by the OS X 10.3+ Finder's Zip compression and the corresponding command-line utility, with a copy of the AppleDouble metadata stored in a __MACOSX directory. Manipulation Unwanted "._" files can be removed using dot_clean -m on Mac OS X. Doing so also merges AppleDouble metadata with the corresponding files. AppleDouble files can be manually created through creative abuse of utilities that are AppleDouble-aware in combination with ones that are not. On other systems, a command-line utility and a Perl script, both part of Netatalk, can be used to view AppleDouble data. The macOS system provides a set of library functions that allow for packing and unpacking AppleSingle and AppleDouble files in C. References External links AppleSingle and AppleDouble format internals – from the original A/UX documentation RFC 1740 - MIME Encapsulation of Macintosh files – documents AppleSingle/Double in appendixes Mac Binary Converter, an open source tool for converting between different Macintosh file encodings. Mac::AppleSingleDouble, a Perl module for reading AppleSingle and AppleDouble files Mac OS X: Apple Double Format Creates File Name With the Prefix '._', Apple Inc. knowledgebase note about the AppleDouble format Apple Inc. software Archive formats
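Both formats share the same simple big-endian header layout (a magic number, a version number, 16 filler bytes, and a table of entry descriptors), documented in RFC 1740, which is listed in the external links above. The following Python sketch shows, under that assumption, how the header of such a file can be inspected; it is an illustrative reader rather than a complete implementation, and the "._photo.jpg" file name in the usage comment is only a hypothetical instance of the "._" naming convention described above.

import struct

# Magic numbers defined in RFC 1740 (all header fields are big-endian)
APPLESINGLE_MAGIC = 0x00051600
APPLEDOUBLE_MAGIC = 0x00051607

# A few well-known entry IDs from RFC 1740; others exist
ENTRY_NAMES = {1: "Data Fork", 2: "Resource Fork", 3: "Real Name",
               8: "File Dates Info", 9: "Finder Info"}

def read_entries(path):
    """Return (name, offset, length) for each entry in an AppleSingle/AppleDouble file."""
    with open(path, "rb") as f:
        magic, _version = struct.unpack(">II", f.read(8))   # magic (4 bytes), version (4 bytes)
        if magic not in (APPLESINGLE_MAGIC, APPLEDOUBLE_MAGIC):
            raise ValueError("not an AppleSingle/AppleDouble file")
        f.read(16)                                           # filler, ignored
        (count,) = struct.unpack(">H", f.read(2))            # number of entry descriptors
        entries = []
        for _ in range(count):
            # Each descriptor is 12 bytes: entry ID, offset, length
            entry_id, offset, length = struct.unpack(">III", f.read(12))
            entries.append((ENTRY_NAMES.get(entry_id, str(entry_id)), offset, length))
        return entries

# Hypothetical usage: inspect the sidecar file that macOS writes next to photo.jpg
# for name, offset, length in read_entries("._photo.jpg"):
#     print(name, offset, length)

An AppleDouble sidecar written this way typically contains only the Finder information and resource fork entries, since the data fork lives in the companion file, as described above.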
33355770
https://en.wikipedia.org/wiki/VALO-CD
VALO-CD
VALO-CD, a distribution of open-source software on a CD for Microsoft Windows, aims to spread knowledge and the use of open-source software. VALO-CD originates from Finland, and was originally available only in Finnish. Since version 7, an international version of VALO-CD has also been available in English. The acronym VALO means "Free/Libre Open-Source Software" in Finnish. Valo is also a Finnish word that means "light". Therefore, the name of the project has a connotation of bringing enlightenment. The Finnish version of VALO-CD had the special goal of concentrating on completely localized open-source software. The project started in 2008, and aims to support technological and economic development in Finland. Contents Version 8 of VALO-CD includes the following software: Writing and drawing: Dia, GIMP, Inkscape, LibreOffice, and Scribus. Internet: Firefox, Thunderbird, Pidgin, Vuze, and WinSCP. Recreation: Stellarium and Tux Paint. Multimedia: Audacity, MuseScore and VLC media player. Tools: 7-Zip, Evince, InfraRecorder, KeePass, Notepad++, PDFCreator, and TrueCrypt. Guides: Various user guides by FLOSS Manuals. See also OpenCD OpenDisc LoLiWin References External links Free software distributions Projects established in 2008 2008 establishments in Finland
36194808
https://en.wikipedia.org/wiki/Stephen%20Rosenbaum
Stephen Rosenbaum
Stephen Rosenbaum is an American visual effects artist and supervisor, and has worked for more than 25 years on numerous movie and commercial productions, including six that have won Academy Awards. He received two Academy Awards and two BAFTA Awards for his contributions to Forrest Gump and Avatar, and has played an integral role on such pioneering films as Jurassic Park, Terminator 2: Judgment Day, The Abyss, X2: X-Men United, Death Becomes Her, Contact and The Perfect Storm. Personal life Rosenbaum was raised in West Los Angeles and graduated from Palisades Charter High School, where he met and eventually married his high school sweetheart. He graduated from the University of California, Berkeley, and remains a Bay Area, California resident along with his wife and their two girls. Career Rosenbaum began his career in visual effects at the reconstructed Computer Graphics Department of Lucasfilm's effects division Industrial Light & Magic in 1989. The previous members of this department moved to the building next door and formed the company Pixar. This new group of artists received their first chance to make a computer-generated character when James Cameron asked them to create the Pseudopod water creature for The Abyss. Cameron followed with Terminator 2, and the group expanded the artist base and created one of the first digital manipulations of a human character. The artists continued to thrive with opportunities to animate and render the seminal dinosaurs in Jurassic Park. Rosenbaum then oversaw the removal of Lt. Dan's legs, the rephrasing of the words of presidents, and the fanciful feather animation in Forrest Gump. These films helped spark the rapid transition away from traditional film-processed visual effects and inspired a complete paradigm shift in visual effects and filmmaking methodologies and commercial digital imagery manipulation. Rosenbaum spent several years working on various projects at Weta Digital, and in 2007, he began work on Avatar. For two years, Rosenbaum worked with Cameron in Los Angeles during performance capture and in New Zealand during live action photography. For the third year of the project he returned to New Zealand to help complete the CGI on the movie. In 2010, Rosenbaum was hired by Digital Domain to start a character animation development group. He brought together some of the best computer graphics artists and technicians, and they built a modernized approach to creating physically and behaviorally realistic digital humans and creatures. In 2014, Rosenbaum directed the creation of a virtual Michael Jackson posthumously performing a previously unreleased song live at the 2014 Billboard Music Awards. He then spent the next two years creating the reimagined King Kong for the movie Kong: Skull Island. 
Filmography Kong: Skull Island (2017) (Visual Effects Supervisor: Legendary Pictures) Slave to the Rhythm (2014) (Visual Effects Supervisor) Jack the Giant Slayer (2013) (Visual Effects Supervisor: Digital Domain) Avatar (2009) (Visual Effects Supervisor: Weta Digital) The Water Horse (2007) (On-set Visual Effects Supervisor: Weta Digital) The Shaggy Dog (2006) (Visual Effects Supervisor: Walt Disney Pictures) The Chronicles of Riddick (2004) (Visual Effects Supervisor: Universal Pictures) I, Robot (2004) (On-set Visual Effects Supervisor: Weta Digital) X2: X-Men United (2003) (Visual Effects Supervisor: Cinesite) K-19: The Widowmaker (2002) (Visual Effects Supervisor: National Geographic Films) Big Trouble (2002) (Visual Effects Supervisor: Industrial Light & Magic) The Perfect Storm (2000) (Visual Effects Supervisor: Industrial Light & Magic) Snow Falling on Cedars (1999) (Visual Effects Supervisor: Industrial Light & Magic) Snow Falling on Cedars (1999) (Visual Effects Supervisor: Sony Pictures Imageworks) The Postman (1997) (Visual Effects Supervisor: Sony Pictures Imageworks) Contact (1997) (Visual Effects Supervisor: Sony Pictures Imageworks) Michael (1996) (Visual Effects Supervisor: Sony Pictures Imageworks) DreamWorks (Logo) (1995) (Computer Graphics Supervisor: Industrial Light & Magic) The Indian in the Cupboard (1995) (Computer Graphics Supervisor: Industrial Light & Magic) Disclosure (1994) (Computer Graphics Artist: Industrial Light & Magic) Forrest Gump (1994) (Computer Graphics Supervisor: Industrial Light & Magic) Jurassic Park (1993) (Computer Graphics Artist: Industrial Light & Magic) Death Becomes Her (1992) (Computer Graphics Artist: Industrial Light & Magic) Memoirs of an Invisible Man (1992) (Computer Graphics Artist: Industrial Light & Magic) Terminator 2: Judgment Day (1991) (Computer Graphics Artist: Industrial Light & Magic) The Hunt for Red October (1990) (Computer Graphics Artist: Industrial Light & Magic) Back to the Future Part II (1989) (Computer Graphics Artist: Industrial Light & Magic) The Abyss (1989) (Computer Graphics Artist: Industrial Light & Magic) Awards and nominations Academy Award 2010 Oscar – winner for Best Visual Effects – Avatar (2009) 1995 Oscar – winner for Best Visual Effects – Forrest Gump (1994) BAFTA 2010 BAFTA Film Award – winner for Best Special Visual Effects – Avatar (2009) 1995 BAFTA Film Award – winner for Best Special Visual Effects – Forrest Gump (1994) Academy of Science Fiction, Fantasy & Horror Films 2010 Saturn Award – winner for Best Special Effects – Avatar (2009) 2004 Saturn Award – nominee for Best Special Effects – X2: X-Men United (2003) 1998 Saturn Award – nominee for Best Special Effects – Contact (1997) References External links Best Visual Effects Academy Award winners Best Visual Effects BAFTA Award winners Visual effects supervisors Living people 1965 births Computer graphics professionals People from the San Francisco Bay Area University of California, Berkeley alumni
11915418
https://en.wikipedia.org/wiki/Li%20Huatian
Li Huatian
Professor Li Huatian (1922–2007) was one of the first computer scientists in China and was well known for his early contributions to the areas of computer science and computer networks. Life and work He was born on January 29, 1922, in Songjiang, Jiangsu (now Songjiang, Shanghai). He graduated from the National Southwestern Associated University with a degree in electrical engineering in 1943 and from Harvard University with a master's degree in 1948. He returned to China in 1949 to start his research and teaching career as a university professor. He taught at Dalian University of Technology and Northeastern University. He served as chair of the departments of Automation and Computer Science and as university vice president at Northeastern University. He also served as a vice president of IFAC, the International Federation of Automatic Control. He resigned his administrative positions in 1984 to return to full-time research and teaching. Around the same time, he founded the first PhD program in computer science in China. Until his final retirement in 1995, he published numerous journal papers in the areas of automatic control, computer theory, computer networks, and multimedia systems, and mentored many younger computer scientists in China. He also co-founded the Neusoft Group in the early 1990s with his PhD student Liu Jiren. Professor Li died on January 24, 2007, in Shenzhen, China. Chinese computer scientists Theoretical computer scientists Computer systems researchers Harvard University alumni Tsinghua University alumni 1922 births 2007 deaths Educators from Shanghai Northeastern University (China) faculty Dalian University of Technology faculty Scientists from Shanghai National Southwestern Associated University alumni
2656549
https://en.wikipedia.org/wiki/User%20story
User story
In software development and product management, a user story is an informal, natural language description of features of a software system. They are written from the perspective of an end user or user of a system, and may be recorded on index cards, Post-it notes, or digitally in project management software. Depending on the project, user stories may be written by different stakeholders, such as clients, users, managers, or the development team. User stories are a type of boundary object. They facilitate sensemaking and communication, and may help software teams document their understanding of the system and its context. History 1997: Kent Beck introduces user stories at the Chrysler C3 project in Detroit. 1998: Alistair Cockburn visited the C3 project and coined the phrase "A user story is a promise for a conversation." 1999: Kent Beck published the first edition of the book Extreme Programming Explained, introducing Extreme Programming (XP), and the usage of user stories in the planning game. 2001: Ron Jeffries proposed a "Three Cs" formula for user story creation: The Card (or often a post-it note) is a tangible physical token to hold the concepts; The Conversation is between the stakeholders (customers, users, developers, testers, etc.). It is verbal and often supplemented by documentation; The Confirmation ensures that the objectives of the conversation have been reached. 2001: The XP team at Connextra in London devised the user story format and shared examples with others. 2004: Mike Cohn generalized the principles of user stories beyond the usage of cards in his book User Stories Applied: For Agile Software Development, which is now considered the standard reference for the topic according to Martin Fowler. Cohn names Rachel Davies as the inventor of user stories. While Davies was a team member at Connextra, she credits the team as a whole with the invention. 2014: After a first article in 2005 and a blog post in 2008, Jeff Patton published the user-story mapping technique in 2014, which uses a systematic approach to improve the identification of user stories and to structure the stories so as to give better visibility to their interdependence. Principle User stories are written by or for users or customers to influence the functionality of the system being developed. In some teams, the product manager (or product owner in Scrum) is primarily responsible for formulating user stories and organizing them into a product backlog. In other teams, anyone can write a user story. User stories can be developed through discussion with stakeholders, based on personas, or simply made up. Common templates User stories may follow one of several formats or templates. The most common is the Connextra template, stated below. Mike Cohn suggested the "so that" clause is optional although still often helpful. As a <role> I can <capability>, so that <receive benefit> Chris Matts suggested that "hunting the value" was the first step in successfully delivering software, and proposed this alternative: In order to <receive benefit> as a <role>, I can <goal/desire> Another template based on the Five Ws specifies: As <who> <when> <where>, I want <what> because <why> Examples Screening Quiz (Epic Story) As the HR manager, I want to create a screening quiz so that I can understand whether I want to send possible recruits to the functional manager. 
Quiz Recall As a manager, I want to browse my existing quizzes so I can recall what I have in place and figure out if I can just reuse or update an existing quiz for the position I need now. Limited Backup As a user, I can indicate folders not to back up so that my backup drive isn't filled up with things I don't need saved. Usage A central part of many agile development methodologies, such as XP's planning game, user stories describe what may be built in the software project. User stories are prioritized by the customer (or the product owner in Scrum) to indicate which are most important for the system and will be broken down into tasks and estimated by the developers. One way of estimating is via a Fibonacci scale. When user stories are about to be implemented, the developers should have the opportunity to talk to the customer about them. The short stories may be difficult to interpret, may require some background knowledge, or the requirements may have changed since the story was written. User stories can be expanded to add detail based on these conversations. This can include notes, attachments and acceptance criteria. Acceptance criteria Mike Cohn defines acceptance criteria as "notes about what the story must do in order for the product owner to accept it as complete." They define the boundaries of a user story and are used to confirm when a story is completed and working as intended. The appropriate amount of information to be included in the acceptance criteria varies by team, program and project. Some may include 'predecessor criteria', such as "The user has already logged in and has already edited his information once". Some may write the acceptance criteria in typical agile format, Given-When-Then. Others may simply use bullet points taken from original requirements gathered from customers or stakeholders. In order for a story to be considered done or complete, all acceptance criteria must be met. Benefits There is no good evidence that using user stories increases software success or developer productivity. However, user stories facilitate sensemaking without undue problem structuring, which is linked to success. Limitations Limitations of user stories include: Scale-up problem: User stories written on small physical cards are hard to maintain, difficult to scale to large projects and troublesome for geographically distributed teams. Vague, informal and incomplete: User story cards are regarded as conversation starters. Being informal, they are open to many interpretations. Being brief, they do not state all of the details necessary to implement a feature. Stories are therefore inappropriate for reaching formal agreements or writing legal contracts. Lack of non-functional requirements: User stories rarely include performance or non-functional requirement details, so non-functional tests (e.g. response time) may be overlooked. Don't necessarily represent how technology has to be built: Since user stories are often written from the business perspective, once a technical team begins to implement, it may find that technical constraints require effort which may be broader than the scope of an individual story. Sometimes splitting stories into smaller ones can help resolve this. Other times, 'technical-only' stories are most appropriate. These 'technical-only' stories may be challenged by the business stakeholders as not delivering value that can be demonstrated to customers/stakeholders. 
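To make the Connextra template and the Given-When-Then acceptance criteria described above concrete, here is a small illustrative Python sketch of how a team might represent a story card and its confirmations in code; the class and field names are invented for this example and are not taken from any particular tool.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AcceptanceCriterion:
    """One Given-When-Then confirmation for a story."""
    given: str
    when: str
    then: str

    def __str__(self):
        return f"Given {self.given}, when {self.when}, then {self.then}."

@dataclass
class UserStory:
    """A story card following the Connextra 'As a / I can / so that' template."""
    role: str
    capability: str
    benefit: str
    criteria: List[AcceptanceCriterion] = field(default_factory=list)
    story_points: Optional[int] = None   # estimate, often chosen from a Fibonacci scale

    def card_text(self):
        return f"As a {self.role} I can {self.capability}, so that {self.benefit}."

# The "Limited Backup" example from the article, with one invented confirmation
story = UserStory(
    role="user",
    capability="indicate folders not to back up",
    benefit="my backup drive isn't filled up with things I don't need saved",
    criteria=[AcceptanceCriterion(
        given="a folder is marked as excluded",
        when="a backup runs",
        then="no file from that folder appears on the backup drive")],
    story_points=3,
)

print(story.card_text())
for criterion in story.criteria:
    print("-", criterion)

A structure like this keeps the card text, its confirmations, and the estimate together, loosely mirroring the "Three Cs" formula mentioned in the history above.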
Relationship to epics, themes and initiatives In many contexts, user stories are used and also summarized in groups for semantic and organizational reasons. The different usages depend on the point of view, e.g. looking from a user perspective as product owner in relation to features, or from a company perspective in relation to task organization. While some suggest using 'epic' and 'theme' as labels for any conceivable grouping of user stories, organization management tends to use them for strongly structuring and uniting workloads. For instance, Jira seems to use a hierarchically organized to-do list, in which the first level of tasks is named 'user story', the second level 'epics' (groupings of user stories) and the third level 'initiatives' (groupings of epics). However, initiatives are not always present in product management development and just add another level of granularity. In Jira, 'themes' exist (for tracking purposes) that allow items from different parts of the fixed hierarchy to be cross-related and grouped. In this usage, Jira shifts the meaning of themes toward an organizational perspective: e.g., how much time was spent developing theme "xyz". But another definition of themes is: a set of stories, epics, features, etc. for a user that forms a common semantic unit or goal. There is probably not a common definition because different approaches exist for different styles of product design and development. In this sense, some also suggest not using any kind of hard groups and hierarchies. Epic Large stories or multiple user stories that are very closely related are summarized as epics. A common explanation of epics is also: a user story that is too big for a sprint. Initiative Multiple epics or stories grouped together hierarchically, mostly known from Jira. Theme Multiple epics or stories grouped together by a common theme or semantic relationship. Sizing Common approaches to sizing user stories include: Story Points: User stories are often given a 'size' based on a team's opinion of how long the story might take to deliver; to do this, the team allocates a number of story points to the story. A story point is an arbitrary and inconsistent indicator of the team's opinion of size at a point in time. The points allocation is usually based on a Fibonacci scale. T-shirt sizes: This is similar to story points in that it is the team's opinion. It is a less granular indicator of size than story points. COSMIC Function Points: This is a second-generation ISO standard software sizing methodology suited to agile software development where not all requirements are known up-front. IFPUG Function Points: A first-generation functional sizing methodology. Story map A story map organises user stories according to a narrative flow that presents the big picture of the product. The technique was developed by Jeff Patton from 2005 to 2014 to address the risk of projects flooded with very detailed user stories that distract from realizing the product's main objectives. User story mapping uses workshops with users to first identify the main business activities. Each of these main activities may involve several kinds of users or personas. The horizontal cross-cutting narrative line is then drawn by identifying the main tasks of the individual user involved in these business activities. The line is kept throughout the project. 
But each new user story is either inserted into the narrative flow or related vertically to a main task. The horizontal axis corresponds to the coverage of the product objectives, and the vertical axis to the needs of the individual users. In this way it becomes possible to describe even large systems without losing the big picture. Story maps can easily provide a two-dimensional graphical visualization of the Product Backlog: At the top of the map are the headings under which stories are grouped, usually referred to as 'epics' (big coarse-grained user stories), 'themes' (collections of related user stories) or 'activities'. These are identified by orienting on the user's workflow or "the order you'd explain the behavior of the system". Vertically, below the epics, the actual story cards are allocated and ordered by priority. The first horizontal row is a "walking skeleton", and the rows below it represent increasing sophistication. User journey map A user journey map is intended to show the big picture, but for a single user category. Its narrative line focuses on the chronology of phases and actions that a single user has to perform in order to achieve his or her objectives. This makes it possible to map the user experience beyond a set of user stories. Based on user feedback, the positive and negative emotions can be identified across the journey. Points of friction or unfulfilled needs can be identified on the map. This technique is used to improve the design of a product by engaging users in participatory approaches. Comparing with use cases A use case has been described as "a generalized description of a set of interactions between the system and one or more actors, where an actor is either a user or another system." While user stories and use cases have some similarities, there are several differences between them. Kent Beck, Alistair Cockburn, Martin Fowler and others discussed this topic further on the c2.com wiki (the home of extreme programming). See also Kanban board Persona (user experience) Scenario (computing) Use case References Further reading Daniel H. Steinberg, Daniel W. Palmer, Extreme Software Engineering, Pearson Education, Inc. Mike Cohn, User Stories Applied, 2004, Addison Wesley. Mike Cohn, Agile Estimating and Planning, 2006, Prentice Hall. Business Analyst Time Payton Consulting, "How user stories are different from IEEE requirements" Software requirements Extreme programming Agile software development
1880157
https://en.wikipedia.org/wiki/Phantasmagoria%20%28video%20game%29
Phantasmagoria (video game)
Phantasmagoria is a point-and-click adventure horror video game designed by Roberta Williams for MS-DOS and Microsoft Windows and released by Sierra On-Line on August 24, 1995. It tells the story of Adrienne Delaney (Victoria Morsell), a writer who moves into a remote mansion and finds herself terrorized by supernatural forces. It was made at the peak of popularity for interactive movie games and features live-action actors and footage, both during cinematic scenes and within the three-dimensionally rendered environments of the game itself. It was noted for its violence and sexual content. Williams had long planned to design a horror game, but she waited eight years for software technology to improve before doing so. More than 200 people were involved in making Phantasmagoria, which was based on Williams' 550-page script, about four times the length of an average Hollywood screenplay. It took more than two years to develop and four months to film. The game was originally budgeted for $800,000, but it ultimately cost $4.5 million to develop and was filmed in a $1.5 million studio that Sierra built specifically for the game. The game was directed by Peter Maris and features a cast of twenty-five actors, all performing in front of a blue screen. Most games at the time featured 80 to 100 backgrounds, while Phantasmagoria includes more than 1,000. A professional Hollywood special effects house worked on the game, and the musical score includes a neo-Gregorian chant performed by a 135-voice choir. Sierra stressed that it was intended for adult audiences, and the company willingly submitted it to a ratings system and included a password-protected censoring option within the game to tone down the graphic content. Phantasmagoria was released on seven discs after multiple delays, but it was a financial success, grossing $12 million in its opening weekend and becoming one of the best-selling games of 1995. Sierra strongly promoted the game. It received mixed reviews, earning praise for its graphics and suspenseful tone while being criticized for its slow pacing and easy puzzles. The game also drew controversy, particularly due to a rape scene. CompUSA and other retailers declined to carry it, religious organizations and politicians condemned it, and it was refused classification altogether in Australia. The sequel Phantasmagoria: A Puzzle of Flesh was released in 1996, although Williams was not involved. Gameplay Phantasmagoria is a point-and-click adventure game that features live-action actors and cinematic footage, both during scenes between the gameplay and within the three-dimensionally rendered environments of the game itself. The game was made when interactive movie games were at the peak of their popularity, with the release of such popular computer games as Wing Commander III and Under a Killing Moon. In this single-player game, the player controls protagonist Adrienne Delaney, who is always on the screen. The player can click certain areas of the screen to control her movements or make her explore specific places and objects. The camera angles and rooms change depending on where the player moves Adrienne. The game includes only four mouse commands: look (which changes to "talk to" when selecting a person), pick up item, use item, and walk. The mouse cursor is always an arrow, unlike in most earlier Sierra On-Line games, in which the cursor could be changed to represent different functions (like walk, hand, and eye) by right-clicking. 
The cursor turns red when it passes over an area where the user can click to perform an action. Once the action has been completed, the cursor will not turn red again. Objects in the game can be picked up by clicking on them or interacting with them in a film sequence, after which the item automatically goes into the inventory. If an object is small enough that it can be easily missed, a pulsing glow occasionally surrounds it to draw attention to it. The user interface is a screen surrounded by a stone border, with buttons and eight inventory slots along the bottom. Adrienne can only hold eight items at once, and each black slot can contain an image of an item that the user can click on to retrieve or use within the game. In the middle of the inventory slots is a "P" button that brings up an options screen, allowing the player to save or restore games, change the game screen to full- or half-size, control the volume, or switch between a censored version of the game and an uncensored one, with more graphic content. A chapter gauge on this screen conveys how much progress remains in a given chapter. A fast-forward button, located above the options button, allows the user to skip past cinematic scenes. On the left side of the interface is a red skull button, which the user can click to receive hints. On the right side is a red button with a picture of a closed eye, which displays any inventory item that the user drags to the button. The display shows a close-up image that can be turned in multiple directions. Game designer Roberta Williams, co-founder of Sierra On-Line, deliberately made the Phantasmagoria gameplay and interface simple, unobtrusive, and intuitive so it would be more accessible and less frustrating for casual gamers. According to Arinn Dembo from Computer Gaming World, the game is focused more on the story and the frightening atmosphere than on a difficult gaming experience, and therefore the puzzles are relatively easy, logical, and straightforward. Plot Successful mystery novelist Adrienne Delaney (Victoria Morsell) and her photographer husband Don Gordon (David Homb), along with her pet cat Spaz, have just purchased a remote mansion off the coast of a small New England island previously owned by a famous 19th-century magician, Zoltan "Carno" Carnovasch (Robert Miano), whose five wives all died mysteriously. Adrienne hopes to find an inspiration for her next novel in her new home, but starts having nightmares immediately upon moving in. She is comforted by the loving and supportive Don. Adrienne explores the estate, making mysterious discoveries like strange music, warnings written on her computer, and ominous messages from a fortune-teller automaton. Unbeknownst to the happy couple, Carno had practiced black magic when he lived in the mansion and had summoned an evil demon that possessed him and caused him to murder his wives. During her exploration of the grounds, Adrienne finds a secret chapel hidden behind a bricked-off fireplace. After opening a locked box atop an altar, Adrienne unknowingly releases the demon that possessed Carno, which possesses Don. Don starts acting menacingly toward Adrienne and drinking heavily, culminating in him raping her later on. Adrienne meets Harriet Hockaday (V. Joy Lee), a superstitious vagrant, and her strong but dim-witted son, Cyrus (Steven W. Bailey), who are secretly living in a barn on the estate. After Adrienne agrees to let them stay, they volunteer to help around the mansion, though Don disapproves. 
When a technician named Mike (Carl Neimic) visits the mansion to install the phone line, Don screams at him in a jealous rage, warning him to stay away from his wife. Spaz also goes missing. Adrienne later discovers Spaz's collar where Don had been sitting by the fireplace earlier. While the local townspeople believe all Carno's wives died of natural causes or accidentally, Adrienne learns through a series of visions that he murdered them in grotesque ways. Hortencia (Christine Armond), who avoided Zoltan's abuse by secluding herself in her greenhouse, is stabbed with gardening tools and suffocated with mulch. Victoria (Holley Chant), an alcoholic, is killed when Zoltan impales her eye with a wine bottle during an argument. An overly talkative third wife, Leonora (Dana Moody), has her mouth gagged and her neck contorted in a torture device. Finally, the food-loving Regina (Wanda Smith) is force-fed animal entrails through a funnel until she chokes to death. Harriet performs a séance for Adrienne, in which she vomits a green ectoplasm that takes the form of Carno. A repentant Carno tells her that he was possessed by the demon when he tried to learn black magic to use in his performances. He reveals that the previously trapped demon must be stopped and only she can contain it once again. Adrienne is later horrified to discover Spaz's dead body in the garden. Adrienne visits the nearly 110-year-old Malcolm Wyrmshadow (Douglas Seale), who had been Carno's apprentice as a young boy. Malcolm reveals that Carno met his demise because of his last wife, Marie (Traci Clauson), who was tired of being abused by him and had discovered he was a murderer. Marie conspired with her lover, Gaston (Jeff Rector), Carno's prop master, to kill Carno by sabotaging the equipment for his most dangerous escapology trick "The Throne Of Terror", in which Carno donned a burning hood and escaped from bonds on a throne underneath a swinging axe. The sabotage left Carno horribly burned and disfigured and he was assumed dead, but he astonishingly survived and mutilated Marie and Gaston. After killing Marie by beheading her, Carno was impaled by Gaston, who then succumbed to his own wounds. Malcolm reveals that it was he who had originally sealed the demon in the estate's chapel. Malcolm also tells Adrienne about a ritual that can eradicate the demon. Later that night, Mike leaves the mansion, having finally finished installing the phone line. Unbeknownst to Adrienne, Don murders Mike with an axe. Harriet, fearing for her safety, decides to leave with Cyrus as Don becomes more abusive and erratic. The next day, Adrienne discovers she cannot leave the mansion and later finds in Don's darkroom a collection of photos of her with her head cut off. She is confronted by Don, who is now completely insane and dressed like Carno. Adrienne scars Don's face with acid from his darkroom when he tries to kill her and flees, discovering the corpses of Mike, Harriet and Cyrus hidden throughout the mansion. Don captures Adrienne and straps her into the throne, but she distracts him long enough to free herself and trigger the swinging axe, which impales and kills Don. His death unleashes the demon, which pursues Adrienne through the mansion. She escapes long enough to perform a ritual that traps the demon once again. The game ends with Adrienne walking out of the mansion with a vacant stare, almost catatonic. 
Development Conception Phantasmagoria was a radically different game for Williams, who was best known for designing the family-friendly King's Quest fantasy adventure game series. Williams said she did not want to get typecast in a particular genre, adding: "I felt I had more to offer than fairy stories. I wanted to explore games with a lot of substance and deep emotions." Although Phantasmagoria was her first horror game, Williams had created murder and crime stories in her earlier mystery games, Mystery House and the Laura Bow series. Williams was a fan of the horror genre, although she did not watch many horror films as a child after a traumatic experience watching Horrors of the Black Museum (1959) at age four. She began watching them again as a teen, and particularly enjoyed such films as Halloween (1978), and movies based on Stephen King novels, like Carrie and The Shining (1980). Williams cited the works of King and Edgar Allan Poe as the inspirations behind Phantasmagoria. Williams had wanted to make a horror game for eight years prior to Phantasmagoria, and had started to design one on several occasions, but none came to fruition. She felt the computer game industry and software technology had not yet reached the point where an effective and frightening computer game could be made. As a result, she decided to wait until CD-ROMs were faster and could handle real actors, which she believed was crucial for a horror game because she felt the player had to be able to empathize with the character to fear for them. Williams had been considering several horror story ideas for years before making the game. In one, the heroine answers a magician's advertisement for an assistant and gets the job, but the magician turns out to be insane. Another was set in the past, with the female protagonist getting involved in the supernatural life of a magician character. That idea evolved into a woman being married to a man who is drawn to a bizarre house that previously belonged to illusionists and is haunted by ghosts. The final story as it appears in the game has a contemporary setting, but combines elements from modern fiction, 19th century literature, and classic horror films of the 1950s. Writing Phantasmagoria was the first Sierra On-Line game script to be written like a film screenplay. During the writing stage, Williams was under pressure to ensure Phantasmagoria was a commercial success for Sierra. She also faced concerns from her husband Ken Williams, the company president and co-founder of Sierra, who wanted the company to focus more on lower-cost, combat-oriented animated games, and was also concerned about the level of violence in Phantasmagoria. Williams said she knew the rape scene in particular would be controversial, but she felt it was essential to the plot because it was a turning point that made Adrienne realize something was deeply wrong with Don. She felt the rest of the story "would make no sense" without the scene, but also knew there was a chance it would be cut from the game before production ended. The script numbered roughly 550 pages when completed, about four times the size of an average Hollywood screenplay. It also included another 100 pages of storyboards representing a total of 800 scenes. In preparation for writing it, Williams spent six months watching horror films and reading horror novels, as well as reading books about how to write horror novels and screenplays. 
She also asked people she knew socially to tell her scary stories so she could identify the elements of fear. She immersed herself so deeply into the genre that she began having nightmares and had to scale back her reading. She also spent about six months researching historical aspects of the story before the writing process. Williams had a history of using female protagonists in her games, like the Laura Bow series and some of the King's Quest games. She said her own personality was a partial inspiration for the protagonist Adrienne, although it was not purposeful: "I think it just kind of naturally worked out that way." Andy Hoyos, the game's art director and a horror aficionado, participated in brainstorming sessions and discussions with Williams during the writing stage. He conceived the ideas for most of the death scenes and tried to make each one different and original with what he called "fresh approaches to murder." Williams wanted to include more scenes with Adrienne and Don as a happy, normal couple, so that Don's transformation to evil would have greater impact. She proved unable to do so while also maintaining the correct pacing for the gameplay. Williams wrote Phantasmagoria with the mass market in mind, targeting casual players as well as die-hard computer game fans. To that end, she wrote the game in short chapters to break it into smaller, easier-to-play sections. Her targeted demographic for the game was ages 16 and older. She chose the title "Phantasmagoria" after reading the word in a reference book about the history of magic and magicians. The term refers to a late 18th-century theatrical horror show in which "spirits of the dead" were revealed in a darkened theater by the use of a modified magic lantern. Design Andy Hoyos was the art director for Phantasmagoria, and Kim White, Brandee Prugh and Brian Judy were among the computer artists. It was the first computer game Prugh worked on. Mark Hood, a veteran Sierra programmer, and Mark Seibert served as Phantasmagoria project managers. Seibert called managing the project "much more of a technical and managerial problem-solving job than I ever envisioned." Unlike previous Sierra games, it also involved managing a great deal of outside resources, such as keeping the studio and outside art contractors on schedule. The game's three-dimensional graphics were among the first rendered on Silicon Graphics software. In creating the look of the game, Hoyos drew particular inspiration from the films of Tim Burton, and he particularly tried to emulate the lighting, sets and "imaginative edge" of Batman (1989). Other influences included the films Alien (1979) and Hellraiser (1987). Phantasmagoria was designed using the 3-D software package Alias. Hoyos started by designing the rooms, then created the furniture, and added textures and lighting. Once complete, the computer rendered final images of each room, which became the backgrounds for the game. While most computer games at the time had between 80 and 100 background pictures, Phantasmagoria had more than 1,000. The game developers realized early during development that the game could not be completed entirely in-house due to the large scope of the project, and required working with Hollywood agencies, actors and special effects houses, among others. This added further complexity to the game's development and design. Gerald B. Wolfe, the game's director of photography, spoke to the artists about how to set up the camera angles during filming to best accommodate the design of the rooms. 
Mannequins created in the computer substituted as stand-ins for the actors to help Wolfe position the shots. Most of the artists had never created computer-generated backgrounds for video-captured characters, and found it to be a challenging process. Seibert said bridging the gap between Hollywood and the software world was difficult at first, but after about four weeks of shooting, the two sides had "come to a common language and had a greater understanding of the process." Hoyos said the biggest difficulty in designing the look of the game was creating a definitive scale for the environment, and ensuring the objects and environment would eventually be compatible with human characters and maintain realism. Once the filming was completed, more than 20 two-hour Beta SP video tapes of footage had to be edited down to fit the actions of the game. The programming included editing the original video footage, and mixing the rendered images of doors, drawers, chairs and other objects into the footage frame-by-frame, which was necessary for more than 100 scenes. The game required more polishing and fine-tuning than most games of its time due to the many video components, and the programmers had to determine the desired frame rate, data transfer rate and desired resolution of the video elements. Another challenge for the game was CD resource management. Since many of the scenes in the game could be repeated in multiple chapters, some pieces of the code were written onto several different CDs to reduce the number of times the player had to swap discs in the middle of a chapter, an unusual practice for computer games at the time. The final game was on seven discs, a large number for a computer game, and more than the four or five discs originally expected. Casting Twenty-five professional actors were cast in Phantasmagoria. Roberta Williams watched the audition tapes of all the roles, and was involved with Mark Seibert in picking the actors. The only actor that Seibert chose without Williams' input was Victoria Morsell as the protagonist, Adrienne Delaney. Williams had watched other audition tapes for the part and was not happy with any of the choices. While she was on vacation, Seibert watched Morsell's audition and immediately selected her for the part. Williams was initially concerned that she was not involved in the decision, but later came to agree that Morsell was the correct choice. Morsell mostly had television experience, but had also previously worked in film and theater roles. She had to work 10 to 12 hours a day, six days a week, for three months to capture all her character's actions and movements. David Homb portrayed Adrienne's husband, Don Gordon. Morsell and Homb started dating in real life after filming. Robert Miano was cast as the villain Zoltan "Carno" Carnovasch. He had previously appeared in several films by Phantasmagoria director Peter Maris. The comic relief roles of Harriet and Cyrus were played by V. Joy Lee and Steven W. Bailey, respectively. Lee had mostly done theater work in the Atlanta and Seattle areas, along with appearing in some industrial films and an episode of Northern Exposure. Bailey had been an actor in Seattle for about three years, doing mostly stage work, as well as some commercials, television episodes, independent films, and industrial films for the United States Navy. Malcolm was portrayed by veteran actor Douglas Seale. 
Stella Stevens played the antiques store owner Lou Ann, Geoff Pryser played Bob the realtor, and Carl Neimic played phone technician Mike. Traci Clauson portrayed Marie, Carno's fifth and final wife. Filming Phantasmagoria was Sierra's first game to use live actors integrated with computer-generated backgrounds. About 600 scenes were filmed for the game. The total filming took about four months, 12 hours a day, shooting six days a week. More than 200 people were ultimately involved in the making of the game. A total of $800,000 was originally budgeted for Phantasmagoria, but it ended up costing $4.5 million. The budget issues caused some friction between Roberta and Ken Williams. It was the first game made in a new filming studio built by Sierra in Oakhurst, California, the same location as Sierra's headquarters. It cost $1.5 million to build and was overseen by studio manager Bill Crow, who said, "Phantasmagoria essentially started with the design and creation of the studio." Sierra also developed computerized tools specifically to manage the shooting process, including software to digitize the video into the computer. Some of the software was built as the game was being made and as the need for new tools was being identified. Sierra wanted the game to have Hollywood-quality film sequences, so they sought a director with experience in the film industry. They hired Peter Maris, whose previous experience was mainly of action and drama films, although some of his earlier films were graphic horror movies. Maris and Roberta Williams collaborated a great deal from the beginning of the process, with Maris explaining what he wanted in each scene from a filmmaker's perspective, and Williams explaining her desires from a game designer's perspective. Maris set up the camera angles and worked with the actors, using three motion-controlled cameras during the shooting. All filming was done entirely in front of a blue screen, and the digitized information was later loaded into the Silicon Graphics computers, which synchronized the relative motion of computer-generated, three-dimensional background art. Next, the live action and the backgrounds were composited using advanced techniques controlled by Petro Vlahos's UltiMatte system. The computer imaging components made Phantasmagoria a very different experience for Maris than his usual film industry work. Crow served as a production coordinator during filming, helping facilitate what happened on stage. He also directed the scenes that involved stunts or other special effects that required the coordination of animated sequences with live action. Cindy Jordan worked as the makeup artist. Due to the nature of filming for a computer game, certain short performances had to be filmed over and over separately, like Adrienne simply crossing a room. Many of the actors, including Morsell, Lee and Bailey, had never done any blue screen acting before, and Miano had only done it once. Morsell in particular said she found it challenging because it was so physically constrictive. Such items as tables, chairs, doors or stairs that the actors appear to interact with are in fact constructed objects painted blue to match the blue screen, and were occasionally challenging for the actors to work with. During one scene, David Homb accidentally stood in such a way that his arm appeared to be going through the living room wall. 
Morsell found it challenging to film a scene near the end of the game, in which Adrienne is pleading with Don while getting strapped into the chair with a guillotine overhead. Several different versions of the scene had to be filmed, and Morsell said it was difficult for her to work up the genuine emotion she needed, take after take. The torture devices featured in the game were scale models made by local craftsmen. During one scene, the mansion roof starts to collapse, causing beams and chunks of debris to fall around the actors. To achieve this, the crew built lightweight beams out of hollow cardboard, which had to fall in the correct sequence. In another scene, Adrienne crawls along a water pipe that breaks. To film the scene without damaging equipment, the crew built a giant plastic box with a plastic tray underneath for the shoot. Three hoses ran through the pipe so water would spray in multiple directions. The chase scene at the end of the game, in which Don pursues Adrienne throughout the entire mansion, took a full week to film. Homb wore a prosthesis on his face to simulate wounds from Adrienne pouring acid on him. Only one prosthesis was available, and it was in such bad shape by the end of filming that Williams said "we were essentially holding it in place with the proverbial wire and bailing wax." Several scenes in Phantasmagoria involved animals, including a Doberman Pinscher, two cats, a beagle and several rats. Sierra commissioned Dave Macmillan and other Hollywood animal handlers from the company Worldwide Movie Animals to handle the animals. Two cats were required to portray Adrienne's pet Spaz because the cats were temperamental and each would have moments when they would not perform. Some objects were borrowed from a museum operated by the Fresno Flats Historical Society and used as props in the game. The society was thanked in the credits as a result. Ultimately, all of the scenes filmed for Phantasmagoria were used in the final game. Some were edited for length and pacing, but none were cut altogether. Despite a storyline running over the course of seven days, Adrienne wears the same wardrobe throughout the entire game: an orange shirt and black pants. Williams said this is because many of Adrienne's actions throughout the game can be repeated by the player over and over, making it difficult to change her wardrobe without it becoming "a nightmare of shooting." The only solutions she saw were to allow less decision-making by the player, or to keep the flexibility intact but not allow any costume changes. Williams said she decided the latter option was ultimately better for the game. An orange shirt was chosen for the character because it was the best color to shoot against the blue screen. None of the characters wear blue, purple, gray or any shade of green similar to the blue screen for that same reason. Effects For Phantasmagoria's special effects, Sierra sought the professional effects house The Character Shop, headed by senior creature maker Rick Lazzarini. The core special effects team for the game consisted of Lazzarini, Michael Esbin and Bill Zahn. Lazzarini said making Phantasmagoria was a much faster and more cooperative process than in the film industry. Due to the tight filming schedule, most effects had to be done in one shot with no cuts, so there was less margin for error. 
They created the effects for all the death scenes in the game, which included creating assorted body prostheses, burn makeup and what Lazzarini called "gallons of fake blood," which was made of sugar, water and dye. Adrienne's head is split open by a swinging pendulum blade in one death scene, an effect that took several hours to set up. A fake head was created from a model cast of actress Victoria Morsell's head, with pumps installed to pump fake blood into it, and strings attached to either side so it could be pulled apart. The crew only had one head model, so they had to keep putting it back together and reusing it over and over for each take. After filming was completed, the crew put the head back together and took a picture of Morsell with it. One scene involved a burning head, which was created by taking a cast of the actor's face, head and body with a plaster, then placing soft rubber over the cast to simulate human flesh. It was also treated so it did not release black smoke like a rubber tire when it burned. For burn effects placed on the body after the fire, the crew made a sculpture of the wounds, then mounted them onto rubber appliances, which were glued to the actor. For Harriet's death scene, in which she is scalped and her brain is visible, makeup artist Cindy Jordan put a cap over actress V. Joy Lee's head and covered it with mortician's wax to make it look bumpy. Another special effect involved a spirit that appears in the form of ectoplasm during a séance. It was created from cellulose, water and green food coloring. It proved difficult to make, and several attempts were made before the right consistency was achieved. To create the appearance of it coming out of actress V. Joy Lee's mouth, an eight-foot long hose was attached to the side of her face. Project manager Mark Seibert had to suck the fake ectoplasm through the hose to get it in, then during filming he spit it out. It took several shots before the take was successful, and it looked so funny during filming that the crew kept laughing during takes. Music Sierra composers Mark Seibert and Jay User created the musical score for Phantasmagoria, with Neal Grandstaf providing additional music. Most of the music is a mixture of real instruments with synthesized and sampled sounds, and unlike previous Sierra games, the score was recorded live, rather than created in the computer. Themes for the music ranged from rock and roll to opera. The composers observed the game and various scenes to prepare their score, trying to create an ambiance and identify spots to build the tension with music, creating stingers as needed. For cinematic scenes, they watched tapes after the scenes were shot and composed the music afterward. This was a new process for Sierra and the computer game industry in general, but the same concept was regularly followed in the film industry. The opening theme of the game features a neo-Gregorian chant, which was performed in studio by a 135-voice choir. Much of the underscore music that plays when the player is exploring the game, rather than during cinematic scenes, is based upon that opening scene. The composers made use of silence in many scenes, to build up tension for moments of a "pounce" effect, creating the effect with music that something is catching the listener. They deliberately saved the most intense music for the final chapter of Phantasmagoria, when the game becomes the most intense. 
During some chase scenes, drums and low strings are used in the score to simulate a heartbeat, which grows louder and louder during more dangerous moments. Release Phantasmagoria took more than two years to develop. Sierra officials said this was because it was so different from anything that had been done previously, in both the Hollywood or computer game industries, and the new challenges led to complexities. Roberta Williams said: "It took a full year just to get people to understand what I wanted to do." The game experienced multiple delays before it was ultimately released. It was originally planned that the game would be in stores by late 1993 or early 1994, with Sierra saying it would come out no later than the fall of 1994. The game was previewed at the Consumer Electronics Show in Chicago in June 1994, where it was announced the release date had been pushed back to October 1994 and that the game would be made available on two discs. The game ultimately required seven discs. The shipping date was later changed again to Christmas 1994, and then February 1995, before Sierra said it would be delayed yet again as the company continued to refine the technology of live-action video. It was announced the game would be ready in June 1995, but when Sierra organized road shows that month to market 18 of its new games, Phantasmagoria was not among them because it was not ready yet. The date was changed once more to August 3, before it was finally released in stores on August 24, 1995, the same day as the Microsoft's operating system Windows 95 version. Phantasmagoria came out first for Windows 95, Windows 3.1 and MS-DOS, then for Macintosh computers in November 1995. A hint book by Sierra came out at the same time as the game. Phantasmagoria was also ported to Sega Saturn and spanned eight discs exclusively in Japan, where it was fully translated and dubbed into Japanese, and released by Outrigger under the name Phantasm. In response to all the delays, Vince Broady, executive editor of the monthly publication Multimedia World, said Sierra may have been attempting to avoid the same mistakes from the previous year with Outpost, for which the company released many advertisements and reviews, but then over-promoted and released before it was finished. Sierra spokeswoman Kathy Gillmore admitted that mistakes were made in marketing Outpost and Sierra had tried to fix them. The minimum system requirements were an IBM compatible 486–25, 8 megabytes of RAM, 5 megabytes hard disk space, SVGA (256 colors) and a 2x CD-ROM. It supported Sound Blaster-compatible sound cards, and for MS-DOS the game worked with a megahertz chip. It sold for as much as $80 in some stores, but was typically sold for $69.95. Phantasmagoria was heavily advertised by Sierra in advance of its release, and the company touted it as its biggest game to date. It was Sierra's first computer game targeted to an adult audience. There was no legislated rating system for computer games in the United States in place, although one was being debated before the U.S. Senate at the time. Sierra was among several major software companies that voluntarily supported the ratings system designed by the Entertainment Software Rating Board, which rated games so adults could make decisions about software purchases. The game was given an "M" rating for "mature" audiences, meaning intended for audiences 17 or older. As a result, it carried a content warning on the box that said "contains adult subject matter." 
The game box prominently displayed this rating, as well as a warning on the back panel that the game contains adult subject matter inappropriate for children. It also encouraged parents to review the material before providing it to children and asked retailers not to sell it to minors. Phantasmagoria also received mature ratings in Germany and the United Kingdom. The game itself includes a filter that players can activate with a password so that violent or sexually explicit content is censored. When the filter is in place, the screen blurs during the most violent sections, so the player can hear the action but cannot see it. Sierra officials believed it was the first CD-ROM adventure game that self-censored upon command. Sierra producer Mark Seibert said: "There are some pretty gruesome things, and we're concerned about how that is going to impact parents who want to buy a good game with a good story line." But Gene Emery of The Providence Journal said the censoring option was "unlikely to stop any computer-literate 10-year-old" and by scrambling portions of the videos that might be considered offensive, "the censorship option actually makes the explicit scenes seem even worse than they really are." In October 1997, the first chapter of Phantasmagoria was included in the Roberta Williams Anthology, a limited-edition collection of 15 games created by Williams over 18 years. Phantasmagoria was made available for digital download on the website GOG.com starting in February 2010 for $9.99. Reception Sales Phantasmagoria quickly became the best-selling game in the United States, and was Sierra's best-selling computer game to date. It grossed $12,000,000 and sold 300,000 units during its first week-end of release, debuting at number four in PC Data's August 1995 list of top-selling computer games for MS-DOS and Windows. It followed MechWarrior 2, Microsoft Flight Simulator and Myst. By September, it had reached number one on the list among computer games and ranked third among all computer software, following Windows 95 and Microsoft Plus! InterAction, a magazine published by Sierra On-Line, wrote that no other Sierra game topped game charts as quickly as Phantasmagoria did. By the end of December, it remained at number three among overall software, and number one among computer games, and by January it was estimated as many as 500,000 copies had been sold. According to Sierra On-Line, its global sales surpassed 600,000 units by March 1996. Phantasmagoria finished 1995 as the ninth best-selling game of the year according to an analysis by PC Data of sales by 42 retail chains. In January 1996, Phantasmagoria was the fourth-best seller among MS-DOS and Windows games, behind Microsoft Flight Simulator, Myst and Command & Conquer, and it was number six among all computer software. By February 10 it was number five among MS-DOS and Windows games, and by February 24 it was number 10. It received a boost in June 1996, possibly due to the pending release of its sequel Phantasmagoria: A Puzzle of Flesh, and jumped back up to seventh among best-selling computer software programs according to Computer Life magazine. One year after Phantasmagoria was released, it still remained in the best-seller charts, and more than one million copies were ultimately sold, making it Sierra's first game to reach that milestone. NPD Techworld, which tracked sales in the United States, reported 301,138 units sold of Phantasmagoria by December 2002. Reviews Phantasmagoria received mixed reviews. 
Vince Broady, executive editor of Multimedia World, said Sierra was smart to try to capture the adult market, which was looking for sophisticated games especially after the popularity of games such as The 7th Guest. Dwight Silverman of the Houston Chronicle declared it the best game of the year, calling it a "masterwork" and "as close as it gets to a film you control," although he said not everyone would appreciate it due to its violent content. Entertainment Weekly gave it a B+ and called it "one of the surest signs yet of computer games approaching the quality level of the movies." The review said it had some suspenseful and novel twists, but also some "horrific touches (that) seem a bit derivative, such as a Freddy Krueger-like nightmare in which hands pull you through a bed." Kim McDaniel of The Salt Lake Tribune called it "the most sophisticated computer game to date" and "a weird, wild, horrific ride that will make you jump at every turn, even if you aren't normally faint-of-heart." Although McDaniel said it might be easy for experienced players, she appreciated that it was more accessible for casual players than difficult games like The 7th Guest. A Billboard magazine review said Phantasmagoria "lives up to the advanced billing" and "aims to unnerve and succeeds gruesomely with bloody special effects interspliced in trusty scare-flick fashion with daubs of flesh and hints of sex." USA Today writer Joe Wilson gave it three-and-a-half stars and called it well-produced, visually appealing, frighteningly realistic and "a much-needed change from normal fare," although he said the plot did not start to become interesting until halfway through. Gene Emery of The Providence Journal said the game was "an impressive work, a sophisticated mix of live action and a rich computer-generated environment, coupled with a musical score that's spooky without being overwhelming." He also called the gameplay "elegant in its simplicity." San Francisco Chronicle writer Laura Evenson called it unpredictable and addictive, comparing it to a good mystery novel. A reviewer for Next Generation was pleased with the story, the production values of the cutscenes, and the way the game mixed in full motion video without resulting in slowdown or poor control. However, he found that the lack of a challenge made it a poor value for experienced players, since they would be able to finish the entire game in a day or two, and recommended it strictly for those who were new to the adventure genre. Ric Manning of the Gannett News Service wrote, "The graphics are terrific, but don't play on a full stomach." Steve La Rue of U-T San Diego found the game "visually opulent and interesting" with "dialogue a lot better than I expected, given the trite gothic novel plot," but also said he had to adjust to the "gradual, contemplative pace." Jack Warner of The Atlanta Journal-Constitution wrote: "It's hackneyed, but the art is so good, the acting sufficiently convincing and the atmosphere brooding enough to keep you going," although he said "precious little happens" in the first few discs. Computer Gaming World writer Arinn Dembo called the storyline of a couple buying a haunted mansion "a cliché so familiar that it needs no explanation," but said it still worked in a computer game because the player could directly experience it in that medium, rather than simply watching it in a film. Dembo also praised the game for featuring a female protagonist, though she said Adrienne was too much of an exaggeratedly feminine "girlie-type girl." 
Overall, Dembo said the game "achieved a new height of realism and beauty in a computer game" and was "an important technological leap, and represents a step forward for the whole industry." Another Computer Gaming World review said Phantasamagoria "appears to deliver on its ambitious promises" and that although the puzzles are too simple for experienced gamers, the appeal of the game is exploring a setting that "would have had even the folks in Amityville making tracks." Harley Jebens of the Austin American-Statesman said some of the acting was "hilariously bad" and the storyline was predictable, but that the game keeps you engaged, the story draws you in and the controls were well executed. Fort Worth Star-Telegram writer Andy Greiser said the game was a beautiful combination of live actors and computer-built backgrounds, but said the action doesn't start to pick up until the halfway point. In a mixed review, Phil LaRose of The Advocate praised the game for having a female protagonist and excellent special effects, and said it had gone further toward merging the realms of film and game than any other computer game. He also said the game was too linear, with overly simple puzzles and a plotline with "so many recycled concepts in this game, players will feel like they've seen it all before." Other reviews were more negative. GameSpot writer Jeff Sengstack said the expensive budget and long production history "do not necessarily translate into high quality gaming." He complimented the graphics and music, but considered "[the game] generally unchallenging, the characters weak, the violence over-the-top, and the script just lame." Mike Hall of the Albuquerque Journal compared Phantasmagoria to big-budget films with multimillion-dollar budgets and multiple production delays that ultimately failed, like Cleopatra and Waterworld. He said the graphics were beautiful, but the game ended too quickly, the plot was thin and the puzzles were too easy to solve. The Escapist writer Brendan Main said the game fell short of its ambition to change how gamers experienced horror media, and the juxtaposition of real-life actors on settings that were "ordinary, pixilated fare" was "odd and unflattering." A one-star review in The Video Games Guide, published in 2013, "now seems little more than a flawed, though ambitious, white elephant." In a 2014 review, IGN writer Kosta Andreadis said the game's special effects were still effective, and that Phantasmagoria wisely builds suspense and saves the violent content for the end of the game. But he said it was ultimately "less a bad game than a bad horror film" and said the execution, "although technically interesting, is extremely goofy, full of ludicrous situations, weird plot turns, bizarre dialogue, and dated costume choices." Awards Phantasmagoria earned an Editor's Choice Award from PC Gamer, and a Golden Triad Award from Computer Game Review, whose editors later gave it their 1995 Best FMV of the Year prize. It was nominated for Best Adventure/Role-Playing Software in the annual Excellence in Software Awards competition, known in the game industry as "The Codies," which are among the most prestigious honors in software development. It was also named Best Adventure Game of the Year by Games Magazine, Game of the Month by Windows Magazine, and was one of three nominees for PC/Computing's Game of the Year. Controversy Before Phantasmagoria was released, CompUSA, the nation's largest discount computer retailer, notified Sierra it would not stock the game. 
The company did not comment on specific reasons for their decision except for a written statement from chief operating officer Hal Compton: "Software comes out all the time. Some we buy, some we don't. This one we chose not to." Analysts believed CompUSA objected to the realistic violence in the game. Lee S. Isgur of Jefferies & Co., a global investment bank that followed the computer game industry, said CompUSA's decision probably would not harm overall sales of the game, and that it could in fact help it generate publicity. He added that "It's probably one of the bloodiest games ever." Likewise, Vincent Turzo of Jefferies & Co. said, "When you say you're not going to carry something, of course the consumers rush to the stores to see what it's all about." Roberta Williams said of CompUSA's announcement: "I'm disappointed that they decided to make a stand with my product." Some smaller retailers also decided not to stock Phantasmagoria, but others like Walmart continued to carry it. As well as the graphic gore and violence, the rape scene in Phantasmagoria drew particular attention and objections from the game's critics. One major newspaper claimed Phantasmagoria "makes a game of sexual violence." Multiple parents' groups, religious organizations, community action committees and special interest groups called for a boycott, and sent letters to the Sierra offices in droves voicing objections to the game. Phantasmagoria was unable to be sold in Australia after being refused classification by Office of Film and Literature Classification (the R 18+ classification was not available for video games at the time). The Roberta Williams Anthology collection was also banned in Australia because it featured a one-chapter preview of the game. The governments of two other countries also banned Phantasmagoria. The game was spotlighted in U.S. Senate hearings debating regulation of content in the computer software industry, and there was talk that it could re-ignite the debate in Congress over whether to impose federally mandated ratings on computer software that stores would be required to enforce. Sierra found that Williams tended to garner extremely favorable press, even as Phantasmagoria got negative press, so the company began having her discuss the game in person. In response to the backlash, Williams said she believed computer games were subject to harsher standards than films and television, which she said often have more violent content than Phantasmagoria, in part because computer games are often regarded as children's entertainment rather than for adults. Williams said Phantasmagoria was less excessively violent than games like Doom and Mortal Kombat, and that unlike those games, Phantasmagoria is "the good guy ... not going around shooting up people." Sierra spokeswoman Rebecca Buxton and Dennis Cloutier, the company's vice president of sales, both expressed surprise at the reaction to the game because Sierra made strong efforts to warn viewers about the adult content in its marketing, box warnings and censoring options. Cloutier said the company was essentially "censoring ourselves." Steven L. Kent of The Seattle Times said there were more violent games than Phantasmagoria, but that most video game violence appears in computer animations, and thus can be more easily overlooked than in a live-action game, which elicits a stronger emotional response. 
In April 1998, three years after the game was released, the Kentucky Teachers Retirement System came under criticism for owning 435,000 shares worth $9 million in Cendant, which by this point had purchased Sierra On-Line, due to the violence in the Phantasmagoria and other Sierra games. Kentucky Sen. Mitch McConnell, Arkansas Gov. Mike Huckabee, Iowa Gov. Terry Branstad, and Lamar Alexander, the former governor of Tennessee, all asked the system to sell its stock. McConnell was particularly critical of the rape scene. They suggested a link existed between recent student slayings in West Paducah, Kentucky, and Jonesboro, Arkansas, and the "increasingly violent world many American children enter when they sit down in front of a computer screen." Pat Miller, executive secretary of the Kentucky Teachers Retirement System, said the system wasn't aware of Phantasmagoria or Sierra's products, and that it invested in Cendant because it's part of an index fund of the 500 best stocks in the country. Miller added that, if the retirement system finds a problem with Sierra On-Line, it would ask Cendant to cease ownership in the company, and that the system would divest its stock if it refused. Miller said: "We're not going to continue to invest in some company that does something that will be detrimental to our children. We know that our membership would not want us to do that." The same letter was also sent to heads of teachers' retirement systems in California, Ohio and Texas. Legacy The commercial success of Phantasmagoria had a positive financial impact on Sierra On-Line. Before its release, the company's stock rose in July 1995, up $3.875 to $30.875, which Vince Turzo of Jeffries & Co attributed in part due to anticipation of Phantasmagoria. Sierra's stock continued to rise after it was released. The company closed on the NASDAQ stock market at $43.25 on September 8, 1995, translating to a 73 percent gain in less than three months, which Turzo attributed partially to Phantasmagorias success. For the quarter ending September 30, 1995, Sierra posted a profit of $3.26 million, compared to a loss of $850,000 in 1994. This improvement exceeded expectations in analysts' forecasts, and was thanks largely to Phantasmagoria sales. Williams returned to work on the King's Quest series after Phantasmagoria was completed. Sierra used the same technology and Hollywood techniques from Phantasmagoria to make The Beast Within: A Gabriel Knight Mystery (1995), another interactive movie adventure game. Williams later described Phantasmagoria as the game that best represented her career as a game designer. Due to its development delays, Phantasmagoria was released after other interactive movie games like Wing Commander III and Under a Killing Moon, so it did not receive as much credit for heralding that game subgenre as the other titles. Harley Jebens of the Austin American-Statesman said by the time it was finally released, "computer games that incorporate video [were] becoming a common sight on the software store racks." Sean Clark, a project leader at LucasArts, feared the success of Phantasmagoria would set a bad precedent in that software companies would think they need huge budgets and many discs to have a successful game, rather than focusing on quality. Sequel A Phantasmagoria sequel, Phantasmagoria: A Puzzle of Flesh (1996), was released by Sierra a year after the original. It was written and designed by Lorelei Shannon, Roberta Williams' design partner on King's Quest VII. 
Williams was not involved with it, and the game featured a very different tone and completely different characters, with no direct connection to the storyline of the first game. Shannon said the original game was a haunted house story in the style of a "late night creature feature," while her game has more science-fiction and fantasy horror elements, in the style of the works of Clive Barker and Tanith Lee. Roberta Williams said both she and her husband Ken enjoyed the sequel. A preview for A Puzzle of Flesh was included in the first CD of the original Phantasmagoria game. Ken Williams had wanted Roberta to work on a Phantasmagoria sequel immediately after the success of the first game, but she could not do so due to her obligations to King's Quest VIII. Ken Williams felt the sequel was less successful because players could sense the absence of Roberta's style. "It's as if a bestselling author had a book ghostwritten. Within a few pages, the fans would know they had been duped and feel disappointed, regardless of the quality of the work." Ken Williams said a third game was not produced after A Puzzle of Flesh because of issues after Sierra was acquired by CUC International in 1996. Distribution was handled by Davidson & Associates, another CUC company, which primarily sold educational software. Williams said the company was uncomfortable with the violent content in Phantasmagoria and did not attempt to sell the sequel as aggressively as Sierra would have. He also claimed Jan Davidson, the wife of Chairman and CEO Bob Davidson, personally did not like the game and "wanted it shut down." Due to these sales challenges, Williams said "there was no reason to make a third Phantasmagoria." Roberta Williams said she was asked by Sierra to make a third Phantasmagoria game, and that she said she would consider it, but it was not ultimately made. Williams said: References External links 1995 video games Adultery in fiction Censored video games Dark fantasy video games DOS games Fiction about animal cruelty Filicide in fiction Full motion video based games Interactive movie video games Classic Mac OS games Mariticide in fiction Obscenity controversies in video games Point-and-click adventure games Psychological horror games Rape in fiction ScummVM-supported games Sega Saturn games Sierra Entertainment games Uxoricide in fiction Video games about demons Video games about magic Video games about spirit possession Video games developed in the United States Video games featuring female protagonists Video games scored by Mark Seibert Video games set in New England Video games with digitized sprites Windows games Video games developed in Japan
37716470
https://en.wikipedia.org/wiki/Karl%20Wiegers
Karl Wiegers
Karl E. Wiegers (born 1953) is an American software engineer, consultant, and trainer in the areas of software development, management, and process improvement. He is the author of numerous books and articles mainly focused on software requirements, project management, process improvement, quality, design, and consulting. He is also the author of a forensic mystery novel titled The Reconstruction and a memoir of life lessons titled Pearls from Sand: How Small Encounters Lead to Powerful Lessons. Biography Karl received a B.S. degree in chemistry from Boise State College (1973), and M.S. (1975) and Ph.D. (1977) degrees in organic chemistry from the University of Illinois. Karl spent 18 years, from 1979 until 1998, at Eastman Kodak Company as a photographic research scientist, software developer, software manager, and process and quality improvement leader. In 1997, Karl started his own software process consulting and training company, Process Impact, which focuses on practical software process improvement. Karl also is an amateur musician, having played guitar since he was 12. He has written and recorded numerous songs. See also Software Requirements Publications 2022. Software Development Pearls: Lessons from Fifty Years of Software Experience. Addison-Wesley. 2021. The Thoughtless Design of Everyday Things. J. Ross Publishing. 2019. Successful Business Analysis Consulting: Strategies and Tips for Going It Alone. J. Ross Publishing. 2017. The Reconstruction. Agent Q Bookworks. 2013. Software Requirements, 3rd Edition (with Joy Beatty). Microsoft Press. 2011. Pearls from Sand: How Small Encounters Lead to Powerful Lessons. Morgan James Publishing. 2007. Practical Project Initiation: A Handbook with Tools. Microsoft Press. 2006. More About Software Requirements: Thorny Issues and Practical Advice. Microsoft Press. 2003. Software Requirements, 2nd Edition. Microsoft Press. 2002. Peer Reviews in Software: A Practical Guide. Addison-Wesley. 1999. Software Requirements, 1st Edition. Microsoft Press. 1996. Creating a Software Engineering Culture. Dorset House Publishing. References External links Karl Wiegers' Official Home Page Living people American software engineers 1953 births
4715
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
Boolean satisfiability problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proven mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving. Definitions A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence. Conjunctive normal form A literal is either a variable, called positive literal, or the negation of a variable, called negative literal. A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause). For example, x1 is a positive literal, ¬x2 is a negative literal, and x1 ∨ ¬x2 is a clause. The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively.
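The definitions above can be made operational with a small illustrative sketch. The Python fragment below is an assumption for illustration only (it is not drawn from the article's sources): it encodes a CNF formula as lists of signed integers in the DIMACS style, with k standing for the variable xk and −k for its negation, and decides satisfiability by brute-force enumeration of all 2^n assignments. It is meant to make the definition concrete, not to be an efficient solver.

from itertools import product

def is_satisfiable(clauses, num_vars):
    """Brute-force SAT check for a CNF formula.

    Each clause is a list of non-zero integers: k means variable x_k,
    -k means its negation. Runs in O(2^n * formula size), so it is
    only suitable for tiny instances.
    """
    for bits in product([False, True], repeat=num_vars):
        # assignment[k] is the value of variable x_k (1-indexed)
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment          # a satisfying assignment was found
    return None                        # no assignment works: unsatisfiable

# (x1 v -x2) ^ (-x1 v x2 v x3) ^ -x1  -- satisfiable, e.g. x1 = x2 = FALSE
print(is_satisfiable([[1, -2], [-1, 2, 3], [-1]], 3))
# a ^ -a  -- unsatisfiable
print(is_satisfiable([[1], [-1]], 1))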
For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some Boolean function R and (ordinary) literals l1,...,ln. Different sets of allowed boolean functions lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields the conjunction of all 2^n clauses that contain, for each i, either xi or yi; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables. Complexity SAT was the first known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971 and independently by Leonid Levin at the National Academy of Sciences in 1973. Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See Algorithms for solving SAT below. 3-satisfiability Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l1 ∨ ⋯ ∨ ln to a conjunction of n − 2 clauses (l1 ∨ l2 ∨ x2) ∧ (¬x2 ∨ l3 ∨ x3) ∧ (¬x3 ∨ l4 ∨ x4) ∧ ⋯ ∧ (¬xn−2 ∨ ln−1 ∨ ln), where x2, ..., xn−2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original, i.e. the length growth is polynomial. 3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting literals from different clauses, cf. picture. The graph has a c-clique if and only if the formula is satisfiable. There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n where n is the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT. The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k ≥ 3) in time 2^o(n) (i.e., fundamentally faster than exponential in n).
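The clause-splitting step of the SAT-to-3-SAT reduction described above can be sketched as follows. The signed-integer clause encoding and the numbering of the fresh chaining variables are implementation choices made here for illustration; any consistent scheme works, since only equisatisfiability is required.

def clause_to_3sat(clause, next_fresh):
    """Split one long clause (l1 v ... v ln) into an equisatisfiable
    conjunction of 3-literal clauses, as described in the text.

    `clause` is a list of signed integers; `next_fresh` is the first unused
    variable index, used for the fresh chaining variables. Returns the new
    clauses and the updated next free index.
    """
    n = len(clause)
    if n <= 3:
        return [clause], next_fresh
    out = []
    prev = next_fresh                       # plays the role of x2 above
    out.append([clause[0], clause[1], prev])
    for lit in clause[2:-2]:
        out.append([-prev, lit, prev + 1])  # (-x_i v l_{i+1} v x_{i+1})
        prev += 1
    out.append([-prev, clause[-2], clause[-1]])
    return out, prev + 1

# (l1 v l2 v l3 v l4 v l5) over variables 1..5; fresh variables start at 6.
# Yields the n - 2 = 3 clauses [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]].
print(clause_to_3sat([1, 2, 3, 4, 5], 6))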
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm. 3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, since for any k ≥ 3 this problem can be neither easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, k-SAT must be NP-complete as well. Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class either, as each clause l1 ∨ ⋯ ∨ lj with j < k literals can be padded with fixed dummy variables to l1 ∨ ⋯ ∨ lj ∨ dj+1 ∨ ⋯ ∨ dk. After padding all clauses, 2^k − 1 extra clauses have to be appended to ensure that only d1 = ⋯ = dk = FALSE can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in x1 ∨ ¬x2 ∨ ¬x2. Special cases of SAT Conjunctive normal form Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form. Disjunctive normal form SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; for an example exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms. Exactly-1 3-satisfiability A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT. One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.
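As a minimal illustration of how the one-in-three requirement differs from ordinary satisfiability, the following brute-force check (an illustrative sketch only, reusing the signed-integer clause encoding of the earlier example) accepts an assignment only when every clause has exactly one TRUE literal.

from itertools import product

def one_in_three_satisfiable(clauses, num_vars):
    """Brute-force check for one-in-three 3-SAT: every clause must have
    exactly one TRUE literal. Exponential time; for illustration only.
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(sum(assignment[abs(lit)] == (lit > 0) for lit in clause) == 1
               for clause in clauses):
            return assignment
    return None

# (x1 v x2 v x3): satisfied one-in-three style by making exactly one variable TRUE
print(one_in_three_satisfiable([[1, 2, 3]], 3))
# (x1 v x2 v x3) ^ (-x1 v -x2 v -x3): satisfiable as ordinary 3-SAT,
# but unsatisfiable under the exactly-one requirement (returns None)
print(one_in_three_satisfiable([[1, 2, 3], [-1, -2, -3]], 3))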
Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE, see picture (left). Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n+6m variables. Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z), see picture (right). Not-all-equal 3-satisfiability Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem. Linear SAT A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable or not is NP-complete. 2-satisfiability SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in literals are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L. Horn-satisfiability The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the Unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time. Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y, that is, if x1,...,xn are all TRUE, then y needs to be TRUE as well. A generalization of the class of Horn formulae is that of renameable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. 
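The unit-propagation idea behind Horn-satisfiability can be sketched as below. This is a simple fixpoint version, written for clarity rather than the linear time achievable with more careful bookkeeping; it computes the minimal model described above (the set of variables forced to TRUE), or reports unsatisfiability. The encoding of clauses as lists of signed integers is the same illustrative convention used in the earlier sketches.

def horn_sat(clauses):
    """Satisfiability test for a conjunction of Horn clauses via unit
    propagation. Each clause is a list of signed integers with at most one
    positive literal. Returns the set of variables forced to TRUE (the
    minimal model), or None if the formula is unsatisfiable.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # literals not yet falsified by the current partial model
            remaining = [lit for lit in clause
                         if not (lit < 0 and -lit in true_vars)]
            if any(lit > 0 and lit in true_vars for lit in remaining):
                continue                      # clause already satisfied
            if not remaining:
                return None                   # clause falsified: unsatisfiable
            if len(remaining) == 1 and remaining[0] > 0:
                true_vars.add(remaining[0])   # unit clause forces a variable
                changed = True
    return true_vars

# (x1) ^ (-x1 v x2) ^ (-x2 v -x3): minimal model {x1, x2}, x3 stays FALSE
print(horn_sat([[1], [-1, 2], [-2, -3]]))
# (x1) ^ (-x1): unsatisfiable
print(horn_sat([[1], [-1]]))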
XOR-satisfiability Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators. This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination; see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT, cf. picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable. Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT. Schaefer's dichotomy theorem The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae is a special case of this theorem. The following table summarizes some common variants of SAT. Extensions of SAT An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints. The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z); it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time. Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments: MAJ-SAT asks if the majority of all assignments make the formula TRUE. It is known to be complete for PP, a probabilistic class. #SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a decision problem, and is #P-complete. UNIQUE SAT is the problem of determining whether a formula has exactly one satisfying assignment.
It is complete for US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine that accepts when there is exactly one nondeterministic accepting path and rejects otherwise. UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas having at most one satisfying assignment. The problem is also called USAT. A solving algorithm for UNAMBIGUOUS-SAT is allowed to exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although this problem seems easier, Valiant and Vazirani have shown that if there is a practical (i.e. randomized polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily. MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP. WMSAT is the problem of finding an assignment of minimum weight that satisfy a monotone Boolean formula (i.e. a formula without any negation). Weights of propositional variables are given in the input of the problem. The weight of an assignment is the sum of weights of true variables. That problem is NP-complete (see Th. 1 of ). Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming. Finding a satisfying assignment While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers if an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, i.e. Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ. This property is used in several theorems in complexity theory: NP ⊆ P/poly ⇒ PH = Σ2   (Karp–Lipton theorem) NP ⊆ BPP ⇒ NP = RP P = NP ⇒ FP = FNP Algorithms for solving SAT Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses). Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, planning, and scheduling problems, and so on. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox. Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (or DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. 
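The self-reduction described above under "Finding a satisfying assignment" is straightforward to write down. The sketch below (an illustration, not part of the article's sources) treats the decision procedure as a black-box oracle and makes the stated n + 1 oracle calls; fixing a variable is done here by adding a unit clause rather than by substituting and simplifying, which is equivalent for this purpose. The brute-force checker from the earlier sketch could be plugged in as the oracle.

def extract_assignment(clauses, num_vars, sat_oracle):
    """Turn a SAT decision procedure into a search procedure.

    `sat_oracle(clauses, num_vars)` must answer whether the CNF formula is
    satisfiable. Variables are fixed one at a time; n + 1 oracle calls are
    made in total, as in the description above.
    """
    if not sat_oracle(clauses, num_vars):
        return None                            # formula is unsatisfiable
    assignment = {}
    for var in range(1, num_vars + 1):
        # Try fixing var = TRUE by adding the unit clause (var).
        if sat_oracle(clauses + [[var]], num_vars):
            assignment[var] = True
            clauses = clauses + [[var]]
        else:
            # Every satisfying assignment of the current formula has var = FALSE.
            assignment[var] = False
            clauses = clauses + [[-var]]
    return assignment

# Example, reusing the brute-force checker from the earlier sketch as the oracle:
# oracle = lambda c, n: is_satisfiable(c, n) is not None
# print(extract_assignment([[1, -2], [-1, 2, 3], [-1]], 3, oracle))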
Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. SAT solvers are developed and compared in SAT-solving contests. Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. See also Unsatisfiable core Satisfiability modulo theories Counting SAT Planar SAT Karloff–Zwick algorithm Circuit satisfiability Notes External links SAT Game: try solving a Boolean satisfiability problem yourself The international SAT competition website International Conference on Theory and Applications of Satisfiability Testing Journal on Satisfiability, Boolean Modeling and Computation SAT Live, an aggregate website for research on the satisfiability problem Yearly evaluation of MaxSAT solvers References Additional references by date of publication: A9.1: LO1 – LO7, pp. 259 – 260. This article includes material from a column in the ACM SIGDA e-newsletter by Prof. Karem Sakallah Original text is available here Boolean algebra Electronic design automation Formal methods Logic in computer science NP-complete problems Satisfiability problems
3060584
https://en.wikipedia.org/wiki/Windows%20Live
Windows Live
Windows Live is a discontinued brand name for a set of web services and software products developed by Microsoft as part of its software-as-a-service platform. Chief components under the brand name included web services (all of which were exposed through corresponding web applications), several computer programs that interact with the services, and specialized web services for mobile devices. According to Microsoft, Windows Live "is a way to extend the Windows user experience". As such, Windows Vista's welcome screen provides a link to download Windows Live Messenger or to subscribe to Windows Live OneCare. Also, Windows Mail, Windows Photo Gallery and Windows Movie Maker were not offered with Windows 7 and became an exclusive part of Windows Live. Microsoft announced that Windows 8 would see Windows Live apps included right out-of-the-box, and would include built-in synchronization technologies powered by OneDrive (then SkyDrive). The Windows Live brand was phased out during August 2012, when Microsoft released Windows 8 to manufacturing. Active Windows Live services remained active but were gradually renamed. The "live.com" domain, however, continues to be used in the URLs for Outlook and OneDrive. History Windows Live was first announced on November 1, 2005. In its initial release, several Windows Live properties were rebranded and enhanced from Microsoft's MSN set of products and services. However, MSN still exists alongside Windows Live as a means of delivering content (as opposed to customized content and communications). In May 2012 Microsoft began renaming Windows Live services, partly in anticipation of Windows 8, which integrates many of the Windows Live products and services into the operating system. Services Online services The following services were once part of Windows Live but are still online. Outlook.com and OneDrive still use the "Live" branding in the URL even though the brand name is dropped. Mobile services Windows Phone My Windows Phone is a free online companion service for Windows Phone mobile devices that provides users with a free mobile phone back-up solution by wirelessly synchronizing contacts, calendar appointments, photos, and OneNote notebooks with a password-protected online portal. Users can access and manage their information stored on their Windows Phone devices via the online portal using their Microsoft account, as well as accessing a set of features for remotely ringing, locking, mapping, and erasing their lost phones. This service integrates tightly with other Windows Live services including Outlook.com People and Calendar, and SkyDrive. iOS Microsoft released a Windows Live Messenger application on the iOS App Store, which allows users on mobile devices running the iOS to communicate with their contacts via the Microsoft Messenger service. In addition to the instant messaging functionalities, the application also allows users to view their Messenger social feed, view their friends' Profile's, and integrate with Hotmail and Photos. Feature phone Windows Live also provides customised services specifically created for feature phones. It is offered via three channels — through Client-based (for Windows Mobile and other supported mobile devices such as Nokia phones), Web-based (for WAP or GPRS-enabled mobile web browsers), or SMS-based services. Search services Bing, a replacement of the search engine Live Search, was originally named Windows Live Search (and MSN Search prior to that) and was once part of the Windows Live family of services. 
Windows Live Search once occupied the homepage of Live.com, the domain for all Windows Live services. However, on March 21, 2007, Microsoft decided to separate its search developments from its Windows Live services family, forming part of the Live Search and Ad Platform. As part of this reorganization, the new search brand, Live Search, was consolidated with Microsoft adCenter, a part of Microsoft's Platform and Systems division. However, Microsoft recognised that there was a brand issue as the word "Live" continued to remain in the brand. As an effort to create a new identity for Microsoft's search services, on June 3, 2009, Live Search was officially rebranded as Bing. Developer services Live Connect is a collection of APIs and common controls that allow developers to have a deeper control and offers access to the core Windows Live services and data through open and easily accessible application programming interfaces (APIs). Live Connect is built on standard web technologies such as OAuth 2.0, Representational State Transfer (REST), and JavaScript Object Notation (JSON), and is designed to work with any technology or device. Live Connect unites the previously separate APIs of Windows Live into a single API that is based on industry standards and specifications. Discontinued services Software Microsoft has released several computer programs with "Windows Live" brand, a summary of which is included below. All except Windows Live OneCare are freeware and published in a software suite called Windows Essentials (formerly Windows Live Essentials). Essentials programs are designed to integrate well with each other, within Windows, and with other Windows Live services such as OneDrive and Outlook.com. Windows Live OneCare on the other hand, was a commercial consumers utility marketed with a software as a service licensing model. Windows Live Butterfly The Windows Live Butterfly awards program (formerly the MSN Butterfly program) was a program whose members were given the benefit of new Microsoft software to test before the beta releases went public and they were in direct contact with the program managers for Windows Live products. Microsoft had initiated the Windows Live Butterfly program in order to recognize the contributions made by exemplary beta testers. Prospective 'butterflies' were selected by the Windows Live product team and were nominated for a term of one year, after which they could be renominated. The Windows Live Butterfly program was closed in June 2009. User interface All Windows Live websites sport a common theme. Different themes have been used on the sites with each phase of product release, called "Waves". Each Wave has a set of online services and desktop programs (Windows Essentials). The web services are labelled by each Wave, for example, Hotmail Wave 4. The programs from Windows Essentials are usually called by a year number, for example, Windows Live Messenger 2011. Blue Vapor/Flair/Wave 1 Most original Windows Live applications and services used a visual theme known as Blue Vapor or Flair. Wave 2 With the public beta release of the Windows Live Wave 2 Suite, a new visual theme was released to enable the Aero transparency effect in Windows Vista. Wave 3 Microsoft released a set of new themes for their Windows Live Wave 3 services, which allows users to customize their pages on Windows Live using a set of pre-selected background pictures. Several of these themes are dynamic and change according to the time of day and the current weather condition of the user. 
Wave 4 In addition to the themes provided in Windows Live Wave 3, the release of Windows Live Wave 4 provided additional themes for users to customise their pages on Windows Live, with several dynamic themes that changes according to the time of day and weather conditions at the user's location. Several of these newly added themes are similar to wallpapers originating from Windows 7. In addition, the Windows Live Wave 4 header features a reorganised dynamic navigation menu that displays the number of the user's current online contacts and the number of unread e-mails, as well as an in-built Windows Live Web Messenger service allowing users to connect to the Microsoft Messenger service and Facebook chat service to chat with their online contacts while browsing any Windows Live properties using a web browser. See also In addition to Windows Live, which is mainly aimed at individuals, Microsoft brands other properties as "Live", including: Xbox Live (a multiplayer gaming and content-delivery system for Xbox) Games for Windows – Live (multiplayer gaming service for Microsoft Windows) Office Live. Office Live merged into Windows Live during the Wave 4 update. Microsoft merged Office Live into the Windows Live team in January 2009. References Microsoft websites MSN Web service providers Companies based in Palo Alto, California Computer-related introductions in 2005 Products and services discontinued in 2012
4305107
https://en.wikipedia.org/wiki/Lynx%20%28protocol%29
Lynx (protocol)
Lynx is a file transfer protocol for use with modems, and the name of the program that implements the protocol. Lynx is based on a sliding window protocol with two to sixteen packets per window (or "block"), and 64 bytes of data per packet. It also applies run length encoding (RLE) to the data on a per-block basis to compress suitable data. Lynx was developed by Matthew Thomas, who released it as shareware in 1989. The protocol was supported primarily by the Lynx program, and appears to have seen little or no support in bulletin board systems (BBSs) or online services. Techniques The Lynx program was run from the command line to start transfers; there is no documented example of a third-party terminal emulator supporting the system. The protocol was relatively simple, largely identical to WXMODEM with the exception that it used fixed-size 64-byte packets in windows of two to sixteen packets, rather than one to four 128-byte packets in WXMODEM. Error recovery was handled by reducing the window size rather than the packet size. CRC-32 was used to detect errors. Like TeLink, Lynx included a separate header packet that contained file information: File name (8 character body, 3 character extension) Original time/date stamp (optional) File length (exact length of files is preserved by Lynx) Lynx version number (practically useless) This allowed file transfers to be automated, sending multiple files in a single session by having the receiver extract the names of the files as they were received. The Lynx program allowed up to 99 files to be sent in a batch, although there is no limit in the protocol itself. Lynx tests each block for compressibility before transmitting it. RLE compression is used for this operation. Generally, a block containing text information will be compressed. Archived, ZIPped, or other compressed files will likely not be further condensed by this technique. Note that Lynx always optimizes the transmission of each block: if RLE decreases the block length, the compressed form is sent; otherwise, the uncompressed packet is sent. Lynx required 8-bit clean links and did not include any sort of escaping. It only supports CTS/RTS hardware handshaking, since every byte value, including the characters normally used for software flow control, is considered valid data. See also List of file transfer protocols References Lynx 3.02 program Lynx documentation BBS file transfer protocols Computer-related introductions in 1989
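To make the per-block decision from the Techniques section concrete, the sketch below RLE-encodes a 64-byte block and transmits whichever form is shorter. The byte-level (count, value) encoding is an assumption for illustration; the real Lynx wire format is not documented in this article.

```python
# Minimal sketch of Lynx's per-block choice: send the RLE form only if it is shorter.
def rle_encode(block: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(block):
        run = 1
        while i + run < len(block) and block[i + run] == block[i] and run < 255:
            run += 1
        out += bytes([run, block[i]])   # (count, value) pairs - illustrative format
        i += run
    return bytes(out)

def choose_payload(block: bytes) -> tuple[bool, bytes]:
    """Return (compressed?, payload) for one 64-byte block."""
    encoded = rle_encode(block)
    if len(encoded) < len(block):       # text-like data usually wins
        return True, encoded
    return False, block                 # already-compressed data is sent as-is

print(choose_payload(b"A" * 64))        # long run: RLE form is shorter
print(choose_payload(bytes(range(64)))) # no runs: raw block is shorter
```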
10782256
https://en.wikipedia.org/wiki/Dm-crypt
Dm-crypt
dm-crypt is a transparent disk encryption subsystem in Linux kernel versions 2.6 and later and in DragonFly BSD. It is part of the device mapper (dm) infrastructure, and uses cryptographic routines from the kernel's Crypto API. Unlike its predecessor cryptoloop, dm-crypt was designed to support advanced modes of operation, such as XTS, LRW and ESSIV (see disk encryption theory for further information), in order to avoid watermarking attacks. In addition to that, dm-crypt addresses some reliability problems of cryptoloop. dm-crypt is implemented as a device mapper target and may be stacked on top of other device mapper transformations. It can thus encrypt whole disks (including removable media), partitions, software RAID volumes, logical volumes, as well as files. It appears as a block device, which can be used to back file systems, swap or as an LVM physical volume. Some Linux distributions support the use of dm-crypt on the root file system. These distributions use initrd to prompt the user to enter a passphrase at the console, or insert a smart card prior to the normal boot process. Frontends The dm-crypt device mapper target resides entirely in kernel space, and is only concerned with encryption of the block device; it does not interpret any data itself. It relies on user space front-ends to create and activate encrypted volumes, and manage authentication. At least two frontends are currently available: cryptsetup and cryptmount. cryptsetup The cryptsetup command-line interface, by default, does not write any headers to the encrypted volume, and hence only provides the bare essentials: encryption settings have to be provided every time the disk is mounted (although usually supplied by automated scripts), and only one key can be used per volume; the symmetric encryption key is directly derived from the supplied passphrase. Because it lacks a "salt", using cryptsetup is less secure in this mode than is the case with Linux Unified Key Setup (LUKS). However, the simplicity of cryptsetup makes it useful when combined with third-party software, for example, with smart card authentication. cryptsetup also provides commands to deal with the LUKS on-disk format. This format provides additional features such as key management and key stretching (using PBKDF2), and remembers encrypted volume configuration across reboots. cryptmount The cryptmount interface is an alternative to the "cryptsetup" tool that allows any user to mount and unmount a dm-crypt file system when needed, without needing superuser privileges after the device has been configured by a superuser. Features The fact that disk encryption (volume encryption) software like dm-crypt only deals with transparent encryption of abstract block devices gives it a lot of flexibility. This means that it can be used for encrypting any disk-backed file systems supported by the operating system, as well as swap space; write barriers implemented by file systems are preserved. Encrypted volumes can be stored on disk partitions, logical volumes, whole disks as well as file-backed disk images (through the use of loop devices with the losetup utility). dm-crypt can also be configured to encrypt RAID volumes and LVM physical volumes. dm-crypt can also be configured to provide pre-boot authentication through an initrd, thus encrypting all the data on a computer except the bootloader, the kernel and the initrd image itself.
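The difference between plain-mode cryptsetup and LUKS key handling described above can be sketched in a few lines: plain mode derives the volume key directly from the passphrase with no salt, while LUKS stretches the passphrase with a salted PBKDF2. The parameters below are illustrative and are not cryptsetup's actual defaults.

```python
# Sketch (not cryptsetup's real code): unsalted passphrase hash vs. salted PBKDF2 stretching.
import hashlib
import os

passphrase = b"correct horse battery staple"

# Plain dm-crypt style: same key every time the same passphrase is used,
# so precomputed-dictionary attacks work across volumes.
plain_key = hashlib.sha256(passphrase).digest()

# LUKS style: per-volume random salt plus iterated PBKDF2 key stretching.
salt = os.urandom(16)
luks_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=200_000, dklen=32)

print(plain_key.hex())
print(luks_key.hex())
```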
When using the cipher block chaining mode of operation with predictable initialization vectors, as other disk encryption software does, the disk is vulnerable to watermarking attacks. This means that an attacker is able to detect the presence of specially crafted data on the disk. To address this problem in its predecessors, dm-crypt included provisions for more elaborate, disk encryption-specific modes of operation. Support for ESSIV (encrypted salt-sector initialization vector) was introduced in Linux kernel version 2.6.10, LRW in 2.6.20 and XTS in 2.6.24. The Linux Crypto API includes support for most popular block ciphers and hash functions, which are all usable with dm-crypt. Supported encrypted-volume formats include LUKS volumes, loop-AES and, since Linux kernel 3.13, the TrueCrypt target called "tcw". Compatibility dm-crypt and LUKS encrypted disks can be accessed and used under MS Windows using the now defunct FreeOTFE (formerly DoxBox, LibreCrypt), provided that the filesystem used is supported by Windows (e.g. FAT/FAT32/NTFS). Encrypted ext2 and ext3 filesystems are supported by using Ext2Fsd or the so-called "Ext2 Installable File System for Windows"; FreeOTFE also supports them. Cryptsetup/LUKS and the required infrastructure have also been implemented on the DragonFly BSD operating system. See also Comparison of disk encryption software References External links Official websites All about dm-crypt and LUKS on one page (on archive.org), a page covering dm-crypt/LUKS, starting with theory and ending with many practical examples about its usage. Device mapper Disk encryption Cryptographic software
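As a footnote to the ESSIV discussion above: the ESSIV scheme derives each sector's initialization vector by encrypting the sector number under a hash of the volume key, so IVs are unpredictable without the key. The sketch below shows the idea only, not dm-crypt's actual kernel code, and assumes the third-party 'cryptography' Python package is installed.

```python
# Illustrative ESSIV-style IV: IV(sector) = AES_{SHA256(volume_key)}(sector_number)
import hashlib
import struct

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def essiv_iv(volume_key: bytes, sector: int) -> bytes:
    salt = hashlib.sha256(volume_key).digest()            # hash of the volume key -> AES-256 key
    enc = Cipher(algorithms.AES(salt), modes.ECB(), backend=default_backend()).encryptor()
    sector_block = struct.pack("<Q", sector) + b"\x00" * 8  # 16-byte little-endian block
    return enc.update(sector_block) + enc.finalize()

print(essiv_iv(b"\x01" * 32, 42).hex())  # different sectors yield unrelated-looking IVs
```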
45196175
https://en.wikipedia.org/wiki/Radeon%20Rx%20300%20series
Radeon Rx 300 series
The Radeon R5/R7/R9 300 series is a series of Radeon graphics cards made by Advanced Micro Devices (AMD). All of the GPUs in the series are produced on a 28 nm process and use the Graphics Core Next (GCN) micro-architecture. The series includes the Fiji and Tonga GPU dies based on AMD's GCN 3 or "Volcanic Islands" architecture, which had originally been introduced with the Tonga based (though cut-down) R9 285 slightly earlier. Some of the cards in the series include the Fiji based flagship AMD Radeon R9 Fury X, cut-down Radeon R9 Fury and small form factor Radeon R9 Nano, which are the first GPUs to feature High Bandwidth Memory (HBM) technology, which AMD co-developed in partnership with SK Hynix. HBM is faster and more power efficient than GDDR5 memory, though also more expensive. However, the remaining GPUs in the series outside the Tonga based R9 380 and R9 380X are based on previous generation GPUs with revised power management, and therefore only feature GDDR5 memory (something Tonga does as well). The Radeon 300 series cards, including the R9 390X, were released on June 18, 2015. The flagship device, the Radeon R9 Fury X, was released on June 24, 2015, with the dual-GPU variant, the Radeon Pro Duo, being released on April 26, 2016. Micro-architecture and instruction set The R9 380/X along with the R9 Fury & Nano series were AMD's first cards (after the earlier R9 285) to use the third iteration of their GCN instruction set and micro-architecture. The other cards in the series feature first and second gen iterations of GCN. The table below details which GCN-generation each chip belongs to. Ancillary ASICs Any ancillary ASICs present on the chips are developed independently of the core architecture and have their own version naming schemes. Multi-monitor support The AMD Eyefinity branded on-die display controllers were introduced in September 2009 in the Radeon HD 5000 Series and have been present in all products since. AMD TrueAudio AMD TrueAudio was introduced with the AMD Radeon Rx 200 Series, but can only be found on the dies of GCN 1.1 and later products. Video acceleration AMD's SIP cores for video acceleration, Unified Video Decoder and Video Coding Engine, are found on all GPUs and are supported by AMD Catalyst and by the open-source Radeon graphics driver. Frame limiter A feature new to the lineup allows users to reduce power consumption by not rendering unnecessary frames. It is user configurable. LiquidVR support LiquidVR is a technology that improves the smoothness of virtual reality. The aim is to reduce latency between hardware so that the hardware can keep up with the user's head movement, eliminating motion sickness. A particular focus is on dual-GPU setups where each GPU renders the view for one eye individually. Virtual super resolution support Originally introduced with the previous generation R9 285 and R9 290 series graphics cards, this feature allows users to run games with higher image quality by rendering frames at above native resolution. Each frame is then downsampled to native resolution. This process is an alternative to supersampling, which is not supported by all games. Virtual super resolution is similar to Dynamic Super Resolution, a feature available on competing nVidia graphics cards, but trades flexibility for increased performance. OpenCL (API) OpenCL can accelerate many scientific software packages by a factor of 10 to 100 or more compared with running on a CPU alone.
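To give a sense of how an application taps this kind of GPU acceleration, the sketch below runs a trivial vector addition through OpenCL. It assumes the third-party pyopencl and numpy packages and any OpenCL-capable device (these Radeon cards included); it is a generic OpenCL illustration, not AMD-specific code.

```python
# Minimal OpenCL vector add via pyopencl (assumed installed).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)  # one work-item per element

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a + b))
```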
OpenCL 1.0 to 1.2 are supported for all chips with TeraScale and GCN architectures. OpenCL 2.0 is supported with GCN 2nd generation (GCN 1.2) or higher. For OpenCL 2.1 and 2.2, only driver updates are necessary for OpenCL 2.0 conformant cards. Vulkan (API) Vulkan 1.0 is supported for all GCN architecture cards. Vulkan 1.2 requires GCN 2nd gen or higher with the Adrenalin 20.1 and Linux Mesa 20.0 drivers and newer. Chipset tables Desktop models Mobile models Radeon Feature Matrix Graphics device drivers Proprietary graphics device driver Catalyst AMD Catalyst is being developed for Microsoft Windows and Linux. As of July 2014, other operating systems are not officially supported. This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. AMD Catalyst supports all features advertised for the Radeon brand. Free and open-source graphics device driver radeon The free and open-source drivers are primarily developed on and for Linux, but have been ported to other operating systems as well. Each driver is composed of five parts: Linux kernel component DRM Linux kernel component KMS driver: basically the device driver for the display controller user-space component libDRM user-space component in Mesa 3D a special and distinct 2D graphics device driver for X.Org Server, which is finally about to be replaced by Glamor The free and open-source radeon kernel driver supports most of the features implemented in the Radeon line of GPUs. The radeon kernel driver is not reverse engineered, but based on documentation released by AMD. This driver still requires proprietary microcode to operate DRM functions, and some GPUs may fail to launch the X server if that microcode is not available. Free and open-source graphics device driver amdgpu This new kernel driver is directly supported and developed by AMD. It is available on various Linux distributions, and has been ported to some other operating systems as well. Only GCN GPUs are supported. Proprietary graphics device driver AMDGPU-PRO This new driver by AMD is still undergoing development, but can be used on a few supported Linux distributions already (AMD officially supports Ubuntu, RHEL/CentOS). The driver has been experimentally ported to ArchLinux and other distributions. AMDGPU-PRO is set to replace the previous AMD Catalyst driver and is based on the free and open source amdgpu kernel driver. Pre-GCN GPUs are not supported. See also Graphics Core Next AMD FirePro AMD FireMV AMD FireStream List of AMD graphics processing units References Advanced Micro Devices graphics cards Computer-related introductions in 2015 Graphics processing units Graphics cards
11221472
https://en.wikipedia.org/wiki/List%20of%20video%20games%20based%20on%20comics
List of video games based on comics
The following is a list of video games based on comics. The list does not include games based on Japanese manga, which are separately listed at List of video games based on anime or manga. 0-9 100 Bullets (cancelled; PlayStation 2, Xbox) XIII (2003; Xbox, PlayStation 2, GameCube, Microsoft Windows, OS X) 300: March to Glory (2007; PlayStation Portable) A Alien vs. Predator: Alien vs. Predator (cancelled; Lynx) Alien (1982; Atari 2600, ZX81, DOS) Alien (1984; Commodore 64, ZX Spectrum, Amstrad CPC) Aliens: The Computer Game (1986; Amstrad CPC, Apple II, Commodore 64, ZX Spectrum) Aliens: Alien 2 (1987; MSX) Predator (1987; Acorn Electron, Amiga, Amstrad CPC, Atari ST, BBC Micro, Commodore 64, ZX Spectrum) Aliens (1990; arcade) Predator 2 (1990; Amiga, Amstrad CPC, Atari ST, Commodore 64, DOS, ZX Spectrum) Predator 2 (1991; Game Gear, Genesis, Master System) Alien³ (1992; Amiga, Commodore 64, Game Gear, Genesis, Master System, SNES, NES, Game Boy) Alien vs. Predator (1993; SNES) Alien vs. Predator: The Last of His Clan (1993; Game Boy) Alien 3: The Gun (1993; arcade) Alien vs. Predator (1994; arcade) Alien vs Predator (1994; Jaguar) Aliens: A Comic Book Adventure (1995; DOS) Alien Trilogy (1996; PlayStation, Sega Saturn, MS-DOS) Aliens Online (1998; Microsoft Windows) Aliens versus Predator (1999; Microsoft Windows, Mac OS) Alien: Resurrection (2000; PlayStation) Aliens: Thanatos Encounter (2001; Game Boy Color) Aliens versus Predator 2 (2001; Microsoft Windows, OS X) Aliens versus Predator 2: Primal Hunt (2002; Microsoft Windows) Aliens Versus Predator: Extinction (2003; PlayStation 2, Xbox) Aliens: Unleashed (2003; BREW, J2ME) Alien vs. Predator (2004; mobile phones) Predator (2004; mobile phone) Alien vs. Predator (2004; mobile phone) Predator: Concrete Jungle (2005; PlayStation 2, Xbox) Alien vs. Predator 3D (2005; mobile phone) Aliens: Extermination (2006; arcade) Aliens vs. Predator: Requiem (2007; PlayStation Portable) Alien vs. Predator 2 2D: Requiem (2007; mobile phone) Predator: The Duel (2008; mobile phone) Aliens vs. Predator (2010; Microsoft Windows, PlayStation 3, Xbox 360) Predators (2010; iOS) Aliens: Infestation (2011; Nintendo DS) Aliens: Colonial Marines (2013; Microsoft Windows, PlayStation 3, Xbox 360) Alien Vs. Predator: Evolution (2013; Android, iOS) Aliens: Armageddon (2013; Android, iOS) Alien: Isolation (2014; Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Linux, OS X) Andy Capp: The Game (1987; Commodore 64, Amstrad CPC, ZX Spectrum) Aquaman: Battle for Atlantis (2003; Xbox, GameCube) Armorines: Project S.W.A.R.M. (1999; Nintendo 64, PlayStation, Game Boy Color) Asterix: Asterix (1983; Atari 2600) Asterix and the Magic Cauldron (1986; Commodore 64, Amstrad CPC, ZX Spectrum) Asterix and the Magic Carpet (1987; Commodore 64, DOS, Amiga 500, Atari ST, Amstrad CPC) Asterix: Operation Getafix (1989; Amiga 500, Atari ST, DOS) Asterix (1992; arcade) Asterix (1993; NES, SNES, Master System, Game Boy) Asterix and the Great Rescue (1993; Genesis, Game Gear, Master System) Asterix and the Secret Mission (1993; Master System, Game Gear) Asterix and the Power of the Gods (1995; Genesis) Asterix & Obelix (1995; MS-DOS, SNES, Game Boy, Game Boy Color) Asterix and Obelix Take on Caesar (1999; PlayStation, Microsoft Windows) Asterix: The Gallic War (1999; PlayStation, Microsoft Windows) Asterix: Search for Dogmatix (2000; Game Boy) Asterix: Mega Madness (2001; PlayStation, Microsoft Windows) Asterix & Obelix: Bash Them All! 
(2002; Game Boy Advance) Asterix & Obelix XXL (2004; PlayStation 2, GameCube, Microsoft Windows, Game Boy Advance) Asterix & Obelix XXL 2: Mission: Las Vegum (2005; PlayStation 2, Microsoft Windows) Asterix & Obelix XXL 2: Mission: Wifix (2006; Nintendo DS, PlayStation Portable) Asterix at the Olympic Games (2007; Microsoft Windows, Wii, PlayStation 2, Nintendo DS) The Avengers: Captain America and The Avengers (1991; arcade, NES, SNES, Genesis, Game Gear, Game Boy) Avengers in Galactic Storm (1995; arcade) Marvel: Avengers Alliance (2012; Facebook, iOS, Android, Microsoft Windows, Playdom.com) Marvel Avengers: Battle for Earth (2012; Xbox 360, Wii U) Lego Marvel's Avengers (2016; Microsoft Windows, Nintendo 3DS, OS X, PlayStation 3, PlayStation 4, PlayStation Vita, Wii U, Xbox 360, Xbox One) Avengers (2020, Microosft Windows, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, Stadia) B Batman: Batman (1986; Amstrad CPC, ZX Spectrum, MSX, Amstrad PCW) Batman: The Caped Crusader (1988; Amiga, Amstrad CPC, Atari ST, Commodore 64, Apple II, MS-DOS, ZX Spectrum) Batman (1989; Amiga, Amstrad CPC, Atari ST, Commodore 64, MS-DOS, ZX Spectrum) Batman: The Video Game (1989; NES, Game Boy, Genesis) Batman: The Video Game (1990; TurboGrafx-16) Batman (1990; arcade) Batman: Return of the Joker (1991; Genesis, NES, Game Boy) Batman Returns (1992; Amiga, Atari Lynx, Game Gear, Master System, Genesis, Sega-CD, MS-DOS, NES, SNES) Batman: The Animated Series (1993; Game Boy) The Adventures of Batman & Robin (1994; SNES, Genesis, Game Gear) Batman Forever (1995; Game Boy, Game Gear, Mega Drive, SNES, Microsoft Windows) The Adventures of Batman and Robin Activity Center (1996; Microsoft Windows, Mac OS) Batman Forever: The Arcade Game (1996; arcade, Sega Saturn, Microsoft Windows, PlayStation) Batman & Robin (1997; Game.com, PlayStation) Batman Beyond: Return of the Joker (2000; Nintendo 64, PlayStation, Game Boy Color) Batman: Chaos in Gotham (2001; Game Boy Color) Batman: Gotham City Racer (2001; PlayStation) Batman: Vengeance (2001; Microsoft Windows, PlayStation 2, Xbox, GameCube, Game Boy Advance) Batman: Dark Tomorrow (2003; Xbox, GameCube) Batman: Justice Unbalanced (2003; Microsoft Windows, OS X) Batman: Toxic Chill (2003; PC, Mac OS X) Batman: Rise of Sin Tzu (2003; GameCube, PlayStation 2, Xbox, Game Boy Advance) Batman Begins (2005; Xbox, PlayStation 2, GameCube, Game Boy Advance) Lego Batman: The Videogame (2008; Wii, PlayStation 3, Xbox 360, PlayStation 2, Nintendo DS, Microsoft Windows, OS X) Batman: Arkham Asylum (2009; PlayStation 3, Xbox 360, Microsoft Windows, OS X) Batman: The Brave and the Bold – The Videogame (2011; Wii, Nintendo DS) Batman: Arkham City (2011; PlayStation 3, Xbox 360, Microsoft Windows, Wii U, OS X) Lego Batman 2: DC Super Heroes (2012; Microsoft Windows, PlayStation 3, PlayStation Vita, Nintendo 3DS, Wii, Xbox 360, OS X) The Dark Knight Rises (2012; Android, iOS) Batman: Arkham Origins (2013; PlayStation 3, Xbox 360, Microsoft Windows, Wii U) Batman: Arkham Origins Blackgate (2013; PlayStation Vita, Nintendo 3DS, PlayStation 3, Wii U, Microsoft Windows, Xbox 360) Batman (2013; arcade) Lego Batman 3: Beyond Gotham (2014; Android, iOS, OS X, Xbox 360, Xbox One, Wii U, PlayStation 3, PlayStation 4, PlayStation Vita, Microsoft Windows, Nintendo 3DS) Batman: Arkham Knight (2015; Microsoft Windows, PlayStation 4, Xbox One) Batman: Arkham VR (2016; PlayStation 4, Microsoft Windows) Batman: Return to Arkham (2016; PlayStation 4, Xbox One) Batman: The Telltale Series 
(2016; Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Nintendo Switch) Batman: The Enemy Within (2017; Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Nintendo Switch) Gotham Knights (2021; Microsoft Windows, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S) B.C.: B.C.'s Quest for Tires (1983; Apple II, Atari 8-bit, ColecoVision, Commodore 64, MS-DOS, MSX, ZX Spectrum) B.C. II: Grog's Revenge (1984; Commodore 64, MSX, ColecoVision, ZX Spectrum) Beanotown Racing (2003; Microsoft Windows) Blade:Blade (2000; PlayStation, Game Boy)Blade II (2002; PlayStation 2, Xbox)Blade Kitten (2010; Microsoft Windows, PlayStation 3, Xbox 360) Bone:Bone: Out from Boneville (2005; Microsoft Windows, OS X)Bone: The Great Cow Race (2006; Microsoft Windows) C Cadillacs and Dinosaurs:Cadillacs and Dinosaurs (1993; arcade)Cadillacs and Dinosaurs: The Second Cataclysm (1994; Sega CD, DOS)El Capitán Trueno (1989; Amstrad CPC, DOS, MSX, ZX Spectrum) Captain America:Captain America in: The Doom Tube of Dr. Megalomann (1987; Commodore 64, ZX Spectrum, Amstrad CPC)Spider-Man and Captain America in Doctor Doom's Revenge (1989; Amiga, Atari ST, Commodore 64, ZX Spectrum, Amstrad CPC, DOS)Captain America and The Avengers (1991; arcade, SNES, NES, Dreamcast, Genesis, Game Boy, Game Gear)Captain America: Super Soldier (2011; Nintendo DS, PlayStation 3, Wii, Xbox 360, Nintendo Nintendo 3DS)Catwoman (2004; GameCube, Microsoft Windows, PlayStation 2, Xbox, Game Boy Advance)Chakan: The Forever Man (1992; Genesis, Game Gear, Mega Drive)The Crow: City of Angels (1997; Sega Saturn, PlayStation, Microsoft Windows) DDan Dare: Pilot of the Future (1986; Commodore 64, Amstrad CPC, ZX Spectrum)Danger Girl (2000; PlayStation)Daredevil (2003; Game Boy Advance) The Darkness:The Darkness (2007; PlayStation 3, Xbox 360)The Darkness II (2012; PlayStation 3, Xbox 360, Microsoft Windows, OS X) Darksiders:Darksiders (2010; Xbox 360, Xbox One, PlayStation 3, PlayStation 4, Nintendo Switch, Microsoft Windows, Linux)Darksiders II (2012; Xbox 360, Xbox One, PlayStation 3, PlayStation 4, Nintendo Switch, Microsoft Windows, Linux)Darksiders III (2018; Xbox One, PlayStation 4, Microsoft Windows) DC Universe:Mortal Kombat vs. 
DC Universe (2008; PlayStation 3, Xbox 360)DC Universe Online (2011; PlayStation 3, PlayStation 4, Microsoft Windows, Xbox One, OS X)Injustice: Gods Among Us (2013; PlayStation 3, Xbox 360, Wii U, Microsoft Windows, PlayStation Vita, PlayStation 4, Android, iOS)Injustice 2 (2017; PlayStation 4, Xbox One, Microsoft Windows, Android, iOS)Deadpool (2013; PlayStation 3, Xbox 360, Microsoft Windows, PlayStation 4, Xbox One)Dennis the Menace (1993; SNES, Game Boy) Dick Tracy:Dick Tracy (Amiga, Amstrad CPC, Atari ST, Commodore 64, DOS, ZX Spectrum, Genesis, Master System, NES, Game Boy)Dick Tracy: The Crime-Solving Adventure (1991; Amiga, DOS)Dinosaurs for Hire (1993; Genesis)Duckman: The Graphic Adventures of a Private Dick (1997; Microsoft Windows) EElektra (2005; mobile phone) F Fantastic Four:Questprobe: Featuring Human Torch and the Thing (1985; Amstrad CPC, Apple II, Atari 8-bit, Commodore 64, DOS, ZX Spectrum)Fantastic Four (1997; PlayStation)Fantastic Four (2005; Microsoft Windows, PlayStation 2, Xbox, GameCube, Game Boy Advance)Fantastic 4: Flame On (2005; Game Boy Advance)Fantastic Four (2005; TV)Fantastic Four: Rise of the Silver Surfer (2007; PlayStation 2, PlayStation 3, Xbox 360, Wii, Nintendo DS) Flash:The Flash (1991; Game Boy)The Flash (1993; Master System)Justice League Heroes: The Flash (2006; Game Boy Advance) G Garfield:Garfield (cancelled; Atari 2600)Create With Garfield (1986; Commodore 64)Garfield: Big Fat Hairy Deal (1987; ZX Spectrum, Commodore 64, Amstrad CPC, Amiga, Atari ST)Garfield: Winter's Tail (1989; Atari ST, Amiga, ZX Spectrum, Commodore 64)A Week of Garfield (1989; NES)The Real Ghostbusters (1992; Game Boy)Garfield: Caught in the Act (1995; Genesis, Game Gear, IBM PC)Garfield's Mad About Cats (2000; Microsoft Windows, Mac OS)Garfield: Robocats from Outer Space! (2003; mobile phone)Garfield: The Bubble' (2003; mobile phone)Garfield (2004; PlayStation 2, Microsoft Windows)Garfield: Attack of the Mutant Lasagna (2003; mobile phone)Garfield: The Search for Pooky (2004; Game Boy Advance)Garfield: It's All About Phonics - Kindergarten (2004; Microsoft Windows)Garfield: Saving Arlene (2005; PlayStation 2, Microsoft Windows)Garfield: A Tail of Two Kitties (2006; PlayStation 2, Microsoft Windows, Nintendo DS)Garfield's Day Out (2004; mobile phones)Garfield and His Nine Lives (2006; Game Boy Advance)Garfield's Nightmare (2007; Nintendo DS, Game Boy Advance)Garfield Gets Real (2008; Wii, Nintendo DS)Garfield's Fun Fest (2008; Nintendo DS)Garfield: Living Large! 
(2014; iOS)Generator Rex: Agent of Providence (2011; Nintendo DS, Nintendo 3DS, Wii, PlayStation 3, Xbox 360)Ghost Rider (2007; PlayStation 2, PlayStation Portable, Game Boy Advance) H Heavy Metal:Heavy Metal: F.A.K.K.² (2000; Microsoft Windows, Mac OS, Linux)Heavy Metal: Geomatrix (2001; arcade, Dreamcast) Hellboy:Hellboy: Dogs of the Night (2000; Microsoft Windows)Hellboy: Asylum Seeker (2004; PlayStation)Hellboy: The Science of Evil (2008; PlayStation 3, Xbox 360, PlayStation Portable) Hulk:Questprobe featuring The Hulk (1984; Acorn Electron, Atari 8-bit, BBC Micro, Commodore 64, DOS, Dragon 32, ZX Spectrum)The Incredible Hulk (1994; SNES, Genesis, Master System, Game Gear)The Incredible Hulk: The Pantheon Saga (1996; PlayStation, Sega Saturn)Hulk (2003; Microsoft Windows, PlayStation 2, Xbox, GameCube, Game Boy Advance)The Incredible Hulk (2003; Game Boy Advance)The Incredible Hulk: Ultimate Destruction (2005; PlayStation 2, Xbox, GameCube)The Incredible Hulk (2008; PlayStation 3, Xbox 360, PlayStation 2, Wii, Microsoft Windows, PlayStation Portable, Nintendo DS) I Iron Man:Iron Man and X-O Manowar in Heavy Metal (1996; PlayStation, Sega Saturn, Game Boy, Game Gear, DOS)The Invincible Iron Man (2002; Game Boy Advance)Iron Man (2008; PlayStation 2, PlayStation 3, PlayStation Portable, Wii, Nintendo DS, Xbox 360, Microsoft Windows, mobile phone)Iron Man 2 (2010; PlayStation 3, Wii, Xbox 360, PlayStation Portable, Nintendo DS, iOS, BlackBerry) JJabato (1989; ZX Spectrum, Amstrad CPC, Commodore 64, MSX, Atari ST, DOS, Amiga) Judge Dredd:Judge Dredd (cancelled; arcade)Judge Dredd (1986; Commodore 64, ZX Spectrum)Judge Dredd (1990; Amiga, Atari ST, Commodore 64)Judge Dredd (1995; SNES, Genesis, Game Gear, Game Boy)Judge Dredd (1997; arcade, PlayStation, PlayStation 3, PlayStation Portable, PlayStation Vita)Judge Dredd Pinball (1998; DOS, Microsoft Windows)Judge Dredd: Dredd vs. Death (2003; Microsoft Windows, Xbox, PlayStation 2, GameCube)Judge Dredd vs. Zombies (2011; mobile phone)Judge Dredd: Countdown Sector 106 (2012; Android, iOS, Linux, OS X, Microsoft Windows) Justice League:Justice League Task Force (1995; SNES, Genesis)Justice League: Injustice for All (2002; Game Boy Advance)Justice League: Chronicles (2003; Game Boy Advance)Justice League Heroes (2006; PS2, Xbox, PlayStation Portable, Nintendo DS)Justice League Heroes: The Flash (2006; Game Boy Advance) KKiss: Psycho Circus: The Nightmare Child (2000; Microsoft Windows, Dreamcast) LLargo Winch: Empire Under Threat (2002; Xbox, PlayStation 2, GameCube, Microsoft Windows)Little Gamers (2008; Xbox 360)Little Nemo: The Dream Master (1990; NES) M Marvel ComicsMarvel Super Heroes (1995; arcade, Sega Saturn, PlayStation)Marvel Super Heroes: War of the Gems (1996; SNES)Marvel Super Heroes vs. Street Fighter (1997; arcade, Sega Saturn, PlayStation)Marvel vs. Capcom: Clash of Super Heroes (1998; arcade, Dreamcast, PlayStation, PlayStation 3, Xbox 360)Marvel vs. 
Capcom 2: New Age of Heroes (2000; arcade, Dreamcast, iOS, PlayStation 2, PlayStation 3, Xbox, Xbox 360)Marvel Nemesis: Rise of the Imperfects (2005; PlayStation 2, Xbox, GameCube, PlayStation Portable, Nintendo DS)Marvel: Ultimate Alliance (2006; Xbox, PlayStation 2, Microsoft Windows, Xbox 360, Game Boy Advance, PlayStation Portable, Wii, PlayStation 3)Marvel: Ultimate Alliance 2 (2009; PlayStation 2, Xbox 360, Nintendo DS, PlayStation Portable, Wii, PlayStation 3)Marvel Heroes (2013; Microsoft Windows)Lego Marvel Super Heroes (2013; Android, iOS, Microsoft Windows, Nintendo 3DS, Nintendo DS, OS X, PlayStation 3, PlayStation 4, PlayStation Vita, Wii U, Xbox 360, Xbox One)The Mask (1995; SNES) The Men in Black:Men in Black: The Game (1997; PlayStation, Microsoft Windows)Men in Black: The Series (1998; Game Boy, Game Boy Advance)Men in Black 2: The Series (1998; Game Boy)Men in Black: The Series – Crashdown (2001; PlayStation)Men in Black II: Alien Escape (2002; PlayStation 2, GameCube)MIB: Alien Crisis (2012; PlayStation 3, Wii, Xbox 360)Mort & Phil:Clever & Smart (1987; Amiga, Amstrad CPC, Atari ST, Commodore 64, ZX Spectrum)Clever & Smart 2 (1989; Amstrad CPC, Amstrad PCW, DOS, MSX, ZX Spectrum) N O Over the Hedge:Over the Hedge (2006; GameCube, PlayStation 2, Microsoft Windows, Xbox)Over the Hedge (2006; Nintendo DS)Over the Hedge: Hammy Goes Nuts! (2006; Game Boy Advance, Nintendo DS, PlayStation Portable) P Peanuts:Snoopy and the Red Baron (1984; Atari 2600)Snoopy (1984; Commodore 64)Charlie Brown's ABCs (1984; Commodore 64, Apple II)Snoopy's Silly Sports Spectacular (1988; NES)Snoopy: The Cool Computer Game (1989; Amiga, Amstrad CPC, Atari ST, CDTV, DOS, ZX Spectrum)Snoopy's Magic Show (1990; Game Boy)Snoopy Concert (1995; Super Famicon)Snoopy's Campfire Stories (1996; Microsoft Windows, Mac OS)Snoopy Tennis (2001, Game Boy Color)Where's the Blanket Charlie Brown? (2002; Microsoft Windows, OS X)Snoopy vs. the Red Baron (2006; PlayStation 2, Microsoft Windows, PlayStation Portable)Peanuts: It's the Big Game, Charlie Brown! (2007; OS X, Microsoft Windows)Snoopy Flying Ace (2010; Xbox Live Arcade)The Peanuts Movie: Snoopy's Grand Adventure (2015; Nintendo 3DS, PS4, Wii U, Xbox 360, Xbox One) Penny Arcade:Penny Arcade Adventures: On the Rain-Slick Precipice of Darkness (2008; Xbox 360, Microsoft Windows, Linux, OS X) Popeye:Popeye (1982; arcade, Commodore 64, NES, Magnavox Odyssey², ColecoVision, Atari 2600)Popeye no Eigo Asobi (1983; NES)Popeye (1990; Game Boy)Popeye 2 (1991; Game Boy)Popeye: Ijiwaru Majo Seahag no Maki (1994; SNES)Popeye Saves the Earth (1994; arcade)Popeye: Rush for Spinach (2005; Game Boy Advance) The Punisher:The Punisher (1990 NES)The Punisher (1990; DOS, Atari ST, Amiga) The Punisher: The Ultimate Payback! 
(Game Boy)The Punisher (1993; arcade, Genesis)The Punisher (2004; mobile phone, PlayStation 2, Xbox, Microsoft Windows)The Punisher: No Mercy (2009; PlayStation 3) Q R Ragnarok:Ragnarok Online (2002; Microsoft Windows)Ragnarok Battle Offline (2007; Microsoft Windows)Ragnarok Online 2: The Gate of the World (2007; Microsoft Windows)Ragnarok Online Nintendo DS (2008; Nintendo DS)RoboCop Versus The Terminator (1994; Game Boy, Genesis, SNES, Master System) Rogue Trooper:Rogue Trooper (1986; Commodore 64, Amstrad CPC, ZX Spectrum)Rogue Trooper (1990; Amiga, Atari ST)Rogue Trooper (2006; PC, PlayStation 2, Xbox, Wii) S Sam & Max:Sam & Max: Freelance Police (cancelled; Microsoft Windows)Sam & Max Hit the Road (1993; DOS, Mac OS, Microsoft Windows)Sam & Max Save the World (2006; Microsoft Windows, Wii, Xbox 360)Sam & Max Beyond Time and Space (2007; Microsoft Windows, Xbox 360, Wii, OS X, PlayStation 3, iOS)Sam & Max: The Devil's Playhouse (2010; Microsoft Windows, Xbox 360, Wii, OS X, PlayStation 3, iOS) Scott Pilgrim vs. the World: The Game (2010; PlayStation 3, Xbox 360) Scud: The Disposable Assassin:Scud: The Disposable Assassin (1997; Sega Saturn)Scud: Industrial Revolution (1997; Microsoft Windows) Shadow Man:Shadow Man (1999; Nintendo 64, PlayStation, Dreamcast, Microsoft Windows, Mac OS)Shadow Man: 2econd Coming (2002; PlayStation 2) Silver Surfer:Silver Surfer (1990; NES)Fantastic Four: Rise of the Silver Surfer (2007; PlayStation 2, PlayStation 3, Xbox 360, Wii, Nintendo DS) Smurfs:Smurfette's Birthday (cancelled; Atari 2600)Smurf: Rescue in Gargamel's Castle (1982; Atari 2600, ColecoVision)The Smurfs Save the Day (1983; Atari 2600)Smurf: Paint 'n' Play Workshop (1984; ColecoVision)The Smurfs (1993; Game Boy, Game Gear, NES, Master System)The Smurfs (1994; DOS, Genesis, Sega-CD, SNES, Microsoft Windows)The Smurfs: Travel the World (1995; SNES, Genesis, Master System, Game Boy)The Smurfs' Nightmare (1997; Game Boy, Game Boy Color)The Smurfs (1999; PlayStation)Smurf Racer (2001; PlayStation, Microsoft Windows)The Revenge of the Smurfs (2002, Game Boy Advance) Spawn:Todd McFarlane's Spawn: The Video Game (1995; SNES)Spawn: The Eternal (1997; PlayStation)Spawn (1999; Game Boy Color)Spawn: In the Demon's Hand (2000; arcade, Dreamcast)Spawn: Armageddon (2003; GameCube, PlayStation 2, Xbox) Spider-Man:The Amazing Spider-Man (1980; arcade)Spider-Man (1982; Atari 2600)Questprobe featuring Spider-Man (1984; Apple II, Atari 8-bit, Atari ST, BBC Micro, Commodore Plus/4, Commodore 64, DOS, Electron, ZX Spectrum)Spider-Man and Captain America in Doctor Doom's Revenge (1989; Amiga, Atari ST, Commodore 64, ZX Spectrum, Amstrad CPC, DOS)The Amazing Spider-Man (1990; Game Boy)The Amazing Spider-Man (1990; Amiga, DOS, Commodore 64, Atari ST)Spider-Man: The Video Game (1991; arcade)The Amazing Spider-Man vs. 
The Kingpin (1991; Genesis, Master System, Game Gear, Sega CD)Spider-Man and the X-Men in Arcade's Revenge (1992; SNES, Genesis, Game Gear, Game Boy)Spider-Man: Return of the Sinister Six (1992; NES, Master System, Game Gear)The Amazing Spider-Man 2 (1992; Game Boy)The Amazing Spider-Man 3: Invasion of the Spider-Slayers (1993; Game Boy)Spider-Man (1994; SNES, Genesis)Spider-Man and Venom: Maximum Carnage (1994; Genesis, SNES)The Amazing Spider-Man: Lethal Foes (1995; SNES)Venom/Spider-Man: Separation Anxiety (1995; SNES, Genesis, Microsoft Windows)Spider-Man Animated Series (1995; SNES, Genesis)Spider-Man Cartoon Maker (1995; Microsoft Windows)Spider-Man: Web of Fire (1996; Sega 32X)Spider-Man (2000; Dreamcast, OS X, Nintendo 64, PlayStation, Microsoft Windows, Game Boy Color)Spider-Man 2: The Sinister Six (2001; Game Boy Color)Spider-Man 2: Enter Electro (2001; PlayStation)Spider-Man: Mysterio's Menace (2001; Game Boy Advance)Spider-Man (2002; PlayStation 2, GameCube, Xbox, Game Boy Advance, Microsoft Windows)Spider-Man 2 (2004; GameCube, Microsoft Windows, PlayStation 2, Xbox, Game Boy Advance, N-Gage, OS X, Nintendo DS, PlayStation Portable)Ultimate Spider-Man (2005; Nintendo DS, GameCube, PlayStation 2, Xbox, Microsoft Windows, Game Boy Advance)Spider-Man: Battle for New York (2006; Nintendo DS, Game Boy Advance)Spider-Man: Friend or Foe (2007; Wii, Xbox 360, PlayStation 2, Microsoft Windows, Nintendo DS, PlayStation Portable)Spider-Man 3 (2007; Game Boy Advance, Microsoft Windows, Xbox 360, PlayStation 2, PlayStation Portable, Nintendo Nintendo DS, Wii, PlayStation 3)Spider-Man: Web of Shadows (2008; Microsoft Windows, Nintendo DS, PlayStation 2, PlayStation 3, PlayStation Portable, Wii, Xbox 360)Spider-Man: Shattered Dimensions (2010; Xbox 360, PlayStation 3, Wii, Nintendo Nintendo DS, Microsoft Windows)Spider-Man: Edge of Time (2011; Nintendo 3DS, Nintendo DS, PlayStation 3, Wii, Xbox 360)The Amazing Spider-Man (2012; Nintendo DS, Nintendo 3DS, PlayStation 3, Wii, Xbox 360, Android, iOS, Microsoft Windows, Wii U, BlackBerry, PlayStation Vita, Windows Phone, mobile phone)The Amazing Spider-Man 2 (2014; Android, iOS, Microsoft Windows, Nintendo 3DS, PlayStation 3, PlayStation 4, Wii U, Xbox 360, Xbox One)Spider-Man (2018; PlayStation 4)Spider-Man: Miles Morales (2020; PlayStation 4, PlayStation 5)Spider-Man Remastered (2020; PlayStation 5) Spy vs. Spy:Spy vs. Spy (1984; Amiga, Amstrad CPC, Apple II, Atari 8-bit, Atari ST, BBC Micro, Commodore 16, Plus/4, Commodore 64, Electron, NES, PC-88, Sharp X1, Master System, ZX Spectrum)Spy vs. Spy II: The Island Caper (1985; Amiga, Amstrad CPC, Apple II, Atari 8-bit, Atari ST, Commodore 64, MSX, NES, ZX Spectrum)Spy vs. Spy III: Arctic Antics (1986; Amiga, Amstrad CPC, Apple II, Atari 8-bit, Atari ST, Commodore 64, DOS, ZX Spectrum)Spy vs. Spy (2005; Xbox, PlayStation 2) Suicide Squad:Suicide Squad: Kill the Justice League (2022; Microsoft Windows, PlayStation 5, Xbox Series X/S) Superman:Superman (1978; Atari 2600)Superman: The Game (1985; Commodore 64, ZX Spectrum, Commodore 16, Commodore Plus/4, BBC Micro, Acorn Electron)Superman (1987; NES)Superman (1988; arcade)Superman: The Man of Steel (1989; Acorn Electron, Amiga, Amstrad CPC, Apple II, Atari ST, BBC Micro, Commodore 64, MSX, DOS, ZX Spectrum)Superman (1992; Genesis)Superman: The Man of Steel (1993; Game Gear, Master System)The Death and Return of Superman (1994; SNES, Genesis)Superman: The Mysterious Mr. 
Mist (1996; Mac OS, Microsoft Windows)Superman (1997; Game Boy)Superman: The New Adventures (1999; Nintendo 64)Superman: The Mysterious Mr. Mist (2000; Microsoft Windows)Superman: Shadow of Apokolips (2002; PlayStation 2, GameCube)Superman: The Man of Steel (2002; Xbox)Superman: Countdown to Apokolips (2003; Game Boy Advance)Superman Returns: Fortress of Solitude (2006; Game Boy Advance)Superman Returns (2006; Xbox 360, PlayStation 2, Xbox, Nintendo DS)Superman: The Greatest Hero (2006; V.Smile)Superman (2011; iOS) Swamp Thing (1992; Game Boy, NES) T Teen Titans:Teen Titans (2005; Game Boy Advance)Teen Titans (2006; PlayStation 2, Xbox, GameCube) Teenage Mutant Ninja Turtles:Teenage Mutant Ninja Turtles (1989; NES, Amiga, Amstrad CPC, Atari ST, Commodore 64, DOS, MSX, ZX Spectrum, PlayChoice-10)Teenage Mutant Ninja Turtles (1989; arcade, NES, Xbox Live Arcade, Amiga, Amstrad CPC, Atari ST, Commodore 64, DOS, ZX Spectrum)Teenage Mutant Ninja Turtles: Fall of the Foot Clan (1990; Game Boy)Teenage Mutant Ninja Turtles: Turtles in Time (1991; arcade, SNES)Teenage Mutant Ninja Turtles II: Back from the Sewers (1991; Game Boy)Teenage Mutant Ninja Turtles III: The Manhattan Project (1991; NES)Teenage Mutant Ninja Turtles: Manhattan Missions (1991; DOS)Teenage Mutant Ninja Turtles: The Hyperstone Heist (1992; Genesis)Teenage Mutant Ninja Turtles III: Radical Rescue (1993; Game Boy)Teenage Mutant Ninja Turtles: Tournament Fighters (1993; Genesis, SNES, NES)Teenage Mutant Ninja Turtles (2003; Game Boy Advance)Teenage Mutant Ninja Turtles (2003; PlayStation 2, Xbox, GameCube, Microsoft Windows)Teenage Mutant Ninja Turtles 2: Battle Nexus (2004; Microsoft Windows, GameCube, PlayStation 2, Xbox, Game Boy Advance)Teenage Mutant Ninja Turtles 3: Mutant Nightmare (2005; GameCube, PlayStation 2, Xbox, Nintendo DS)Teenage Mutant Ninja Turtles: Mutant Melee (2005; Microsoft Windows, GameCube, PlayStation 2, Xbox)TMNT (2007; Xbox 360, Wii, PlayStation 2, PlayStation Portable, Nintendo DS, Game Boy Advance, Microsoft Windows, GameCube)Teenage Mutant Ninja Turtles: Arcade Attack (2009; Nintendo DS)Teenage Mutant Ninja Turtles: Turtles in Time Re-Shelled (2009; Xbox 360,PlayStation 3)Teenage Mutant Ninja Turtles: Smash-Up (2009; Wii, PlayStation 2)Teenage Mutant Ninja Turtles: Out of the Shadows (2013; PlayStation 3, Xbox 360, Microsoft Windows)Teenage Mutant Ninja Turtles (2013; Nintendo 3DS, Wii, Xbox 360)Teenage Mutant Ninja Turtles: Danger of the Ooze (2014; Nintendo 3DS, PlayStation 3, Xbox 360)Teenage Mutant Ninja Turtles: Training Lair (2014; Xbox 360)Teenage Mutant Ninja Turtles: Mutants in Manhattan (2016; Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One)Teenage Mutant Ninja Turtles: Shredder's Revenge (2022; Microsoft Windows, Nintendo Switch)The Tick (1994; SNES, Genesis)Thief of Thieves: Season One (2018; Microsoft Windows, Xbox One)Timecop (1995; SNES)The Adventures of Tintin:Tintin on the Moon (1989; DOS, Commodore 64, Amiga, Atari ST, Amstrad CPC, ZX Spectrum)Tintin in Tibet (1995; Game Boy, Game Boy Color, Game Gear, Genesis, SNES, Microsoft Windows, DOS)Prisoners of the Sun (1996; Microsoft Windows, DOS, SNES, Game Boy, Game Boy Color)Tintin: Destination Adventure (2001; Microsoft Windows, PlayStation)The Adventures of Tintin: The Secret of the Unicorn (2011; Nintendo 3DS, PlayStation 3, Wii, Microsoft Windows, Xbox 360) Turok:Turok: Dinosaur Hunter (1997; Nintendo 64, Microsoft Windows)Turok 2: Seeds of Evil (1998; Nintendo 64, Game Boy, Microsoft Windows)Turok: Battle of the 
Bionosaurs (1998; Game Boy)Turok: Rage Wars (1999; Nintendo 64, Game Boy)Turok 3: Shadow of Oblivion (1999; Nintendo 64, Game Boy)Turok: Evolution (2002; PlayStation 2, Xbox, GameCube, Microsoft Windows, Game Boy Advance)Turok (2008; PlayStation 3, Xbox 360, Microsoft Windows) U Usagi Yojimbo:Samurai Warrior: The Battles of Usagi Yojimbo (1988; Commodore 64, ZX Spectrum, Amstrad CPC) V WWatchmen: The End Is Nigh (2009; PlayStation 3, Microsoft Windows, Xbox 360)W.I.T.C.H. (2005; Game Boy Advance)The Wolf Among Us (2013; OS X, PlayStation 3, PlayStation 4, PlayStation Vita, Microsoft Windows, Xbox 360, Xbox One, Android, iOS) Wolverine (X-Men):Wolverine (1991; NES)Wolverine: Adamantium Rage (1994; SNES, Genesis)X-Men: Wolverine's Rage (2001; Game Boy Color)X2: Wolverine's Revenge (2003; GameCube, OS X, PlayStation 2, Microsoft Windows, Xbox, Game Boy Advance)X-Men Origins: Wolverine (2009; Microsoft Windows, Nintendo DS, PlayStation 2, PlayStation 3, PlayStation Portable, Wii, Xbox 360) X X-Men:X-Men: Madness in Murderworld (1989; DOS, Commodore 64, Amiga)The Uncanny X-Men (1989; NES)X-Men II: The Fall of the Mutants (1991; DOS)Spider-Man and the X-Men in Arcade's Revenge (1992; SNES, Genesis, Game Gear, Game Boy)X-Men (1992; Arcade, PlayStation 3, Xbox 360, iOS, Android)X-Men (1993; Genesis)X-Men: Mutant Apocalypse (1994; SNES)X-Men: Children of the Atom (1994; arcade, Sega Sega Saturn, DOS, PlayStation)X-Men (1994; Game Gear)X-Men: Gamesmaster's Legacy (1995; Game Gear)X-Men 2: Clone Wars (1995; Genesis)X-Men vs. Street Fighter (1996; arcade, Sega Saturn, PlayStation)X-Men: Mojo World (1996; Game Gear, Master System)X-Men: The Ravages of Apocalypse (1997; Microsoft Windows, MS-DOS, Linux, Mac OS)X-Men: Mutant Wars (2000; Game Boy Color)X-Men: Mutant Academy (2000; PlayStation, Game Boy)X-Men: Mutant Academy 2 (2001; PlayStation)X-Men: Reign of Apocalypse (2001; Game Boy Advance)X-Men: Next Dimension (2002; Xbox, PlayStation 2, GameCube)X-Men Legends (2004; PlayStation 2, Xbox, GameCube, N-Gage)X-Men Legends II: Rise of Apocalypse (2005; GameCube, Microsoft Windows, N-Gage, PlayStation 2, PlayStation Portable, Xbox, mobile phone)X-Men: The Official Game (2006; Game Boy Advance, Microsoft Windows, Nintendo DS, GameCube, PlayStation 2, Xbox, Xbox 360)X-Men: Destiny (2011; PlayStation 3, Xbox360, Nintendo Wii, Nintendo DS) YYoung Justice: Legacy'' (2013; Nintendo 3DS, PlayStation 3, Xbox 360, Wii, Microsoft Windows) Z Zipi y Zape (1989; ZX Spectrum, Amstrad CPC, MSX) See also List of video games based on DC Comics List of video games based on Marvel Comics List of video games based on anime or manga List of video games based on cartoons References List Video games based on comics, List of List
5783
https://en.wikipedia.org/wiki/Computer%20program
Computer program
In imperative programming, a computer program is a sequence of instructions in a programming language that a computer can execute or interpret. In declarative programming, a computer program is a set of instructions. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Machine language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. The programming language Java compiles into an intermediate form which is then executed by a Java interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. History Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically. Analytical Engine In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a "store" which was memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other supplied the input variables. However, after more than £17,000 of the British government's money had been spent, the thousands of cogged wheels and gears never fully worked together. Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program. Universal Turing machine In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing complete computing machine. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. Relay-based computers The Z3 computer, invented by Konrad Zuse (1941), was a digital and programmable computer. Zuse became aware of the "Babbage Engine" in 1939 while attempting to file a German patent. The Analytical Engine was base-10 — which was easy to comprehend. Zuse recognized that a binary machine was easy to construct. Telephone relays are two-position switches — open or closed.
The Z3 had approximately 2,600 relays: 1,800 for the memory, 600 for the arithmetic, and 200 for the punch tape reader, keyboard, and display. The circuits provided a floating-point, nine-instruction computer. Programming the Z3 was through a specially designed keyboard and punch tape. Manual input was through a calculator-style keyboard that accepted decimal numbers. The machine converted the input to binary and passed them through a series of calculating modules. The result was converted back to decimal and displayed on an output panel. Simultaneously developed was its successor — the Z4 computer. (An air-raid on April 6, 1945 destroyed the Z3.) In 1950, the Z4 was placed into production at the Federal Technical Institute in Zurich. The Harvard Mark I, invented by IBM (1944), also was a digital and programmable computer. The computer supported 23 signed integer digits and had seven major units: One unit had 60 sets of dial switches that were used to set application constants. One unit was used to direct the machine's operations. One unit performed addition and subtraction and had 72 registers used to store the intermediate results. One unit was used to perform multiplication and division. One unit was used to calculate trigonometric functions using interpolation. One unit was used to calculate logarithmic functions using interpolation. One unit was used to direct the machine's output medium — either typewriter printer or punched card printer. The Harvard Mark I had 3,304 relays and 530 miles of wire. Two punched tape readers provided input. One reader inputted the instructions. A code book was written by Howard H. Aiken that indexed the available algorithms. From this book, a programmer punched the coded instructions onto a tape. The other reader inputted the data to be processed. The Harvard Mark I was succeeded by IBM's two other relay-based computers: the Harvard Mark II. the Selective Sequence Electronic Calculator (SSEC). The SSEC operated until August 1952. ENIAC The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied , and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels using heavy black cables. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns. Stored-program computers Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. 
On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the constructions of the EDVAC and EDSAC computers in 1949. In 1961, the Burroughs B5000 was built specifically to be programmed in the Algol 60 language. The hardware featured circuits to ease the compile phase. In 1964, the IBM System/360 was a line of six computers each having the same instruction set architecture. The Model 30 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 75 was the most premium. Each System/360 model featured multiprogramming — having multiple processes in memory at once. When one process was waiting for input/output, another could compute. IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile. Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape or punched cards. After the medium was loaded, the starting address was set via switches, and the execute button was pressed. Very Large Scale Integration A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip. Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips. Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. 
In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor. The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates. Sac State 8008 The Intel 4004 was a 4-bit microprocessor and functioned to run the Busicom calculator. Five months after its release, Intel released the Intel 8008 — an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008 — the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex, 3-megabyte, hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set. x86 series In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are: Memory instructions to set and access numbers and strings in random-access memory. Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers. Floating point ALU instructions to perform the primary arithmetic operations on real numbers. Call stack instructions to push and pop words needed to allocate memory and interface with functions. Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data. Programming environment VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language. Programming languages Computer programming (also known as software development and software engineering) is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint. 
A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. Hello world program The "Hello, World!" program is a few instructions used to illustrate the basic output rules of a programming language. A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax. Keywords are reserved words to form declarations and statements. Symbols are characters to form operations, assignments, control flow, and delimiters. Identifiers are words created by programmers to form constants, variable names, structure names, and function names. Syntax Rules are defined in the Backus–Naur form. Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlining problem. An algorithm is a sequence of simple instructions that solve a problem. Generation of programming languages The evolution of programming languages began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming languages. The first generation of programming languages is machine language. Machine language requires the programmer to enter instructions using instruction numbers called machine code. For example, the ADD operation on the PDP-11 has instruction number 24576. The second generation of programming languages is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory. The basic structure of an assembly language statement is label, operation, operand, and comment. Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses. Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers. Operands tell the assembler which data the operation will process. Comments allow the programmer to articulate a narrative because the instructions alone are vague. The key characteristic of an assembly language program is it forms a one-to-one mapping to its corresponding machine language target. The third generation of programming languages use compilers and interpreters to execute computer programs. Unlike assembly language, these languages generate many machine language instructions for each symbolic statement. The distinguishing feature of third generation languages is their independence from a particular hardware. They began with the languages of Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964). In 1973, C emerged as a high-level language that produced efficient machine language instructions. Today, an entire paradigm of languages fill the imperative third generation spectrum. 
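Because the article has not yet shown one, here is a minimal "Hello, World!" program written in C, one of the third-generation languages discussed below. The file name hello.c is arbitrary, and the program is a sketch rather than a prescribed form.

/* hello.c -- prints a single line of text. */
#include <stdio.h>                     /* declares the printf() library function */

int main( void )                       /* 'int' and 'void' are keywords; 'main' is an identifier */
{
    printf( "Hello, World!\n" );       /* the string literal is the program's only output */
    return 0;                          /* report success to the operating system */
}

Compiling and running it prints the greeting and nothing else, which is why such a program is a convenient first test of a language's keywords, symbols, identifiers, and toolchain.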
The fourth generation of programming languages emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple instruction can generate output records without having to understand how the data is retrieved. The fifth generation of programming languages uses visual tools to generate source code. An integrated development environment allows the programmer to draw the graphical user interface (GUI) on the screen using a mouse. The programmer writes event handling functions that will be executed when fields are hovered over, entered into, or exited from. The development system couples the functions to the GUI and outputs a computer program. Imperative languages Imperative languages specify a sequential algorithm using declarations, expressions, and statements: A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer; An expression yields a value – for example: 2 + 2 yields 4 A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something(); Fortran FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It first compiled correctly in 1958. It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions and statements, it supported: arrays. subroutines. "do" loops. It succeeded because: programming and debugging costs were below computer running costs. it was supported by IBM. applications at the time were scientific. However, non-IBM vendors also wrote Fortran compilers, but with a syntax that IBM's compiler would likely reject. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard and remained so until 1991. Fortran 90 supports: records. pointers to arrays. COBOL COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols didn't need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal. COBOL's development was tightly controlled, so dialects didn't emerge to require ANSI standards. As a consequence, it wasn't changed for 15 years, until 1974. The 1990s version did make consequential changes, such as adding object-oriented programming. Algol ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was the first language to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like: block structure, where variables were local to their block. arrays with variable bounds. "for" loops. functions. recursion. 
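Block structure, "for" loops, functions, and recursion survive in Algol's descendants. The short sketch below uses C (covered later in this article) rather than Algol 60 itself, so the syntax is not Algol's; the factorial example and its names are invented for illustration.

/* blocks.c -- block scope, a counted loop, and recursion, in C rather than Algol. */
#include <stdio.h>

/* A recursive function: factorial() calls itself on a smaller argument. */
static long factorial( long n )
{
    if ( n <= 1 )
        return 1;
    return n * factorial( n - 1 );
}

int main( void )
{
    for ( int i = 1; i <= 5; i++ )     /* a counted "for" loop */
    {
        long f = factorial( i );       /* 'f' is local to this block */
        printf( "%d! = %ld\n", i, f );
    }
    return 0;
}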
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch there's C, C++ and Java. Basic BASIC (1964) stands for "Beginner's All Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn. If a student didn't go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language. Basic pioneered the interactive session. It offered operating system commands within its environment: The 'new' command created an empty slate. Statements evaluated immediately. Statements could be programmed by preceding them with a line number. The 'list' command displayed the program. The 'run' command executed the program. However, the Basic syntax was too simple for large programs. Recent dialects have added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface. C C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system. C is a relatively small language -- making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like: inline assembler. arithmetic on pointers. pointers to functions. bit operations. freely combining complex operators. C allows the programmer to control which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function. The global and static data region is located just above the program region. (The program region is technically called the text region. It's where machine instructions are stored.) The global and static data region is technically two regions. One region is called the initialized data segment where variables declared with default values are stored. The other region is called the block started by segment where variables declared without default values are stored. Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process. The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code. On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parenthesis of function definitions. They provide an interface to the function. Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){ static int counter = 0; counter++; return counter;} The stack region is a contiguous block of memory located near the top memory address. 
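Before the stack and heap regions are described in more detail below, the storage classes introduced so far can be gathered into one short example. The variable names are invented, and the comments only name the region each variable lands in; exact addresses and sizes are platform-specific.

/* storage.c -- where a C implementation typically places its variables. */
#include <stdio.h>
#include <stdlib.h>

int global_count = 5;            /* global and static data region (initialized segment)   */
static int file_static;          /* global and static data region (uninitialized segment) */

int increment_counter( void )
{
    static int counter = 0;      /* static local: retains its value between calls */
    counter++;
    return counter;
}

int main( void )
{
    int automatic = 7;                           /* stack: lost when main() returns */
    int *heap_value = malloc( sizeof( int ) );   /* heap: lives until free() is called */

    if ( heap_value == NULL )
        return 1;
    *heap_value = 9;
    printf( "%d %d %d %d %d\n",
            global_count, file_static, increment_counter(), automatic, *heap_value );
    free( heap_value );
    return 0;
}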
Variables placed in the stack, ironically, are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction. Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block. The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. Out of memory errors occur when the heap pointer and the stack pointer meet. C provides the malloc() library function to allocate heap memory. Populating the heap with data is an additional copy function. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would need to be passed to the function via the stack. C++ In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Concrete datatypes have their representation as part of their name. Abstract datatypes are structures of concrete datatypes — with a new name assigned. For example, a list of integers could be called integer_list. In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class, it's called an object. Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects. Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other persons don't have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s. C++ (1985) was originally called "C with Classes." It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula. An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application: // grade.h // ------- // Used to allow multiple source files to include // this header file without duplication errors. 
// ---------------------------------------------- #ifndef GRADE_H #define GRADE_H class GRADE { public: // This is the constructor operation. // ---------------------------------- GRADE ( const char letter ); // This is a class variable. // ------------------------- char letter; // This is a member operation. // --------------------------- int grade_numeric( const char letter ); // This is a class variable. // ------------------------- int numeric; }; #endif A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement. A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application: // grade.cpp // --------- #include "grade.h" GRADE::GRADE( const char letter ) { // Reference the object using the keyword 'this'. // ---------------------------------------------- this->letter = letter; // This is Temporal Cohesion // ------------------------- this->numeric = grade_numeric( letter ); } int GRADE::grade_numeric( const char letter ) { if ( ( letter == 'A' || letter == 'a' ) ) return 4; else if ( ( letter == 'B' || letter == 'b' ) ) return 3; else if ( ( letter == 'C' || letter == 'c' ) ) return 2; else if ( ( letter == 'D' || letter == 'd' ) ) return 1; else if ( ( letter == 'F' || letter == 'f' ) ) return 0; else return -1; } Here is a C++ header file for the PERSON class in a simple school application: // person.h // -------- #ifndef PERSON_H #define PERSON_H class PERSON { public: PERSON ( const char *name ); const char *name; }; #endif Here is a C++ source code for the PERSON class in a simple school application: // person.cpp // ---------- #include "person.h" PERSON::PERSON ( const char *name ) { this->name = name; } Here is a C++ header file for the STUDENT class in a simple school application: // student.h // --------- #ifndef STUDENT_H #define STUDENT_H #include "person.h" #include "grade.h" // A STUDENT is a subset of PERSON. // -------------------------------- class STUDENT : public PERSON{ public: STUDENT ( const char *name ); GRADE *grade; }; #endif Here is a C++ source code for the STUDENT class in a simple school application: // student.cpp // ----------- #include "student.h" #include "person.h" STUDENT::STUDENT ( const char *name ): // Execute the constructor of the PERSON superclass. // ------------------------------------------------- PERSON( name ) { // Nothing else to do. // ------------------- } Here is a driver program for demonstration: // student_dvr.cpp // --------------- #include <iostream> #include "student.h" int main( void ) { STUDENT *student = new STUDENT( "The Student" ); student->grade = new GRADE( 'a' ); std::cout // Notice student inherits PERSON's name << student->name << ": Numeric grade = " << student->grade->numeric << "\n"; return 0; } Here is a makefile to compile everything: # makefile # -------- all: student_dvr clean: rm student_dvr *.o student_dvr: student_dvr.cpp grade.o student.o person.o c++ student_dvr.cpp grade.o student.o person.o -o student_dvr grade.o: grade.cpp grade.h c++ -c grade.cpp student.o: student.cpp student.h c++ -c student.cpp person.o: person.cpp person.h c++ -c person.cpp Declarative languages Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. 
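The criticism can be made concrete with a few lines of C. The function below looks like a pure calculation but also assigns to a non-local variable, so two calls with the same argument leave the program in a different state each time; this hidden interaction is exactly what declarative languages try to rule out. The names add_tax() and calls_so_far are invented for this sketch.

/* side_effect.c -- an assignment to a non-local variable hiding inside a "calculation". */
#include <stdio.h>

int calls_so_far = 0;                 /* non-local (global) variable */

int add_tax( int price )
{
    calls_so_far = calls_so_far + 1;  /* side effect: nothing in the signature reveals it */
    return price + price / 10;
}

int main( void )
{
    printf( "%d\n", add_tax( 100 ) );  /* prints 110 */
    printf( "%d\n", add_tax( 100 ) );  /* prints 110 again, yet the global has silently changed */
    printf( "add_tax() was called %d times\n", calls_so_far );
    return 0;
}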
Two broad categories of declarative languages are functional languages and logical languages. The principle behind a functional language is to use lambda calculus as a guide for a well defined semantic. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function: times_10(x) = 10 * x The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as: times_10(2) = 20 A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack. Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, functional languages force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what. A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet: function max(a,b){ /* code omitted */} function min(a,b){ /* code omitted */} function difference_between_largest_and_smallest(a,b,c) { return max(a,max(b,c)) - min(a, min(b,c)); } The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing: put(difference_between_largest_and_smallest(10,4,7)); will output 6. Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects have made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages. Lisp Lisp (1958) stands for "LISt Processor." It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends nicely for recursive functions. The syntax to build a tree is to enclose the space-separated elements within parenthesis. The following is a list of three elements. The first two elements are themselves lists of two elements: ((A B) (HELLO WORLD) 94) Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x: cons(head(x), tail(x)) One drawback of Lisp is when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parenthesis match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns the datatypes at runtime. This may lead to programming errors not being detected early in the development process. Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. 
However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible. ML ML (1973) stands for "Meta Language." ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer: fun times_10(n : int) : int = 10 * n; ML is not parenthesis-eccentric like Lisp. The following is an application of times_10(): times_10 2 It returns "20 : int". (Both the results and the datatype are returned.) Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Prolog Prolog (1972) stands for "PROgramming in LOgic." It was designed to process natural languages. The building blocks of a Prolog program are objects and their relationships to other objects. Objects are built by stating true facts about them. Set theory facts are formed by assigning objects to sets. The syntax is setName(object). Cat is an animal. animal(cat). Mouse is an animal. animal(mouse). Tom is a cat. cat(tom). Jerry is a mouse. mouse(jerry). Adjective facts are formed using adjective(object). Cat is big. big(cat). Mouse is small. small(mouse). Relationships are formed using multiple items inside the parentheses. In our example we have verb(object,object) and verb(adjective,adjective). Mouse eats cheese. eat(mouse,cheese). Big animals eat small animals. eat(big,small). After all the facts and relationships are entered, then a question can be asked: Will Tom eat Jerry? ?- eat(tom,jerry). Prolog's usage has expanded to become a goal-oriented language. In a goal-oriented application, the goal is defined by providing a list of subgoals. Then each subgoal is defined by further providing a list of its subgoals, etc. If a path of subgoals fails to find a solution, then that subgoal is backtracked and another path is systematically attempted. Practical applications include solving the shortest path problem and producing family trees. Program modules Modular programming is a technique to refine imperative language programs to mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic: The function of a module is what it does. The context of a module are the elements being performed upon. The logic of a module is how it performs the function. The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not. The degree of interaction within a module is its level of cohesion. Cohesion is a judgement of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgement of the relationship between a module's context and the elements being performed upon. Cohesion The levels of cohesion from worst to best are: Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements." 
Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition ). Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ... Procedural Cohesion: A module has procedural cohesion if it performs multiple functions, but only loosely related. For example, function read_part_number_update_employee_record(). Communicational Cohesion: A module has communicational cohesion if it performs multiple functions, but closely related. For example, function read_part_number_update_sales_record(). Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level. Functional Cohesion: a module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts. Coupling The levels of coupling from worst to best are: Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb. Common Coupling: A module has common coupling if it modifies a global variable. Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition ). Instead, control should be on the makeup of the returned object. Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level. Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object. Data flow analysis Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level. The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules. Object-oriented programming Object-oriented programming need not be confined to an object-oriented language. Object-oriented programming is executing operations on objects. In object-oriented languages, classes are objects. In non-object-oriented languages, data structures (which are also known as records) may also be objects. To turn a data structure into an object, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome. 
Here is a C programming language header file for the GRADE abstract datatype in a simple school application: /* grade.h */ /* ------- */ /* Used to allow multiple source files to include */ /* this header file without duplication errors. */ /* ---------------------------------------------- */ #ifndef GRADE_H #define GRADE_H typedef struct { char letter; } GRADE; /* Constructor */ /* ----------- */ GRADE *grade_new( char letter ); int grade_numeric( char letter ); #endif The grade_new() function performs the same algorithm as the C++ constructor operation. Here is a C programming language source file for the GRADE abstract datatype in a simple school application: /* grade.c */ /* ------- */ #include <stdio.h> #include <stdlib.h> #include "grade.h" GRADE *grade_new( char letter ) { GRADE *grade; /* Allocate heap memory */ /* -------------------- */ if ( ! ( grade = calloc( 1, sizeof( GRADE ) ) ) ) { fprintf( stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } grade->letter = letter; return grade; } int grade_numeric( char letter ) { if ( ( letter == 'A' || letter == 'a' ) ) return 4; else if ( ( letter == 'B' || letter == 'b' ) ) return 3; else if ( ( letter == 'C' || letter == 'c' ) ) return 2; else if ( ( letter == 'D' || letter == 'd' ) ) return 1; else if ( ( letter == 'F' || letter == 'f' ) ) return 0; else return -1; } In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero. Here is a C programming language header file for the PERSON abstract datatype in a simple school application: /* person.h */ /* -------- */ #ifndef PERSON_H #define PERSON_H typedef struct { char *name; } PERSON; /* Constructor */ /* ----------- */ PERSON *person_new( char *name ); #endif Here is a C programming language source code for the PERSON abstract datatype in a simple school application: /* person.c */ /* -------- */ #include <stdio.h> #include <stdlib.h> #include "person.h" PERSON *person_new( char *name ) { PERSON *person; if ( ! ( person = calloc( 1, sizeof( PERSON ) ) ) ) { fprintf( stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } person->name = name; return person; } Here is a C programming language header file for the STUDENT abstract datatype in a simple school application: /* student.h */ /* --------- */ #ifndef STUDENT_H #define STUDENT_H #include "person.h" #include "grade.h" typedef struct { /* A STUDENT is a subset of PERSON. */ /* -------------------------------- */ PERSON *person; GRADE *grade; } STUDENT; /* Constructor */ /* ----------- */ STUDENT *student_new( char *name ); #endif Here is a C programming language source code for the STUDENT abstract datatype in a simple school application: /* student.c */ /* --------- */ #include <stdio.h> #include <stdlib.h> #include "student.h" #include "person.h" STUDENT *student_new( char *name ) { STUDENT *student; if ( ! ( student = calloc( 1, sizeof( STUDENT ) ) ) ) { fprintf( stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } /* Execute the constructor of the PERSON superclass. */ /* ------------------------------------------------- */ student->person = person_new( name ); return student; } Here is a driver program for demonstration: /* student_dvr.c */ /* ------------- */ #include <stdio.h> #include "student.h" int main( void ) { STUDENT *student = student_new( "The Student" ); student->grade = grade_new( 'a' ); printf( "%s: Numeric grade = %d\n", /* Whereas a subset exists, inheritance does not. 
*/ student->person->name, /* Functional programming is executing functions just-in-time (JIT) */ grade_numeric( student->grade->letter ) ); return 0; } Here is a makefile to compile everything: # makefile # -------- all: student_dvr clean: rm student_dvr *.o student_dvr: student_dvr.c grade.o student.o person.o gcc student_dvr.c grade.o student.o person.o -o student_dvr grade.o: grade.c grade.h gcc -c grade.c student.o: student.c student.h gcc -c student.c person.o: person.c person.h gcc -c person.c The formal strategy to build object-oriented objects is to: Identify the objects. Most likely these will be nouns. Identify each object's attributes. What helps to describe the object? Identify each object's actions. Most likely these will be verbs. Identify the relationships from object to object. Most likely these will be verbs. For example: A person is a human identified by a name. A grade is an achievement identified by a letter. A student is a person who earns a grade. Functional categories Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. In addition to the operating system, system software includes embedded programs, boot programs, and micro programs. Application software designed for end users have a user interface. Application software not designed for end users includes middleware, which couples one application with another. Both system software and application software execute utility programs. Application software Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software. Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer. The advantages of proprietary software are features and reports may be exact to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition will necessitate enterprise software changes. The disadvantages of proprietary software are the time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming. The advantages of off-the-shelf software are its identifiable upfront costs, the basic needs should be fulfilled, and its performance and reliability have a track record. The disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes. One approach to economically obtaining a customized enterprise application is through an application service provider. 
Specialty companies provide the hardware, custom software, and end-user support. They may speed development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability. Operating system An operating system is the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals. In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times. The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor. Kernel Program The kernel's main purpose is to manage the limited resources of a computer: The kernel program should perform process scheduling. The kernel creates a process control block when a program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the illusion of exclusive access, the kernel preempts the process control block to execute another one. The goal for system developers is to minimize dispatch latency. The kernel program should perform memory management. When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables — one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since the machine instructions won't change, the program pregion may be shared by many processes (instances) of the same executable. The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address. To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file completely. 
The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory. The kernel also insures that a process only accesses its own memory, and not that of the kernel or other processes. The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files. The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes requests it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher level languages like C, C++, Objective-C, and Swift. Interrupts Interrupts cause the central processing unit (CPU) to have a control flow change away from the currently running process. Input/Output (I/O) devices are slower than the CPU's clock signal. Therefore, it would slow down the computer if the CPU had to wait for each I/O to finish. Instead, a computer may implement direct memory access (DMA) I/O. If a computer program in a computer with a direct memory access chip executes a system call to perform a DMA I/O write operation, then the system call will execute the following instructions: Create an entry in the device-status table. The operating system maintains this table to keep track of which processes are waiting for which devices. One field in the table is the address of the process control block. Place all the characters to be sent to the device into a memory buffer. Set the memory address of the memory buffer to a predetermined device register. Set the buffer size (an integer) to another predetermined register. Execute the machine instruction to begin the writing. Perform a context switch to the next process in the ready queue. While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device will interrupt the currently running process by asserting an interrupt request. The device will also place an integer onto the device's data bus. 
Upon accepting the interrupt request, the CPU will: Push the contents of its program counter and program status word onto the call stack. Read the integer from the data bus. The integer is an offset to the interrupt vector table. The vector table's instructions will return control to the operating system. The operating system will then: Access the device-status table. Extract the process control block. Perform a context switch back to the writing process. When the writing process has its time slice expired, the CPU will: Pop from the call stack the program status word and set it back to its register. Pop from the call stack the address of the interrupted process' next instruction and set it back into the program counter. The interrupted process will then resume its time slice. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Boot program A stored-program computer requires an initial boot program stored in its read-only memory to boot. It should identify and initialize all aspects of the system, from processor registers to device controllers to memory contents. Following the initialization process, the boot program loads the operating system and sets the program counter to begin normal operations. Embedded program Independent of the host computer, a hardware device might have embedded firmware to control its operation. Firmware is used when the computer program is rarely or never expected to change, or when it must not be lost when the power is off. On a larger scale, an embedded microcontroller is used to control part of a larger system. Examples include aircraft components and life support systems. Applications running on these systems are large and complex. Moreover, they run in real-time and must be robust. The United States Department of Defense contracted with CII Honeywell Bull to develop Ada (1983) as a real-time programming language. Central to real-time systems is a task facility to permit parallel processing. Also important are interrupt controls. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering. A logic gate is a tiny transistor that can return one of two signals: on or off. Having one transistor forms the NOT gate. Connecting two transistors in series forms the NAND gate. Connecting two transistors in parallel forms the NOR gate. Connecting a NOT gate to a NAND gate forms the AND gate. Connecting a NOT gate to a NOR gate forms the OR gate. These five gates form the building blocks of binary algebra—the digital logic functions of the computer. 
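The same compositions can be mirrored in a few lines of C. Each function below models one gate on single-bit inputs; it is a behavioural sketch of the truth tables, not of the transistor circuits themselves, and the upper-case function names are simply chosen to match the gate names.

/* gates.c -- composing the five basic gates described above. */
#include <stdio.h>

static int NOT ( int a )        { return !a; }                   /* one transistor                 */
static int NAND( int a, int b ) { return !( a && b ); }          /* two transistors in series      */
static int NOR ( int a, int b ) { return !( a || b ); }          /* two transistors in parallel    */
static int AND ( int a, int b ) { return NOT( NAND( a, b ) ); }  /* a NOT gate connected to a NAND */
static int OR  ( int a, int b ) { return NOT( NOR( a, b ) ); }   /* a NOT gate connected to a NOR  */

int main( void )
{
    printf( "a b  NAND NOR AND OR\n" );
    for ( int a = 0; a <= 1; a++ )
        for ( int b = 0; b <= 1; b++ )
            printf( "%d %d   %d    %d   %d   %d\n",
                    a, b, NAND( a, b ), NOR( a, b ), AND( a, b ), OR( a, b ) );
    return 0;
}

Printing the full truth table is a quick check that the composed AND and OR behave as the section describes.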
Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path. Microcode instructions move data between a CPU's registers and throughout the motherboard. The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates. Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic. Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents. Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus. See also References Computer programming Software
1948783
https://en.wikipedia.org/wiki/PGPCoder
PGPCoder
PGPCoder or GPCode is a trojan that encrypts files on the infected computer and then asks for a ransom in order to release these files, a type of behavior dubbed ransomware or cryptovirology. Trojan Once installed on a computer, the trojan creates two registry keys: one to ensure it is run on every system startup, and the second to monitor the progress of the trojan in the infected computer, counting the number of files that have been analyzed by the malicious code. Once it has been run, the trojan embarks on its mission, which is to encrypt, using a digital encryption key, all the files it finds on computer drives with extensions corresponding to those listed in its code. These extensions include .doc, .html, .jpg, .xls, .zip, and .rar. The blackmail is completed with the trojan dropping a text file in each directory, with instructions to the victim of what to do. An email address is supplied through which users are supposed to request for their files to be released after paying a ransom of $100–200 to an e-gold or Liberty Reserve account. Efforts to combat the trojan While a few Gpcode variants have been successfully implemented, many variants have flaws that allow users to recover data without paying the ransom fee. The first versions of Gpcode used a custom-written encryption routine that was easily broken. Variant Gpcode.ak writes the encrypted file to a new location, and deletes the unencrypted file, and this allows an undeletion utility to recover some of the files. Once some encrypted+unencrypted pairs have been found, this sometimes gives enough information to decrypt other files. Variant Gpcode.am uses symmetric encryption, which made key recovery very easy. In late November 2010, a new version called Gpcode.ax was reported. It uses stronger encryption (RSA-1024 and AES-256) and physically overwrites the encrypted file, making recovery nearly impossible. Kaspersky Lab has been able to make contact with the author of the program, and verify that the individual is the real author, but have so far been unable to determine his real world identity. References External links Kaspersky Lab Kaspersky Lab blog posts Kaspersky Lab forum dedicated to GPCode Kaspersky Lab virus descriptions StopGPCode trojan removal utilities Other virus description databases F-Secure Symantec McAfee: GPCoder GPCoder.e GPCoder.f GPCoder.g GPCoder.h GPCoder.i Trend Micro: TROJ_PGPCODER.A TROJ_PGPCODER.B TROJ_PGPCODER.C TROJ_PGPCODER.D TROJ_PGPCODER.E TROJ_PGPCODER.F TROJ_PGPCODER.G ThreatExpert Windows trojans Ransomware
866515
https://en.wikipedia.org/wiki/Address%20space%20layout%20randomization
Address space layout randomization
Address space layout randomization (ASLR) is a computer security technique involved in preventing exploitation of memory corruption vulnerabilities. In order to prevent an attacker from reliably jumping to, for example, a particular exploited function in memory, ASLR randomly arranges the address space positions of key data areas of a process, including the base of the executable and the positions of the stack, heap and libraries. History The Linux PaX project first coined the term "ASLR", and published the first design and implementation of ASLR in July 2001 as a patch for the Linux kernel. It is seen as a complete implementation, providing also a patch for kernel stack randomization since October 2002. The first mainstream operating system to support ASLR by default was the OpenBSD version 3.4 in 2003, followed by Linux in 2005. Benefits Address space randomization hinders some types of security attacks by making it more difficult for an attacker to predict target addresses. For example, attackers trying to execute return-to-libc attacks must locate the code to be executed, while other attackers trying to execute shellcode injected on the stack have to find the stack first. In both cases, the system obscures related memory-addresses from the attackers. These values have to be guessed, and a mistaken guess is not usually recoverable due to the application crashing. Effectiveness Address space layout randomization is based upon the low chance of an attacker guessing the locations of randomly placed areas. Security is increased by increasing the search space. Thus, address space randomization is more effective when more entropy is present in the random offsets. Entropy is increased by either raising the amount of virtual memory area space over which the randomization occurs or reducing the period over which the randomization occurs. The period is typically implemented as small as possible, so most systems must increase VMA space randomization. To defeat the randomization, attackers must successfully guess the positions of all areas they wish to attack. For data areas such as stack and heap, where custom code or useful data can be loaded, more than one state can be attacked by using NOP slides for code or repeated copies of data. This allows an attack to succeed if the area is randomized to one of a handful of values. In contrast, code areas such as library base and main executable need to be discovered exactly. Often these areas are mixed, for example stack frames are injected onto the stack and a library is returned into. The following variables can be declared:
E_s (entropy bits of stack top)
E_m (entropy bits of mmap() base)
E_x (entropy bits of main executable base)
E_h (entropy bits of heap base)
A_s (attacked bits per attempt of stack entropy)
A_m (attacked bits per attempt of mmap() base entropy)
A_x (attacked bits per attempt of main executable entropy)
A_h (attacked bits per attempt of heap base entropy)
α (attempts made)
N (total amount of entropy: N = (E_s - A_s) + (E_m - A_m) + (E_x - A_x) + (E_h - A_h))
To calculate the probability of an attacker succeeding, we have to assume a number of attempts α carried out without being interrupted by a signature-based IPS, law enforcement, or other factor; in the case of brute forcing, the daemon cannot be restarted. We also have to figure out how many bits are relevant and how many are being attacked in each attempt, leaving however many bits the attacker has to defeat. The following formulas represent the probability of success for a given set of α attempts on N bits of entropy. 
g(α) = 1 - (1 - 2^(-N))^α (isolated guessing; address space is re-randomized after each attempt)
b(α) = α / 2^N (systematic brute forcing on copies of the program with the same address space)
In many systems, 2^N can be in the thousands or millions; on 64-bit systems, these numbers typically reach the millions at least. Hector Marco-Gisbert and Ismael Ripoll showed in 2014 how to bypass the ASLR in 64-bit systems in less than one second under certain circumstances. For 32-bit systems at 2004 computer speeds which have 16 bits for address randomization, Shacham and co-workers state "... 16 bits of address randomization can be defeated by a brute force attack within minutes." The authors' statement depends on the ability to attack the same application multiple times without any delay. Proper implementations of ASLR, like that included in grsecurity, provide several methods to make such brute force attacks infeasible. One method involves preventing an executable from executing for a configurable amount of time if it has crashed a certain number of times. Android, and possibly other systems, implement Library Load Order Randomization, a form of ASLR which randomizes the order in which libraries are loaded. This supplies very little entropy. An approximation of the number of bits of entropy supplied per needed library appears below; this does not yet account for varied library sizes, so the actual entropy gained is really somewhat higher. Note that attackers usually need only one library; the math is more complex with multiple libraries, and shown below as well. Note that the case of an attacker using only one library is a simplification of the more complex formula for the multiple-library case.
E_lib ≈ log2(l) (entropy bits when the attacker needs only one library)
E_lib ≈ log2(l! / (l - β)!) / β (entropy bits per needed library when β libraries must be located)
l (number of libraries loaded)
β (number of libraries used by the attacker)
These values tend to be low even for large values of l, most importantly since attackers typically can use only the C standard library and thus one can often assume that β = 1. However, even for a small number of libraries there are a few bits of entropy gained here; it is thus potentially interesting to combine library load order randomization with VMA address randomization to gain a few extra bits of entropy. Note that these extra bits of entropy will not apply to other mmap() segments, only libraries. Reducing entropy Attackers may make use of several methods to reduce the entropy present in a randomized address space, ranging from simple information leaks to attacking multiple bits of entropy per attack (such as by heap spraying). There is little that can be done about this. It is possible to leak information about memory layout using format string vulnerabilities. Format string functions such as printf use a variable argument list to do their job; format specifiers describe what the argument list looks like. Because of the way arguments are typically passed, each format specifier moves closer to the top of the stack frame. Eventually, the return pointer and stack frame pointer can be extracted, revealing the address of a vulnerable library and the address of a known stack frame; this can eliminate library and stack randomization as an obstacle to an attacker. One can also decrease entropy in the stack or heap. The stack typically must be aligned to 16 bytes, and so this is the smallest possible randomization interval; while the heap must be page-aligned, typically 4096 bytes. 
When attempting an attack, it is possible to align duplicate attacks with these intervals; a NOP slide may be used with shellcode injection, and the string '/bin/sh' can be replaced with '////////bin/sh' for an arbitrary number of slashes when attempting to return to system. The number of bits removed is exactly log2(n) for n intervals attacked. Such decreases are limited due to the amount of data in the stack or heap. The stack, for example, is typically limited to 8 MB and grows to much less; with a 16-byte interval this allows for at most 19 bits, although a more conservative estimate would be around 8–10 bits, corresponding to 4–16 KB of stack stuffing. The heap on the other hand is limited by the behavior of the memory allocator; in the case of glibc, allocations above 128 KB are created using mmap, limiting attackers to 5 bits of reduction. This is also a limiting factor when brute forcing; although the number of attacks to perform can be reduced, the size of the attacks is increased enough that the behavior could in some circumstances become apparent to intrusion detection systems. Limitations ASLR-protected addresses can be leaked by various side channels, reducing the utility of the mitigation. Recent attacks have used information leaked by the CPU branch target predictor buffer (BTB) or memory management unit (MMU) walking page tables. It is not clear if this class of ASLR attack can be mitigated. If they cannot, the benefit of ASLR is reduced or eliminated. Implementations Several mainstream, general-purpose operating systems implement ASLR. Android Android 4.0 Ice Cream Sandwich provides address space layout randomization (ASLR) to help protect system and third-party applications from exploits due to memory-management issues. Position-independent executable support was added in Android 4.1. Android 5.0 dropped non-PIE support and requires all dynamically linked binaries to be position independent. Library load ordering randomization was accepted into the Android open-source project on 26 October 2015, and was included in the Android 7.0 release. DragonFly BSD DragonFly BSD has an implementation of ASLR based upon OpenBSD's model, added in 2010. It is off by default, and can be enabled by setting the sysctl vm.randomize_mmap to 1. FreeBSD Support for ASLR appeared in FreeBSD 13.0. It is disabled by default. iOS (iPhone, iPod touch, iPad) Apple introduced ASLR in iOS 4.3 (released March 2011). KASLR was introduced in iOS 6. The randomized kernel base is 0x01000000 + ((1+0xRR) * 0x00200000), where 0xRR is a random byte from SHA1 (random data) generated by iBoot (the 2nd-stage iOS Boot Loader). Linux The Linux kernel enabled a weak form of ASLR by default since the kernel version 2.6.12, released in June 2005. The PaX and Exec Shield patchsets to the Linux kernel provide more complete implementations. The Exec Shield patch for Linux supplies 19 bits of stack entropy on a period of 16 bytes, and 8 bits of mmap base randomization on a period of 1 page of 4096 bytes. This places the stack base in an area 8 MB wide containing 524,288 possible positions, and the mmap base in an area 1 MB wide containing 256 possible positions. Position-independent executable (PIE) implements a random base address for the main executable binary and has been in place since 2003. It provides the same address randomness to the main executable as being used for the shared libraries. The PIE feature is in use only for the network-facing daemons – the PIE feature cannot be used together with the prelink feature for the same executable. 
The prelink tool implements randomization at prelink time rather than runtime, because by design prelink aims to handle relocating libraries before the dynamic linker has to, which allows the relocation to occur once for many runs of the program. As a result, real address space randomization would defeat the purpose of prelinking. The randomization can be disabled for a specific process by changing its execution domain, using personality(2). Kernel address space layout randomization Kernel address space layout randomization (KASLR) enables address space randomization for the Linux kernel image by randomizing where the kernel code is placed at boot time. KASLR was merged into the Linux kernel mainline in kernel version 3.14, released on 30 March 2014. When compiled in, it can be disabled at boot time by specifying nokaslr as one of the kernel's boot parameters. There are several side-channel attacks in x86 processors that could leak kernel addresses. In late 2017, kernel page-table isolation (KPTI aka KAISER) was developed to defeat these attacks. However, this method cannot protect against side-channel attacks utilizing collisions in branch predictor structures. Finer grained kernel address space layout randomization (or function granular KASLR, FGKASLR) is a planned extension of KASLR to randomize down to the function level. Microsoft Windows Microsoft's Windows Vista (released January 2007) and later have ASLR enabled only for executables and dynamic link libraries that are specifically linked to be ASLR-enabled. For compatibility, it is not enabled by default for other applications. Typically, only older software is incompatible, and ASLR can be fully enabled by editing a registry entry, or by installing Microsoft's Enhanced Mitigation Experience Toolkit. The locations of the heap, stack, Process Environment Block, and Thread Environment Block are also randomized. A security whitepaper from Symantec noted that ASLR in 32-bit Windows Vista may not be as robust as expected, and Microsoft has acknowledged a weakness in its implementation. Host-based intrusion prevention systems such as WehnTrust and Ozone also offer ASLR for Windows XP and Windows Server 2003 operating systems. WehnTrust is open-source. Complete details of Ozone's implementation are not available. It was noted in February 2012 that ASLR on 32-bit Windows systems prior to Windows 8 can have its effectiveness reduced in low memory situations. A similar effect had also been achieved on Linux in the same research. The test code caused the Mac OS X 10.7.3 system to kernel panic, leaving its ASLR behavior in this scenario unclear. NetBSD Support for ASLR in userland appeared in NetBSD 5.0 (released April 2009), and was enabled by default in NetBSD-current in April 2016. Kernel ASLR support on amd64 was added in NetBSD-current in October 2017, making NetBSD the first BSD system to support KASLR. OpenBSD In 2003, OpenBSD became the first mainstream operating system to support a strong form of ASLR and to activate it by default. OpenBSD completed its ASLR support in 2008 when it added support for PIE binaries. OpenBSD 4.4's malloc(3) was designed to improve security by taking advantage of ASLR and gap page features implemented as part of OpenBSD's mmap system call, and to detect use-after-free bugs.
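Because position-independent executables come up repeatedly in these implementations, a small Python sketch (my own illustration, not from any of the systems above; it assumes a little-endian ELF file and only inspects the e_type field) shows how to tell whether a binary was built as a PIE:

import struct

def elf_type(path):
    # In the ELF header, the 16-bit e_type field at offset 16 distinguishes
    # ET_EXEC (2, fixed load address) from ET_DYN (3, relocatable: shared
    # libraries and position-independent executables).
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    e_type = struct.unpack_from("<H", header, 16)[0]
    return {2: "ET_EXEC (not PIE)", 3: "ET_DYN (PIE or shared object)"}.get(e_type, "other")

print(elf_type("/bin/ls"))   # ET_DYN on most distributions that build PIE by default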
Released in 2013, OpenBSD 5.3 was the first mainstream operating system to enable position-independent executables by default on multiple hardware platforms, and OpenBSD 5.7 activated position-independent static binaries (Static-PIE) by default. macOS In Mac OS X Leopard 10.5 (released October 2007), Apple introduced randomization for system libraries. In Mac OS X Lion 10.7 (released July 2011), Apple expanded their implementation to cover all applications, stating "address space layout randomization (ASLR) has been improved for all applications. It is now available for 32-bit apps (as are heap memory protections), making 64-bit and 32-bit applications more resistant to attack." As of OS X Mountain Lion 10.8 (released July 2012) and later, the entire system including the kernel as well as kexts and zones are randomly relocated during system boot. Solaris ASLR has been introduced in Solaris beginning with Solaris 11.1 (released October 2012). ASLR in Solaris 11.1 can be set system-wide, per zone, or on a per-binary basis. Exploitation A side-channel attack utilizing branch target buffer was demonstrated to bypass ASLR protection. In 2017, an attack named "ASLR⊕Cache" was demonstrated which could defeat ASLR in a web browser using JavaScript. See also Buffer overflow Stack buffer overflow Stack-smashing protection NX bit References External links Exploit Mitigation Techniques: an Update After 10 Years in OpenBSD PaX documentation on ASLR Comparison of PaX to Exec Shield and W^X Address Space Layout Randomization in Windows Vista - Michael Howard's Web Log ASLR for Windows 2000/XP/2003 (WehnTrust) Bypassing PaX ASLR protection On the effectiveness of address space layout randomization Test Applications (or libraries) for their ASLR and DEP support ASLR Smack & Laugh Reference Operating system security
6227150
https://en.wikipedia.org/wiki/HEC-RAS
HEC-RAS
HEC-RAS is a computer program that models the hydraulics of water flow through natural rivers and other channels. Prior to the 2016 update to Version 5.0, the program was one-dimensional, meaning that there was no direct modeling of the hydraulic effect of cross section shape changes, bends, and other two- and three-dimensional aspects of flow. The release of Version 5.0 introduced two-dimensional modeling of flow as well as sediment transport modeling capabilities. The program was developed by the United States Army Corps of Engineers in order to manage the rivers, harbors, and other public works under their jurisdiction; it has found wide acceptance by many others since its public release in 1995. The Hydrologic Engineering Center (HEC) in Davis, California, developed the River Analysis System (RAS) to aid hydraulic engineers in channel flow analysis and floodplain determination. It includes numerous data entry capabilities, hydraulic analysis components, data storage and management capabilities, and graphing and reporting capabilities. Functionality The basic computational procedure of HEC-RAS for steady flow is based on the solution of the one-dimensional energy equation. Energy losses are evaluated by friction and by contraction/expansion. The momentum equation may be used in situations where the water surface profile is rapidly varied. These situations include hydraulic jumps, hydraulics of bridges, and evaluating profiles at river confluences. For unsteady flow, HEC-RAS solves the full, dynamic, one-dimensional Saint-Venant equations using an implicit, finite difference method. The unsteady flow equation solver was adapted from Dr. Robert L. Barkau's UNET package. HEC-RAS is equipped to model a network of channels, a dendritic system or a single river reach. Certain simplifications must be made in order to model some complex flow situations using the HEC-RAS one-dimensional approach. It is capable of modeling subcritical, supercritical, and mixed flow regimes, along with the effects of bridges, culverts, weirs, and structures. As of March 2019, Version 5.0.7 supports 64-bit Windows 7, 8, 8.1, and 10 only. Usages HEC-RAS is a computer program for modeling water flowing through systems of open channels and computing water surface profiles. HEC-RAS finds particular commercial application in floodplain management and flood insurance studies to evaluate floodway encroachments. Some of the additional uses are: bridge and culvert design and analysis, levee studies, and channel modification studies. It can be used for dam breach analysis, though other modeling methods are presently more widely accepted for this purpose. Advantages HEC-RAS has merits, notably its support by the US Army Corps of Engineers, the future enhancements in progress, and its acceptance by many government agencies and private firms. It is in the public domain and peer-reviewed, and available to download free of charge from HEC's web site. Various private companies are registered as official "vendors" and offer consulting services and add-on software. Some also distribute the software in countries that are not permitted to access US Army web sites. However, the direct download from HEC includes extensive documentation, and scientists and engineers versed in hydraulic analysis should have little difficulty utilizing the software. Disadvantages Users may find numerical instability problems during unsteady analyses, especially in steep and/or highly dynamic rivers and streams.
It is often possible to tune HEC-RAS options to overcome these instability issues on river problems. Version history The first version of HEC-RAS was released in 1995. HEC-RAS 1.0 solved the same numerical equations as the 1968 HEC-2 program. GeoHECRAS GeoHECRAS is a 2D/3D visualization and editing data wrapper to the HEC-RAS software and is used for flood control and flood mitigation engineering studies, including production of Federal Emergency Management Agency flood hazard maps and other river engineering studies. Features related to HEC-RAS include: Undo and redo HEC-RAS editing Multiple document interface (MDI) of HEC-RAS projects Use of AutoCAD and MicroStation CAD drawings and terrain surfaces Use of GIS databases Automated cross section generation Automated production of floodplain maps Design and analysis of roadway crossings (bridge and culvert) Adaptive 2D mesh generation WMS WMS (Watershed Modeling System) is hydrology software that provides pre- and post-processing tools for use with HEC-RAS. The development of WMS by Aquaveo was funded primarily by the United States Army Corps of Engineers. Features related to HEC-RAS include: Using feature objects (centerline, cross section lines) and a TIN to develop the geometry of a HEC-RAS model. Editing, merging, and creating cross sections in a database for use with HEC-RAS and other hydraulic models. Delineating flood plains from water surface elevation data. Water surface elevations can be computed by HEC-RAS, defined interactively, or imported from a file. Linking multiple simulations of HEC-1 to HEC-RAS to determine the uncertainty in modeling parameters on a delineated flood plain. Curve Number and Precipitation can be stochastically varied among HEC-1 parameters and Manning's n value for HEC-RAS. See also Hydraulic engineering References External links HEC-RAS home page at the US Army Corps of Engineers, Hydrologic Engineering Center An output video of a flood analysis done with HEC-RAS and visualization in ArcGIS Hydraulic engineering Scientific simulation software Hydrology software
1335480
https://en.wikipedia.org/wiki/ReWire%20%28software%20protocol%29
ReWire (software protocol)
ReWire is a software protocol, jointly developed by Propellerhead and Steinberg, allowing remote control and data transfer among digital audio editing and related software. Originally appearing in the ReBirth software synthesizer in 1998, the protocol has since evolved into an industry standard. Reason Studios has announced that they will discontinue the Rewire protocol at the end of 2020. Currently used in macOS and Microsoft Windows 32-bit or 64-bit audio applications, ReWire enables the simultaneous transfer of up to 256 audio tracks of arbitrary resolution and 4080 channels of MIDI data. This allows, for example, the output from synthesizer software to be fed directly into a linear editor without the use of intermediate files or analog transfers. There are also provisions to remotely trigger actions, such as starting and stopping recording. The protocol is licensed free of charge to companies only, but comes with a "non-disclosure of source code" license that is incompatible with most free-software licenses. The ReWire system consists of "Hosts", "Panels", and "Devices". Hosts are the host applications which typically do the sequencing at one end and the final mixdown at the other end. A Device is a dynamic link library that only generates sound; it has no user interface. A Panel is a graphical interface for setting the parameters of one Device. A typical setup would be to use Ableton Live in "Host" mode, and use Propellerhead Reason as a synthesizer. In this case Reason would provide Device/Panel pairs to Ableton, which could then send MIDI commands, sync timing and mix Reason's output into its own effects chains. Many applications support either mode. In fact, an application could (at the discretion of a developer) act as both a Host and a Panel at the same time. ReWire Hosts ("Sequencers/Trackers") Ableton Live Adobe Audition Cakewalk Sonar / Cakewalk by BandLab Cycling '74 Max/MSP FL Studio GarageBand Jeskola Buzz (with plugin.) Logic Pro MOTU Digital Performer MU.LAB Notion Plogue Bidule Pro Tools REAPER Renoise Samplitude Sonoma Wire Works RiffWorks Sony ACID Pro Steinberg Cubase Steinberg Nuendo Studio One Synapse Audio Orion Platinum Tracktion Zynewave Podium ReWire Devices ("Synthesizers") Ableton Live Arturia Storm Cakewalk Project 5 Cycling '74 Max/MSP Finale (beginning in v.25) FL Studio Plogue Bidule REAPER ReBirth RB-338 Record Renoise Sibelius (beginning in v.6) Sony ACID Pro Vocaloid Vocaloid 2 See also JACK — a similar, open source API for Linux, macOS and Windows. External links Propellerheads' description of ReWire A database of tutorials on how to rewire different combinations of rewire compatible software programs. Reason Studios announcement of discontinuing the product References Music software plugin architectures
52500923
https://en.wikipedia.org/wiki/Amdocs%20%28Israel%29%20Ltd.%20v.%20Openet%20Telecom%2C%20Inc.
Amdocs (Israel) Ltd. v. Openet Telecom, Inc.
Amdocs (Israel) Ltd. v. Openet Telecom, Inc., 841 F.3d 1288 (Fed. Cir. 2016), is a court case in the United States Federal Court System that ended with a panel decision by the Federal Circuit to uphold the patent eligibility of four patents on a system designed to solve an accounting and billing problem faced by network service providers. The district court had held the patents invalid because they were directed to an abstract idea. In the Federal Circuit panel's view the patents were eligible because they contained an "inventive concept"—a combination of elements that was sufficient to ensure that the patents amounted to significantly more than a patent on the ineligible concept itself. This is one of the few times since the Supreme Court's Alice Corp. v. CLS Bank International decision that the Federal Circuit has held computer software-based patent claims eligible. Background Amdocs held four patents and sued Openet for infringement. Each patent describes the same system, one that allows network service providers to account for and bill for internet protocol ("IP") network communications. The system includes network devices; information source modules ("ISMs"); so-called gatherers; a central event manager ("CEM"); a central database; a user interface server; and terminals or "clients." The patent specification asserts that the invention arrays the components of the system "in a distributed architecture that minimizes the impact on network and system resources," which is accomplished "by collecting and processing data close to its source." That result is the effect of including "distributed data gathering, filtering, and enhancements that enable load distribution," instead of performing those tasks centrally as prior art devices did, and thus "reducing congestion in network bottlenecks, while still allowing data to be accessible from a central location." More specifically, the suit involved four related patents—U.S. Patents Nos. 7,631,065 (the '065 patent), 7,412,510 (the '510 patent), 6,947,984 (the '984 patent), and 6,836,797 (the '797 patent). The '065, '984, and '510 patents each stem from the same original patent application and specification; they concern accounting and billing for services in a computer network. The '797 patent developed from an augmented patent application and its specification states that it concerns "data records, and more particularly [] records reflecting various services afforded utilizing a network." District court ruling The district court invalidated the patents on the pleadings, holding all four patents invalid under 35 U.S.C. § 101 because they claimed an abstract idea implemented in a non-inventive manner. The court said: A person may have invented an entirely new and useful advance, but if the patent claims sweep too broadly, or only claim the idea that was achieved rather than implementation of the idea, § 101 directs that the patent is invalid. Amdocs's asserted claims recite such conventional operation, in such a general way, that even if the inventor had developed an actual working system, the patent claims could foreclose fields of research beyond the actual invention. Accordingly, all asserted claims are invalid as patent-ineligible. The district court considered each patent separately. It found claim 1 of the '065 patent representative of the invention. 
That claim provides: A computer program product embodied on a computer readable storage medium for processing network account information comprising: computer code for receiving from a first source a first network accounting record; computer code for correlating the first network accounting record with accounting information available from a second source; and computer code for using the accounting information with which the first network accounting record is correlated to enhance the first network accounting record. The court found that claim 1 was directed to the abstract idea of "the concept of correlating two network accounting records to enhance the first record." This satisfied step one of the Supreme Court's Alice methodology and therefore the district court proceeded to step two—"whether the claim adds enough to the abstract idea to make the claim patent eligible." Because the claim did "not limit the correlation to any specific hardware, nor give any detail regarding how the records are 'correlated' or 'enhanced,' . . . the claim amounts to 'nothing significantly more than an instruction to apply the abstract idea' of correlating two network accounting records 'using some unspecified, generic' computer hardware." That made the claim invalid under the Alice test. The court turned to the '510 patent, whose claim 16 was representative: A computer program product stored in a computer readable medium for reporting on a collection of network usage information from a plurality of network devices, comprising: computer code for collecting network communications usage information in real-time from a plurality of network devices at a plurality of layers; computer code for filtering and aggregating the network communications usage information; computer code for completing a plurality of data records from the filtered and aggregated network communications usage information, the plurality of data records corresponding to network usage by a plurality of users; computer code for storing the plurality of data records in a database; computer code for submitting queries to the database utilizing predetermined reports for retrieving information on the collection of the network usage information from the network devices; and computer code for outputting a report based on the queries; wherein resource consumption queries are submitted to the database utilizing the reports for retrieving information on resource consumption in a network; and wherein a resource consumption report is outputted based on the resource consumption queries. The court held this patent "directed to the abstract idea of using a database to compile and report on network usage information." Proceeding, therefore, to step two of the Alice methodology, the court said that the claim describes a generic computer that "collects, filters, aggregates, and completes network communications information" and then "stores the information in a database, and queries the database to retrieve reports." All of these things were "basic functions" of a computer and a database system. The court held, accordingly: [C]laim 16 is directed to a computer functioning in a conventional way, and a database functioning in a conventional way. The claim does not add any specific implementation beyond the abstract idea that information is collected and stored, and reports are generated. Therefore, the claim is directed to an unpatentable abstract idea. 
The district court next considered representative claim 1 of the '797 patent, which states: A method for generating a single record reflecting multiple services for accounting purposes, comprising: (a) identifying a plurality of services carried out over a network; (b) collecting data describing the plurality of services; and (c) generating a single record including the collected data, wherein the single record represents each of the plurality of services; wherein the services include at least two services selected from a group consisting of a hypertext transfer protocol (HTTP) session, an electronic mail session, a multimedia streaming session, a voice over Internet Protocol (IP) session, a data communication session, an instant messaging session, a peer-to-peer network application session, a file transfer protocol (FTP) session, and a telnet session; wherein the data is collected utilizing an enhancement procedure defined utilizing a graphical user interface by: listing a plurality of available functions to be applied in real-time prior to end-user reporting; allowing a user to choose at least one of a plurality of fields, and allowing the user to choose at least one of the listed functions to be applied to the chosen field in real-time prior to end-user reporting. The court said that "the abstract idea in this claim is to generate a single record reflecting multiple services," and proceeded to step two of the Alice analysis. It found that "the claim is directed to an abstract idea performed using purely conventional computer operations, and is, therefore, invalid under § 101." Last, the district court considered representative claim 1 of the '984 patent: A method for reporting on the collection of network usage information from a plurality of network devices, comprising: (a) collecting networks communications usage information in real-time from a plurality of network devices at a plurality of layers utilizing multiple gatherers each including a plurality of information source modules each interfacing with one of the network devices and capable of communicating using a protocol specific to the network device coupled thereto, the network devices selected from the group consisting of routers, switches, firewalls, authentication servers, web hosts, proxy servers, netflow servers, databases, mail servers, RADIUS servers, and domain name servers, the gatherers being positioned on a segment of the network on which the network devices coupled thereto are positioned for minimizing an impact of the gatherers on the network; (b) filtering and aggregating the network communications usage information; (c) completing a plurality of data records from the filtered and aggregated network communications usage information, the plurality of data records corresponding to network usage by a plurality of users; (d) storing the plurality of data records in a database; (e) allowing the selection of one of a plurality of reports for reporting purposes; (f) submitting queries to the database utilizing the selected reports for retrieving information on the collection of the network usage information from the network devices; and (g) outputting a report based on the queries. Here, the abstract idea was "reporting on the collection of network usage information from a plurality of network devices." All of the steps were conventional acts for both generic computers and generic databases, making the claim invalid. 
Federal Circuit panel decision Majority opinion The Federal Circuit panel majority reversed the district court judgment. It analyzed the judgment by assuming that the claims were directed to an abstract idea, but it found that the claims satisfied step two of the Alice methodology. The court found it particularly important, indeed crucial, that the components (such as network devices; information source modules; gatherers; a central event manager; a central database; a user interface server; and terminals) of the system: are arrayed in a distributed architecture that minimizes the impact on network and system resources. Through this distributed architecture, the system minimizes network impact by collecting and processing data close to its source. The system includes distributed data gathering, filtering, and enhancements that enable load distribution. This allows data to reside close to the information sources, thereby reducing congestion in network bottlenecks, while still allowing data to be accessible from a central location. Each patent [specification] explains that this is an advantage over prior art systems that stored information in one location, which made it difficult to keep up with massive record flows from the network devices and which required huge databases. The court recognized that step one under the Alice methodology is determining whether the claim is directed to an abstract idea, but found doing that problematic: However, a search for a single test or definition in the decided cases concerning § 101 from this court, and indeed from the Supreme Court, reveals that at present there is no such single, succinct, usable definition or test. The problem with articulating a single, universal definition of "abstract idea" is that it is difficult to fashion a workable definition to be applied to as-yet-unknown cases with as-yet-unknown inventions. That is not for want of trying; to the extent the efforts so far have been unsuccessful it is because they often end up using alternative but equally abstract terms or are overly narrow. Therefore, the court said, it would not try to formulate a definition but would instead review the prior cases and try to distill "what prior cases were about, and which way they were decided" in order to decide whether a particular claim was. "That is the classic common law methodology for creating law when a single governing definitional context is not available." After reviewing prior Federal Circuit decisions, the panel addressed the question of what abstract idea the claims embodied: What relative level of abstraction should we employ? From a macroscopic perspective, claim 1 could be described as focusing on correlating two network accounting records to enhance the first record. Claim 1 could also be described in several other ways—such as focusing on a computer program that includes computer code for receiving initial information, for correlating that initial information with additional information, and for using that additional information to enhance the initial information. Without answering the question it asked itself, the panel majority concluded that the claims in this case were closer to those in cases upholding validity under § 101 than those in cases denying validity. The court then decided to bypass the Alice step one (is the claim directed to an abstract idea) and go directly to step two (is there an inventive concept added to the abstract idea). 
It concluded: "Indeed, even if we were to agree that claim 1 is directed to an ineligible abstract idea under step one, the claim is eligible under step two because it contains a sufficient 'inventive concept,' [because it] requires 'computer code for using the accounting information with which the first network accounting record is correlated to enhance the first network accounting record.' " The court reached this conclusion because it had construed the term "enhance" in the claims as meaning "being dependent upon the invention's distributed architecture," or as meaning "to apply a number of field enhancements in a distributed fashion." The concept of distribution meant that "the network usage records are processed close to their sources before being transmitted to a centralized manager," which makes the database system work faster and more efficiently. The panel majority found that way to improve functionality to be an "inventive concept." According to the patent specification "this distributed enhancement was a critical advancement over the prior art." Therefore, the panel majority said, "this claim entails an unconventional technological solution (enhancing data in a distributed fashion) to a technological problem (massive record flows which previously required massive databases)." Even though the claims use generic components, "the claim's enhancing limitation necessarily requires that these generic components operate in an unconventional manner to achieve an improvement in computer functionality." Accordingly, the panel majority reversed the district court judgment of patent ineligibility under § 101 for the '065 patent. The court then similarly analyzed the other patents, and held them "eligible for patenting for reasons similar to those that undergirded the eligibility of the '065 patent claims." In summarizing its ruling and responding to the dissent, the panel majority said that "[f]or argument's sake we accepted the district court's view of the disqualifying abstract ideas, and in each instance we then explained why, in our view, the claims seen in their entirety are not disqualified. The Alice/Mayo framework does not require more." The panel majority said that it and the dissent were in agreement that the specification disclosed a patent-eligible system. "Unlike the dissent, however, we find the claims at issue, understood in light of that written description [specification], to be eligible for patenting." Dissenting opinion Judge Reyna filed a dissent. Preliminarily, he objected to the panel majority's failure to make a full inquiry into the determination of what the abstract idea was (the panel majority said, however, in answer to this charge that it was assuming, arguendo, that the district court's identification of the abstract idea was correct). Reyna said the Alice methodology of the Supreme Court requires a specific determination of what the abstract idea is, quoting the statement in Alice—"First, we determine whether the claims at issue are directed to one of those patent-ineligible concepts." Reyna then turned to the more important aspect of his critique—that the panel majority had imported the limitations of the specification into the claims to find patent eligibility: The majority also relies on the specification to import innovative limitations into the claims at issue. For each of the four patents at issue, the majority's eligibility determination rests on the use of a "distribution architecture." 
As explained below, however, this limitation is insufficient to satisfy Alice step two. Indeed, that limitation does not exist in all of the claims at issue. This contravenes the fundamental principal that the section 101 inquiry is about whether the claims are directed to a patent-eligible invention, not whether the specification is so directed. Reyna added: Claims that fail to recite how a desired goal is accomplished do not recite an inventive concept. For example, limitations on the context—as opposed to the manner—of accomplishing a desired result is typically not inventive, even if that context is novel. The Pythagorean Theorem cannot be made eligible by confining its use to existing surveying techniques. Reyna emphasized that the proper "inquiry is not whether the specifications disclose a patent-eligible system, but whether the claims are directed to a patent ineligible concept." He then turned to a patent-by-patent analysis of the claims in suit. Claim 1 of the '065 patent, he said, merely "recites a software product embodied on a storage medium," and "it provides no [meaningful] structural limitations of either the physical medium or the digital software," since "[a]ll software products are stored on a physical storage medium." Furthermore, "claim 1 discusses only very broad, high-level functionality rather than details about how exactly that functionality is implemented." Patent eligibility is predicated on combining data from two sources, but "I find no specific process for accomplishing the abstract goal of combining data from two sources." Therefore, the claim is directed to an abstract idea and one must proceed to step two of the Alice methodology. "Turning to step two, I see no limitations confining the claim to a particular means of combining information from different sources." Merely "[l]imiting the abstract idea to the context in which the information relates to network accounting records is a field-of-use limitation that [under Flook] does not supply an inventive concept." Relying on the definition of "enhanced" as meaning "distributed" to confer patent eligibility, as the panel majority and Amdocs do, is incorrect because "claim 1 recites no components or structure over which the work might be 'distributed.' " Reyna therefore concluded, " I agree with the district court that claim 1 is ineligible because it fails to recite any structure or process limiting the claim to a particular means of combining accounting data from different sources." Reyna similarly analyzed the other patents. He found some claims patent ineligible because they merely claimed results or functions without describing how to accomplish them by using an inventive concept. Other claims, however, he found patent eligible because they not only described functions or goals, but they also recited specific steps or machinery to accomplish them. In this connection, he argued that "software structure and process can confer eligibility," so that "the district court erred by dismissing the recited components on the sole basis that they are software without considering whether these architectural aspects are inventive structure or process." Commentary ● Dennis Crouch, in Patently-O, did not "know how important Amdocs will be, but it offers an interesting split decision on the eligibility of software patent claims." His initial "takeaway" was— that the Federal Circuit continues to be divided on the issues. 
By luck-of-the-panel in this case, the minority on the court as a whole were the majority on the panel (pushing against Alice & Mayo). Going forward, the split can be reconciled by another Supreme Court opinion, a forceful Federal Circuit en banc decision, or perhaps by future judicial appointments by President Trump. I expect 2-3 vacancies on the court during Trump's first term. Crouch pointed out that representative claim 1 of the first patent "is an almost pure software claim — requiring computer code embodied on a computer readable storage medium." Therefore, that the panel majority said that the claims are "much closer" to the ones found eligible than those of the many cases finding ineligibility should be translated as "the judges in the [panel] majority prefer the decisions finding eligibility over those invalidating software patent claims." He pointed out that the majority's focus on the system's "enhancing data in a distributed fashion [being] an unconventional technological solution . . . to a technological problem (massive record flows which previously required massive databases)" ignores the fact that the claim is not for a system but for software used to implement the system, and the software itself does not require the distribution of components that the specification describes. He then quotes Judge Reyna's dissent that says, "But the inquiry is not whether the specifications disclose a patent-eligible system, but whether the claims are directed to a patent ineligible concept." ● The Patent Docs blog said the decision underlines the importance of claim construction. "A favorable claim construction can tip the patent-eligibility analysis in favor of the patentee, while an unfavorable ruling can doom the subject claims to abstract idea perdition." But the author, while hopeful of "a respite of sorts," was unsure that "the § 101 fog" confronting patentees would now clear away, because "we are still only one en banc Federal Circuit decision or Supreme Court case away from going back to that murky landscape." ● One comment on the case expressed a hope for a change in the direction of post–Alice Federal Circuit decisions, which had overwhelmingly found software and business-method patents invalid: "Perhaps the initial stages of a patent-eligible trend for narrowly drawn software-based patent claims, that are directed to specific, unconventional technological solutions, are underway." ● Another comment lauded the opinion because it was favorable to "companies operating in the software space and who seek to obtain protection for their valuable software innovations." The author argued that the opinion "serves to dispel the myth that it is not possible to obtain patent protection for software-based inventions." References External links 2016 in United States case law Software patent case law United States Court of Appeals for the Federal Circuit cases United States patent case law Amdocs
1400452
https://en.wikipedia.org/wiki/Gesture%20recognition
Gesture recognition
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse and interact naturally without any mechanical devices. Overview Gesture recognition features: More accurate High stability Time saving to unlock a device The major application areas of gesture recognition in the current scenario are: Automotive sector Consumer electronics sector Transit sector Gaming sector To unlock smartphones Defence Home automation Automated sign language translation Gesture recognition can be conducted with techniques from computer vision and image processing. The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer. Gesture recognition and pen computing: Pen computing reduces the hardware impact of a system and also increases the range of physical world objects usable for control beyond traditional digital objects like keyboards and mice. Such implementations could enable a new range of hardware that does not require monitors. This idea may lead to the creation of holographic display. The term gesture recognition has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor. (see Pen computing) Gesture types In computer interfaces, two types of gestures are distinguished: We consider online gestures, which can also be regarded as direct manipulations like scaling and rotating. In contrast, offline gestures are usually processed after the interaction is finished; e. g. a circle is drawn to activate a context menu. Offline gestures: Those gestures that are processed after the user interaction with the object. An example is the gesture to activate a menu. Online gestures: Direct manipulation gestures. They are used to scale or rotate a tangible object. Touchless interface Touchless user interface is an emerging type of technology in relation to gesture control. Touchless user interface (TUI) is the process of commanding the computer via body motion and gestures without touching a keyboard, mouse, or screen. Touchless interface in addition to gesture controls are becoming widely popular as they provide the abilities to interact with devices without physically touching them. Types of touchless technology There are a number of devices utilizing this type of interface such as, smartphones, laptops, games, television, and music equipment. 
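The online/offline distinction above often shows up directly in input-handling code. The following schematic Python sketch is not taken from any toolkit; the event format and callback names are assumptions made purely for illustration. Online gestures are applied continuously while the interaction is in progress, whereas the offline gesture is classified only once the stroke ends:

def handle_touch_events(events, apply_transform, classify_stroke):
    # events: ("move", dx, dy) tuples followed by a final ("up",) tuple.
    stroke = []
    for event in events:
        if event[0] == "move":
            stroke.append(event[1:])
            apply_transform(*event[1:])       # online: e.g. pan or scale immediately
        elif event[0] == "up":
            return classify_stroke(stroke)    # offline: e.g. a circle opens a context menu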
One type of touchless interface uses the bluetooth connectivity of a smartphone to activate a company's visitor management system. This prevents having to touch an interface during the COVID-19 pandemic. Input devices The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. The kinetic user interfaces (KUIs) are an emerging type of user interfaces that allow users to interact with computing devices through the motion of objects and bodies. Examples of KUIs include tangible user interfaces and motion-aware games such as Wii and Microsoft's Kinect, and other interactive projects. Although there is a large amount of research done in image/video based gesture recognition, there is some variation within the tools and environments used between implementations. Wired gloves. These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Furthermore, some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch. The first commercially available hand-tracking glove-type device was the DataGlove, a glove-type device which could detect hand position, movement and finger bending. This uses fiber optic cables running down the back of the hand. Light pulses are created and when the fingers are bent, light leaks through small cracks and the loss is registered, giving an approximation of the hand pose. Depth-aware cameras. Using specialized cameras such as structured light or time-of-flight cameras, one can generate a depth map of what is being seen through the camera at a short range, and use this data to approximate a 3d representation of what is being seen. These can be effective for detection of hand gestures due to their short range capabilities. Stereo cameras. Using two cameras whose relations to one another are known, a 3d representation can be approximated by the output of the cameras. To get the cameras' relations, one can use a positioning reference such as a lexian-stripe or infrared emitters. In combination with direct motion measurement (6D-Vision) gestures can directly be detected. Gesture-based controllers. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. An example of emerging gesture-based motion capture is through skeletal hand tracking, which is being developed for virtual reality and augmented reality applications. An example of this technology is shown by tracking companies uSens and Gestigon, which allow users to interact with their surrounding without controllers. Wi-Fi sensing Another example of this is mouse gesture trackings, where the motion of the mouse is correlated to a symbol being drawn by a person's hand which can study changes in acceleration over time to represent gestures. The software also compensates for human tremor and inadvertent movement. The sensors of these smart light emitting cubes can be used to sense hands and fingers as well as other objects nearby, and can be used to process data. Most applications are in music and sound synthesis, but can be applied to other fields. Single camera. A standard 2D camera can be used for gesture recognition where the resources/environment would not be convenient for other forms of image-based recognition. 
Earlier it was thought that a single camera might not be as effective as stereo or depth-aware cameras, but some companies are challenging this theory. Software-based gesture recognition technology using a standard 2D camera can detect robust hand gestures. Algorithms Depending on the type of input data, a gesture can be interpreted in different ways. However, most of the techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach. In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language each gesture represents a word or phrase. Some literature differentiates two approaches in gesture recognition: 3D-model-based and appearance-based. The first method makes use of 3D information about key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. On the other hand, appearance-based systems use images or videos for direct interpretation. 3D model-based algorithms The 3D model approach can use volumetric or skeletal models, or even a combination of the two. Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes. The drawback of this method is that it is very computationally intensive, and systems for real-time analysis are still to be developed. For the moment, a more interesting approach would be to map simple primitive objects to the person's most important body parts (for example cylinders for the arms and neck, sphere for the head) and analyse the way these interact with each other. Furthermore, some abstract structures like super-quadrics and generalised cylinders may be even more suitable for approximating the body parts. Skeletal-based algorithms Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint angle parameters along with segment lengths. This is known as a skeletal representation of the body, where a virtual skeleton of the person is computed and parts of the body are mapped to certain segments. The analysis here is done using the position and orientation of these segments and the relation between each one of them (for example, the angle between the joints and the relative position or orientation). Advantages of using skeletal models: Algorithms are faster because only key parameters are analyzed. Pattern matching against a template database is possible. Using key points allows the detection program to focus on the significant parts of the body Appearance-based models These models no longer use a spatial representation of the body, because they derive the parameters directly from the images or videos using a template database. Some are based on deformable 2D templates of parts of the human body, particularly hands. Deformable templates are sets of points on the outline of an object, used as interpolation nodes for the object's outline approximation. One of the simplest interpolation functions is linear, which performs an average shape from point sets, point variability parameters and external deformators.
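As a toy illustration of the template idea just described (not drawn from any particular system; the array shapes and function names are assumptions of this sketch), a linear "average shape" and a nearest-template classifier can be written in a few lines of Python:

import numpy as np

def average_shape(templates):
    # Each template is an (n_points, 2) array of outline points sampled in the
    # same order; the mean shape is the point-wise average of the templates.
    return np.mean(np.stack(templates), axis=0)

def closest_template(observed, templates):
    # Return the index of the stored outline whose points are nearest to the
    # observed outline (sum of squared point distances), a crude classifier.
    return min(range(len(templates)),
               key=lambda i: float(np.sum((observed - templates[i]) ** 2)))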
These template-based models are mostly used for hand-tracking, but could also be of use for simple gesture classification. A second approach in gesture detecting using appearance-based models uses image sequences as gesture templates. Parameters for this method are either the images themselves, or certain features derived from these. Most of the time, only one (monoscopic) or two (stereoscopic) views are used. Electromyography-based models Electromyography (EMG) concerns the study of electrical signals produced by muscles in the body. Through classification of data received from the arm muscles, it is possible to classify the action and thus input the gesture to an external software. Consumer EMG devices allow for non-invasive approaches such as an arm or leg band, and connect via bluetooth. Due to this, EMG has an advantage over visual methods since the user does not need to face a camera to give input, enabling more freedom of movement. Challenges There are many challenges associated with the accuracy and usefulness of gesture recognition software. For image-based gesture recognition there are limitations on the equipment used and image noise. Images or video may not be under consistent lighting, or in the same location. Items in the background or distinct features of the users may make recognition more difficult. The variety of implementations for image-based gesture recognition may also cause issue for viability of the technology to general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial and full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy. In order to capture human gestures by visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition or for capturing movements of the head, facial expressions or gaze direction. Social acceptability One significant challenge to the adoption of gesture interfaces on consumer mobile devices such as smartphones and smartwatches stems from the social acceptability implications of gestural input. While gestures can facilitate fast and accurate input on many novel form-factor computers, their adoption and usefulness is often limited by social factors rather than technical ones. To this end, designers of gesture input methods may seek to balance both technical considerations and user willingness to perform gestures in different social contexts. In addition, different device hardware and sensing mechanisms support different kinds of recognizable gestures. Mobile device Gesture interfaces on mobile and small form-factor devices are often supported by the presence of motion sensors such as inertial measurement units (IMUs). On these devices, gesture sensing relies on users performing movement-based gestures capable of being recognized by these motion sensors. This can potentially make capturing signal from subtle or low-motion gestures challenging, as they may become difficult to distinguish from natural movements or noise. 
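To illustrate why subtle gestures are hard to separate from noise on motion sensors, here is a deliberately naive Python sketch of a shake detector (the threshold and peak-count values are arbitrary assumptions; production recognizers use far more robust filtering and machine-learned models):

import math

def detect_shake(samples, threshold=2.5, min_peaks=3):
    # samples: (ax, ay, az) accelerometer readings in g; a "shake" is declared
    # when the acceleration magnitude exceeds the threshold several times.
    # A gesture gentler than the threshold is indistinguishable from rest.
    peaks = sum(1 for ax, ay, az in samples
                if math.sqrt(ax * ax + ay * ay + az * az) > threshold)
    return peaks >= min_peaks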
Through a survey and study of gesture usability, researchers found that gestures that incorporate subtle movement, that appear similar to existing technology, that look or feel similar to everyday actions, and that are enjoyable were more likely to be accepted by users, while gestures that look strange, are uncomfortable to perform, interfere with communication, or involve uncommon movement were more likely to be rejected. The social acceptability of mobile device gestures relies heavily on the naturalness of the gesture and the social context. On-body and wearable computers Wearable computers typically differ from traditional mobile devices in that their usage and interaction take place on the user's body. In these contexts, gesture interfaces may become preferred over traditional input methods, as their small size renders touch-screens or keyboards less appealing. Nevertheless, they share many of the same social acceptability obstacles as mobile devices when it comes to gestural interaction. However, the possibility for wearable computers to be hidden from sight or integrated into other everyday objects, such as clothing, allows gesture input to mimic common clothing interactions, such as adjusting a shirt collar or rubbing one's front pant pocket. A major consideration for wearable computer interaction is the location for device placement and interaction. A study exploring third-party attitudes towards wearable device interaction conducted across the United States and South Korea found differences in the perception of wearable computing use between males and females, in part due to different areas of the body being considered socially sensitive. Another study investigating the social acceptability of on-body projected interfaces found similar results, with both studies labelling areas around the waist, groin, and upper body (for women) as least acceptable and areas around the forearm and wrist as most acceptable.
See also Activity recognition Articulated body pose estimation Automotive head unit Computer processing of body language 3D pose estimation Pointing device gesture References External links Annotated bibliography of references to gesture and pen computing Notes on the History of Pen-based Computing (YouTube) The future, it is all a Gesture—Gesture interfaces and video gaming Ford's Gesturally Interactive Advert—Gestures used to interact with digital signage 3D Hand Tracking—A Literature Survey Applications of computer vision Virtual reality Object recognition and categorization User interface techniques History of human–computer interaction
54119928
https://en.wikipedia.org/wiki/Flaviu%20Cristian
Flaviu Cristian
Flaviu Cristian (25 June 1951 – 27 April 1999) was a Romanian-American computer scientist noted for his work in distributed systems and, in particular, the development of a method for clock synchronisation which bears his name, Cristian's algorithm. Biography He was born in 1951 in Cluj, in the Transylvania region of Romania, the son of Ilie and Rafila Cristian. After graduating from the Nicolae Bălcescu High School in his native city, he went in 1971 to France to study at the Grenoble Institute of Technology, in the Department of Applied Mathematics and Computer Science. After graduating in 1977 from both the Institute and the Grenoble School of Management, he pursued his graduate studies in computer science at the University of Grenoble, where he carried out research in operating systems and programming methodology, and received his Ph.D. in 1979. Cristian went on to the University of Newcastle upon Tyne in the United Kingdom, where he worked in the area of specification, design, and verification of fault-tolerant software. In 1982 he emigrated to the United States, joining the IBM Research Center in Almaden Valley, in San Jose, California. In 1991 he joined the University of California, San Diego as Professor in the Department of Computer Science and Engineering. He died in San Diego in 1999 after a long battle with cancer. Publications See also Cristian's algorithm Notes External links UCSD Jacobs: Flaviu Cristian, Professor and Scientist, Dies at Age 48 List of computer science publications by Cristian Flavius 1951 births 1999 deaths Romanian computer scientists Grenoble Alpes University alumni University of California, San Diego faculty Romanian emigrants to the United States Scientists from Cluj-Napoca Romanian expatriates in France Romanian expatriates in the United Kingdom IBM Research computer scientists Grenoble Institute of Technology alumni Deaths from cancer in California
255308
https://en.wikipedia.org/wiki/Apple%20SOS
Apple SOS
The Sophisticated Operating System, or SOS , is the primary operating system of the Apple III computer. SOS was developed by Apple Computer and released in October 1980. In 1985, Steve Wozniak, while critical of the Apple III's hardware flaws, called SOS "the finest operating system on any microcomputer ever". Technical details SOS is a single-tasking single-user operating system. It makes the resources of the Apple III available in the form of a menu-driven utility program as well as a programming application programming interface (API). A single program is loaded at boot time, called the interpreter. Once loaded, the interpreter can then use the SOS API to make requests of the system. The SOS API is divided into four main areas: File Calls: Create, destroy, rename, open, close, read, write files; set, get prefix (current working directory); set, get file information; get volume information; set, set mark, EOF, and level of files Device Calls: Get status, device number, information of a device; send device control data Memory Calls: Request, find, change, release memory segment; get segment information; set segment number Utility Calls: Get, set fence (event threshold); get, set time; get analog (joystick) data; terminate. The Apple III System Utilities program shipped with each Apple III computer. It provides the user interface of the operating system itself, for system configuration and file management. The System Utilities program is menu-driven and performs tasks in three categories: Device-handling commands: copy, rename, format, verify volumes (drives); list devices; set time and date File-handling commands: list, copy, delete, rename files; create subdirectories; set file write protection; set prefix (current working directory) System Configuration Program (SCP): configure device drivers. SOS has two types of devices it communicates with via device drivers: character devices and block devices. Examples of SOS character devices are keyboards and serial ports. Disk drives are typical block devices. Block devices can read or write one or more 512-byte blocks at a time; character devices can read or write single characters at a time. Boot sequence When powered on, the Apple III runs through system diagnostics, then reads block number zero from the built-in diskette drive into memory and executes it. SOS-formatted diskettes place a loader program in block zero. That loader program searches for, loads, and executes a file named SOS.KERNEL, which is the kernel and API of the operating system. The kernel, in turn, searches for and loads a file named SOS.INTERP (the interpreter, or program, to run) and SOS.DRIVER, the set of device drivers to use. Once all files are loaded, control is passed to the SOS.INTERP program. Apple ProDOS uses the same file system as SOS. On a disk formatted by ProDOS, the ProDOS loader and SOS loader are written to blocks zero and one, respectively. The ProDOS loader includes code that can execute on an Apple III, and which will chainload the SOS loader from block one, so SOS and ProDOS can co-exist on the same volume. Some software, such as ADTPro, makes use of this to store Apple II and Apple III versions of a program on the same disk, which is then bootable on both systems. References External links Washington Apple Pi – Apple /// Resources The Apple /// FAQ File V5.1 – discusses Apple III and SOS history as well as basic information about the system apple3.org - Apple /// Articles, Public Domain Software Apple Inc. 
References

External links
Washington Apple Pi – Apple /// Resources
The Apple /// FAQ File V5.1 – discusses Apple III and SOS history as well as basic information about the system
apple3.org - Apple /// Articles, Public Domain Software

Apple Inc. operating systems Disk operating systems Discontinued operating systems 1980 software
3684381
https://en.wikipedia.org/wiki/M23%20software%20distribution%20system
M23 software distribution system
m23 is a software distribution and management system for the Debian, Ubuntu, Kubuntu, Xubuntu, Linux Mint, elementary OS, Fedora, CentOS and openSUSE distributions. m23 can partition and format clients and install a Linux operating system and any number of software packages, such as office packages, graphics tools, server applications or games. The entire administration is done via a web browser and is possible from any computer with access to the m23 server. m23 has been developed predominantly by Hauke Goos-Habermann since the end of 2002.
m23 differentiates between servers and clients. An m23 server is used for software distribution and the management of the clients; computers that are administered (e.g. have software installed) through the m23 server are the clients. The client is booted over the network during the installation of the operating system. It is possible to start the client with a boot ROM on its network card, a boot disk or a boot CD. The client's hardware is detected and set up, and the gathered hardware and partition information is sent to the m23 server. This information is then shown in the m23 administration interface, where the administrator chooses how to partition and format the client. There are other settings, too, e.g. the distribution to be installed on the client. The m23 clients can be installed as workstations with the graphical user interfaces KDE 5.x, GNOME 3.x, Xfce, Unity, LXDE and pure X11, or as servers without a graphical subsystem. In most server setups, the server does not need a user interface because most server software runs in text mode. m23 is released under the GNU GPL.

Features
Three steps to a complete client: Installing a client via m23 is rather simple; only three steps are needed for a completely installed client.
Integration of existing clients into m23: Existing Debian-based systems can easily be assimilated into the m23 system and administered like a normal client (installed with m23).
Group functions: The group functions allow the management of large numbers of clients that need to be updated, have new software installed, or have other routine jobs done. For example, a new software package can be installed on, or removed from, all clients of a group.
Mass installation: The mass installation tools allow many clients with similar requirements to be installed at once. A virtual client, holding the basic settings (such as the default gateway or the group name), is defined; this virtual client can then be used to derive and install an arbitrary number of (real) clients (see the sketch after this list).
Imaging: Installation using image files: An image file of a partition or entire hard disk can be used to install other clients. These image files are taken from installed clients with all their software packages and settings.
Support for software RAIDs: Partitions or entire hard drives can be combined into software RAIDs. m23 supports the RAID levels 0, 1, 4, 5, 6 and 10. RAIDs can be used like normal partitions to install operating systems to, or as swap or storage space.
User management with LDAP: User accounts can be stored on a central LDAP server. This increases the ease of use in environments with many clients and a lot of users. The OpenLDAP server is shipped with the m23 server and can be administered via the highly integrated phpLDAPadmin.
Pool builder: The pool builder makes it possible to combine software packages from different media (CD, DVD, internet) on the m23 server and convert them into a local package source. These package sources can then be used to install clients, even without an internet connection or when the connection is slow. Self-made packages can also be added to the pool.
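To make the virtual-client idea concrete, here is a small, purely hypothetical sketch. It only illustrates the "derive many real clients from one virtual client" pattern described above; m23 itself is administered through its web front end, and none of the class or function names below are part of the real m23 project:

```python
# Hypothetical illustration of m23-style mass installation: a "virtual client"
# carries the shared settings, and real clients are derived from it.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ClientSettings:
    group: str          # group the client belongs to
    distribution: str   # distribution to install on the client
    gateway: str        # default gateway, one of the shared basic settings
    hostname: str = ""
    mac: str = ""

# The virtual client: holds the basic settings common to every derived client.
virtual_client = ClientSettings(group="office", distribution="Debian",
                                gateway="192.168.1.1")

def derive_clients(count: int) -> list:
    """Derive `count` real clients from the virtual-client template."""
    return [replace(virtual_client,
                    hostname=f"client{i:03d}",
                    mac=f"00:16:3e:00:00:{i:02x}")
            for i in range(1, count + 1)]

for client in derive_clients(3):
    print(client.hostname, client.mac, client.distribution, client.group)
```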
External links
m23 project page with forum, download of the m23 server installation CD,...
Page of the developer for commercial support and development
Review of m23 at the reallylinux web site

System administration Software distribution Free network management software Configuration management Free software
40272802
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Gear
Samsung Galaxy Gear
The Samsung Galaxy Gear is a smartwatch produced by Samsung Electronics in the Samsung Gear family of devices. Unveiled during a Samsung Unpacked event in Berlin, Germany on September 4, 2013, the device serves as a companion for all Samsung Galaxy smartphones and tablets which run on Android 4.3 "Jelly Bean" or newer. It was released on September 25, 2013. The Galaxy Gear was originally released as an Android-based device, but Samsung replaced the operating system with Tizen through the May 2014 software update. The Gear's successor, the Gear 2, was released on April 11, 2014.

History
Development
The development of the Galaxy Gear came in the midst of a push towards the growing smartwatch market; Samsung's main competitor Apple had been the subject of rumors suggesting an upcoming watch product, while the Kickstarter-backed Pebble (which reached mass-market availability in July 2013) has been credited with creating mainstream interest in the concept. Lee Young-hee, vice president of Samsung's mobile business, revealed in July 2013 that the company had been working on a watch for a long time, and referred to it as a "product for the future". In August 2013, Bloomberg reported that "two people familiar with the matter" claimed Samsung would introduce a smartwatch, tentatively known as the 'Galaxy Gear', on September 4, 2013 during a Samsung Unpacked event prior to the IFA consumer electronics trade show in Berlin. Samsung was also expected to unveil the Galaxy Note 3 phablet during the event. In an interview with The Korea Times published on August 27, 2013, Lee Young-hee stated that the Galaxy Gear would "enhance and enrich the current smart mobile experience in many ways", would "lead a new trend in smart mobile communications", and "add meaningful momentum to the mobile industry".

Release
Samsung officially unveiled the Galaxy Gear on September 4, 2013, with a release date set for September 25, 2013 (although its release in the United States and Japan would be delayed into October). Some wireless service providers, as an incentive, have also offered the Galaxy Gear as part of a bundle with the Galaxy Note 3. To promote the Galaxy Gear, Samsung released two television advertisements, "Evolution" and "A Long Time Coming". Both ads showcased historical depictions of smartwatch-like devices in popular culture (including those seen in Dick Tracy, Star Trek, The Jetsons, Predator, and Inspector Gadget), with the tagline "After all these years, it's finally real." The Galaxy Gear's watch design was exclusively leaked to VentureBeat reporter Christina Farr by an unnamed source several days prior to its Berlin unveiling on September 4, 2013, generating widespread global interest.

Specifications
Hardware
The Galaxy Gear is powered by a dual-core 1,600 MHz Exynos 4212 system-on-chip, scaled back to a single 800 MHz core to help save battery life, and contains a 320-pixel-wide square-shaped Super AMOLED touchscreen display with a pixel density of 277 ppi. Its band contains a 1.9-megapixel camera with a back-illuminated sensor, auto-focus, and 720p video recording, along with a speaker and two noise-cancelling microphones. The Galaxy Gear also includes 4 GB of internal memory, 512 MB of RAM, an accelerometer, and a gyroscope. The device contains a 315 mAh battery; the device itself does not contain a charging port and must be placed inside a special Micro USB-equipped charging case, which also contains an NFC tag that is used for the initial setup of the device.
The Galaxy Gear is designed to IP55 specifications with dust resistance and protection from short-term exposure to water jets. To communicate with a host device, the Galaxy Gear uses Bluetooth Low Energy. As only Android 4.3 and later offer native support for Bluetooth LE, only the Galaxy Note 3 and Galaxy Note 10.1 2014 Edition, the first Samsung devices to ship with 4.3, were supported by the Galaxy Gear on launch. Shortly after the release of the Galaxy Gear, Samsung began rolling out Android 4.3 updates for its other recent devices, such as the Galaxy S III, S4 and Note II, to ensure Galaxy Gear compatibility. Software The Galaxy Gear uses an Android or Tizen-based operating system with a minimalistic interface and gesture-based navigation. To pair the watch with a smartphone or tablet as its host device, the user must first install the Gear Manager app. An NFC tag located inside the charging case is used to download the app. Gear Manager uses Bluetooth to coordinate the pairing process and further communication with the device. It can also be used to configure the device's settings (such as the clock display's color and appearance), and to manage and install apps via Samsung Apps. Notifications can be synced from the host device and displayed on the Gear's screen when received; a preview of a notification's content is displayed on the watch itself, while the Smart Relay feature allows users to open content displayed on Gear on their smartphone or tablet. S Voice can be used for basic voice commands and voice dictation for certain apps. Phone calls can also be answered and placed from the watch (the dialer is launched by swiping up on the clock display), while the "Find My Device" tool can be used to locate the watch with a phone or tablet, or vice versa. The camera app can take photos, or record videos up to 15 seconds in length. Photos and videos can be stored to the device's internal storage, or immediately transferred to the user's phone or tablet. Users can also record up to 5 minutes' worth of voice memos. A media controller for music or video being played by the phone or tablet is also provided. While it is possible to sideload other Android apps on the device over Android Debug Bridge (ADB), their usability is impaired by the small screen, and an inability to access the Internet directly. On May 30, 2014, Samsung released the version 2.2 firmware update for the Galaxy Gear through its Kies desktop software. The update replaces the Android-based operating system of the device with the Tizen-based operating system of its successor, the Samsung Gear 2. The new software, among other improvements, provides improved performance and battery life, a voice-controlled camera, and a standalone music player that can play tracks stored in the device's internal storage. Reception The Galaxy Gear received generally negative comments from critics. The design of the Gear itself was met with mixed reactions; although considered high quality, the incorporation of components directly into its strap was criticized for making it inflexible (and thus making the device hard to wear comfortably). The Gear was also criticized for the limited functionality of its apps, along with its initially inconsistent notification system. As an example, critics pointed out that the Android IMAP/POP3 email client was able to display full notifications with message previews on the Galaxy Gear, while Gmail was only able to display an icon and directed users to open the message on their phone instead. 
These limitations surrounding notifications were addressed in the Galaxy Gear's first firmware update, which allows all apps to display notifications with previews on the Galaxy Gear. On the topic, The Verge remarked that "as with industrial design, software engineering isn't among Samsung's strengths, and the results on the Gear are a painful mix of unreliability and inadequacy." The ability to answer phone calls on the device was praised for its convenience and good call quality (drawing comparisons to Dick Tracy), while its camera was also praised for having unexpectedly decent quality for its megapixel size. The device's reliance on Samsung Galaxy devices with specific versions of Android was also criticized, along with its poor battery life, which was considered more comparable to a smartphone than a watch. In an infographic posted in September 2013, TUAW compared the Galaxy Gear to the sixth-generation iPod Nano (which attracted unofficial watchband accessories due to its similarly square form factor) considering the three-year-old MP3 player to be a "better, cheaper [smart]watch" than the Galaxy Gear because it is not dependent on a "host" smartphone or tablet, and contained a larger number of features than the Galaxy Gear (including the ability to, primarily, store and play music from the device itself). Sales The Galaxy Gear performed poorly in terms of commercial sales; in late October 2013, it was reported that at least 30% of the Galaxy Gear watches sold by the U.S.-based chain Best Buy were being returned by unsatisfied customers. In November 2013, Reuters reported that company sources claimed that 800,000 Galaxy Gears were sold worldwide; however, the Korean news agency Yonhap disputed the report, believing that the numbers actually referred to units shipped and not units sold. References Samsung wearable devices Android (operating system) devices Products introduced in 2013 Smartwatches Tizen-based devices
13203695
https://en.wikipedia.org/wiki/Hans%20van%20Hemert
Hans van Hemert
Hans van Hemert (born 7 April 1945 in Voorburg, Netherlands) is a Dutch ASCAP award-winning record producer and songwriter. Mouth & MacNeal and Luv' are among the pop acts he produced. He also composed three songs for the Eurovision Song Contest.

Biography
His debut
He started his career in 1965 when he worked at Phonogram Records, and his relationship with this record company lasted until 1979. He produced and co-wrote songs for Q65, The Motions, Ro-d-ys, Zen, Group 1850, Big Wheel, Somerset, Ramses Shaffy and Liesbeth List, Sandra & Andres, Kamahl, American Gypsy and Vulcano.

Mouth & MacNeal
Among the acts Van Hemert worked with was Mouth & MacNeal. This duo scored big hits not only in their homeland but also abroad between 1971 and 1974:
"Hey You Love" (a Dutch Top 5 single in 1971).
"How Do You Do" (a Dutch #1 hit in 1971 as well as a German Top 5 hit and US Top 10 hit in 1972, when it peaked at #8 on the Billboard Hot 100 chart). This song sold two million copies worldwide.
"Hello-a" (a #1 hit in the Netherlands and Germany in 1972).
"Ik Zie Een Ster" / "I See A Star" (a Dutch and English Top 10 hit; the song was chosen to represent the Netherlands at the Eurovision Song Contest in 1974, the year ABBA won, where it finished in third place).

Luv'
Luv' (1977–present) is a female pop trio he formed with help from Piet Souer and a manager, Han Meijer (later replaced by Pim Ter Linde). This girl group had popular hit records in large parts of continental Europe, South Africa, New Zealand, Australia and Mexico in the late 1970s and early 1980s. Among their hit singles were "U.O.Me", "You're the Greatest Lover", "Trojan Horse", "Casanova" and "Ooh, Yes I Do". This formation sold seven million records.

Eurovision Song Contest
In addition to Mouth & MacNeal's entry, he wrote two other Eurovision songs: "Als het om de liefde gaat" by Sandra & Andres (#4 in 1972) and "The Party's Over" by Sandra Reemer (#9 in 1976).

Later career
Van Hemert composed and produced "Wij houden van Oranje", a football song performed by André Hazes to support the Netherlands national soccer team at UEFA Euro 1988. The same year, he wrote a track recorded by Yann Andersen for a UNICEF campaign. In the late 1990s, he created an independent music production company (Hans van Hemert Productions) that still licenses his hits to labels for 1970s nostalgia compilations. In addition to Luv', he produced other girl groups:
in 1981: Babe (a pop act formed by Peter Koelewijn).
in 1994-1995: Lily Marlene (a manufactured trio that was his own brainchild).
in 1997: Patty Cash (whose members were chosen by Patty Brard).
in 2005-2006: Bling Inc. (also created by him).
However, none of these groups reached Luv's level of success. In April 2005, the Luv' ladies made a surprise showcase appearance at his 40-year business anniversary and 60th birthday. In the spring of 2006, his production team remixed "Trojan Horse", which was included in the box set Completely In Luv' (released by Universal Music Netherlands). Two years later, he produced and wrote a digital single for Mother's and Father's Day, "De allerliefste mama en papa", performed by his daughter Hannah and his granddaughter Britt. In recent years, some of his compositions have been utilised by contemporary artists from the dance and techno scene. "How Do You Do" was sampled by Party Animals in 2006 and by Scooter one year later for their European hit "The Question Is What Is the Question?"
(a Top 5 hit in Germany, Austria, Finland, Hungary, reaching the Top 20 in Ireland and the Top 50 in the Netherlands and UK). "You're the Greatest Lover" was covered by: Loona with a version in Spanish ("Latino Lover") that was a Top 10 hit single in German speaking countries in 2000. Massiv in Mensch with an industrial version in 2004. Swoop, whose dance rendition peaked at #25 on the Flemish Ultratop chart. "Trojan Horse" was sampled by Sheffield Jumpers for their jumpstyle single, "Jump With Me" (a minor hit on the German Media Control Charts) in 2008. The eight #1 hits on the Dutch Top 40 Eight of his productions/compositions were #1 hits in the Netherlands (some of them world hits too): "Hair" by Zen (1968) "Mijn Gebed" by DC Lewis (1970) "How Do You Do" and "Hello-a" by Mouth & MacNeal (1972) "The Elephant Song" by Kamahl (1975) "You're the Greatest Lover" and "Trojan Horse" by Luv' (1978) "Shine Up" by Doris D & The Pins (1980) Awards Among the awards he received were: two Conamus Export Prizes (for his collaboration with the best Dutch export music acts): in 1972 (with Mouth & MacNeal) and in 1979 (with Luv'). a Gouden Harp (Golden Harp) in 1974 for his talents as a producer. an ASCAP award in the United States for the song "How Do You Do" by Mouth & MacNeal. Forty gold and platinum records. References Bibliography "500 Nr.1 Hits uit de Top 40", book by Johan van Slooten, Gottmer Becht Publishing, 1997 "Top 40 Hitdossier 1956-2005 (9e editie)", book by Johan van Slooten, Gottmer Bech Publishing, 2006 "50 jaar nummer-1-hits 1956-2006", book by Johan van Slooten, Gottmer Uitgevers Groep, 2006 "Albumdossier 1969-2002", book by Johan van Slooten, Becht's Uitgevers, 2002 External links Hans Van Hemert official site Information about Van Hemert's career at the Popinstituut website [Dutch] Van Hemert's chart performances in the Netherlands on the DutchCharts.nl portal 1945 births Living people Dutch music managers Dutch record producers People from Voorburg Dutch songwriters
31957510
https://en.wikipedia.org/wiki/Automated%20mining
Automated mining
Automated mining involves the removal of human labor from the mining process. The mining industry is in transition towards automation, but it can still require a large amount of human capital, particularly in the developing world, where labor costs are low and there is therefore less incentive to increase efficiency. There are two types of automated mining: process and software automation, and the application of robotic technology to mining vehicles and equipment.

Mine automation software
In order to gain more control over their operations, mining companies may implement mining automation software or processes. Reports generated by mine automation software allow administrators to identify productivity bottlenecks, increase accountability, and better understand return on investment.

Mining equipment automation
Addressing concerns about how to improve productivity and safety at the mine site, some mining companies are turning to equipment automation consisting of robotic hardware and software technologies that convert vehicles or equipment into autonomous mining units. Mine equipment automation comes in four different forms: remote control, teleoperation, driver assist, and full automation.

Remote control
Remote control mining equipment usually refers to mining vehicles such as excavators or bulldozers that are controlled with a handheld remote control. An operator stands in line-of-sight and uses the remote control to perform the normal vehicle functions. Because visibility and feel of the machine are heavily reduced, vehicle productivity is generally reduced as well when using remote control. Remote control technology is generally used to enable mining equipment to operate in dangerous conditions such as unstable terrain, blast areas, areas at high risk of falling debris, or underground mining. Remote control technology is generally the least expensive way to automate mining equipment, making it an ideal entry point for companies looking to test the viability of robotic technology in their mine.

Teleoperated mining equipment
Teleoperated mining equipment refers to mining vehicles that are controlled by an operator at a remote location with the use of cameras, sensors, and possibly additional positioning software. Teleoperation allows an operator to remove themselves further from the mining location and control a vehicle from a more protected environment. Joysticks or other handheld controls are still used to control the vehicle's functions, and operators have greater access to vehicle telemetry and positioning data through the teleoperation software. With the operator removed from the cab, teleoperated mining vehicles may also experience reduced productivity; however, the operator has a better vantage point than with remote control, thanks to on-vehicle cameras and sensors, and is further removed from potentially dangerous conditions.

Driver assist
"Driver assist" refers to partly automated control of mining machines. Only some of the functions are automated and operator intervention is needed. Common functions include both spotting assist and collision avoidance systems.

Full automation
"Full automation" can refer to the autonomous control of one or more mining vehicles. Robotic components manage all critical vehicle functions including ignition, steering, transmission, acceleration, braking, and implement control (i.e. blade control, dump bed control, excavator bucket and boom, etc.) without the need for operator intervention.
Fully autonomous mining systems experience the most productivity gains, as software controls one or more mining vehicles, allowing operators to take on the role of mining facilitators, troubleshooting errors and monitoring efficiency.

Benefits
The benefits of mining equipment automation technologies are varied but may include improved safety, better fuel efficiency, increased productivity, reduced unscheduled maintenance, improved working conditions, better vehicle utilization, and reduced driver fatigue and attrition. Automation technologies are an efficient way to mitigate the effects of widespread labor shortages for positions such as haul truck driver. In the face of falling commodity prices, many mining companies are looking for ways to dramatically reduce overhead costs while still maintaining site safety and integrity; automation may be the answer.

Drawbacks
Critics of vehicle automation often focus on the potential for robotic technology to eliminate jobs, while proponents counter that although some jobs will become obsolete (normally the dirty, dangerous, or monotonous jobs), others will be created. Communities supporting underprivileged workers who rely on entry-level mining positions are concerned, and are calling for social responsibility, as mining companies transition to automation technologies that promise to increase productivity in the face of falling commodity prices. Risk-averse mining companies are also reluctant to commit large amounts of capital to an unproven technology, preferring more often to enter the automation scene at lower, less expensive levels such as remote control.

Examples of autonomous mining equipment
Mine of the future
Rio Tinto Group embarked on their Mine of the Future initiative in 2008. From a control center in Perth, Rio Tinto employees operate autonomous mining equipment in Australia's remote but mineral-rich Pilbara region. The autonomous mining vehicles reduce the footprint of the mining giant while improving productivity and vehicle utilization. As of June 2014, Rio Tinto's autonomous mining fleet reached the milestone of 200 million tonnes hauled. Rio Tinto also operate a number of autonomous blast hole drill rigs.

Bingham Canyon Mine
Located near Salt Lake City, Utah, the Bingham Canyon Mine (Kennecott Utah Copper/Rio Tinto) is one of the largest open-pit mines in the world and one of the world's largest copper producers. In April 2013, the mine experienced a catastrophic landslide that halted much of the mine's operations. As part of the cleanup efforts, and to improve safety, mine administrators turned to remote control excavators, dozers and teleremote blast hole drills to perform work on the highly unstable terrain. Robotic technology helped Kennecott to reduce the steeper, more dangerous areas of the slide to allow manned vehicles access for cleanup efforts.

Automation of underground works in China
The German company EEP Elektro-Elektronik Pranjic delivered and put into operation more than 60 sets of advanced automatic control systems for underground coal mining between approximately 2006 and 2016. Completely unmanned coal mining technology was used for the first time by the Chinese concern China National Coal Group Corp. (CME) at the Tang Shan Gou mine (longwall mining with shearers, three longwall faces, depth 200 m) and at the Nan Liang mine (one coal plough, depth 100 m). Both coal mines have coal seams 1–1.7 m thick. Extraction is monitored by means of video cameras, in real time, with signal transmission over optical fiber.
Typically, underground staff are required to monitor the production process and to carry out repairs. Automation has improved both safety and economic performance.

Next Generation Mining
BHP have deployed a number of autonomous mining components as part of their Next Generation Mining program. This includes autonomous drills and autonomous trucks in the Pilbara region.

Autonomy in Europe
In March 2021, Ferrexpo plc announced that it had successfully deployed the first large-scale autonomous mining trucks in Europe with the conversion of its CAT 793D haul trucks. The company has used semi-autonomous drill rigs at its operations since 2017.

See also
Mining
Automation
Robotics
Mechanization
Environmental impact of mining
Natural resource

References

Emerging technologies Industrial automation Mining techniques
31106633
https://en.wikipedia.org/wiki/Mumbai%20Educational%20Trust
Mumbai Educational Trust
Mumbai Educational Trust, also known as MET League of Colleges, is an academic institution with campuses in Bandra, Mumbai, and in Nashik. It offers degrees in areas including management, information technology, mass media, pharmacy, medical sciences, and insurance training, research & development.

Campus
Mumbai Campus
MET Institute of Management
MET Institute of Mass Media
MET Asian Management Development Centre
MET Centre for Insurance Training, Research & Development
MET Institute of Pharmacy
MET Institute of Medical Sciences
MET Institute of Information Technology
MET Institute of Software Development and Research
MET Institute of Computer Science
MET Institute of International Studies
MET Rishikul Vidyalaya

Nashik Campus
Institute of Management, Adgaon
Institute of Pharmacy, Adgaon
Institute of Engineering, Adgaon
Institute of Technology (Polytechnic), Adgaon
Institute of Information Technology, Adgaon
Institute of D. Pharmacy, Adgaon
School of Architecture and Interior Designing, Gowardhan

Famous alumni
John Abraham, actor

References

Universities and colleges in Mumbai
42630834
https://en.wikipedia.org/wiki/Disney%20Infinity%202.0
Disney Infinity 2.0
Disney Infinity 2.0 (also known as Disney Infinity: Marvel Super Heroes) is an action-adventure sandbox toys-to-life video game developed by Avalanche Software and Heavy Iron Studios and published by Disney Interactive Studios. The game features additional work by Studio Gobo and Ninja Theory. It is the sequel and successor to the 2013 video game Disney Infinity and was announced on April 8, 2014. The game was released on September 23, 2014 in North America, September 19, 2014 in the United Kingdom, and September 18, 2014 in Australasia and the rest of Europe for iOS, PlayStation 3, PlayStation 4, Wii U, Microsoft Windows, Xbox 360 and Xbox One, and for PlayStation Vita on May 9, 2015. Featuring original story material by Brian Michael Bendis, Marvel Super Heroes features three plots and includes characters from both Marvel and Disney properties. The main campaign is divided into three stories. The first storyline follows the Avengers as they prevent the Norse god Loki from rebuilding the Casket of Ancient Winters. The second storyline centers on Spider-Man and features him, Nova, Iron Fist, and Nick Fury teaming with Venom, tasked with stopping Green Goblin and Mysterio's symbiote invasion. The third and final storyline centers on the Guardians of the Galaxy, in a plot heavily inspired by the film of the same name, where the Guardians must protect Knowhere from Ronan the Accuser. Disney Infinity 2.0 expands Disney Infinity's array of tools to use in the Toybox mode, an open world sandbox. Additionally, the ability to personalize individual characters has been added, using a Skill Tree. Marvel Super Heroes expands on the previous game's community sharing options, including the sharing of personal creations. It was also the first Marvel-licensed game to be published by Disney Interactive Studios, due to Sega's contract with Marvel expiring after the release of Captain America: Super Soldier in 2011, and Activision's contract expiring after the release of The Amazing Spider-Man 2, released a few months prior to the release of Infinity 2.0. A sequel, Disney Infinity 3.0, was released on August 28, 2015 in Europe and August 29, 2015 in North America.

Plot
The game follows multiple plots, with each Playset having its own.

Avengers Playset
The Avengers Playset centers on Loki and MODOK attempting to rebuild the Casket of Ancient Winters, with the aid of the Frost Giants, in order to freeze New York City.

Spider-Man Playset
The Spider-Man Playset, which is heavily inspired by Ultimate Spider-Man, centers on Green Goblin and Mysterio cloning the Venom symbiote and launching a symbiote invasion.

Guardians of the Galaxy Playset
The Guardians of the Galaxy Playset, which is heavily inspired by the Guardians of the Galaxy film in terms of design, centers on the Guardians stealing an Infinity Stone from Ronan the Accuser. Ronan follows the Guardians to the planet Knowhere. With the help of Cosmo the Spacedog and The Collector, the Guardians must protect Knowhere from Ronan and the Sakaaran fleet. The Guardians of the Galaxy eventually make their way onto Ronan's ship, the Dark Aster, to defeat him once and for all.

Gameplay
Disney Infinity 2.0 features similar gameplay to Disney Infinity. Each character can equip a "Pack" or a "Tool". Packs are items that are worn and can be equipped on a character's back, occasionally shown as some sort of backpack. Tools are items, such as rifles, that can be equipped in a character's hand. Some aspects of the game, however, have been changed.
In addition to ground movement, certain characters can fly and hover. The size of Toybox worlds and Playsets has been increased, notably due to the inclusion of flying. Certain characters can climb walls, including the Hulk. The level cap of playable characters has been expanded from Level 15 to Level 20. Characters can be "leveled up" by collecting sparks. A "Skill Tree" has been incorporated into the game and can be used to upgrade a character's abilities. The figures from the first game also receive additional abilities through the Skill Tree. The choices made in the Skill Tree are semi-permanent, as players have the option to reset the Skill Tree choices by resetting their character to Level 0; a rough sketch of this progression appears below. Some characters can jump long distances, including Hulk. Each character plays differently from the others, with characters being roughly divided into two categories, Ranged and Melee. Ranged characters fight from far distances, while Melee characters fight using close combat. Signature moves can be unlocked as the player progresses. For example, Drax primarily utilizes short-ranged melee attacks to defeat enemies, while Star-Lord can use ranged pistols. As characters are leveled up, new abilities will be unlocked. Iron Man's ground slam and Hulk's Thunderclap abilities can be unlocked once players have enough Skill Points. Characters' upgrades differ. Iron Man can upgrade his blasting capabilities, while Thor and Hulk can upgrade their strength, using the new Special Skills and Combat system. All upgrades, skills and progress applied to the character will be saved to the figurine. The freeform unlock system in Disney Infinity 2.0 replaces the roulette-based unlock system from Disney Infinity. If a player is defeated in-game, they must wait a certain time frame until they can use the character they were using again. A map scanner has been implemented to allow players to get around worlds more easily. Mission givers are also present in the game. In order to web-sling, players do not need to attach their webs to nearby surroundings. Disney Infinity 2.0 is split up into two game modes: Playset and Toybox (referred to in Disney Infinity 2.0 as Toybox 2.0). In Playset mode, players can play open-world story modes based specifically on a certain franchise. Each Playset spans 6–10 hours in length. These Playsets can be accessed by placing the Playset piece (also known as a crystal shard piece) on the Disney Infinity Base, a near field communication-enabled hexagonal platform that has two round spots to place figures (and use them in the game) and one hexagonal spot to place the Playset piece or Power Discs. Only figures from a Playset's corresponding franchise (usable in the game through near-field communication technology) can be used in that Playset. Certain characters, including Nova, Iron Man, Rocket Raccoon and Hulk, are playable in more than one Playset, by collecting tokens (called "Cross Over Tokens") scattered throughout levels. Once players have collected ten Cross Over Tokens for one character, that character can be used in another Playset and will unlock exclusive storylines. The Avengers Playset and the Spider-Man Playset take place in "Marvel Manhattan", a version of Manhattan that is 4x larger than The Incredibles Playset featured in the first Disney Infinity title. The Incredibles Playset was the largest environment in Disney Infinity. While in Playset mode, players can uncover secret missions and new goods that can be used in the Toybox.
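The character-progression rules described above (sparks for levelling up, Skill Points spent in a Skill Tree, semi-permanent choices that can only be undone by resetting the character to Level 0, and progress saved back to the figure) can be summarised in a small, purely illustrative sketch. The class and method names, and the sparks-per-level threshold, are invented for illustration and are not taken from the game's actual code:

```python
# Purely illustrative model of the progression rules described above
# (not the game's actual implementation).
LEVEL_CAP = 20          # expanded from Level 15 in the first game
SPARKS_PER_LEVEL = 100  # invented figure, just to make the sketch run

class Character:
    def __init__(self, name: str):
        self.name = name
        self.level = 0
        self.sparks = 0
        self.skill_points = 0
        self.unlocked_skills = []

    def collect_sparks(self, amount: int) -> None:
        """Sparks level the character up; each level grants a Skill Point."""
        self.sparks += amount
        while self.sparks >= SPARKS_PER_LEVEL and self.level < LEVEL_CAP:
            self.sparks -= SPARKS_PER_LEVEL
            self.level += 1
            self.skill_points += 1

    def spend_point(self, skill: str) -> None:
        """Skill Tree choices are semi-permanent: they persist until a reset."""
        if self.skill_points > 0:
            self.skill_points -= 1
            self.unlocked_skills.append(skill)

    def reset(self) -> None:
        """The only way to undo Skill Tree choices: back to Level 0."""
        self.level = self.sparks = self.skill_points = 0
        self.unlocked_skills.clear()

hulk = Character("Hulk")
hulk.collect_sparks(350)         # enough sparks for a few levels
hulk.spend_point("Thunderclap")  # ability named in the article
print(hulk.level, hulk.skill_points, hulk.unlocked_skills)
```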
Bendis penned the stories and dialogue of all future Marvel Playsets. Local multiplayer can be accessed through split screen. The latter game mode, Toybox mode, is a sandbox mode that allows players to create their own games and worlds, using items unlocked in Playset mode. Certain characters, including Tinker Bell, can only be played in Toybox mode, not in Playset mode. Toybox mode supports local and online multiplayer. In Toybox mode, players are given a "Magic Wand", which can be used to place items around the sandbox, customize items and access the Toy Shop (a virtual shop where players can purchase new items for use in the Toybox). The Magic Wand can also be used to set commands on objects, using "Game Logic". Creativi-Toys, the Toybox objects that can be wired together using Game Logic, return in the game with 75 new additions; a toy example of this wiring appears below. For example, a player can set a party cannon to shoot confetti if they connect the cannon to a button. Players can also restrict access to certain packs and tools in the Toybox, including the Magic Wand. The Toy Shop replaces the Toy Spinner from the first game. Items that can be used in Toybox mode can be unlocked through Playset mode. In an attempt to make Toybox building easier, the game features improved and enhanced Toybox editor tools (including in-game "brushes" that automatically create scenery for players, including cities, dungeons and race tracks). Templates, pre-built mini-games that can be placed into a world (also called "Auto Creators"), are also present in the game. Templates range from cliff-side beat 'em up Battle Arenas (similar to Super Smash Bros.) to obstacle courses. Players can then tweak and change Templates to their liking. In order to add some Templates into the Toybox, players must decide the length, width and height of a box that will house and generate the desired structure. A Challenge Maker has also been implemented into the game, allowing players to create structured games. Result screens were also added for competitive-play Toyboxes. A new feature introduced in Disney Infinity 2.0 is the Text Creator, which allows players to write their own text bubbles in the Toybox. All text, however, must be filtered and moderated through a safe chat server. The game also introduces the Toy Box Game Maker, a tool that can restrict access to certain actions, including the use of the Magic Wand. An undo button has also been implemented into the game. Objects from the first game can be unlocked in Disney Infinity 2.0 by placing a Disney Infinity Playset piece on the Disney Infinity 2.0 base. It also features automated builders that can be dropped into the Toybox and generate worlds for players. Automated builders include Eve from WALL-E and Sultan from Aladdin. In Toybox mode, players can also create interior areas, a feature absent from the first game. Interior areas (styled in-game as INteriors) can also be themed to certain characters. More than twenty rooms can be added inside an INterior. Multiple INteriors can be connected through doorways inside buildings. In Disney Infinity, each figure had its own "Adventure", a playable level that was made specifically for the character. Toybox Adventures did not return in Disney Infinity 2.0. An introductory Toybox is pre-loaded into the game, and is filled with Game Play Hosts, who can assign missions and answer questions a player may have. Rewards are given to players once Game Play Host missions are completed. Players can share their Toybox creations with friends by uploading them onto the internet.
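The "Game Logic" wiring described above (connecting one Creativi-Toy's output to another's input, such as a button triggering a party cannon) amounts to simple event wiring. The following toy sketch is an invented illustration of that idea; the class names and behaviour are not taken from the game's actual scripting system:

```python
# Invented illustration of Creativi-Toy style event wiring (button -> cannon);
# not the game's actual Game Logic implementation. Assumes acyclic wiring.
class CreativiToy:
    def __init__(self, name: str):
        self.name = name
        self.connections = []   # toys triggered when this toy fires

    def connect(self, other: "CreativiToy") -> None:
        self.connections.append(other)

    def trigger(self) -> None:
        self.on_triggered()
        for toy in self.connections:
            toy.trigger()

    def on_triggered(self) -> None:
        pass  # specific toys override this with their own behaviour

class Button(CreativiToy):
    def on_triggered(self) -> None:
        print(f"{self.name}: pressed")

class PartyCannon(CreativiToy):
    def on_triggered(self) -> None:
        print(f"{self.name}: shoots confetti")

button = Button("Red button")
cannon = PartyCannon("Party cannon")
button.connect(cannon)   # the wiring a player would do with the Magic Wand
button.trigger()
```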
To help incorporate the new sharing features, a Newsfeed screen was added to the main menu. Players can upload up to 300 Toyboxes to a private cloud and can save up to 100,000 Toyboxes. Players can "like" favorite published Toyboxes and find other Toyboxes, that have found positive reception from other players. In order to make searching through the shared Toyboxes easier, players can set categories and filters that apply to the Toybox that they want to play in. The game introduces Toybox Games, discs which allows players to create games based on genres such as tower defense and dungeon crawling. Each Toybox Game is around 3–4 hours in length. The tower defense Toybox Game is set in Asgard and called Attack on Asgard. In Attack on Asgard, Loki commands Frost Giants to invade Odin's Palace. Power Discs, physical discs placed on the Infinity Base to unlock skies, textures and in-game toys, return from the first game. There are two types of Power Discs, Circular and Hexagonal. Circular Power Discs give in-game boosts to playable characters and Hexagonal Power Discs unlock skies, textures and in-game toys (including vehicles). The game also features two-wheeled vehicles, like motorcycles in addition to four-wheeled vehicles. Eighty-two Power Discs have been released. Some Power Discs feature "Team-Ups". Team-Ups make other characters pop up in-game to fight alongside the player. Certain Power Discs also unlock new costumes for characters, including Gamora's "space armor". Additionally, some Hexagonal Power Discs can launch "Area of Effect" attacks. All figures, Power Discs and toyboxes, from Disney Infinity are compatible with Disney Infinity 2.0. Bases from Disney Infinity are also compatible. However, an Xbox 360 base is not compatible with Xbox One. Whether or not players decide to use the Infinity Base from the first or second game, that base must be connected to a console via USB. Development Concept Disney Infinity was originally described to be a "platform that's always growing and will ultimately have something for everyone," and that the Infinity platform is open to expanding. When Disney Infinity was yet to be released, Avalanche Software CEO, John Blackburn stated that discussions about the inclusion of Star Wars and Marvel characters in the game, took place. On that subject Blackburn said "I hope so! As a developer, I so want that. When Disney bought Lucasfilm, I was like, 'Holy crap! This is awesome!' But at this point in time, there's been some discussions about that, there's just been no decisions made." In February 2014, Avalanche Software began hiring game testers for a potential Disney Infinity sequel. Star Wars characters and playsets would eventually be added in the following expansion, Disney Infinity 3.0. Avalanche Software started working with Marvel Entertainment, after it was acquired by Disney in 2009, about developing a game featuring Marvel superheroes, in 2011. The conception of the game started immediately after the acquisition, as Disney Infinity 2.0s executive producer, John Vignocchi commented "As soon as the acquisition happened we started having conversations about Marvel. I giggle that I know the date, because I checked in via Foursquare at Marvel. So that's how I knew, that was the first time we sat down with Joe Quesada." Avalanche developed a demo in October 2013, which featured the Marvel superhero Iron Man flying through a rudimentary city. The demo was later released to the public almost a year later. 
Since the game was set to include Marvel characters, Brian Michael Bendis, a comic book writer, was brought on to write original story material. This would allow the game's developers to provide deeper and more immersive storylines. To realize a more complex combat system, Ninja Theory was brought onto the development team in February 2014.

Announcement
On April 8, 2014, Marvel Entertainment released a teaser trailer featuring characters from Disney Infinity interacting with Captain America's shield. D23, the official Disney fan club, originally reported that the game would be released in August 2014. This statement was later removed, as representatives of the site said "we erroneously reported about the launch of the next chapter of Disney Infinity. This information was incorrect. More news about Disney Infinity will be shared on April 30, and we hope you will check back for that update." On April 30, 2014, Disney announced a follow-up to Disney Infinity, Disney Infinity 2.0, at Pacific Theatres' ArcLight Cinerama Dome in Hollywood, for release on the Xbox 360, PlayStation 3, Wii U, iOS, Microsoft Windows, Xbox One, and PlayStation 4 in Q4 2014, featuring characters from the Marvel Comics universe. The announcement was also broadcast live online through a webcast. The unveiling included appearances by Clark Gregg, who appears as Agent Coulson in Agents of S.H.I.E.L.D., and Samuel L. Jackson, who appears as Nick Fury in Marvel Super Heroes and many films in the Marvel Cinematic Universe. Jackson was in-character as Fury, in a video where he discusses "Project Infinity." Marvel Entertainment chief creative officer Joe Quesada was also at the unveiling, declaring that the comic book characters are "uniquely Marvel but still fit within the game". At Sony's Electronic Entertainment Expo 2014 conference it was revealed that the game would be released for the PlayStation Vita and PlayStation TV in 2015. The PlayStation Vita version of the game would play like the PlayStation 3 version. The Guardians of the Galaxy Playset was revealed on July 23, 2014, and it was confirmed that Studio Gobo (the studio that developed the Pirates of the Caribbean Playset for Disney Infinity) had developed the Playset.

Design
The art style from Disney Infinity, in which every character and object is designed to look like a toy and was co-designed by Pixar, was carried over to Disney Infinity 2.0. Developers re-imagined Marvel characters in this art style. John Vignocchi said of the redesigns, "it was a really interesting process. Working with Joe Quesada and the creative team and Marvel -- getting them to understand what the visual aesthetic of 'Disney Infinity' is, that is, they're not the characters themselves, but toy versions of those characters, and reimagining some of Marvel's most iconic superheroes in that art style was a daunting, but very fun task for this team." Each character goes through multiple stages of development, including how they will look in the game and what their figurine's pose will be. The developers specifically designed the Venom character to be on par with Hulk when it came to size and mass. On Venom's design John Vignocchi said "we hulked him out a bit. In fact, the character is the only one that comes close to the Incredible Hulk in the game (designers wanted another brute to offer) and when fully powered up, stands on equal footing with the green beast." No "Crystal" variants of characters have been designed for Disney Infinity 2.0, unlike the first game.
The Disney Infinity team wanted to focus on a more diverse array of characters, as said by John Vignocchi "we're not planning on doing the crystal characters again. Our focus this time around – rather than create a whole series of chase characters, we wanted to make sure that we had a wider variety of characters," he said. "What we decided to do instead was really put our horsepower into creating different characters as opposed to variants. We think our fans appreciate having a character like Star-Lord or Rocket Raccoon as opposed to another version of Mickey Mouse." The game has been designed to be less violent, in order to appeal to a wider audience. Despite this, a Crystal variation of Sorcerer's Apprentice Mickey Mouse would be released. When designing the game, Avalanche Software focused not only on the inclusion of the NFC-enabled Marvel figurines, but improving the Toybox mode (especially for younger players) as well. Jimmy Pitaro, president of Disney Interactive Studios stated "we wanted to take everything that worked so well and make it epic." Each character was picked based on how they would fit in the game, as said by TQ Jefferson, "what you're going to see as we roll out and reveal more characters, is that they're going to be more characters that you've known forever and they make sense," Jefferson said. "And the other characters, you may not know them as well, but once you see them in-game, you're going to understand why we made the choices that we made. These characters bring a certain narrative strength with them and, just as important, they bring a unique bit of gameplay with them ... [these are the] characters that are the most fun to play ... that have the most impact on story. Once you see the full spectrum of characters, it's not just an Avengers game. It's very much a Marvel super heroes game." The development team tried to appeal to the game's target audience (children) by making changes to environments, with one being to disregard physics. Matt Solie, of Disney Interactive Studios, when asked about the topic stated "I had my nephews, who are eight and five, try Amazing Spider-Man 2, which is the epitome of physics-based, buildings attached—[it's] not really fun for them. They didn't understand, 'Why can't I swing?'" Solie said. "It just made sense for a kids and family game." Also basing the game around its target audience John Blackburn, part of the game's development team, looked to his 6-year-old nephew, Logan, to see what younger audiences wanted out of the game. The procedural editing options were added, after Blackburn noticed that Logan was building very simple things in the Toybox. Though Disney Infinity 2.0 is heavily inspired by Avengers Assemble, Ultimate Spider-Man, Hulk and the Agents of S.M.A.S.H. and the Marvel Cinematic Universe in terms of design, the possibility of the female Thor from the comics joining the game has not been ruled out. On the possibility of the game's artists possibly designing a toylike female Thor, John Vignocchi stated "The Thor that we have inside of Infinity is based on the animated series Avengers Assemble. Infinity being a platform that evolves over time, that certainly doesn't rule out the possibility of creating a new version of Thor. Should there be a demand, we'd love to do that." No X-Men characters would be included in the game. Analysts believe this is due to the fact that 20th Century Fox owned the film rights to X-Men. 
The Disney Infinity fanbase and "community" have avidly played a role in development of Disney Infinity 2.0. Community members Stewart "CrazybyNick" Malcolm, Patrick "PapaEcho" Efird, Tyler "Tyforce" Cole, Jose "doctorlogicgate" Abalos and Quinn "quinnjitsu" Johnson, who competed in multiple challenges hosted by the Disney Infinity team, were actually hired to create free downloadable levels for Disney Infinity 2.0. Ninja Theory was hired by Disney to design Loki, Ronan, Green Goblin and Jasmine's combat systems and polish all of the other characters' combat systems. PlayStation 4 and Xbox One development Prior to the release of Disney Infinity, Avalanche Software expressed interest in supporting Disney Infinity toys on Xbox One and PlayStation 4 consoles. John Day, of Avalanche Software stated that Xbox One and PlayStation 4 versions of Disney Infinity would likely be "way more beautiful," offer "a whole lot more" content, "or both." The versions of the game for the PlayStation 4 and the Xbox One have better graphics and more memory than their counterparts for the PlayStation 3, Xbox 360, and Wii U. When discussing the additional power of the PlayStation 4 and the Xbox One, John Vignocchi stated "I'd say, specifically related to Xbox One, the graphical prowess of that system is incredible. So, we're looking at what we can do to make sure the game looks incredible on [the Xbox One]. From a features perspective, I'd say the thing we're most excited about is that players are going to be able to create Toy Boxes now that are much, much larger than they were before," and "In terms of what we're doing with [PS4 and Xbox One], we're planning on making use of the graphical prowess of those machines — they can pump out some amazing-looking graphics. In addition to that, what we're planning on doing is taking advantage of the overall memory footprint. In Disney Infinity there were certain platforms that were constrained by the amount of RAM, so people couldn't truly build the toy box of their dreams. We've optimized the engine to allow the last-gen, PS3 and 360, to still build compelling Toy Box levels, but with Xbox One and PS4, the sky's the limit." The Xbox One and PlayStation 4 versions of the game are more advanced than the others, as executive producer John Vignocchi said "one of the most unique features of Disney Infinity 2.0 on Xbox One and PlayStation 4, just because of the memory footprint of those systems, is that the size of the toy boxes you can create are just gigantic." The game engine is improved and expanded. Release Disney Infinity 2.0 game disc and video game software were released to the public on multiple dates, through "Starter Packs." Starter Packs are bundles that provided players with the game disc, the Infinity Base, a web code card, a poster and varying playable characters based on the Starter Pack they purchase. The game was first released in the Marvel Super Heroes Starter Pack, which includes the game disc, the Infinity Base, the Avengers Playset, a web code card, a poster as well as Iron Man, Black Widow and Thor playable characters. This bundle was released on September 23, 2014 in North America, September 19, 2014 in the United Kingdom, and on September 18, 2014 in Australasia and the rest of Europe, on PlayStation 3, PlayStation 4, Wii U, Xbox 360 and Xbox One. 
The Toy Box Starter Pack (known as the Toybox Combo Starter Pack in some markets), released on November 4, 2014 in North America and November 7, 2014 in the United Kingdom, includes the game disc, Merida and Stitch playable characters, two Toybox Game Discs, the Infinity Base, a web code card and a poster. This bundle was released for PlayStation 3, PlayStation 4, Wii U, Xbox 360 and Xbox One. A Collector's Edition Starter Pack, exclusive to the PlayStation 3 and PlayStation 4, was also released; in addition to everything included in the Disney Infinity 2.0 Starter Pack, it includes the Hulk, Captain America and Hawkeye as playable characters and a diorama featuring a Frost Beast to display all of the Avengers characters. The Collector's Edition was unveiled during Sony's E3 2014 press conference, and was created after a partnership between Sony and Disney was formed. The Collector's Edition was exclusive to EB Games in Australia and New Zealand. This bundle was eventually released on September 23, 2014 in North America, September 19, 2014 in the United Kingdom, and on September 18, 2014 in Australasia and the rest of Europe. The game was bundled with a 12 GB PlayStation 3 console. A Disney Infinity 2.0 Marvel Premium Value Pack bundle exclusive to Amazon.com includes the Marvel Super Heroes Starter Pack, Captain America, Venom and Rocket Raccoon playable characters, a Power Disc blind pack and the Spider-Man Playset. For those who purchased the first game, Disney Infinity 2.0 was also released digitally, through the Nintendo eShop, PlayStation Store, and Xbox Games Store. The digital version of the game was not available for Xbox One, due to hardware compatibility issues. The digital version of Disney Infinity 2.0 was not available at launch, but was instead released on November 4, 2014 in North America. This version of the game includes the Toybox mode, but requires an Infinity Base and Disney Infinity figurines. The Microsoft Windows version of the game was released digitally, as a standalone, brand-new application for Windows. Microtransactions were required to unlock additional content. The iOS version of Disney Infinity 2.0 was released through the Apple App Store. Disney Interactive Studios announced that from July 24, 2014 through September 22, 2014, consumers who pre-ordered the Starter Pack would receive a free Disney Infinity 2.0 figurine from retailers nationwide. Retailers including Disney Store, Tesco and Game have offered this deal.

Marketing
Disney Interactive Studios has been present at multiple conventions (including the Electronic Entertainment Expo and San Diego Comic-Con International), promoting Disney Infinity 2.0. Free figures were handed out at E3 2014, and at Blogger Bash's Sweet Suite 2014. In promotion of both Disney Infinity 2.0 and Big Hero 6, Disney created a giant-sized version of the Baymax Disney Infinity 2.0 figure. At San Diego Comic-Con, Disney set up a pop-up shop, which allowed the public to play the game, meet special guests and participate in giveaways. Alex Hirsch, creator of Gravity Falls, was also present at the pop-up shop, to announce the inclusion of Gravity Falls Power Discs. Disney also hosted special events for D23 and Marvel Unlimited members. The game was also present during the Marvel Games panel. Additionally, the Marvel Booth and Microsoft Game Lounge held demos to promote the game. A Disney Infinity 2.0-themed Xbox One was also given away at Comic-Con.
On August 9, 2014, GameStop held Disney Infinity Day, an in-store event celebrating the game by offering promotions on it. An official Disney Infinity Toy Box Summit was held from August 15, 2014 to August 17, 2014 in Salt Lake City. The summit was a fan appreciation event and allowed invitees to participate in multiple competitions involving the Toybox. Disney allowed invitees to have a hands-on preview of Disney Infinity 2.0. An in-depth look at the Toybox was also showcased during the Summit. Disney also held a "Movie Magic Video Contest," which gave players a chance to win a Captain America figure and a chance to go to the Toybox Summit. Disney Infinity 2.0 was included in Disney's "Marvel Super Hero September" campaign, which promoted multiple Marvel films and products, including Captain America: The Winter Soldier. Multiple trailers have been released for the game. A teaser was released on April 8, 2014. The reveal trailer for the game was unveiled on April 30, 2014. A trailer focusing on the Avengers Playset was released on May 22, 2014. A second trailer, this time based on the Spider-Man Playset, was released on June 10, 2014. Disney revealed Stitch and Tinker Bell with a preview of the game on July 16, 2014. Another trailer focusing on the Guardians of the Galaxy Playset was released on July 23, 2014. A trailer centering on super villains (Loki, Ronan and Green Goblin) was shown at San Diego Comic-Con 2014 on July 26, 2014, and was released to the public on July 28, 2014. On August 7, 2014, a trailer focusing on Aladdin and Jasmine from Aladdin was released. To coincide with Gamescom 2014, Disney Interactive Studios released a trailer featuring Donald Duck on August 12, 2014. An informational trailer for the Toybox was released on August 14, 2014. Another trailer centering on the Toybox was released on August 20, 2014. A trailer revolving around Baymax and Hiro Hamada from Big Hero 6 was released on August 27, 2014. A trailer titled Walk It, themed to the Aerosmith song Walk This Way, was released on September 5, 2014. Multiple teases from the Disney Infinity team have been made on social networks, including Twitter and Facebook. One prime example is the "Something wicked is coming to #DisneyInfinity... pic.twitter.com/tt8sJytvwi" tweet, which revealed Maleficent would appear in the game. To promote the game, developers have appeared in interviews conducted by gaming sites, including GameSpot. Disney has also chosen to advertise the game through Radio Disney. A weekly web series, The Disney Infinity Toy Box TV: The Official Weekly Web Show, which highlights outstanding Toy Box creations from players, has regularly been providing updates on the game. These updates include "sneak peeks" at characters (in-depth descriptions of characters' abilities and appearances). The web show is hosted by John Vignocchi and the game's community manager Allison Petrek. In a marketing stunt, a holographic visualization of Hulk was projected at Tower Bridge. The projection of Hulk stood 36 feet high, and after debuting the stunt, Disney officially uploaded a video of the hologram on YouTube. The stunt also featured holographic projections of Spider-Man and Iron Man. Since the game's unveiling there have been multiple leaks revealing content featured within Disney Infinity 2.0, primarily from stores marketing the game. Target inadvertently leaked the release date of the game, September 23, 2014, in a flyer. Venom was also leaked by the flyer, as the character had not been revealed prior to that point.
Walmart later leaked that multiple characters, including the Guardians of the Galaxy, would be playable in the game, by listing multiple characters, Playset packs and Starter Packs on its website. Another leaked image showcased Star-Lord and Gamora figures. On the leaks, Vignocchi stated "Yeah, this year we definitely had a challenge with retail – putting out the sku plans and having those accessible by managers prior to announcements. But at the same time, as we've said internally – well, people care enough to talk about it, so we know we're doing something special!" Reception Upon release, Disney Infinity 2.0 received mixed to positive reviews from game critics. Review aggregator website Metacritic scored the PlayStation 4 version of the game 71/100. While the new additions to the game's Toybox mode received praise, Disney Infinity 2.0's Playset mode was criticized. The Toybox was praised for its new additions by both David Roberts of GamesRadar and Randolph Ramsay of GameSpot. Roberts, however, panned the game's "awkward combat, boring, repetitive quests and obtuse controls." He also described the Avengers Playset as too empty and bland. Ramsay gave the game a more favorable review, but felt that Disney Infinity 2.0's technical hiccups and the need to unlock toys in the Toybox soured some of his time with the game. Brandon Jones of GameTrailers praised the easier-to-use Toybox tools, but was disappointed by a lack of diversity among the game's Playsets. Daniel Krupa of IGN commended the game's improved combat, which made characters more interesting, and the more accessible Toybox mode. He, like Roberts and Ramsay, criticized the game's lack of variety in its Playset mode, repetitive missions and "lifeless" version of New York City. Andrew Reiner of Game Informer thought that although The Avengers, Guardians of the Galaxy and Spider-Man Playsets were "miserable adventures," the characters controlled well and the character animations were well done. Reiner, however, was not pleased with the music featured within all three of Disney Infinity 2.0's Playsets, calling it "cringe-worthy," and singled out the dubstep music in the Spider-Man Playset as poorly done. Blake Peterson of Game Revolution gave the game a more commendatory review, explaining that the game's improved combat, stronger level design (compared to the first game), expanded Toybox options, and the ability to personalize characters were some of the major things that made the game worthwhile. In spite of that, Peterson condemned Disney Infinity 2.0's poor driving mechanics, weak player training within Toybox mode and the high cost of unlocking content. Steve Hannley of Hardcore Gamer also gave the game a positive review, scoring it a 4/5, saying "Toy Box mode has been notably upgraded and overall gameplay is more polished, while increased depth will have players of all ages coming back for more," yet criticizing the "repetitive" Play Sets. At the 2014 National Academy of Video Game Trade Reviewers (NAVGTR) awards, Disney Infinity 2.0 was nominated for Game, Franchise Family. It was nominated in the 2015 Kids' Choice Awards for "Most Addicting Game", but lost to Minecraft. Sales Michael Pachter of Wedbush Securities forecast that about 3.2 million copies of the game would be sold. He also forecast that sales of the Marvel Playsets and figurines would double, due to the strong international appeal of superheroes. 
Prior to the game's release, Disney was surprised by the popularity of the Venom figurine, and cited the character's design and abilities as the source of its strong sales. Rocket Raccoon was the best-selling character since becoming available for pre-order in July 2014. The Rocket figurine sold almost twice as much as the Mike Wazowski and Elsa figurines from the first game; those two had been the best-selling characters of Disney Infinity. The Rocket Raccoon figurine was later reported to have tripled in sales, but Disney declined to provide actual sales figures. Groot was the third best-selling character, behind only Rocket Raccoon and Venom. Disney expects Disney Infinity and Disney Infinity 2.0 to generate $1 billion in sales. Sequel A potential third installment in the Disney Infinity game franchise, and the inclusion of characters from Star Wars, had been teased by executive producer John Vignocchi. When talking about the franchise's future, Vignocchi stated that "2015 is not so far, far away anymore, is it?" On May 5, 2015, Disney announced the follow-up game, Disney Infinity 3.0, through Game Informer, for release on Xbox 360, PlayStation 3, Wii U, iOS, Microsoft Windows, Xbox One, and PlayStation 4 in 2015, featuring characters from the Star Wars universe and other new Disney characters. Notes Through the Marvel Super Heroes Starter Pack, the first Starter Pack that was released for the game. The Microsoft Windows and iOS versions of the game were only released digitally, and do not have multiplayer modes. References External links 2014 video games Action-adventure games Action video games Android (operating system) games Crossover video games Disney video games IOS games Nintendo Network games Open-world video games PlayStation 3 games PlayStation 4 games PlayStation Vita games Sentient toys in fiction Science fiction video games Superhero video games Video games about toys Video games based on Marvel Comics Video game sequels Video games scored by Henry Jackman Video games set in New York City Wii U games Wii U eShop games Windows games Xbox 360 games Xbox One games Multiplayer and single-player video games Video games developed in the United States Video games featuring female protagonists Toys-to-life games Ninja Theory games Avalanche Software games 3D platform games
54241631
https://en.wikipedia.org/wiki/ACM%20SIGOPS
ACM SIGOPS
ACM SIGOPS is the Association for Computing Machinery's Special Interest Group on Operating Systems, an international community of students, faculty, researchers, and practitioners associated with research and development related to operating systems. The organization sponsors prestigious international conferences related to computer systems, operating systems, computer architectures, distributed computing, and virtual environments. In addition, the organization offers multiple awards recognizing outstanding participants in the field, including the Dennis M. Ritchie Doctoral Dissertation Award, in honor of Dennis Ritchie, creator of the C programming language and co-creator of the Unix operating system. History In 1965, Henriette Avram started the ACM Special Interest Committee on Time-Sharing (SICTIME), and Arthur M. Rosenberg became the first chair. In 1968, the name was changed to ACM SIGOPS. By 1969, the organization included nearly 1000 members. Conferences ACM SIGOPS sponsors the following conferences, some independently and some in partnership with other organizations such as ACM SIGPLAN, USENIX, Oracle, Microsoft, and VMware. APSYS: Asia-Pacific Workshop on Systems ASPLOS: International Conference on Architectural Support for Programming Languages and Operating Systems EuroSys: European Conference on Computer Systems OSDI: USENIX Symposium on Operating Systems Design and Implementation PODC: Symposium on Principles of Distributed Computing SOCC: International Symposium on Cloud Computing SOSP: Symposium on Operating Systems Principles SYSTOR: ACM International Systems and Storage Conference VEE: International Conference on Virtual Execution Environments Hall of Fame ACM SIGOPS includes a Hall of Fame Award, started in 2005, recognizing influential papers from ten or more years in the past. Notable recipients include: Leslie Lamport (2013) Barbara Liskov (2012) Richard Rashid Dennis Ritchie (2002) Journal ACM SIGOPS publishes the Operating Systems Review (OSR), a forum for topics including operating systems and architecture for multiprogramming, multiprocessing, and time-sharing, and computer system modeling and analysis. See also Cloud computing Computer engineering Computer multitasking Computer science Computing Kernel List of operating systems Operating system Timeline of operating systems Virtual machine References External links SIGOPS Association ACM SIGOPS France Association for Computing Machinery Special Interest Groups International professional associations
47902355
https://en.wikipedia.org/wiki/Void%20Linux
Void Linux
Void Linux is an independent Linux distribution that uses the X Binary Package System (XBPS) package manager, which was designed and implemented from scratch, and the runit init system. Excluding binary kernel blobs, a base install is composed entirely of free software, but users can access an official non-free repository to install proprietary software. History Void Linux was created in 2008 by Juan Romero Pardines, a former developer of NetBSD, as a test-bed for the XBPS package manager. The ability to natively build packages from source using xbps-src is likely inspired by pkgsrc and other BSD ports collections. In May 2018, the project was moved to a new website and code repository by the core team after the project leader had not been heard from for several months. As of December 2021, Void is the fourth-highest-rated project on DistroWatch, with a score of 9.07 out of 10. Features Void differs from the majority of Linux distributions in that it uses runit as its init system instead of the more common systemd used by other distributions, including Arch Linux, CentOS, Debian, Fedora, Mageia and Ubuntu. It is also unique among distributions in that separate software repositories and installation media are available for both glibc and musl. Void was the first distribution to adopt LibreSSL as its default system cryptography library. In February 2021, the Void Linux team announced that it would switch back to OpenSSL on March 5, 2021. Among the reasons were the burden of patching software written primarily to work with OpenSSL, support for some optimizations, and earlier access to newer algorithms. Discussion of the switch to OpenSSL began in April 2020 in a GitHub issue in the void-packages repository, where most of the debate took place. Because of its rolling-release nature, a system running Void is kept up to date with binary updates that always carry the newest release. Source packages are maintained on GitHub and can be compiled using the xbps-src build system. The package build process is performed in a clean environment, not tied to the current system, and most packages can be cross-compiled for foreign architectures. As of April 2017, Void Linux supports Flatpak, which allows the installation of the latest packages from upstream repositories. Jesse Smith of DistroWatch noted fast boot times, which he credited to runit, but also noted that documentation and bug-testing were lacking. Editions Void Linux can be downloaded as a base image or as a flavor image. The base image contains little more than basic programs, from which users can configure an environment for themselves. The flavor image contains a pre-configured Xfce desktop environment. Cinnamon, Enlightenment, LXDE, LXQt, MATE, and GNOME used to be offered as pre-packaged live images, but are no longer offered "in order to decrease the overhead involved with testing." The live images contain an installer that offers an ncurses-based user interface. The default root shell is Dash. Forks Void Linux for PowerPC/Power ISA (unofficial) is a fork of Void Linux for PowerPC and Power ISA. It supports 32-bit and 64-bit devices, big-endian and little-endian operation, and musl and glibc. Void-ppc maintains its own build infrastructure and package repositories, and aims to build all of Void Linux's packages on all targets. It is a fork largely because of technical issues with Void Linux's build infrastructure. 
Derivatives Project Trident was the first Linux distribution based on Void Linux, but has since been discontinued. Notes References External links Linux distributions without systemd Rolling Release Linux distributions Linux distributions
11550543
https://en.wikipedia.org/wiki/SYS%20%28command%29
SYS (command)
In computing, sys is a command used in many operating system command-line shells and also in Microsoft BASIC. DOS, Windows, etc. SYS is an external command of Seattle Computer Products 86-DOS, Microsoft MS-DOS, IBM PC DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, PTS-DOS, Itautec/Scopus Tecnologia SISNE plus, and Microsoft Windows 9x operating systems. It is used to make an already formatted medium bootable. It will install a boot sector capable of booting the operating system into the first logical sector of the volume. Further, it will copy the principal DOS system files, that is, the DOS-BIOS (IO.SYS or IBMBIO.COM) and the DOS kernel (MSDOS.SYS or IBMDOS.COM) into the root directory of the target. Due to restrictions in the implementation of the boot loaders in the boot sector and DOS' IO system, these two files must reside in the first two directory entries and be stored at the beginning of the data area under MS-DOS and PC DOS. Depending on version, the whole files or only a varying number of sectors of the DOS-BIOS (down to only three sectors in modern issues of DOS) will have to be stored in one consecutive part. SYS will try to physically rearrange other files on the medium in order to make room for these files in their required locations. This is why SYS needs to bypass the filesystem driver in the running operating system. Other DOS derivatives such as DR-DOS do not have any such restrictions imposed by the design of the boot loaders, therefore under these systems, SYS will install a DR-DOS boot sector, which is capable of mounting the filesystem, and can then simply copy the two system files into the root directory of the target. SYS will also copy the command line shell (COMMAND.COM) into the root directory. The command can be applied to hard drives and floppy disks to repair or create a boot sector. Although an article on Microsoft's website says the SYS command was introduced in MS-DOS version 2.0, this is incorrect. SYS actually existed in 86-DOS 0.3 already. According to The MS-DOS Encyclopedia, the command was licensed to IBM as part of the first version of MS-DOS, and as such it was part of MS-DOS/PC DOS from the very beginning (IBM PC DOS 1.0 and MS-DOS 1.25). DR DOS 6.0 includes an implementation of the command. Syntax The command-syntax is: SYS [drive1:][path] drive2: Arguments: [drive1:][path] – The location of the system files drive2: – The drive to which the files will be copied Example sys a: Microsoft BASIC SYS is also a command in Microsoft BASIC used to execute a machine language program in memory. The command took the form SYS n where n is a memory location where the executable code starts. Home computer platforms typically publicised dozens of entry points to built-in routines (such as Commodore's KERNAL) that were used by programmers and users to access functionality not easily accessible through BASIC. See also List of DOS commands Disk formatting References Further reading External links MS-DOS and Windows command line sys command Open source SYS implementation that comes with MS-DOS v2.0 External DOS commands Microcomputer software Microsoft free software BASIC commands
6433017
https://en.wikipedia.org/wiki/NEWP
NEWP
New Executive Programming Language (NEWP) is a high-level programming language used on computers running the Unisys operating system Master Control Program (MCP). The language is used to write the operating system and other system utility software, though it can also be used to write user software. Several constructs separate it from the Extended ALGOL on which it is based. Language operators such as MEMORY, which allows direct memory access, may only be used by programs running as the MCP. NEWP replaced Burroughs Executive Systems Problem Oriented Language (ESPOL). Main constructs NEWP is a block-structured language very similar to Extended ALGOL. It includes several features borrowed from other programming languages which help in proper software engineering. These include modules (and later, super-modules) which group together functions and their data, with defined import and export interfaces. This allows for data encapsulation and module integrity. Since NEWP is designed for use as an operating system language, it permits the use of several unsafe constructs. Each block of code can have specific unsafe elements permitted. Unsafe elements are those only permitted within the operating system. These include access to the tag of each word, access to arbitrary memory elements, low-level machine interfaces, etc. If a program does not make use of any unsafe elements, it can be compiled and executed by anyone. If any unsafe elements are used, the compiler marks the code as non-executable. It can still be executed if blessed by a security administrator. Folklore NEWP is rumored to really stand for "Nearly Every Word Pascal" after a West coast engineering initiative to move Burroughs languages such as ALGOL over to a more Pascal-like syntax. Stories were also told that it stands for "No Executive Washroom Privileges," supposedly after its designers fell out of favor with management. Alternatively, NEWP was chosen as the name of the compiler/language at the spur of the moment by the designer, when pressed for a name under which the compiler code would be managed. It stood for "NEW Programming language", an essentially dull name, with the unhappy property that the "new" part of the name would quickly become incorrect. The original designer of the project was a Texan and soon started to describe the name as the answer to the question, "Is it done yet?". NEWP sounded like a West Texas version of "nope". Once the project was released, the name was "redefined" to stand for "No Executive Washroom Privileges" - a description of the type of person who would likely use the language. For a while a contest ran to come up with a better name for the compiler and language, but by that time the name NEWP had sunk its roots too deeply. See also Executive Systems Problem Oriented Language References ALGOL 60 dialect Systems programming languages Procedural programming languages
1286913
https://en.wikipedia.org/wiki/Reverse%20DNS%20lookup
Reverse DNS lookup
In computer networks, a reverse DNS lookup or reverse DNS resolution (rDNS) is the querying technique of the Domain Name System (DNS) to determine the domain name associated with an IP address – the reverse of the usual "forward" DNS lookup of an IP address from a domain name. The process of reverse-resolving an IP address uses PTR records. rDNS involves searching domain name registry and registrar tables. This may be used, for example, to try to identify the domain name of a spammer sending spam email, of a computer trying to break into a firewall, or of someone trying to hack the system. It may also be used to determine the name of the internet service provider assigned to a particular IP address. The reverse DNS database of the Internet is rooted in the .arpa top-level domain. Although the informational RFC 1912 (Section 2.1) recommends that "every Internet-reachable host should have a name" and that "for every IP address, there should be a matching PTR record," it is not an Internet Standard requirement, and not all IP addresses have a reverse entry. Historical usage The modern "reverse DNS lookup" should not be confused with the now-obsolete "inverse query" (IQUERY) mechanism specified in : The IQUERY message type was always "optional" and "never achieved widespread use"; it was "permanently retired" in 2002 with the adoption of . Implementation details IPv4 reverse resolution Reverse DNS lookups for IPv4 addresses use the special domain in-addr.arpa. In this domain, an IPv4 address is represented as a concatenated sequence of four decimal numbers, separated by dots, to which is appended the second-level domain suffix .in-addr.arpa. The four decimal numbers are obtained by splitting the 32-bit IPv4 address into four octets and converting each octet into a decimal number. These decimal numbers are then concatenated in the order: least significant octet first (leftmost), to most significant octet last (rightmost). Note that this is the reverse of the usual dotted-decimal convention for writing IPv4 addresses in textual form. For example, to do a reverse lookup of the IP address 8.8.4.4, the PTR record for the domain name 4.4.8.8.in-addr.arpa would be looked up, and found to point to google-public-dns-b.google.com. If the A record for google-public-dns-b.google.com in turn pointed back to 8.8.4.4, then it would be said to be forward-confirmed. Classless reverse DNS method Historically, Internet registries and Internet service providers allocated IP addresses in blocks of 256 (for Class C) or larger octet-based blocks for classes B and A. By definition, each block fell upon an octet boundary. The structure of the reverse DNS domain was based on this definition. However, with the introduction of Classless Inter-Domain Routing, IP addresses were allocated in much smaller blocks, and hence the original design of pointer records was impractical, since autonomy of administration of smaller blocks could not be granted. RFC 2317 devised a methodology to address this problem by using CNAME records. IPv6 reverse resolution Reverse DNS lookups for IPv6 addresses use the special domain ip6.arpa (previously ip6.int). An IPv6 address appears as a name in this domain as a sequence of nibbles in reverse order, represented as hexadecimal digits as subdomains. For example, the pointer domain name corresponding to the IPv6 address 2001:db8::567:89ab is b.a.9.8.7.6.5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 
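To make the name construction and forward confirmation described above concrete, the following Python sketch (standard library only; an illustrative example rather than part of any cited source) builds the reverse pointer name for an IPv4 or IPv6 address, performs a PTR lookup, and checks whether the result is forward-confirmed. The hostname actually returned for 8.8.4.4 depends on the PTR record its owner currently publishes and may differ from the example name above.

```python
# Illustrative sketch: reverse pointer names and a simple forward-confirmation
# check, using only the Python standard library.
import ipaddress
import socket

def reverse_pointer_name(address: str) -> str:
    """Return the in-addr.arpa / ip6.arpa name queried for a PTR lookup."""
    # For IPv4 the octets are reversed; for IPv6 all 32 nibbles are reversed,
    # exactly as described in the sections above.
    return ipaddress.ip_address(address).reverse_pointer

def ptr_lookup(address: str) -> str:
    """Resolve the PTR record for an address (raises socket.herror if absent)."""
    hostname, _aliases, _addresses = socket.gethostbyaddr(address)
    return hostname

def is_forward_confirmed(address: str) -> bool:
    """Check that the PTR hostname resolves back to the original address."""
    try:
        hostname = ptr_lookup(address)
        forward = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except (socket.herror, socket.gaierror):
        return False
    # Note: IPv6 addresses should be normalized before comparing strings.
    return address in forward

if __name__ == "__main__":
    print(reverse_pointer_name("8.8.4.4"))             # 4.4.8.8.in-addr.arpa
    print(reverse_pointer_name("2001:db8::567:89ab"))  # ends in ...d.0.1.0.0.2.ip6.arpa
    print(ptr_lookup("8.8.4.4"), is_forward_confirmed("8.8.4.4"))
```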
Multiple pointer records While most rDNS entries only have one PTR record, DNS does not restrict the number. Multiple PTR records are used, for example, when a web server supports many virtual hosts; that is, multiple hostnames resolve to a single address, and multiple hostnames will be returned for a PTR lookup of that shared address. However, DNS lookups typically occur over UDP, and since UDP has a limited message size, in extreme cases multiple PTR records could cause a DNS response to exceed those UDP limits. Records other than PTR records Record types other than PTR records may also appear in the reverse DNS tree. For example, encryption keys may be placed there for IPsec, SSH and IKE. DNS-Based Service Discovery uses specially named records in the reverse DNS tree to provide hints to clients about subnet-specific service discovery domains. Less standardized usages include comments placed in TXT records and LOC records to identify the geographical location of an IP address. Uses The most common uses of the reverse DNS include: The original use of the rDNS: network troubleshooting via tools such as traceroute, ping, and the "Received:" trace header field for SMTP e-mail, web sites tracking users (especially on Internet forums), etc. One e-mail anti-spam technique: checking the domain names in the rDNS to see if they are likely from dialup users or dynamically assigned addresses unlikely to be used by legitimate mail servers. Owners of such IP addresses typically assign them generic rDNS names such as "1-2-3-4-dynamic-ip.example.com." Some anti-spam filters assume that email that originates from such addresses is likely to be spam, and may refuse connection. A forward-confirmed reverse DNS (FCrDNS) verification can create a form of authentication showing a valid relationship between the owner of a domain name and the owner of the server that has been given an IP address. While not very thorough, this validation is strong enough to often be used for whitelisting purposes, since spammers and phishers usually cannot achieve forward validation when they use zombie computers to forge domain records. System logging or monitoring tools often receive entries with the relevant devices specified only by IP addresses. To provide more human-usable data, these programs often perform a reverse lookup before writing the log, thus writing a name rather than the IP address. References External links ICANN DNS Operations RFC 3596 DNS Extensions to Support IP Version 6 RDNS policies: AOL, Comcast, Craigslist, Misk.com Information retrieval systems Domain Name System
30517876
https://en.wikipedia.org/wiki/Rightware
Rightware
Rightware is a Finnish software development company that provides user interface software for the automotive and other embedded industries. Headquartered in Helsinki, Finland, Rightware has offices in Silicon Valley, Detroit, Tokyo, Seoul, Shanghai, London and Munich. In 2015, Deloitte included Rightware in its "Fast 50" list of the 50 fastest-growing tech companies. History Rightware was founded in December 2009 after it received a €3 million investment to acquire the mobile business unit from Futuremark. In January 2011, Rightware officially released its UI design software, Kanzi UI Solution, and demonstrated a stereoscopic 3D home screen for Android made with Kanzi at Mobile World Congress 2011. In 2013 it was announced that Audi was using Rightware's Kanzi software to develop digital user interfaces for Audi cars. The first commercial vehicle (Audi A3) with Kanzi technology was shipped in June 2012. At CES 2014, Audi revealed its fully digital Virtual Cockpit instrument cluster, created using Rightware Kanzi. In May 2013, Rightware raised another $5.2 million in venture capital funding. On May 7, 2015, Rightware divested its graphics benchmarking business to Basemark, a newly formed Helsinki-based company, in a management buyout deal. In September 2015, Rightware announced that it was collaborating with Nissan. In January 2016, Rightware announced the founding members of its Kanzi Partner Program. In February 2016, Rightware stated that more than 15 car manufacturers were using its Kanzi software for creating user interfaces. In August 2018, Ville Ilves was appointed as the CEO of the company. Previous CEO Jonas Geust transitioned to the role of board member. Products Kanzi Kanzi is a user interface creation software suite consisting of two parts: Kanzi Studio and Kanzi Engine. Kanzi Studio: a PC-based real-time WYSIWYG editor for designers and embedded engineers to create and customize user interfaces that run on embedded hardware. Kanzi Engine: a run-time system that enables UI designs to be executed on any device supporting OpenGL ES. Kanzi Lite: a reduced version of the Kanzi run-time that is able to run and render content on low-power systems without a discrete GPU or OpenGL ES capabilities. Kanzi Performance Analyzer Kanzi Performance Analyzer is a performance measurement toolkit for automotive hardware and software platforms. By running a series of tests based on automotive Human-Machine Interface (HMI) use cases, the toolkit provides the user with reports and insight into the performance of available automotive platforms, helping them choose the right hardware and software platform. References Companies based in Helsinki Finnish companies established in 2009 Software companies of Finland Software companies established in 2009