34120930
https://en.wikipedia.org/wiki/Nagpur%20Metro
Nagpur Metro
Nagpur Metro is a rapid transit system for the city of Nagpur, Maharashtra, India. It is estimated to cost . In February 2014, the Government of Maharashtra approved the Metro project, while the Government of India's Ministry of Urban Development had given "in principle" approval for the project. On 20 August 2014, the Union Cabinet approved the development of the project, and Prime Minister Narendra Modi laid the foundation stone on 21 August during his visit to the city. Construction began on 31 May 2015, with trial runs beginning on 30 September 2017. Prime Minister Narendra Modi inaugurated operations on the Nagpur Metro on 8 March 2019 via video conferencing, along with the Chief Minister of Maharashtra, Devendra Fadnavis, and Union Cabinet Minister Nitin Gadkari. Nagpur Metro became the 13th metro system in India to be operational, and has been touted as the greenest metro rail in India. On 21 August 2021, Maharashtra Chief Minister Uddhav Thackeray inaugurated a 1.6 km expansion between Sitabuldi and Kasturchand Park, featuring the Zero Mile Freedom Park station, which has a 60,000 sq ft park and a 20-storey building designed by a French architect.

History
The project was conceived in 2012, when cities with a population of more than 20 lakh were made eligible for a metro rail system. This decision made 12 cities in India, including Nagpur, eligible for metro rail. Kamal Nath, the then Union minister for urban development, proposed a metro rail in Nagpur and called for a detailed project report from the state government. On 22 February 2012, the Nagpur Improvement Trust (NIT) signed an agreement with the Delhi Metro Rail Corporation (DMRC) to prepare the Detailed Project Report (DPR) for the metro rail in Nagpur. The Central Ministry had allocated to the State Government for carrying out the DPR work.

Special Purpose Vehicle
To execute the project, the Cabinet approved the setting up of Nagpur Metro Rail Corporation Limited (NMRCL), now known as Maharashtra Metro Rail Corporation Limited (MAHA-METRO). Maha Metro is a Special Purpose Vehicle (SPV) created for the smooth implementation and operation of the Nagpur Metro Rail Project and is a joint venture of the Government of India and the Government of Maharashtra, with 50:50 equity. Maha Metro is solely responsible for the successful and timely completion of the project and its subsequent operations.

Phase 1 Detailed Project Report
DMRC submitted the report to the nodal agency, the Nagpur Improvement Trust, on 12 February 2013. The total cost of the project was estimated at around ₹9,000 crore. The project consisted of two routes: Automotive Square, Kamptee, to the MIHAN Metro depot, and Prajapati Nagar, East Wardhaman Nagar, to Lokmanya Nagar, Hingna. There would be 17 stations on Route 1, with a terminal-cum-depot at MIHAN, and 19 stations on Route 2, with a terminal-cum-depot at Lokmanya Nagar. A major interchange station was proposed at Munje Square, where the two routes would meet and commuters would be able to change between them. It was expected that 12.21 percent of Nagpur's estimated 2021 population of 29 lakh, i.e. around 3,63,000 commuters, would use the Metro by 2021. To provide first- and last-mile connectivity, feeder services such as shuttle buses, battery-operated vehicles, pedestrian facilities and bicycle-sharing schemes were included in the project. There would be 19 feeder routes covering all stations and a total distance of around 160 km.
Feeder services would enhance the accessibility of the Metro for all classes of commuters, to and from homes and offices.

Metro Rail Expenditure
The total expenditure of the project is estimated at ₹8,680 crore, with the Central Government and State Government each contributing a 20% share in the form of equity and subordinate debt. The Nagpur Municipal Corporation and Nagpur Improvement Trust each contribute a 5% share of the expenditure, while the remaining 50% is financed by loans. KfW, a German government-owned development bank, has approved a loan of ₹3,700 crore to NMRCL as part of the funds required for the project. An additional ₹444 crore loan will also be provided, on concessional terms from KfW, to fund the proposed feeder services and solar energy installations of the Nagpur Metro Rail. The remaining requirement of about €130 million for the project has been funded by AFD France. This 20-year credit is used to fund signalling, telecommunications, automatic fare collection systems, lifts and escalators.

Proposed Alignment by DMRC
In early 2012, the Nagpur Improvement Trust (NIT) requested DMRC to provide consultancy services for the preparation of a Detailed Project Report for a metro rail system in Nagpur, Maharashtra, initially for 30 km, which was revised to 42 km in July 2012. Thereafter, DMRC conducted traffic surveys, topographical surveys, geotechnical investigations and an environmental impact assessment survey. The study area consisted of the Nagpur Municipal Corporation area and totalled approximately 217 km². Based on the surveys conducted by DMRC, the metro alignments were finalised after repeated inspection of the road network, intersections, passenger traffic flow, traffic congestion and connectivity to important land uses.

Realignment of route
On 3 August 2013, a meeting was held at Nagpur by officials of the Ministry of Urban Development to discuss the DPR of the Nagpur Metro. In that meeting, the Joint Secretary of the MoUD stated that the Financial Internal Rate of Return (FIRR) of the project should be at least 8%, since the Ministry had already issued an advisory that the FIRR of a metro project should not be below 8%. On 1 October 2013, a presentation on the DPR was made by NIT to the Chief Minister of Maharashtra. The Chief Minister was of the opinion that the underground alignment in MIHAN should be avoided and that the maintenance depot should be constructed on land belonging to the state government. Subsequently, on 21 October 2013, a joint inspection of the North–South corridor was carried out by MADC, NIT and DMRC. The originally proposed alignment of Corridor I passed through Khamla Road and the airport area after Sahakar Nagar, and ended at MIHAN. The alignment up to the old airport station was elevated, then ran underground for a length of 3.30 km with one underground station, named New Airport Station, and was elevated again in the MIHAN area. Since the cost of an underground section is much higher than that of an elevated or at-grade section, an alternative alignment was suggested to reduce cost, enhance PHPDT and increase the FIRR so that the project would become financially and economically viable. The new alignment suggested during the above inspection was to pass through a 24-meter-wide road adjacent to London Street after Sahakar Nagar Junction, and was proposed to be taken east along the 24-meter-wide road and London Street up to Wardha Road.
From the intersection at Wardha Road, the elevated alignment was proposed to run on the central divider of Wardha Road. After crossing the existing intersection of Wardha Road and Airport Road, the alignment was to be shifted to the MIHAN area. The alignment in this portion was proposed to be at grade, running parallel to Wardha Road up to the ROB and abutting the railway line thereafter up to the proposed car depot. However, while working on this modified alignment, it was noticed that a very large number of properties fell along the alignment, owing to the sharp curves at the junction of Sahakar Nagar and the 24-meter-wide road and at the junction of the 24-meter-wide road and Wardha Road. Since acquiring these properties would be very difficult and might delay the whole project, it was decided to keep the alignment on Wardha Road without going onto Khamla Road. This changed the alignment, and it was decided that the North–South Corridor would pass along Wardha Road after Congress Nagar metro station. After crossing the existing intersection of Wardha Road and Airport Road, the alignment would be shifted to the MIHAN area; the alignment in this portion would be at grade, running parallel to Wardha Road up to the ROB and parallel to the railway line thereafter up to the proposed car depot. A 14-meter-wide stretch of land between the railway boundary line and the road near the proposed container depot of Container Corporation of India Ltd. would be affected, as the proposed alignment passes through this stretch. MADC land of about 73 ha was available on the west side of the railway line, south of the existing flyover near Khapri station; it was about 80 m wide on average and about 1,800 m long, and would be utilised for the car depot. Similarly, the depot of the East–West Corridor was shifted to SRPF land near the proposed Lokmanya Nagar metro station. This led to the deletion of a few earlier proposed stations on the North–South Corridor and the addition of new ones.

Conflict with MSRDC
As per the plan for the east–west corridor of the Nagpur Metro, the route from the east side of Nagpur railway station passes along the same road where phase II of the Ram Jhula was under construction. This led to a conflict between MSRDC and NMRCL regarding the positioning of pillars for their respective projects. Even after several meetings, no solution emerged. The matter was discussed in the Nagpur bench of the High Court during the hearing of a PIL filed by the Nagpur Chamber of Commerce Ltd (NCCL) for early completion of the Ram Jhula, the six-lane cable-stayed railway over-bridge near Santra Market that had been pending for nine years. The High Court directed both parties to submit their plans to the superintendent engineer of the Public Works Department (PWD), based in Mumbai, and urged the Chief Minister to resolve the deadlock over the Ram Jhula. In response, Devendra Fadnavis, Chief Minister of Maharashtra, assured an early resolution of the ongoing conflict between NMRCL and MSRDC. After several meetings between officials of NMRCL and MSRDC, which the Chief Minister facilitated, both parties found a solution to the conflict. Under the new plan, NMRCL decided jointly with the Maharashtra State Road Development Corporation (MSRDC) to lay the metro rail track between two spans of the Ram Jhula.
This alignment, although technically challenging, renders demolition of part of the Indira Gandhi Government Medical College and Hospital (IGGMCH) unnecessary, and does not disturb the Poddareshwar Ram Temple either.

Phase 2 Detailed Project Report
In early 2018, Maharashtra Metro Rail Corporation started planning phase 2 of the Nagpur Metro by appointing RITES to prepare a detailed project report for future routes. The DPR was to be submitted by April 2018 but was delayed. In the DPR, RITES envisioned five routes for the metro rail. Two routes extend the north–south corridor: from Automotive Square to Kanhan River (13 km, 12 stations) and from MIHAN to MIDC ESR (18.50 km, 10 stations). Two routes extend the east–west corridor: from Lokmanya Nagar to Hingna (6.70 km, 7 stations) and from Prajapati Nagar to Transport Nagar (5.60 km, 3 stations). A new route is also proposed from Wasudeo Nagar to Dattawadi (4.50 km, 3 stations). The DPR was submitted by Maharashtra Metro Rail Corporation to the Urban Development Department (UDD) of the Government of Maharashtra in July 2018, and phase 2 was approved by the state government in January 2019.

Network

Present network
Nagpur Metro has two lines:

Line 1: Orange Line (North–South Corridor) (rail length: 22.293 km; stations: 20)
This corridor originates from Automotive Square on Kamptee Road, moves along Kamptee Road to the intersection of Amravati Road and Wardha Road, then, after crossing the flyover, moves towards Munje Square, then towards Dhantoli and along the nala towards Empire/Dr Munje Marg, leads towards Congress Nagar T-Point, then onto Rahate Colony Road and then Wardha Road, leads towards NEERI, then moves along Wardha Road and to the west of the railway track in the MIHAN area, passing through the 14 m wide stretch of land between the railway boundary line and the road near the proposed container depot. The entire length (22.293 km) of this corridor is proposed as elevated, except for 4.6 km at grade after Airport station and in the MIHAN area near Khapri railway station. There are 20 stations on this corridor, of which 15 are elevated and 5 are at grade. Sitabuldi station is an interchange station. The average inter-station distance is approximately 1.20 km, varying from 0.54 km to 2.4 km depending upon site, operational and traffic requirements. This line has operated partially between Sitabuldi and Khapri since 8 March 2019.

Line 2: Aqua Line (East–West Corridor) (rail length: 19.407 km; stations: 20)
This corridor originates from Prajapati Nagar and runs westwards through Vaishnodevi Square, Ambedkar Square, Telephone Exchange, Chittar Oli Square, Agarsen Square, Doser Vaisya Square, Nagpur Railway Station, Sitabuldi, Jhansi Rani Square, Institute of Engineers, Shankar Nagar Square, Lad Square, Dharmpeth College, Subhash Nagar, Rachna (Ring Road Junction), Vasudev Nagar and Bansi Nagar to Lokmanya Nagar. The entire corridor is elevated, with a total length of 19.407 kilometres. There are 20 stations on this corridor; all are elevated, and Sitabuldi station is an interchange station. The average inter-station distance is approximately 1.00 km, varying from 0.65 km to 1.29 km depending upon site, operational and traffic requirements.
In its Detailed Project Report (DPR) submitted to the Nagpur Improvement Trust, DMRC suggested starting construction work on both routes simultaneously, contradicting the prior suggestion of phase-wise development. This line has operated partially between Sitabuldi and Lokmanya Nagar since 28 January 2020.

Status updates
November 2013: Final version of the Detailed Project Report submitted.
August 2014: Union Cabinet approved the project.

Orange Line: North–South Corridor
November 2015: Work started on construction of the depot.
January 2016: Work started on Wardha Road.
August 2016: Work started on Ajni Road (near Ajni railway station).
September 2016: Work started at the back side of Nagpur railway station.
October 2016: Work started near Ambazari Lake.
October 2016: Work started on the Sitabuldi interchange station (Munje Square).
December 2016: Work started on construction of the depot at MIHAN on the N–S Corridor.
January 2017: Work started on Zero Mile station.
August 2017: Trial run extended up to Airport station.
September 2017: First trial run conducted on a 5.6 km section between the MIHAN area and Khapri station.
April 2018: Second and final CMRS inspection for clearance.
December 2018: Work progressing in all sections, but the Metro may miss the March 2019 deadline.
January 2019: The first metro train from China reached the metro depot in Nagpur.
February 2019: Trial runs begin.
February 2019: The Prime Minister is expected to inaugurate the Phase 1 section from Sitabuldi to Khapri in the first week of March 2019.
3 March 2019: CMRS inspection carried out.
5 March 2019: The Prime Minister to inaugurate the Sitabuldi to Khapri section on 8 March 2019.
8 March 2019: Prime Minister Narendra Modi inaugurated the metro between Sitabuldi and Khapri via video conferencing.
August 2021: Maharashtra Chief Minister Uddhav Thackeray inaugurated a 1.6 km expansion between Sitabuldi and Kasturchand Park.

Aqua Line: East–West Corridor
December 2016: Work started on the East–West Corridor.
August 2019: A trial run from Subhash Nagar to Sitabuldi was held on 15 August 2019; CMRS inspection to be held on 30 August 2019, with services on the Aqua Line to start in September 2019.
September 2019: Inauguration of the Aqua Line by Prime Minister Narendra Modi was postponed due to heavy rains; the inauguration was to take place after the 2019 Maharashtra Legislative Assembly election in October 2019.
28 January 2020: CM Uddhav Thackeray and a few Union Ministers inaugurated commercial services on the metro between Sitabuldi and Lokmanya Nagar via video conferencing.

Awards
On 27 November 2015, Nagpur Metro Rail won second prize for Best Exhibitor at the 8th Urban Mobility India Conference and Expo 2015. Maha Metro received first prize for the best urban mobility project at the UMI International Conference and Expo held at Hyderabad from 4 to 6 November 2017.

Network Map

See also
Urban rail transit in India
Nagpur broad-gauge Metro
Greater Nashik Metro
Mumbai Metro
Pune Metro
Thane Metro

References

External links

2019 establishments in Maharashtra
383758
https://en.wikipedia.org/wiki/Mike%20Honda
Mike Honda
Michael Makoto Honda (born June 27, 1941) is an American politician and former educator. A member of the Democratic Party, he served in Congress from 2001 to 2017. Initially involved in education in California, he first became active in politics in 1971, when then-San Jose mayor Norman Mineta appointed Honda to the city's Planning Commission. Mineta later served in the Clinton and Bush cabinets. After holding other positions, Honda was elected to the Santa Clara County Board of Supervisors in 1990, and to the California State Assembly in 1996, where he served until 2001. In November 2003, Democratic National Committee chair Terry McAuliffe appointed Honda as deputy chair of the DNC. In February 2005, Honda was elected a vice chair of the Democratic National Committee under the chairmanship of Howard Dean. In 2009, Honda was reelected for a second term as DNC vice chair, under the chairmanship of former Virginia Governor Tim Kaine; he served in this role until 2013. Honda became the subject of an ethics investigation by the United States House Committee on Ethics in 2015 for the alleged use of taxpayer resources to bolster his 2014 re-election campaign. He lost the 2016 election for California's 17th congressional district to Ro Khanna.

Early life and teaching
A third-generation Japanese American ("sansei"), Makoto Honda was born in 1941 in Walnut Grove, California, the son of Fusako (Tanouye) and Giichi Honda. His father, Giichi (nicknamed "Byron"), was one of 6000 Military Intelligence Service (MIS) agents, although the family was subjected to internment. His grandparents were from Kumamoto prefecture and immigrated to the United States in the early 1900s, and both of his parents were born in California. When he was one year old, he and his family were sent to Camp Amache, a Japanese American internment camp in southeastern Colorado. In 1953 his family returned to California, where they became strawberry sharecroppers in Blossom Valley in San Jose. Honda started at Andrew P. Hill High School, then transferred to, and graduated from, San José High Academy. He entered San José State University, but interrupted his studies from 1965 to 1967 to serve in the United States Peace Corps in El Salvador, where he learned to speak Spanish. He returned to San Jose State, where in 1968 he received a bachelor's degree in Biological Sciences and Spanish. He earned a master's degree in Education from San Jose State in 1974. In his 30-year career as an educator, Honda was a science teacher, a principal at two public schools, a school board member, and he conducted educational research at Stanford University.

Political career

Teaching and state positions
In 1971, San Jose Mayor Norman Mineta appointed Honda to the city's Planning Commission. In 1981, Honda was elected to the San Jose Unified School Board. He was elected to the Santa Clara County Board of Supervisors in 1990, and to the California State Assembly in 1996, where he served until 2001.

DNC and national positions
In the 2000 United States House of Representatives elections, Honda won the Democratic nomination for the 15th District, which had once been represented by Norm Mineta. Honda won the general election by a 12-point margin. In November 2003, Chairman of the Democratic National Committee Terry McAuliffe appointed Honda as Deputy Chair of the DNC. In February 2005, Honda was elected a vice chair of the Democratic National Committee under the chairmanship of Howard Dean.
In 2009, Honda was reelected for a second term as DNC vice chair, under the chairmanship of former Virginia Governor Tim Kaine; he served in this role until 2013. He was re-elected to the House in 2002, 2004, 2006, 2008, 2010, and 2012. Due to redistricting after the 2010 US Census, Honda began representing California's 17th congressional district at the beginning of the 113th Congress on January 3, 2013. The district incorporates Silicon Valley and is the only Asian American-majority district in the continental United States. The district encompasses all or part of the cities of Cupertino, Fremont, Milpitas, Newark, Santa Clara, San Jose, and Sunnyvale. He won again in 2014. He lost the 2016 election for California's 17th congressional district to Ro Khanna.

Early committees and caucuses
From 2001 to 2007 Honda served on the United States House Committee on Science, Space and Technology and was the ranking member of its Energy Subcommittee from 2005 to 2007. He also served on the United States House Committee on Transportation and Infrastructure from 2001 to 2007. In 2007, Speaker of the House Nancy Pelosi appointed Honda to the United States House Committee on Appropriations. From 2011 to 2013, he was ranking member of the Legislative Branch Appropriations Subcommittee. From 2001 to 2003, and again from 2011 to 2013, Honda was also appointed to serve on the House Budget Committee. Honda was a member of the following (and other) committees, commissions, and caucuses:
United States House Committee on Appropriations
United States House Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies (Ranking Member)
United States House Appropriations Subcommittee on Labor, Health and Human Services, Education, and Related Agencies
Congressional Asian Pacific American Caucus (Chair Emeritus from 2004-2010)
Congressional Anti-Bullying Caucus (Founder and Chair)
LGBT Equality Caucus (Vice Chair and Founding Member)
Congressional Hepatitis Caucus (Co-Chair)
Congressional Progressive Caucus (Vice Chair for New Members)
Democratic Caucus New Media Working Group (Co-Chair)
Congressional Ethiopia Caucus (Founder and Chair)
Congressional-Executive Commission on China (Appointed Commissioner since 2005)

Major appropriations
As of August 2015, Honda had secured over $1.3 billion in appropriations since 2001. Some of Honda's most notable appropriations were for the extension of the BART system into Silicon Valley. During his five years on the House Transportation Committee, he secured $11 million in direct earmarks attached to a number of bills. Also during his time on that committee, he facilitated the BART project's qualification for the New Starts Program, which authorized another $900 million in funds, the first $400 million of which Honda managed to appropriate over the three-year period of FY2012-14. During the 2014 midterm election cycle, Honda's opponent Ro Khanna alleged that Honda had secured only $2 million for the project. In response, a number of local officials, including a Congresswoman, a State Senator, a former US Secretary of Transportation, and former and current Valley Transportation Authority chairs, sent the Khanna campaign an open letter refuting its claims and requesting that it correct its campaign language. The $11 million in direct appropriations was part of: H.R. 2673 (FY2004 Consolidated Appropriations Act, became Public Law 108-199) H.R. 4818 (FY 2005 Consolidated Appropriations Act, became Public Law 108-447) H.R.
3058 (the Transportation, Treasury, Housing and Urban Development, the Judiciary, and Independent Agencies Appropriations Act for 2006, became Public Law 109-115) The $400 million from the New Starts Program was allocated as follows: $100M for FY 2012 in H.R. 2112, The Consolidated and Further Continuing Appropriations Act, 2012, which became Public Law 112-55 $150M for FY 2013 in H.R. 933, the FY 2013 Consolidated and Further Continuing Appropriations Act, which became Public Law 113-6 $150M for FY 2014 in H.R. 3547, the Consolidated Appropriations Act, 2014, which became Public Law 113–76. Legislation Raising the minimum wage Honda has been a long-time supporter of organized labor, and has supported numerous bills for creating a living wage. In 2013 and 2014, he cosponsored the Original Living American Wage Act (H.R. 229), the WAGES Act (H.R. 650), and the Fair Minimum Wage Act (H.R. 1010), which would raise the federal minimum wage. Honda was also a supporter of the San Jose's successful ballot initiative that raised the city's minimum wage to $10 per hour. Defense As former Chairman of the Afghanistan Taskforce for the Congressional Progressive Caucus, and former Co-Chair of the CPC's Peace and Security Taskforce, Congressman Honda has consistently critiqued the war strategy through a series of Congressional briefings, legislation, published opinion pieces ("Alternative Strategies to Obama's Afghan Agenda", "A Different Kind of Surge"), and Congressional letters to the Administration. Honda advocated an orderly withdrawal of U.S. military forces from Afghanistan and a significant realignment of U.S. aid to focus on strengthening government institutions, capacity building, economic development, and humanitarian assistance in Afghanistan. Honda criticized the Obama administration for failing to seek Congressional approval for U.S. military operations in Libya. He is critical of the wide-scale use of drones and is a cosponsor of the Targeted Lethal Force Transparency Act (H.R. 4372) requiring an annual report on the number of civilians and combatants killed and injured in drone strikes. Education In 2008, Honda worked with then-Senator Barack Obama to introduce the Enhancing STEM Education Act. Honda introduced the House version, H.R. 6104, and Obama introduced the Senate version, S.3047, on the same day. The bills sought to enhance the coordination among state and federal governments to improve STEM (science, technology, engineering, and mathematics) education by creating a committee on STEM education at the Office of Science and Technology Policy (OSTP) and an Office of STEM at the Department of Education, instituting a voluntary State Consortium on STEM education, and creating a National STEM Education Research Repository. Portions of this bill (notably, creating a committee on STEM education at OSTP), as well as Honda's INVENT Act (which would develop curriculum tools for use in teaching innovation and fostering inventiveness at the K-16 level), were eventually included in the America COMPETES Act reauthorization, which President Obama signed into law on January 4, 2011. Honda led the Congressional authorization for The Equity and Excellence Commission, a commission that began in 2011 and reported its findings to the Secretary of Education in late 2012. The Commission is a federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA); 5 U.S.C., App.2. 
The commission had 27 members from a range of backgrounds, including education, law, tax, government, business, and civil rights. The committee met 17 times in Washington, DC, and across the country. In November 2012, the commission presented its findings in a report titled "For Each and Every Child: A Strategy for Education Equity and Excellence." The findings focused on five recommendations: (1) restructuring the financing of schools, focusing on equitable resources; (2) supporting quality teachers and school leaders; (3) supporting early childhood education; (4) promoting increased parental engagement; and (5) addressing changes in accountability and governance in the education system. Opposed by special interests, including the teachers' unions, the commission's recommendations went largely ignored.

Environment
Honda secured millions of dollars in federal funding for the cleanup and demolition of the former Almaden Air Force Station atop Mt. Umunhum. Contaminated with standard-use hazardous materials during its military use (lead paint, asbestos, etc.), the site was remediated, demolished, and is slated to open for public access in spring 2017. Honda has also advocated for programs such as the Land and Water Conservation Fund. In 2014, Honda introduced the Climate Change Education Act (H.R. 4461), legislation that aims to improve public understanding of the impact of greenhouse gases on the environment and the steps that individuals and communities can take to combat the global warming crisis.

Faith and religion
In 2014, Honda introduced the Freedom of Faith Act (H.R. 4460). Honda has been a defender of the civil rights of American Muslims. Soon after the September 11 attacks in 2001, Honda spoke at a convention of the American Muslim Alliance (AMA) in October 2001. He told those in attendance not to change their identity or name. "My last name is Honda. You cannot be more Japanese than that." The congressman remembered what he and especially his parents had to go through when Pearl Harbor was attacked. "We were taken in a vehicle with windows covered, we had no idea where we were being taken." In the Quran oath controversy of the 110th United States Congress, Rep. Virgil Goode (R-VA) issued a letter to his constituents stating his view that the decision of Representative-elect Keith Ellison (D-MN) to use the Quran in his swearing-in ceremony was a threat to "the values and beliefs traditional to the United States of America... I fear that in the next century we will have many more Muslims in the United States if we do not adopt the strict immigration policies". In response, Honda penned a reply to Goode expressing his surprise and offense at that letter and declaring "No person should be labeled as un-American based on his or her religion, and it is outrageous to cast aspersions on Representative-elect Ellison purely because of his religious background."

Government reform
In 2007, Honda voted for the Honest Leadership and Open Government Act of 2007, which was the legislative response to the Jack Abramoff scandal and introduced comprehensive new transparency requirements for lobbyists and for Members of Congress. In 2012, he cosponsored H.R. 1148, the Stop Trading on Congressional Knowledge (STOCK) Act, which criminalized insider trading by Members of Congress and required numerous disclosures. He voted for H.Res. 895, which created the first-ever independent ethics office, the Office of Congressional Ethics.
Health care
Honda has advocated for the expansion of health coverage for all through the Affordable Care Act (ACA) and is a strong proponent of the public option. As the Chair of the Congressional Asian Pacific American Caucus, Honda was successful in ensuring that the ACA addressed racial and ethnic health disparities, including improvements in data collection and measures to increase the number of health care providers from different backgrounds. As the Chairman and Chair Emeritus of the Congressional Asian Pacific American Caucus, he sponsored and supported the Health Care Equality and Accountability Act, which would have expanded access to care for individuals with limited English proficiency, increased health workforce diversity, and encouraged further studies on minority health issues. As a member of the TriCaucus with the Congressional Black Caucus and Congressional Hispanic Caucus, Honda has introduced legislation focused on health disparities in conjunction with an annual health disparities summit. Honda has led efforts to address tuberculosis by seeking changes to the Centers for Disease Control and Prevention's (CDC) formula for direct funding for tuberculosis treatment and education to include highly impacted counties. He was successful in getting report language in the 2009 Omnibus Appropriations Bill to have the CDC review its funding distribution policies. Honda has been a leader in Congress on viral hepatitis and founded and co-chairs the Congressional Viral Hepatitis Caucus. He is a cosponsor of the Viral Hepatitis Testing Act (H.R. 3723), which would authorize new prevention and testing programs for hepatitis B and hepatitis C, and implement screening of veterans for hepatitis C. He also cosponsored the Viral Hepatitis Testing Act (H.R. 3381) in the 112th Congress, the Viral Hepatitis and Liver Cancer Control and Prevention Act (H.R. 3974) in the 111th Congress, and the National Hepatitis B Act (H.R. 3944) in the 110th Congress. Honda has supported mobile health technology innovation and introduced the Health Care Innovation and Marketplace Technologies Act of 2013 (H.R. 2363). The bill would establish an Office of Wireless Health at the FDA, award grants for the development of effective products, processes, or structures that enhance the use of health information technology, particularly by patients, and provide medical professionals tax incentives to implement qualified health information technology in their practices. Honda has been an advocate for women's health, including supporting provisions in the Patient Protection and Affordable Care Act such as the elimination of gender-based discrimination in insurance prices, recognizing that being a woman is not a preexisting condition that should force women to pay higher premiums. Honda opposed the Stupak–Pitts Amendment to the Affordable Care Act (ACA), which would have prohibited the use of federal funds "to pay for any abortion or to cover any part of the costs of any health plan that includes coverage of abortion" except in cases of rape, incest or danger to the life of the mother. The amendment was dropped by co-author Rep. Bart Stupak in exchange for an executive order promised by President Obama which would address the concerns of the Stupak–Pitts amendment supporters. Honda has supported Medicare and Medicaid programs throughout his career, fighting for the health rights of seniors and low-income families.
He introduced the People's Budget, the Congressional Progressive Caucus 2012 budget alternative, which would keep Medicare and Medicaid solvent while closing the national debt within 10 years. Honda supports the permanent repeal of the Sustainable Growth Rate (SGR) and cosponsored the SGR Repeal and Medicare Provider Payment Modernization Act of 2014 (H.R. 4015) which would have repealed the SGR and improved the physician payment system to reward value over volume. Human rights On the issue of comfort women, in 2007 Honda proposed US H.Res. 121, which stated that Japan should formally acknowledge, apologize, and accept historical responsibility in a clear and unequivocal manner, refute any claims that the issue of comfort women never occurred, and educate current and future generations "about this horrible crime while following the recommendations of the international community with respect to the 'comfort women'." Honda stated, "the purpose of this resolution is not to bash or humiliate Japan." On July 30, 2007, the House of Representatives passed Honda's resolution after 30 minutes of debate, in which no opposition was voiced. Honda was quoted on the floor as saying, "We must teach future generations that we cannot allow this to continue to happen. I have always believed that reconciliation is the first step in the healing process." Honda later secured report language in the Fiscal Year 2014 Consolidated Appropriation Act (submitted July 2013) urging the Secretary of State to encourage the Government of Japan to address issues raised in H.Res.121. President Obama signed the spending bill into law on January 17, 2014. Honda works on the elimination of human trafficking. He cosponsored the Fraudulent Overseas Recruitment and Trafficking Elimination Act of 2013 (H.R. 3344). The bill addresses predatory recruiters who use international labor recruitment as a human trafficking medium. On January 23, 2014, Honda hosted a training at the San Jose International Airport for airport and airline employees on how to detect signs of human trafficking. Honda is a cosponsor of the Border Security, Economic Opportunity, and Immigration Modernization Act (H.R. 15), which entails comprehensive immigration reform to increase high skill visas, reunite families, and provide a pathway to citizenship for those living in the shadows. LGBT issues Honda has been recognized as a long-time supporter of equality for lesbian, gay, bisexual and transgender people, with a 100% scorecard rating from the Human Rights Campaign since 2001. HRC endorsed Honda for his 2014 reelection. In the 1990s, he supported same-sex partner benefits as a Santa Clara County Supervisor. In 2008, he was a co-founder of the Congressional Equality Caucus, when there were only two openly gay congresspersons. He opposed the use of taxpayer funds to protect the Defense of Marriage Act in the United States Supreme Court. In 2013, Honda worked with Mayor of Campbell Evan Low to raise awareness for the ban against blood donations from gay and bisexual men. In 2015, Honda revealed in a speech at the event Courageous Conversation, a one-day symposium that addresses how administrators can work to make their schools safer for their students, that his granddaughter Malisa is transgender. "As both an individual, and as an educator, I have experienced and witnessed bullying in its many forms. And as the proud jichan, or grandpa, of a transgender grandchild, I hope that my granddaughter can feel safe going to school without fear of being bullied. 
I refuse to be a bystander while millions of people are dealing with the effects of bullying on a daily basis." Manufacturing In 2013, Honda introduced the Market Based Manufacturing Incentives Act (H.R. 615), one of the main bills in the Democratic Party's Make it in America Agenda, which would create a commission of private-sector experts to designate market-changing technologies. These technologies would be eligible for a consumer tax credit as long as they are made in the United States. Honda introduced the Scaling Up Manufacturing Act (H.R. 616). The bill would provide companies a 25% tax credit on the costs associated with building their first manufacturing facility in the United States. Honda was a vocal supporter of the National Network for Manufacturing Innovation proposed by President Obama to help revitalize American manufacturing. He is a cosponsor of the bipartisan Revitalizing American Manufacturing Innovation Act (H.R. 2996) and has urged President Obama to locate a manufacturing hub in Silicon Valley to focus on the domestic development of the next generation of semiconductor manufacturing tools. Honda used his position as a member of the House Appropriations Committee and the Commerce, Justice, Science Subcommittee to prioritize funding for the National Institute of Standards and Technology's Hollings Manufacturing Extension Partnership (MEP) program which works with small and medium-sized manufacturers to help them create and retain jobs, increase profits, and save time and money. Science and technology As the Representative for the heart of Silicon Valley, Honda has been intimately involved in technology and nanotechnology policy for many years. He has supported the principle of network neutrality, and is a cosponsor of the Open Internet Preservation Act (H.R. 3982). Honda was critical of the National Security Agency's surveillance of electronic communications as a violation of privacy. He is an original cosponsor of the Uniting and Strengthening America by Fulfilling Rights and Ending Eavesdropping, Dragnet Collection, and Online Monitoring Act (USA FREEDOM ACT - H.R 3361) which seeks to rein in the dragnet collection of data by the NSA, increase transparency of the Foreign Intelligence Surveillance Court, provide businesses the ability to release information regarding FISA requests, and create an independent constitutional advocate to argue cases before the FISC. Honda has been a proponent of government intelligence transparency and has pushed to require that top-line intelligence spending be disclosed during annual budget submission to Congress through his co-sponsorship of the Intelligence Budget Transparency Act (H.R. 3855). In 2002, he introduced one of the first nanotechnology bills in Congress, the Nanoscience and Nanotechnology Advisory Board Act of 2002, which sought to establish a Nanoscience and Nanotechnology Advisory Board to advise the President on a range of policy matters. Such a board was recommended by the National Research Council in its review of the National nanotechnology Initiative, Small Wonders, Endless Frontiers. In 2003, he worked with then-Science Committee Chairman Sherwood Boehlert (R-NY), to introduce the Nanotechnology Research and Development Act of 2003. 
This bill authorized federal investments in nanotechnology research and development, restructured the National Nanotechnology Initiative to improve interagency coordination and the level of input from outside experts in the field, and laid the path to address novel social, ethical, philosophical, legal, environmental health issues that might arise. H.R. 766 was passed overwhelmingly by the U.S. House of Representatives on May 7, 2003, signed into law on December 3, 2003, and to date has been funded at nearly $4 billion. Honda continued his interest in nanotechnology by convening the Blue Ribbon Task Force on Nanotechnology with then-controller Steve Westly in 2005. This group met numerous times to discuss and develop strategies to promote the San Francisco Bay Area and all of California as the national and worldwide center for nanotechnology research, development and commercialization. Under the direction of Working Chair Scott Hubbard, then-Director of the National Aeronautics and Space Administration's Ames Research Center, the Task Force spent a year developing recommendations that would assure California a leading position in what could be a trillion-dollar economic sector. The recommendations were included in the BRTFN report, Thinking Big About Thinking Small. Honda developed two pieces of legislation based on the report: 1) the Nanomanufacturing Investment Act of 2005 and 2) the Nanotechnology Advancement and New Opportunities Act. Many provisions of these bills were included in larger pieces of legislation, the National Nanotechnology Initiative Amendments Act of 2009 and the America COMPETES Reauthorization Act, that passed the House of Representatives in the 111th Congress. Mike Honda was recognized by the Foresight Institute, which awarded him its Foresight Institute Government Prize in 2006. Research and Development tax credit Congressman Honda has supported expanding and making permanent the Research and Development tax credit, and in the 113th Congress is a cosponsor of the bipartisan H.R. 4438, the American Research and Competitiveness Act of 2014. He has called the research credit, "the best incentive in the tax code to ensure that companies continue to conduct their R&D in the U.S." Seniors and retirement security Honda has been a vocal advocate for expanding the Social Security program. In the 113th Congress, Honda introduced H.R. 3118, the Strengthening Social Security Act, with Congresspersons Linda Sanchez (D-CA) and Rush Holt (D-NJ), which would increase benefits for current beneficiaries, eliminate the cap on how much of an individual's earnings can be paid into Social Security, change the benefits formula to increase payments by about $70 a month, and adopt a higher cost of living adjustment called CPI-E, designed to reflect the cost of healthcare for seniors. Also in the 113th Congress, Honda authored H.R. 4202, the CPI-E Act of 2014, which would apply CPI-E to index federal retirement programs other than Social Security, to include programs such as civil service retirement, military retirement, Supplemental Security Income, veterans pensions and compensations, and other retirement programs with COLAs triggered directly by Social Security or civil service retirement. As a member of the Congressional Progressive Caucus Budget Taskforce, Honda also inserted this CPI-E provision into the FY 2015 CPC Budget, entitled the "Better Off Budget." 
Honda said during floor debate on the CPC budget that the provision was intended to be a first step toward applying CPI-E to all federal retirement programs, including Social Security.

Veterans
Honda has been a leading voice for overhauling and improving the current VA system. As an appropriator, he worked with his colleagues in both parties not only to call for change, but to provide funds to create a new electronic health record program between the Department of Defense and Veterans Affairs. He is also working to ensure that the government makes use of the knowledge and experience of health information technology experts, such as those in Silicon Valley, so that this new platform will eliminate the current backlog of claims. Honda helped obtain $2.8 million in grants to aid homeless and at-risk veterans and their families in Silicon Valley.

Women's rights
Honda has a 100% legislative score from Planned Parenthood and from the National Abortion Rights Action League (NARAL), and his voting record reflects long-time support for pro-choice legislation and women's health. Honda has supported the Paycheck Fairness Act and voted for the Lilly Ledbetter Fair Pay Act, the first piece of legislation signed by President Barack Obama in 2009. During the debate over the new health care bill, Honda voted against the Stupak–Pitts Amendment to the Affordable Care Act (ACA), which would have prohibited the use of federal funds "to pay for any abortion or to cover any part of the costs of any health plan that includes coverage of abortion" except in cases of rape, incest or danger to the life of the mother. The amendment was dropped by its co-author Stupak in exchange for an executive order promised by President Obama which would address the Stupak–Pitts concerns. In 2013, Honda voted for the reauthorization of the Violence Against Women Act (VAWA), which included updated protections for Native American and immigrant women and provided specialized support and resources for LGBT, religious and ethnic communities. The VAWA reauthorization also included the Trafficking Victims Protection Reauthorization Act, which Honda has also supported. Honda introduced the Domestic Violence Judicial Support Act of 2013, which would strengthen the judicial programs that comprise the basis of VAWA. To support full implementation of the Obama Administration's Executive Order 13595 and the U.S. National Action Plan (NAP) on Women, Peace, and Security, and to secure Congressional oversight, Honda introduced the Women, Peace, and Security Act of 2013, along with his colleagues Congresswomen Jan Schakowsky, Eddie Bernice Johnson, and Niki Tsongas.

Civilian body armor ban
In July 2014, Honda introduced a bill to ban level 3 body armor for anyone not in law enforcement; in September, it was referred to the subcommittee on Crime, Terrorism, Homeland Security, and Investigations. The bill would ban anyone except law enforcement and military personnel from obtaining level 3 body armor. He was quoted as saying: "We should be asking ourselves, why is this armor available to just anyone, if it was designed to be used only by our soldiers to take to war?"

Ethics investigation
It has been alleged that Honda and key members of his congressional staff violated House rules by using taxpayer resources to bolster his 2014 re-election campaign. In September 2015, the House Ethics Committee decided to extend the review of the matter after the Office of Congressional Ethics (OCE) released its report on the allegation.
The OCE report noted "there is substantial reason to believe that Representative Honda improperly tied official events to past or potential campaign or political support." As of August 8, 2016, the House Ethics Committee had not decided whether Honda violated House rules.

Personal life
Honda's wife, Jeanne, was a kindergarten teacher at Baldwin Elementary School in San José. She died in 2004. He has two children: Mark, an aerospace engineer, living in Torrance, and Michelle, a marketing and communications manager, in San Jose. Michelle is the mother of one daughter and two sons. In February 2015, Honda's announcement that he is a "proud jichan", or grandfather, of his transgender granddaughter Malisa, gained regional, national, and international coverage.

Electoral history in the U.S. House of Representatives

See also
List of Asian Americans and Pacific Islands Americans in the United States Congress
History of the Japanese in San Francisco

References

External links
Biographical Directory of the United States Congress, Mike Honda
Sunlight Foundation's OpenCongress profile of Mike Honda
Center for Responsive Politics OpenSecrets.org profile of Mike Honda
GovTrack.us profile of Mike Honda
Peace Corps biography of Mike Honda

1941 births 20th-century American politicians 21st-century American politicians American educators of Japanese descent American politicians of Japanese descent American Protestants California Democrats California politicians of Japanese descent Comfort women Democratic Party members of the United States House of Representatives Japanese-American internees Living people LGBT rights activists from the United States Members of the California State Assembly Members of the United States Congress of Japanese descent Members of the United States House of Representatives from California Asian-American members of the United States House of Representatives Peace Corps volunteers People from Walnut Grove, California Politicians from San Jose, California San Jose State University alumni School board members in California Democratic Party members of the United States House of Representatives from California
221277
https://en.wikipedia.org/wiki/Kernel%20panic
Kernel panic
A kernel panic (sometimes abbreviated as KP) is a safety measure taken by an operating system's kernel upon detecting an internal fatal error from which it either cannot safely recover or from which continuing to run would carry a higher risk of major data loss. The term is largely specific to Unix and Unix-like systems. For Microsoft Windows operating systems, the equivalent term is "stop error", resulting in a bug check screen that presents the bug check code on a blue background in Windows (colloquially known as a "Blue Screen of Death" or BSoD), or on a green background on the Xbox One platform and some Windows Insider builds. The kernel routines that handle panics, known as panic() in AT&T-derived and BSD Unix source code, are generally designed to output an error message to the console, dump an image of kernel memory to disk for post-mortem debugging, and then either wait for the system to be manually rebooted or initiate an automatic reboot. The information provided is of a highly technical nature and aims to assist a system administrator or software developer in diagnosing the problem. Kernel panics can also be caused by errors originating outside kernel space. For example, many Unix operating systems panic if the init process, which runs in user space, terminates.

History
The Unix kernel maintains internal consistency and runtime correctness with assertions as the fault detection mechanism. The basic assumption is that the hardware and the software should perform correctly; a failure of an assertion results in a panic, i.e. a voluntary halt to all system activity. The kernel panic was introduced in an early version of Unix and demonstrated a major difference between the design philosophies of Unix and its predecessor Multics. Multics developer Tom Van Vleck recalls a discussion of this change with Unix developer Dennis Ritchie:

I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic, and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"

The original panic() function was essentially unchanged from Fifth Edition UNIX to the VAX-based UNIX 32V and output only an error message with no other information, then dropped the system into an endless idle loop. Source code of the panic() function in V6 UNIX:

/*
 * In case console is off,
 * panicstr contains argument to last
 * call to panic.
 */
char	*panicstr;

/*
 * Panic is called on unresolvable
 * fatal errors.
 * It syncs, prints "panic: mesg" and
 * then loops.
 */
panic(s)
char *s;
{
	panicstr = s;
	update();
	printf("panic: %s\n", s);
	for(;;)
		idle();
}

As the Unix codebase was enhanced, the panic() function was also enhanced to dump various forms of debugging information to the console.

Causes
A panic may occur as a result of a hardware failure or a software bug in the operating system. In many cases, the operating system is capable of continued operation after an error has occurred. However, the system is in an unstable state, and rather than risking security breaches and data corruption, the operating system stops to prevent further damage, facilitate diagnosis of the error and, in most cases, allow a restart. After recompiling a kernel binary image from source code, a kernel panic while booting the resulting kernel is a common problem if the kernel was not correctly configured, compiled or installed.
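To make the assertion-driven design described above concrete outside of kernel source, the following is a minimal user-space sketch in C, loosely modeled on the V6 panic() routine shown earlier. The names kpanic and KASSERT are invented for this illustration and are not part of any Unix or Linux kernel API.

#include <stdio.h>

/* Illustrative analogue of panic(): remember the reason, report it,
 * and stop making forward progress instead of running on bad state. */
static const char *panicstr;

static void kpanic(const char *msg)
{
	panicstr = msg;                     /* last panic reason, as in V6 */
	fprintf(stderr, "panic: %s\n", msg);
	for (;;)                            /* V6 looped calling idle(); here we simply hang */
		;
}

/* Consistency check in the spirit of kernel assertions:
 * if the invariant does not hold, give up rather than continue. */
#define KASSERT(cond, msg) do { if (!(cond)) kpanic(msg); } while (0)

int main(void)
{
	int inode_count = -1;               /* pretend an internal counter was corrupted */
	KASSERT(inode_count >= 0, "inode table corrupted");
	return 0;
}

A real kernel has no caller to return an error to at this point, which is why both the historical and modern implementations end in an endless loop or a reboot rather than an ordinary error return.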
Add-on hardware or malfunctioning RAM could also be sources of fatal kernel errors during start-up, due to incompatibility with the OS or a missing device driver. A kernel may also go into panic() if it is unable to locate a root file system. During the final stages of kernel userspace initialization, a panic is typically triggered if the spawning of init fails. A panic might also be triggered if the init process terminates, as the system would then be unusable. The following is an implementation of the Linux kernel's final initialization in kernel_init():

static int __ref kernel_init(void *unused)
{
	...
	/*
	 * We try each of these until one succeeds.
	 *
	 * The Bourne shell can be used instead of init if we are
	 * trying to recover a really broken machine.
	 */
	if (execute_command) {
		if (!run_init_process(execute_command))
			return 0;
		pr_err("Failed to execute %s. Attempting defaults...\n",
		       execute_command);
	}
	if (!run_init_process("/sbin/init") ||
	    !run_init_process("/etc/init") ||
	    !run_init_process("/bin/init") ||
	    !run_init_process("/bin/sh"))
		return 0;

	panic("No init found. Try passing init= option to kernel. "
	      "See Linux Documentation/init.txt for guidance.");
}

Operating system specifics

Linux
Kernel panics appear in Linux as in other Unix-like systems, but the kernel can also generate another kind of error condition, known as a kernel oops. In this case, the kernel normally continues to run after killing the offending process. As an oops could cause some subsystems or resources to become unavailable, it can later lead to a full kernel panic. On Linux, a kernel panic causes keyboard LEDs to blink as a visual indication of a critical condition.

macOS
When a kernel panic occurs in Mac OS X 10.2 through 10.7, the computer displays a multilingual message informing the user that they need to reboot the system. Prior to 10.2, a more traditional Unix-style panic message was displayed; in 10.8 and later, the computer automatically reboots and displays a message after the restart. The format of the message varies from version to version:
10.0–10.1: The system displays text on the screen, giving details about the error, and becomes unresponsive.
10.2: Rolls down a black transparent curtain, then displays a message on a white background informing the user that they should restart the computer. The message is shown in English, French, German and Japanese.
10.3–10.5: The kernel panic is almost the same as in version 10.2, but the background of the error screen is black.
10.6–10.7: The text has been revised and now includes a Spanish translation.
10.8 and later: The computer becomes unresponsive and immediately reboots. When the computer starts back up, it shows a warning message for a few seconds explaining that the computer restarted because of a kernel panic, and then continues booting. The message now includes a Chinese translation.
Sometimes, when there are five or more kernel panics within three minutes of the first one, the Mac will display a prohibitory sign for 30 seconds and then shut down; this is known as a "recurring kernel panic". In all versions above 10.2, the text is superimposed on a standby symbol and is not full screen. Debugging information is saved in NVRAM and written to a log file on reboot. In 10.7 there is a feature to automatically restart after a kernel panic. In some cases, on 10.2 and later, white text detailing the error may appear in addition to the standby symbol.
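To complement the kernel_init() excerpt above, here is a rough sketch of how other kernel code reaches the same path: a trivial, hypothetical Linux module that calls panic() from its init function when a contrived self-test fails. The module name, message, and self_test_passed helper are invented for illustration, and loading anything like this is only sensible on a disposable test machine, since it will halt the system.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Contrived example that panics on load");

/* Stand-in for a real consistency check that the module depends on. */
static bool self_test_passed(void)
{
	return false;
}

static int __init panic_demo_init(void)
{
	if (!self_test_passed())
		panic("panic_demo: self test failed");  /* same call used by kernel_init() */
	return 0;
}

static void __exit panic_demo_exit(void)
{
}

module_init(panic_demo_init);
module_exit(panic_demo_exit);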
See also Core dump Blue screen of death Screen of death References Computer errors Operating system kernels Screens of death
32072257
https://en.wikipedia.org/wiki/Marc%20Zwillinger
Marc Zwillinger
Marc Zwillinger is the founder and managing member of the Washington, D.C. based data privacy and information security law firm ZwillGen. Zwillinger has been active in the field of Internet law on issues such as encryption, data security, government access to user data, data breaches, and fantasy sports. Career Marc Zwillinger founded Zwillinger Genetski LLP (now ZwillGen PLLC), a boutique law firm specializing in data protection & information security, in March 2010. Prior to founding ZwillGen, Zwillinger was a partner at Sonnenschein Nath & Rosenthal in the firm's Internet, Communications & Data Protection Group where he had created the Internet, Communications and Data Protection Practice Group (originally called Information Security and Anti-Piracy). Zwillinger worked for the United States Department of Justice in the Computer Crime and Intellectual Property Section as a trial attorney from 1997-2000. Before entering the DOJ, Zwillinger was a litigation associate for Kirkland & Ellis from 1995-1997. Zwillinger started his career clerking for the Honorable Mark L. Wolf of the United States District Court, District of Massachusetts from 1994-1995. Education Marc earned his bachelor's degree from Tufts University in 1991, and received his law degree graduating magna cum laude from Harvard Law School in 1994. Work with Apple Zwillinger has represented Apple in several cases, including those brought under the 18th century All Writs Act involving government access to user data. In 2015, Zwillinger, representing Apple, contested unlocking an iPhone 5S belonging to a defendant accused of selling drugs in New York. Most notably, in 2016, Zwillinger represented Apple in the Apple vs San Bernardino case where the government tried to compel Apple to unlock the personal iPhone recovered from one of the terrorists in the San Bernardino attack. The case itself was later dropped. Work with Yahoo In 2008 Zwillinger represented Yahoo! over the government's efforts to force Yahoo! to comply with "surveillance orders and other types of legal process in national security investigations." Of the experience, Zwillinger said that he was proud to be one of the "lawyers who represented Yahoo in its historic challenge to the government's surveillance program in the Foreign Intelligence Surveillance Court ("FISC") and the Foreign Intelligence Court of Review ("FISCR")." Service Zwillinger is one of five amici curiae appointed to serve to the Foreign Intelligence Surveillance Court ("FISC"); a position stipulated under the USA Freedom Act. Amici serve staggered terms, with Zwillinger slated to serve a four-year term. Awards From 2007 through 2015, Zwillinger has been ranked in Chambers & Partners USA as a leading lawyer in his field of Privacy & Data Security. References External links Lawyers who have represented the United States government 1969 births Living people Tufts University alumni Harvard Law School alumni Kirkland & Ellis alumni
7141148
https://en.wikipedia.org/wiki/VIT%2C%20C.A.
VIT, C.A.
VIT, C.A. (Venezolana de Industria Tecnológica, Compañía Anónima) is a Venezuelan manufacturer of desktop computers and laptops, supported by the Venezuelan government and the Chinese information technology company Inspur. The first computer it produced was called the Computador Bolivariano (English: Bolivarian Computer), which came with the Kubuntu Linux operating system. Since April 28, 2009, VIT computers have been pre-installed with Canaima GNU/Linux. See also Canaima (operating system) GendBuntu Inspur LiMux Nova (operating system) Ubuntu Kylin Notes External links VIT homepage Ministry of Science and Technology Companies established in 2005 Computer hardware companies Government-owned companies of Venezuela Manufacturing companies of Venezuela Venezuelan brands
37237664
https://en.wikipedia.org/wiki/Smart%20Grid%20Energy%20Research%20Center
Smart Grid Energy Research Center
The UCLA Smart Grid Energy Research Center (SMERC), located on the University of California, Los Angeles (UCLA) campus, is an organization focused on developing the next generation of technologies and innovation for the Smart Grid. Partnerships with government, technology providers, DOE research labs and universities, utilities, policy makers, and electric vehicle and appliance manufacturers provide SMERC with diverse capabilities and experienced leadership. The organization's developments are intended to serve the Smart Grid by increasing grid flexibility, integrating renewable energy sources, enabling competitive energy pricing, improving efficiency, and reducing power outages and losses. Overall, SMERC's developments aim to make the grid more responsive to the market, the consumer, and society in general. Currently, SMERC is performing research on microgrids, automated demand response, electric vehicle integration (G2V and V2G), cybersecurity, and distributed and renewable integration. All technology and research are developed at UCLA's Henry Samueli School of Engineering and Applied Science by a team of experienced staff and the school's graduate students. SMERC collaborates with USC, Caltech/JPL and LADWP in a smart grid demonstration project. Internationally, SMERC has connected with the Korea Institute of Energy Research (KIER). "The partnership involves SMERC testing for the development of the software and platform involved in smart grid technology, while KIER focuses on various renewable energy technologies, such as solar, wind and fuel cells, as well as wireless communications and semiconductor systems." Background "While the electrical grid in the United States is very reliable, it is currently somewhat limited in its ability to incorporate new renewable energy sources; to effectively manage demand response; to sense and monitor trouble spots; and to repair itself." This reliability will not last if the grid stays the same while populations and electricity demand rise. Meeting that demand calls for innovative technologies and systems that provide and manage demand response, sensing and monitoring, and self-repair to help stabilize the grid. SMERC has been building these technologies since the fall of 2004. The grid also calls for better efficiency among energy generators and savers. The current grid in North America is very old, in many areas up to 100 years old. The grid is inflexible and must be modernized to handle the intermittency of renewable energy sources (solar power, wind turbines, etc.). These energy sources, if harnessed properly, will prove valuable to the grid, providing it with energy that is currently wasted. With this high demand for electricity, there is a tremendous opportunity in the United States for innovation between the current electric grid and the next generation of systems using RFID, integrated sensors, information, and wireless technologies. With awareness of the Smart Grid growing, questions are being asked about what the new, modernized grid will be like. There is no clear answer yet; it is like trying to predict, when the first Apple computer was released in 1976 (36 years ago), what such a computer would be capable of accomplishing today. There is now enormous opportunity for experimentation, creativity, and research in Smart Grid technology.
Entrepreneurs, universities, and other innovators are in the process of creating new possibilities for the future Smart Grid. Funding The major starting point for investment in modernizing the current grid was the U.S. Department of Energy's (DOE) stimulus package (the American Recovery and Reinvestment Act, i.e. ARRA). The ARRA invested approximately $4.4 billion in Smart Grid research. LADWP received $60 million from the DOE's stimulus package. "The money will be used for “smart grid" demonstration projects. The projects will allow the city’s Department of Water and Power, the largest municipal utility in the nation, to use advanced meters and other technology at the universities to chart how power is being consumed, forecast demand and potential outages, and seek ways to reduce energy use." The Waxman–Markey comprehensive energy bill (American Clean Energy and Security Act of 2009) increased awareness of, and the impact on, the electric transmission grid. The act was designed with the intention of reducing greenhouse gas emissions by 17 percent by 2020. This reduction would require a concentration on energy consumption and production. The bill directly and indirectly stimulates universities and private industry to innovate in new technologies for the grid. Collaborations among utilities, government, technology providers, and universities are formed to provide information and technologies for the new generation of Smart Grid and smart energy technology. SMERC also receives funding from the California Energy Commission, EPRI, KIER, and the UCLA Smart Grid Industry Partners Program (SMERC-IPP). Projects The Smart Grid Energy Research Center (SMERC) consists of several key projects, as follows: Connected and Autonomous Electric Vehicles (CAEV™) CAEV™ is a UCLA-led consortium whose members consist of modern automotive companies, electric and autonomous transportation providers and electric power companies that are modernizing the automotive industry into one that is electric, digital, connected, smart, autonomous, and serves the transportation and energy needs of society for the 21st century and beyond. The purpose of the consortium is to create a partnership of electric vehicle and autonomous vehicle manufacturers in California with new energy companies that advance technology, create innovative business models, and educate and train the next generation of students to create the industry that will change the face of the automotive sector worldwide. UCLA WINSmartGrid™ “The UCLA WINSmartGrid™ is a network platform technology that allows electricity operated appliances such as plug-in automobile, washer, dryer, or air conditioner to be wirelessly monitored, connected and controlled via a Smart Wireless hub.” Overall, the WINSmartGrid™ advantages are as follows: it provides a low-power technology, uses low-cost standards-based hardware resulting in lower overall cost, and offers a wireless infrastructure for monitoring and control, an open architecture for easy integration, a plug-and-play approach, reconfigurability, and a service architecture with three layers – Edgeware, Middleware and Centralware. The WINSmartGrid™ technology uses a three-layered Serviceware architecture along with ReWINS technology. A simple explanation of the process is that the Centralware makes a decision, the Middleware reads that decision, then maps and routes these decisions to the Edgeware, where the decisions are then sent out through low-level control signals.
The Edgeware: controls and utilizes the wireless technology networks and handles the creation, management and setup of software and firmware. It connects with RFID tags, motion detectors, temperature monitors, or X10 controllers on refrigerators. Within the WINSmartGrid™ hub, a variety of monitors and sensors that the Edgeware connects with are supported, including humidity, current, voltage, power, shock, motion, chemical and other sensors. This hub is capable of supporting wireless protocols (e.g. WiFi, Bluetooth, Zigbee, GPRS, and RFID). The most efficient protocols appear to be the low-power protocols such as Zigbee. The Middleware: The “middle man” between the Edgeware and the Centralware. It is capable of providing functions such as data filtration, extraction of meaningful information, aggregation and messaging of data from the Edgeware, and distribution of the information to the proper destination or web service accordingly. The Centralware: the decision-making web service. It receives all information, determines the best decisions based on rules, and carries out the execution of these decisions. Currently the WINSmartGrid™ Centralware runs on a basic set of rules, but it will eventually work with external intelligent services as they come online. Automated Demand Response (ADR) “The Automated Demand-Response (ADR) programs shows control models and secure messaging schemes, automation in load curtailment, leveraging multiple communication technologies, and maintaining interoperability between the Smart Grid automation architecture layers.” SMERC is in the process of creating a test area that would provide information on consumers’ energy usage and the distribution of that energy from a utility service. The test beds are located on the UCLA campus, which will serve as a living lab for demonstration of ADR concepts. Since UCLA produces 75% of its own energy through its natural gas power plant, the campus is an easy and desirable place for conducting ADR research and demonstration. ADR will require control technology components and subsystems that work with security, network standards, messaging, protocols, etc. in combination with operational parameters. Advanced Metering Infrastructure (AMI) will also be evaluated for adequate capability in terms of data volume and networking. Further requirements, such as rate design models and system-wide data and metadata modeling, will be used to guide the system architecture. The Demand-Response system provides an efficient service to utility systems and consumers. It is based on a service-oriented architecture (SOA) that would use information from the utility systems' technical evaluations and requirement analysis to assist with integration modalities for backend utility systems. Through this architecture, real-time collaboration across the entire network involving billing, metering, distribution, etc., can be accomplished. Consumers are able to make requests, and a supervisory control system will monitor the demands of the consumer and make the best available decisions. The Demand-Response system can also accommodate various types of energy customers (e.g. commercial, residential, industrial). This creates unique and different load profiles and pricing for each type of customer, all of which the system must keep track of. With the WINSmartGrid™ technology, transactions will be communicated through wireless technologies to convey common data payloads.
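As a purely illustrative sketch of the kind of decision logic a demand-response controller might apply when a utility requests load curtailment, consider the following C fragment; it is not SMERC's implementation, and the structure, priorities and kilowatt figures are invented for the example.

#include <stdio.h>

/* Hypothetical description of one controllable load on a test bed. */
struct load {
    const char *name;
    double      kw;        /* current draw in kilowatts          */
    int         priority;  /* 1 = shed first, larger = shed later */
    int         on;
};

/* Shed lowest-priority loads until the requested curtailment is met;
 * returns the number of kilowatts actually shed. */
static double curtail(struct load *loads, int n, double target_kw)
{
    double shed = 0.0;
    for (int p = 1; p <= 10 && shed < target_kw; p++)
        for (int i = 0; i < n && shed < target_kw; i++)
            if (loads[i].on && loads[i].priority == p) {
                loads[i].on = 0;
                shed += loads[i].kw;
            }
    return shed;
}

int main(void)
{
    struct load campus[] = {
        { "EV charger bank",  50.0, 1, 1 },
        { "HVAC pre-cooling", 30.0, 2, 1 },
        { "Lab equipment",    20.0, 3, 1 },
    };
    double shed = curtail(campus, 3, 60.0); /* utility asks for 60 kW */
    printf("curtailed %.1f kW\n", shed);
    return 0;
}

A real controller would of course weigh pricing signals, customer type and comfort constraints rather than a single fixed priority.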
Currently, SOA in conjunction with open embedded systems can provide support for plug-and-play and secure demand response. An application programming interface (API) also provides customizability and extensibility to the system. The test beds use automation technologies and will provide demonstration of the system's functionality, communication fidelity and reliability, testing of data, protocols, etc. These technologies are AMI-DR models, hardware and software interfaces, software architecture, access control policies, recommended security schemes and algorithms, and a desired set of optimizations. The testing phase would provide detailed performance data on the demand-response processes and technology components or sub-systems, from which efficient changes and predictions can be made to fulfill targeted load curtailment and consumer demands. The test beds for the current research will have a "network platform that enables appliances such as plug-in electric vehicles, washers, dryers and air conditioners to be wirelessly monitored, connected, and controlled through a wireless communications framework. These test bed arrangements will provide vital research on the demand-response systems." Electric vehicle integration into the grid The automotive market in California is unlike any other. With an immense population and energy consumption, the state calls for creative ways to conserve energy in the most energy-conscious and cost-efficient ways. It comes as no surprise, then, that California is the base for significant electric vehicle (EV) innovators such as Tesla. As these changes and innovations in EV culture continue to grow, the next step is to give this innovation the capability to communicate with and integrate EVs into the smart grid of tomorrow. Currently, technology within SMERC is being used and built for the WINSmartEV™ program. It focuses on the integration of both wireless and RF monitoring and control technologies. The EV technology provides a more energy-efficient, economical, and user-friendly smart technology for charging an EV. Several parking structures on the UCLA campus now provide EV charging to their members. These stations are monitored by SMERC's software systems in the Engineering Department. All data regarding these charging stations is collected by members of the SMERC team to evaluate the tendencies and requests of their users. This data will be evaluated to provide the stations' users with the best possible management of charging their EVs. WINSmartEV™'s main objective is to increase the stability of the local power system and reduce energy cost by managing all operations involved in charging an EV. The most recent implementation allows several EVs to charge at one charging station while receiving different yet individually controllable currents. This type of charging system provides the user with great flexibility in charging an EV, with conveniences pertaining to parking, price, time limits, and power consumption. Another objective of the WINSmartEV™ program is to wirelessly gather information from the electric grid and the EV in order to determine more efficient charging capabilities for the EV. With proper management of EVs, charging and backfill operations can be used to lower electricity rates and flatten the load curve. The user interface allows the EV owner to control where, when, why, and how to charge their vehicle.
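The multi-vehicle charging described above can be illustrated with a deliberately naive current-sharing rule; this is not the WINSmartEV™ algorithm, and the station limit, weights and per-vehicle limits below are invented values.

#include <stdio.h>

#define STATION_LIMIT_A 80.0 /* assumed total current budget in amps */

/* Split the station's current budget across plugged-in vehicles in
 * proportion to a per-vehicle weight (for example an urgency or tariff
 * tier), capped by each vehicle's maximum charging current. */
static void allocate(const double weight[], const double max_a[],
                     double out_a[], int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += weight[i];
    for (int i = 0; i < n; i++) {
        double share = (total > 0.0) ? STATION_LIMIT_A * weight[i] / total : 0.0;
        out_a[i] = (share > max_a[i]) ? max_a[i] : share;
    }
}

int main(void)
{
    double weight[] = { 3.0, 1.0, 1.0 };  /* first driver has priority */
    double max_a[]  = { 32.0, 32.0, 16.0 };
    double amps[3];
    allocate(weight, max_a, amps, 3);
    for (int i = 0; i < 3; i++)
        printf("vehicle %d: %.1f A\n", i, amps[i]);
    return 0;
}

A production scheduler would also redistribute capacity freed by capped vehicles and re-run the allocation as vehicles arrive and depart.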
An EV user may use a handheld device to view a map of charging stations, schedule a charge for an exact time, and start or stop a charge at their convenience, all from a single touch on a smartphone or other handheld device. Also, if necessary or requested, an alert can be issued to the driver when the battery is in need of charging. SMERC evaluates EV and charging station usage patterns in order to determine the appropriate wireless technologies and sensor modules that are best for installation. In conclusion, integrating the EVs with WINSmartGrid™, the local AMI and the Demand-Response system will provide communication and alerting systems for WINSmartEV™. Cyber Security project Electricity distribution systems are becoming drastically more complex and more dynamic as the power grid transitions to the smart grid. The deployment of distributed energy resources (DERs) such as solar panels and energy storage devices is proliferating. Numerous inputs and controls are pushed and pulled from various advanced distribution grid platforms; some of the inputs and controls connect the grid resources to the public Internet. Improved sensing, communication, and control capabilities have the potential to enormously enhance the performance of the electric grid, but at the cost of increased vulnerability to deliberate attacks and accidental failures, threatening the grid’s functionality and reliability. An EV charging system that connects to the smart grid can be considered an information network with massive communication among utility, EV and DER control centers, EV supply equipment (EVSE), and power meters. As EV charging consumes a lot of power and thus can have a considerable impact on a distribution system, cybersecurity in the EV charging domain is as critical as for the distribution grid. The ongoing research project titled “UC-Lab Center for Electricity Distribution Cybersecurity,” currently sponsored by UCLRP (UCOP LFR-18-548175), has brought together a multi-disciplinary UC-Lab team of cybersecurity and electricity infrastructure experts to investigate the impact of cyberattacks on electricity distribution infrastructure and develop new strategies for mitigation of vulnerabilities, detection of intrusion, and protection against detrimental system-wide impact. The SMERC team focuses on cybersecurity for the EV charging network, including system vulnerability analysis, risk assessment, the impacts of cyber-attacks, and anomaly detection. The team has researched vulnerability analysis and risk assessment for smart charging infrastructures based on the charging system on the UCLA campus, which is called WINSmartEV™. The research has outlined a codified methodology and taxonomy for assessing the vulnerability and risk of cyber-physical attacks on EV charging networks to create a generalizable and comprehensive solution. For anomaly detection, the team analyzes multidimensional time-series data, including building load, solar generation, dynamic electricity price, and EV load, within WINSmartEV™. The objective is to characterize regular EV charging operation to establish a correlation-invariant network, thereby identifying anomalies or malicious data injection that disturb the correlations within the system.
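As a simple illustration of the general idea of flagging charging data that deviates from an expected pattern, the following C sketch applies a crude threshold test to a load series; it is a stand-in for, not a description of, the correlation-based detection discussed above, and the data and threshold are invented.

#include <stdio.h>
#include <math.h>

/* Toy anomaly detector: flag samples that deviate from the series mean
 * by more than k standard deviations. */
static void detect(const double load_kw[], int n, double k)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += load_kw[i];
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (load_kw[i] - mean) * (load_kw[i] - mean);
    var /= n;
    double sd = sqrt(var);
    for (int i = 0; i < n; i++)
        if (fabs(load_kw[i] - mean) > k * sd)
            printf("sample %d (%.1f kW) looks anomalous\n", i, load_kw[i]);
}

int main(void)
{
    double ev_load[] = { 10.2, 9.8, 10.5, 10.1, 42.0, 10.3 }; /* invented data */
    detect(ev_load, 6, 2.0);
    return 0;
}

(With gcc, link against the math library using -lm.) A real detector would track correlations across building load, solar generation, price and EV load rather than a single series.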
Other projects Other projects in beginning stages or current development in the SMERC are Battery storage integration with renewable solar, EV to solar integration, V2G, Cyber Security Testing, Wireless Monitoring and Control of the grid, Microgrid modeling and control, Autonomous Electric Vehicles, Home Area Networks and Consumer Issue in EV Integration and DR. Recent news and events SMERC has hosted several events both inside and outside UCLA with notable speakers from both academia and industry. Notable locations of seminars and panel discussions include Shanghai Jiao Tong University, Indian Institutes of Technology, and at the California State Capitol Building in Sacramento. The director of the lab, Dr. Rajit Gadh, has been quoted in notable articles such as Fast Company, and his activity includes meeting with the director of The Energy and Resources Institute and appearances in various events such as the Intercharge Network Conference in 2018. In addition, every year there is an Electric and Autonomous Transportation UCLA CAEV Annual Conference where the electric vehicle industry is discussed. Other notable events include the Workshop on Technology Trends in Transportation and Electricity, Artificial Intelligence and Autonomous Systems: Technology Innovations and Business Opportunities, and Distributed Energy Resources (DER)—EV, PV and Storage—for a Modern Grid. References External links Smart Grid Energy Research Center - Official Site WINSmartGrid™ WINMEC UCLA Electric vehicle technologies Smart grid
1982350
https://en.wikipedia.org/wiki/TARGET%20%28CAD%20software%29
TARGET (CAD software)
TARGET 3001! is a CAD computer program for EDA and PCB design, developed by Ing.-Büro Friedrich in Germany. It supports the design of electronic schematics, PCBs, and device front panels. It runs under Windows and is available in English, German and French. A special branch of the program is the ASIC Designer, which allows design of integrated circuits. The free version (for non-commercial use) is limited to 250 connection pins or pads on two copper layers. The PCB manufacturer PCB-Pool and Conrad Electronic provide a free unlimited version that generates only printed output or output for PCB-Pool and Conrad's PCB service. Commercial versions with all features are available. Features TARGET 3001! collects several features under one user interface (MDI). All project information is stored in one file to avoid redundancy and version conflicts. Design begins with the creation of a schematic diagram and usually ends with the layout of a PCB (or chip). The schematics can be simulated by the integrated PSPICE-compatible mixed-mode simulator. Components are stored in a SQLite or MySQL database, which is also externally accessible. Component data include direct links to datasheets and component supplier information as well as simulation information and 3D models. TARGET's open Component Interchange Format CXF is supported by universal component databases like Ultra Librarian and Footprint Expert. PCBs or ASICs can be designed manually or using an autoplacer and autorouter. A Specctra interface to external autorouters is available. The design can be automatically checked for spacing violations and many other design rules. Once the PCB is designed, it can be directly displayed and rotated in a live 3D view. The 3D data can be exported in STEP format to produce preview 3D dummies of the PCB on 3D printers. Circuit design on 3D bodies (Molded Interconnect Device, MID) is possible. CNC data for PCB milling can be obtained in several formats. Additionally, a device front panel can be derived directly from the PCB, using the coordinates of components on the PCB, e.g. LEDs or potentiometers. History A predecessor of TARGET 3001! was "RULE", a DOS-based program for PCB layout (1989). After hobbyists adopted it, there were calls for a schematic tool and autorouter. In response, TARGET 2.1 (for DOS) was released in 1992. The move to Windows was difficult: early versions of "TARGET V3 for Windows" were prone to crash. The package became stable and more widely accepted among hobby, educational and professional users. Developments in versions V7 to V16 included an EMC tool and PSPICE-compatible simulation. The name TARGET was changed to TARGET 2001!, but, as the year 2001 approached, "TARGET 3001!" was registered as a trademark and used for versions V9 and higher. TARGET 3001! is also used by industrial designers. For example, TOYOTA used it for cable harnesses in its Formula 1 racing car. Today, TARGET 3001! is one of the most popular PCB layout systems in Germany and Europe. Readers of the electronics magazine Elektor voted it number two. Testers at the electronics magazine "c't Hardware Hacks" also rated it number two. See also Comparison of EDA software List of free electronics circuit simulators References External links Electronic design automation software Electronic design automation companies
1081685
https://en.wikipedia.org/wiki/List%20of%20system%20quality%20attributes
List of system quality attributes
Within systems engineering, quality attributes are realized non-functional requirements used to evaluate the performance of a system. These are sometimes named architecture characteristics, or "ilities" after the suffix many of the words share. They are usually Architecturally Significant Requirements that require architects' attention. Quality attributes Notable quality attributes include: accessibility accountability accuracy adaptability administrability affordability agility auditability autonomy [Erl] availability compatibility composability [Erl] confidentiality configurability correctness credibility customizability debuggability degradability determinability demonstrability dependability deployability discoverability [Erl] distributability durability effectiveness efficiency evolvability extensibility failure transparency fault-tolerance fidelity flexibility inspectability installability integrity interchangeability interoperability [Erl] learnability localizability maintainability manageability mobility modifiability modularity observability operability orthogonality portability precision predictability process capabilities producibility provability recoverability relevance reliability repeatability reproducibility resilience responsiveness reusability [Erl] robustness safety scalability seamlessness self-sustainability serviceability (a.k.a. supportability) securability simplicity stability standards compliance survivability sustainability tailorability testability timeliness traceability transparency ubiquity understandability upgradability usability vulnerability Many of these quality attributes can also be applied to data quality. Common subsets Together, reliability, availability, serviceability, usability and installability, are referred to as RASUI. Functionality, usability, reliability, performance and supportability are together referred to as FURPS in relation to software requirements. Agility in working software is an aggregation of seven architecturally sensitive attributes: debuggability, extensibility, portability, scalability, securability, testability and understandability. For databases reliability, availability, scalability and recoverability (RASR), is an important concept. Atomicity, consistency, isolation (sometimes integrity), durability (ACID) is a transaction metric. When dealing with safety-critical systems, the acronym reliability, availability, maintainability and safety (RAMS) is frequently used. Dependability is an aggregate of availability, reliability, safety, integrity and maintainability. Integrity depends on security and survivability. Security is a composite of confidentiality, integrity and availability. Security and dependability are often treated together. See also Non-functional requirement Information quality ISO/IEC 9126 Software engineering—product quality Cognitive dimensions of notations Software quality References Further reading Software engineering terminology Software requirements Software quality
30233970
https://en.wikipedia.org/wiki/SysAid%20Technologies
SysAid Technologies
SysAid Technologies (formerly Ilient) is an international company founded in 2002 that develops and provides IT Service Management software. SysAid Technologies is a privately owned company, founded by Israel Lifshitz (also founder of NUBO Software). Company overview Corporate headquarters are located in Airport City, Israel, near Tel Aviv. In June 2010, the company opened an additional office in Sydney, Australia. In May 2012, it opened another office in Brazil, South America. SysAid's products are now used in more than 100,000 organizations worldwide in numerous industries, including healthcare, retail, education, financial services, manufacturing, aviation, and food/beverages. SysAid is a software system for IT professionals; however, it has also been deployed and used by professionals in other sectors, such as municipalities and insurance companies. References See also Comparison of issue tracking systems Comparison of help desk issue tracking software Software companies of Israel Privately held companies of Israel Business software Help desk software Bug and issue tracking software
42672986
https://en.wikipedia.org/wiki/Apache%20Samza
Apache Samza
Apache Samza is an open-source, near-realtime, asynchronous computational framework for stream processing developed by the Apache Software Foundation in Scala and Java. It has been developed in conjunction with Apache Kafka. Both were originally developed by LinkedIn. Overview Samza allows users to build stateful applications that process data in real-time from multiple sources including Apache Kafka. Samza provides fault tolerance, isolation and stateful processing. Unlike batch systems such as Apache Hadoop or Apache Spark, it provides continuous computation and output, which result in sub-second response times. There are many players in the field of real-time stream processing and Samza is one of the mature products. It was added to Apache in 2013. Samza is used by multiple companies. The biggest installation is in LinkedIn. See also Apache Beam Druid (open-source data store) List of Apache Software Foundation projects Storm (event processor) References External links Apache Samza website LinkedIn software Samza Java platform Free software programmed in Java (programming language) Free software programmed in Scala Software using the Apache license Free software Distributed stream processing Distributed computing architecture Parallel computing
63774689
https://en.wikipedia.org/wiki/Scantrust
Scantrust
Scantrust is a Swiss company that provides an Internet of things platform for identifying products on the internet. Scantrust offers traceability for various industries, such as luxury goods, food products, industrial machines, water filters, cables, agrochemical products and fiscal stamps. Nathan J. Anderson is the CEO. History Justin Picard, Nathan J. Anderson, and Paul Landry founded Scantrust at the end of 2013. A seed round led by SOSV was raised in 2015, and a series A led by Credit Suisse was raised in 2017. In 2016, the company concluded a partnership with Agfa-Gevaert to integrate its technologies into Agfa's security software. In 2017 the National Seeds Institute of Argentina released a fiscal stamp printed with a Scantrust secure QR code. In 2018 the company entered into a partnership with Hyperledger and began offering services using Hyperledger Sawtooth. In 2019, the Dutch standards organisation NEN announced it would use Scantrust secure QR codes to ensure the authenticity of its certificates. The same year, Scantrust entered into a partnership with HP Indigo for labels printed with HP commercial printers. In 2020, Scantrust entered into a partnership with SAP to deliver end-to-end traceability. Product authentication and traceability The company has developed a QR Code system with an additional layer of protection against copying, based on inserting a copy detection pattern or secure graphic which loses information when it is copied. The technology does not require special materials, inks, or modifications to printing equipment to implement. Related product authentication and traceability data can be stored in a blockchain. QR Codes used in Scantrust authentication and traceability systems are printed on product packaging and scanned with a smartphone to authenticate and track products. The company provides a free app to consumers which can be used to scan products with the copy detection pattern and help detect counterfeits. Scanning a code with a smartphone can also offer a traceability feature, with origin and supply chain information about the product made available. An enterprise app is also provided for employees, distributors and forensic inspections. References Technology companies of Switzerland Technology companies established in 2013 2013 establishments in Switzerland Companies based in Lausanne
54390847
https://en.wikipedia.org/wiki/Blob%20emoji
Blob emoji
Google's blob emoji were a feature of its Android mobile operating system between 2013 and 2017. History Google introduced the blobs as part of its Android KitKat mobile operating system in 2013. The next year, Google expanded the blob style to include the emojis that normally depict humans. As an example, instead of a flamenco dancer in the Apple emoji style and its derivatives, Google's blob style showed a less glamorous blob with a rose in its teeth. In 2016, Google redesigned the blobs into a gumdrop shape. As Unicode, the group that establishes emoji standards, introduced skin tone and gender options to emojis, Google's emojis progressively appeared more as humans and less as yellow, amorphous blobs. Google retired the blobs in 2017 with the release of Android Oreo in favor of circular emojis similar in style to those of other platforms. Consistent cross-platform emoji interpretation was among the redesign's primary aims. The redesign, which had been in development for about a year, mimicked an Apple effort to include more detail in the emoji glyph and offer yellow skin tone as the default. Despite their deprecation, Google's Gmail continued to use the blob emojis as of 2021. Reception The blob emoji were a divisive feature between 2013 and 2017. Proponents praised their novel interpretation of emoji ideograms while detractors criticized the miscommunication that results when emoji are interpreted differently across platforms. Google released sticker packs featuring blob emoji for Gboard and Android Messages in 2018. References Further reading External links Emoji typefaces Android (operating system)
1207161
https://en.wikipedia.org/wiki/Ultrashort%20pulse
Ultrashort pulse
In optics, an ultrashort pulse, also known as an ultrafast event, is an electromagnetic pulse whose time duration is of the order of a picosecond (10−12 second) or less. Such pulses have a broadband optical spectrum, and can be created by mode-locked oscillators. Amplification of ultrashort pulses almost always requires the technique of chirped pulse amplification, in order to avoid damage to the gain medium of the amplifier. They are characterized by a high peak intensity (or more correctly, irradiance) that usually leads to nonlinear interactions in various materials, including air. These processes are studied in the field of nonlinear optics. In the specialized literature, "ultrashort" refers to the femtosecond (fs) and picosecond (ps) range, although such pulses no longer hold the record for the shortest pulses artificially generated. Indeed, x-ray pulses with durations on the attosecond time scale have been reported. The 1999 Nobel Prize in Chemistry was awarded to Ahmed H. Zewail, for the use of ultrashort pulses to observe chemical reactions at the timescales on which they occur, opening up the field of femtochemistry. Definition There is no standard definition of ultrashort pulse. Usually the attribute 'ultrashort' applies to pulses with a duration of a few tens of femtoseconds, but in a larger sense any pulse which lasts less than a few picoseconds can be considered ultrashort. The distinction between "Ultrashort" and "Ultrafast" is necessary as the speed at which the pulse propagates is a function of the index of refraction of the medium through which it travels, whereas "Ultrashort" refers to the temporal width of the pulse wavepacket. A common example is a chirped Gaussian pulse, a wave whose field amplitude follows a Gaussian envelope and whose instantaneous phase has a frequency sweep. Background The real electric field corresponding to an ultrashort pulse is oscillating at an angular frequency ω0 corresponding to the central wavelength of the pulse. To facilitate calculations, a complex field E(t) is defined. Formally, it is defined as the analytic signal corresponding to the real field. The central angular frequency ω0 is usually explicitly written in the complex field, which may be separated as a temporal intensity function I(t) and a temporal phase function ψ(t): The expression of the complex electric field in the frequency domain is obtained from the Fourier transform of E(t): Because of the presence of the term, E(ω) is centered around ω0, and it is a common practice to refer to E(ω-ω0) by writing just E(ω), which we will do in the rest of this article. Just as in the time domain, an intensity and a phase function can be defined in the frequency domain: The quantity is the power spectral density (or simply, the spectrum) of the pulse, and is the phase spectral density (or simply spectral phase). Example of spectral phase functions include the case where is a constant, in which case the pulse is called a bandwidth-limited pulse, or where is a quadratic function, in which case the pulse is called a chirped pulse because of the presence of an instantaneous frequency sweep. Such a chirp may be acquired as a pulse propagates through materials (like glass) and is due to their dispersion. It results in a temporal broadening of the pulse. The intensity functions—temporal and spectral —determine the time duration and spectrum bandwidth of the pulse. As stated by the uncertainty principle, their product (sometimes called the time-bandwidth product) has a lower bound. 
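In one common convention (reconstructed here from standard ultrafast-optics usage, since sign conventions and width definitions vary between references), these quantities can be written as

E(t) = \sqrt{I(t)}\, e^{i\omega_0 t}\, e^{i\psi(t)}, \qquad
E(\omega) = \int_{-\infty}^{+\infty} E(t)\, e^{-i\omega t}\, \mathrm{d}t, \qquad
S(\omega) = |E(\omega)|^2, \quad \varphi(\omega) = \arg E(\omega),

and the time-bandwidth product obeys

\Delta t \,\Delta\nu \;\ge\; K,

where, for FWHM widths, K \approx 0.441 for a Gaussian pulse and K \approx 0.315 for a sech^2 pulse.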
This minimum value depends on the definition used for the duration and on the shape of the pulse. For a given spectrum, the minimum time-bandwidth product, and therefore the shortest pulse, is obtained by a transform-limited pulse, i.e., for a constant spectral phase . High values of the time-bandwidth product, on the other hand, indicate a more complex pulse. Pulse shape control Although optical devices also used for continuous light, like beam expanders and spatial filters, may be used for ultrashort pulses, several optical devices have been specifically designed for ultrashort pulses. One of them is the pulse compressor, a device that can be used to control the spectral phase of ultrashort pulses. It is composed of a sequence of prisms, or gratings. When properly adjusted it can alter the spectral phase φ(ω) of the input pulse so that the output pulse is a bandwidth-limited pulse with the shortest possible duration. A pulse shaper can be used to make more complicated alterations on both the phase and the amplitude of ultrashort pulses. To accurately control the pulse, a full characterization of the pulse spectral phase is a must in order to get certain pulse spectral phase (such as transform-limited). Then, a spatial light modulator can be used in the 4f plane to control the pulse. Multiphoton intrapulse interference phase scan (MIIPS) is a technique based on this concept. Through the phase scan of the spatial light modulator, MIIPS can not only characterize but also manipulate the ultrashort pulse to get the needed pulse shape at target spot (such as transform-limited pulse for optimized peak power, and other specific pulse shapes). If the pulse shaper is fully calibrated, this technique allows controlling the spectral phase of ultrashort pulses using a simple optical setup with no moving parts. However the accuracy of MIIPS is somewhat limited with respect to other techniques, such as frequency-resolved optical gating (FROG). Measurement techniques Several techniques are available to measure ultrashort optical pulses. Intensity autocorrelation gives the pulse width when a particular pulse shape is assumed. Spectral interferometry (SI) is a linear technique that can be used when a pre-characterized reference pulse is available. It gives the intensity and phase. The algorithm that extracts the intensity and phase from the SI signal is direct. Spectral phase interferometry for direct electric-field reconstruction (SPIDER) is a nonlinear self-referencing technique based on spectral shearing interferometry. The method is similar to SI, except that the reference pulse is a spectrally shifted replica of itself, allowing one to obtain the spectral intensity and phase of the probe pulse via a direct FFT filtering routine similar to SI, but which requires integration of the phase extracted from the interferogram to obtain the probe pulse phase. Frequency-resolved optical gating (FROG) is a nonlinear technique that yields the intensity and phase of a pulse. It is a spectrally resolved autocorrelation. The algorithm that extracts the intensity and phase from a FROG trace is iterative. Grating-eliminated no-nonsense observation of ultrafast incident laser light e-fields (GRENOUILLE) is a simplified version of FROG. (Grenouille is French for "frog".) Chirp scan is a technique similar to MIIPS which measures the spectral phase of a pulse by applying a ramp of quadratic spectral phases and measuring second harmonic spectra. 
With respect to MIIPS, which requires many iterations to measure the spectral phase, only two chirp scans are needed to retrieve both the amplitude and the phase of the pulse. Multiphoton intrapulse interference phase scan (MIIPS) is a method to characterize and manipulate the ultrashort pulse. Wave packet propagation in nonisotropic media To partially reiterate the discussion above, the slowly varying envelope approximation (SVEA) of the electric field of a wave with central wave vector and central frequency of the pulse, is given by: We consider the propagation for the SVEA of the electric field in a homogeneous dispersive nonisotropic medium. Assuming the pulse is propagating in the direction of the z-axis, it can be shown that the envelope for one of the most general of cases, namely a biaxial crystal, is governed by the PDE: where the coefficients contains diffraction and dispersion effects which have been determined analytically with computer algebra and verified numerically to within third order for both isotropic and non-isotropic media, valid in the near-field and far-field. is the inverse of the group velocity projection. The term in is the group velocity dispersion (GVD) or second-order dispersion; it increases the pulse duration and chirps the pulse as it propagates through the medium. The term in is a third-order dispersion term that can further increase the pulse duration, even if vanishes. The terms in and describe the walk-off of the pulse; the coefficient is the ratio of the component of the group velocity and the unit vector in the direction of propagation of the pulse (z-axis). The terms in and describe diffraction of the optical wave packet in the directions perpendicular to the axis of propagation. The terms in and containing mixed derivatives in time and space rotate the wave packet about the and axes, respectively, increase the temporal width of the wave packet (in addition to the increase due to the GVD), increase the dispersion in the and directions, respectively, and increase the chirp (in addition to that due to ) when the latter and/or and are nonvanishing. The term rotates the wave packet in the plane. Oddly enough, because of previously incomplete expansions, this rotation of the pulse was not realized until the late 1990s but it has been experimentally confirmed. To third order, the RHS of the above equation is found to have these additional terms for the uniaxial crystal case: The first and second terms are responsible for the curvature of the propagating front of the pulse. These terms, including the term in are present in an isotropic medium and account for the spherical surface of a propagating front originating from a point source. The term can be expressed in terms of the index of refraction, the frequency and derivatives thereof and the term also distorts the pulse but in a fashion that reverses the roles of and (see reference of Trippenbach, Scott and Band for details). So far, the treatment herein is linear, but nonlinear dispersive terms are ubiquitous to nature. Studies involving an additional nonlinear term have shown that such terms have a profound effect on wave packet, including amongst other things, a self-steepening of the wave packet. The non-linear aspects eventually lead to optical solitons. Despite being rather common, the SVEA is not required to formulate a simple wave equation describing the propagation of optical pulses. 
In fact, as shown in, even a very general form of the electromagnetic second order wave equation can be factorized into directional components, providing access to a single first order wave equation for the field itself, rather than an envelope. This requires only an assumption that the field evolution is slow on the scale of a wavelength, and does not restrict the bandwidth of the pulse at all—as demonstrated vividly by. High harmonics High energy ultrashort pulses can be generated through high harmonic generation in a nonlinear medium. A high intensity ultrashort pulse will generate an array of harmonics in the medium; a particular harmonic of interest is then selected with a monochromator. This technique has been used to produce ultrashort pulses in the extreme ultraviolet and soft-X-ray regimes from near infrared Ti-sapphire laser pulses. Applications Advanced material 3D micro-/nano-processing The ability of femtosecond lasers to efficiently fabricate complex structures and devices for a wide variety of applications has been extensively studied during the last decade. State-of-the-art laser processing techniques with ultrashort light pulses can be used to structure materials with a sub-micrometer resolution. Direct laser writing (DLW) of suitable photoresists and other transparent media can create intricate three-dimensional photonic crystals (PhC), micro-optical components, gratings, tissue engineering (TE) scaffolds and optical waveguides. Such structures are potentially useful for empowering next-generation applications in telecommunications and bioengineering that rely on the creation of increasingly sophisticated miniature parts. The precision, fabrication speed and versatility of ultrafast laser processing make it well placed to become a vital industrial tool for manufacturing. Micro-machining Among the applications of femtosecond laser, the microtexturization of implant surfaces have been experimented for the enhancement of the bone formation around zirconia dental implants. The technique demonstrated to be precise with a very low thermal damage and with the reduction of the surface contaminants. Posterior animal studies demonstrated that the increase on the oxygen layer and the micro and nanofeatures created by the microtexturing with femtosecond laser resulted in higher rates of bone formation, higher bone density and improved mechanical stability. See also Attosecond chronoscopy Bandwidth-limited pulse Femtochemistry Frequency comb Medical imaging: Ultrashort laser pulses are used in multiphoton fluorescence microscopes Optical communication (Ultrashort pulses) Filtering and Pulse Shaping. Terahertz (T-rays) generation and detection. Ultrafast laser spectroscopy Wave packet References Further reading External links The virtual femtosecond laboratory Lab2 Animation on Short Pulse propagation in random medium (YouTube) Ultrafast Lasers: An animated guide to the functioning of Ti:Sapphire lasers and amplifiers. Nonlinear optics Laser science
349768
https://en.wikipedia.org/wiki/System%20requirements
System requirements
To be used efficiently, all computer software needs certain hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and are often used as a guideline as opposed to an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. A second meaning of the term system requirements is a generalisation of this first definition, giving the requirements to be met in the design of a system or sub-system. Recommended system requirements Often manufacturers of games will provide the consumer with a set of requirements that are different from those that are needed to run the software. These requirements are usually called the recommended requirements. These requirements are almost always of a significantly higher level than the minimum requirements, and represent the ideal situation in which to run the software. Generally speaking, this is a better guideline than minimum system requirements in order to have a fully usable and enjoyable experience with that software. Hardware requirements The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. A hardware requirements list is often accompanied by a hardware compatibility list (HCL), especially in the case of operating systems. An HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular operating system or application. The following sub-sections discuss the various aspects of hardware requirements. Architecture All computer operating systems are designed for a particular computer architecture. Most software applications are limited to particular operating systems running on particular architectures. Although architecture-independent operating systems and applications exist, most need to be recompiled to run on a new architecture. See also a list of common operating systems and their supporting architectures. Processing power The power of the central processing unit (CPU) is a fundamental system requirement for any software. Most software running on x86 architecture defines processing power as the model and the clock speed of the CPU. Many other features of a CPU that influence its speed and power, like bus speed, cache, and MIPS, are often ignored. This definition of power is often erroneous, as AMD Athlon and Intel Pentium CPUs at similar clock speeds often have different throughput speeds. Intel Pentium CPUs have enjoyed a considerable degree of popularity, and are often mentioned in this category. Memory All software, when run, resides in the random access memory (RAM) of a computer. Memory requirements are defined after considering demands of the application, operating system, supporting software and files, and other running processes. Optimal performance of other unrelated software running on a multi-tasking computer system is also considered when defining this requirement. Secondary storage Data storage device requirements vary, depending on the size of software installation, temporary files created and maintained while installing or running the software, and possible use of swap space (if RAM is insufficient).
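As a rough illustration of how an installer might compare a machine against minimum and recommended memory requirements, here is a hedged C sketch; it assumes a POSIX-like system where sysconf() exposes _SC_PHYS_PAGES and _SC_PAGESIZE (true of Linux with glibc, but not guaranteed everywhere), and the thresholds are invented example values.

#include <stdio.h>
#include <unistd.h>

#define MIN_RAM_MB  1024L  /* invented minimum requirement     */
#define REC_RAM_MB  4096L  /* invented recommended requirement */

int main(void)
{
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGESIZE);
    if (pages < 0 || page_size < 0) {
        fprintf(stderr, "cannot query physical memory on this system\n");
        return 1;
    }
    long ram_mb = (pages / 1024) * page_size / 1024; /* order chosen to avoid overflow */
    printf("detected %ld MB of RAM\n", ram_mb);
    if (ram_mb < MIN_RAM_MB)
        printf("below the minimum requirement (%ld MB)\n", MIN_RAM_MB);
    else if (ram_mb < REC_RAM_MB)
        printf("meets the minimum but not the recommended requirement\n");
    else
        printf("meets the recommended requirement (%ld MB)\n", REC_RAM_MB);
    return 0;
}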
Display adapter Software requiring a better-than-average computer graphics display, like graphics editors and high-end games, often defines high-end display adapters in the system requirements. Peripherals Some software applications need to make extensive and/or special use of some peripherals, demanding the higher performance or functionality of such peripherals. Such peripherals include CD-ROM drives, keyboards, pointing devices, network devices, etc. Software requirements Software requirements deal with defining the software resource requirements and prerequisites that need to be installed on a computer to provide optimal functioning of an application. These requirements or prerequisites are generally not included in the software installation package and need to be installed separately before the software is installed. Platform A computing platform describes some sort of framework, either in hardware or software, which allows software to run. Typical platforms include a computer's architecture, operating system, or programming languages and their runtime libraries. The operating system is one of the requirements mentioned when defining system requirements (software). Software may not be compatible with different versions of the same line of operating systems, although some measure of backward compatibility is often maintained. For example, most software designed for Microsoft Windows XP does not run on Microsoft Windows 98, although the converse is not always true. Similarly, software designed using newer features of Linux Kernel v2.6 generally does not run or compile properly (or at all) on Linux distributions using Kernel v2.2 or v2.4. APIs and drivers Software making extensive use of special hardware devices, like high-end display adapters, needs a special API or newer device drivers. A good example is DirectX, which is a collection of APIs for handling tasks related to multimedia, especially game programming, on Microsoft platforms. Web browser Most web applications and software depend heavily on web technologies to make use of the default browser installed on the system. Microsoft Internet Explorer is a frequent choice of software running on Microsoft Windows, which makes use of ActiveX controls, despite their vulnerabilities. Other requirements Some software also has other requirements for proper performance. Internet connection (type and speed) and resolution of the display screen are notable examples. Examples Published system requirement definitions for popular PC games illustrate the trend of ever-increasing resource needs; examples include StarCraft (1998), Doom 3 (2004), Star Wars: The Force Unleashed (2009) and Grand Theft Auto V (2015). See also Requirement Requirements analysis Software Requirements Specification Specification (technical standard) System requirements specification (SyRS) References Software requirements
20628468
https://en.wikipedia.org/wiki/Jing%20%28software%29
Jing (software)
Jing was a screencasting computer program launched in 2007 as Jing Project by the TechSmith Corporation. The software took a picture or video of the user's computer screen and uploaded it to the Web, FTP, computer or clipboard. If uploaded to the web, the program automatically created a URL to the content so it could be shared with others. On 6 January 2009, TechSmith released Jing Pro, a paid premium version of Jing. In February 2012, Techsmith announced Jing Pro is to be retired. All users (regardless of subscription) could use this service until 28 February 2013. On 14 July 2020, Techsmith shut down the support for uploading to Screencast.com in line with the previously-announced end of support for Jing. TechSmith also changed the Jing product page to point to a new product named TechSmith Capture that performs a similar function. See also Comparison of screencasting software References 2007 software Screencasting software Screenshot software MacOS text-related software Windows text-related software
20257969
https://en.wikipedia.org/wiki/Managed%20security%20service
Managed security service
In computing, managed security services (MSS) are network security services that have been outsourced to a service provider. A company providing such a service is a managed security service provider (MSSP). The roots of MSSPs lie in the Internet service providers (ISPs) of the mid to late 1990s. Initially, ISPs would sell customers a firewall appliance, as customer premises equipment (CPE), and for an additional fee would manage the customer-owned firewall over a dial-up connection. According to recent industry research, most organizations (74%) manage IT security in-house, but 82% of IT professionals said they have either already partnered with, or plan to partner with, a managed security service provider. Businesses turn to managed security service providers to alleviate the pressures they face daily related to information security, such as targeted malware, customer data theft, skills shortages and resource constraints. Managed security services (MSS) are also considered the systematic approach to managing an organization's security needs. The services may be conducted in-house or outsourced to a service provider that oversees other companies' network and information system security. Functions of a managed security service include round-the-clock monitoring and management of intrusion detection systems and firewalls, overseeing patch management and upgrades, performing security assessments and security audits, and responding to emergencies. Products are available from a number of vendors to help organize and guide the procedures involved, diverting the considerable burden of performing these chores manually away from administrators. The industry research firm Forrester Research identified the 14 most significant vendors in the global market in 2018 with its 23-criteria evaluation of managed security service providers (MSSPs), naming Accenture, IBM, Dell SecureWorks, Trustwave, AT&T, Verizon, Deloitte, Wipro and others as the leaders in the MSSP market. Newcomers to the market include a number of smaller providers used to protect homes, small businesses, and high-net-worth clients. Early history of managed security services An early example of an outsourced and off-site MSSP service is US West !NTERACT Internet Security. The service did not require the customer to purchase any equipment, and no security equipment was installed at the customer's premises. It is considered an MSSP offering in that US West retained ownership of the firewall equipment, and the firewalls were operated from the company's own Internet point of presence (PoP). The service was based on Check Point Firewall-1 equipment. Following a beta introduction period of over a year, the service was generally available by early 1997. The service also offered managed Virtual Private Networking (VPN) encryption security at launch. Industry terms Asset: A resource valuable to a company worthy of protection. Incident: An assessed occurrence that actually or potentially jeopardizes the confidentiality, integrity, or availability of an asset. Alert: Identified information, i.e. a fact, used to correlate an incident. Six categories of managed security services On-site consulting This is customized assistance in the assessment of business risks, key business requirements for security and the development of security policies and processes. It may include comprehensive security architecture assessments and design (including technology, business risks, technical risks and procedures).
Consulting may also include security product integration and on-site mitigation support after an intrusion has occurred, including emergency incident response and forensic analysis. Perimeter management of the client's network This service involves installing, upgrading, and managing the firewall, Virtual Private Network (VPN) and/or intrusion detection hardware and software, as well as electronic mail, and commonly performing configuration changes on behalf of the customer. Management includes monitoring, maintaining the firewall's traffic routing rules, and generating regular traffic and management reports for the customer. Intrusion detection management, either at the network level or at the individual host level, involves providing intrusion alerts to a customer, keeping up to date with new defenses against intrusion, and regularly reporting on intrusion attempts and activity. Content filtering services, such as email filtering and other data traffic filtering, may also be provided. Product resale Clearly not a managed service by itself, product resale is a major revenue generator for many MSS providers. This category provides value-added hardware and software for a variety of security-related tasks. One such service that may be provided is archival of customer data. Managed security monitoring This is the day-to-day monitoring and interpretation of important system events throughout the network—including unauthorized behavior, malicious hacks, denial of service (DoS), anomalies, and trend analysis. It is the first step in an incident response process. Penetration testing and vulnerability assessments This includes one-time or periodic software scans or hacking attempts in order to find vulnerabilities in a technical and logical perimeter. It generally does not assess security throughout the network, nor does it accurately reflect personnel-related exposures due to disgruntled employees, social engineering, etc. Reports are given to the client on a regular basis. Compliance monitoring This service conducts change management by monitoring the event log to identify changes to a system that violate a formal security policy. For example, if an impersonator grants himself or herself too much administrative access to a system, it would be easily identifiable through compliance monitoring. Engaging an MSSP The decision criteria for engaging the services of an MSSP are much the same as those for any other form of outsourcing: cost-effectiveness compared to in-house solutions, focus upon core competencies, need for round-the-clock service, and ease of remaining up-to-date. An important factor, specific to MSS, is that outsourcing network security hands over critical control of the company's infrastructure to an outside party, the MSSP, whilst not relieving the organization of the ultimate responsibility for errors. The client of an MSSP still has the ultimate responsibility for its own security, and as such must be prepared to manage and monitor the MSSP, and hold it accountable for the services for which it is contracted. The relationship between MSSP and client is not a turnkey one. Although the organization remains responsible for defending its network against information security and related business risks, working with an MSSP allows the organization to focus on its core activities while remaining protected against network vulnerabilities.
Business risks can result when information assets upon which the business depends are not securely configured and managed (resulting in asset compromise due to violations of confidentiality, availability, and integrity). Compliance with specific government-defined security requirements can be achieved by using managed security services. Managed security services for mid-sized and smaller businesses The business model behind managed security services is commonplace among large enterprise companies with their own IT security experts. The model was later adapted to fit medium-sized and smaller companies (SMBs - organizations up to 500 employees, or with no more than 100 employees at any one site) by the value-added reseller (VAR) community, either specializing in managed security or offering it as an extension to their managed IT service solutions. SMBs are increasingly turning to managed security services for several reasons. Chief among these are the specialized, complex and highly dynamic nature of IT security and the growing number of regulatory requirements obliging businesses to secure the digital safety and integrity of personal information and financial data held or transferred via their computer networks. Whereas larger organizations typically employ an IT specialist or department, organizations at a smaller scale such as distributed location businesses, medical or dental offices, attorneys, professional services providers or retailers do not typically employ full-time security specialists, although they frequently employ IT staff or external IT consultants. Of these organizations, many are constrained by budget limitations. To address the combined issues of lack of expertise, lack of time and limited financial resources, an emerging category of managed security service provider for the SMB has arisen. Organizations across sectors are now shifting from traditional in-house IT security practices to managed security services, and the trend of outsourcing IT security work to managed security services vendors is picking up at an appreciable pace. This also helps enterprises focus more on their core business activities as a strategic approach. Effective management, cost-effectiveness and seamless monitoring are the major drivers fueling the demand for these services. Further, with the increasing participation of leading IT companies worldwide, end-user enterprises are gaining confidence in outsourcing IT security. Service providers in this category tend to offer comprehensive IT security services delivered on remotely managed appliances or devices that are simple to install and run for the most part in the background. Fees are normally highly affordable to reflect financial constraints, and are charged every month at a flat rate to ensure predictability of costs. Service providers deliver daily, weekly, monthly or exception-based reporting depending on the client's requirements. Security tuning (firewall tuning, IDS tuning, SIEM tuning) IT security has become critically important as cyberattacks have grown highly sophisticated, and enterprises toil to keep pace with new malware variants and e-mail spoofing fraud schemes. Managed security service providers observe a growing need to combat increasingly complicated and targeted attacks. In response, these vendors are enhancing the sophistication of their solutions, in many cases winning over other security experts to expand their portfolios.
In addition, increasing regulatory compliance requirements associated with the protection of citizens' data worldwide are likely to push enterprises to ensure a high level of data security. Among the frontrunners in adopting managed security services are the financial services, telecommunications and information technology sectors. To maintain a competitive edge, MSS vendors focus increasingly on refining the product offerings and technologies deployed at client sites. Another crucial factor for profitability remains the capability to lower costs yet generate more revenue by avoiding the deployment of additional tools; simplifying both service creation and product integration improves visibility as well as interoperability. The MSS market is also expected to see strong growth in regions such as North America, Europe, Asia-Pacific, Latin America, and the Middle East and Africa. See also Information security operations center Security as a service References Further reading Computer network security Outsourcing
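To make the compliance-monitoring idea described above concrete, the sketch below scans a stream of event-log lines for administrative-rights grants that violate a stated policy. The log format, the account names and the GRANT_ADMIN keyword are hypothetical, illustrative assumptions rather than any vendor's actual log schema.

```python
import re

# Hypothetical policy: only these accounts may hold administrative rights.
AUTHORIZED_ADMINS = {"alice", "bob"}

# Toy log format, e.g. "2023-04-01 12:00:05 GRANT_ADMIN user=mallory by=mallory"
GRANT_PATTERN = re.compile(r"GRANT_ADMIN\s+user=(\w+)\s+by=(\w+)")

def find_policy_violations(log_lines):
    """Yield (line, reason) pairs for events that violate the admin policy."""
    for line in log_lines:
        match = GRANT_PATTERN.search(line)
        if not match:
            continue
        user, actor = match.groups()
        if user not in AUTHORIZED_ADMINS:
            yield line, f"unauthorized account '{user}' was granted admin rights"
        if user == actor:
            yield line, f"'{actor}' granted administrative access to themselves"

if __name__ == "__main__":
    sample = [
        "2023-04-01 11:59:58 LOGIN user=alice",
        "2023-04-01 12:00:05 GRANT_ADMIN user=mallory by=mallory",
    ]
    for event, reason in find_policy_violations(sample):
        print(f"ALERT: {reason} :: {event}")
```

Production compliance-monitoring tools work against structured event sources (Windows event logs, syslog, SIEM pipelines) rather than plain text, but the underlying pattern of matching events against a formal policy is the same.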
13736885
https://en.wikipedia.org/wiki/VNG%20Corporation
VNG Corporation
VNG Corporation (VNG) is a Vietnamese technology company, founded in 2004, specializing in digital content and online entertainment, social networking, and e-commerce. VNG focuses on four main businesses: online games, platforms, digital payments and cloud services. Many key products developed by VNG have attracted hundreds of millions of users, such as Zalo, ZaloPay, Zing MP3, and 123phim. The company is "Vietnam’s first ever unicorn start-up" according to The ASEAN Post. History VNG was founded on 9 September 2004 under the name VinaGame. 2006-2007: The company focused on developing software products for internet users such as "Internet Cyber Station Manager" (CSM) and 123mua.com.vn (e-commerce). The company also developed web products under the Zing brand – a comprehensive platform covering information, connection and entertainment for users – and in August 2008 Alexa ranked Zing MP3 as the most popular tool for listening to and searching for online music in Vietnam. 2008-2009: The company began operating under the name VNG Corporation. 2010-2011: VNG developed an online game called Thuận Thiên Kiếm, which won the "Sao Khue" award in 2010 in the category of Products/Gaming Solutions and Electronic Entertainment. The company also exported the "Un In" online game to Japan. 2012-2013: VNG caught the mobile trend by focusing on developing products for this platform. The most outstanding one was Zalo, a mobile application for instant messaging and calls, which reached 10 million users in only 1.5 years. 2014: VNG was valued at 1 billion US dollars by World Startup Report and became the first and, at the time, only unicorn startup of Vietnam. 2015: VNG was honored as a "Global Fast-Growing Enterprise in East Asia" by the World Economic Forum (Manila, Philippines). 2016: VNG launched ZaloPay, a mobile payment application. 2017: VNG was the first Vietnamese tech company to sign an MOU with the world's second-largest stock exchange, Nasdaq, to explore a US listing. 2018: VNG announced new strategic businesses such as finance and payments and cloud services. Data center VNG has two Tier-3 standard data centers, located in Ho Chi Minh City and Hanoi. Products and services Digital content and online entertainment Zuni Online Learning Zuni is an online non-profit education project operated and invested in by VNG and VNIF, officially launched in March 2014. In June 2014, Zuni had nearly 124,000 members, with more than 4,000 members active every day. Zuni was regarded as one of the more capable education projects soon after its launch, with 1,466 exam samples, 618 video lectures and 124 major topics available. Zing MP3 Zing MP3, a music streaming service, was launched in August 2007 and currently consists of two versions: a website and an app on the iOS, Android and Windows Phone platforms. The Zing MP3 smartphone application has been one of the most frequently downloaded apps in Vietnam in the past few years; when it was brought onto the mobile platform, Zing MP3 was an outstanding app in Vietnam with over 5 million downloads on Android. Games VNG is one of the four main game publishers in the Vietnamese market. It produces, develops and publishes online games globally, as well as importing and publishing well-known games in the local market. Highlighted products: Sky Garden, Võ Lâm Truyền Kỳ, Crossfire Legend, 360 Games, and others. Community connections Zing Me social network Zing Me is a social network operated by VNG, which was introduced in August 2009.
It is integrated directly via the Zing system with a variety of applications such as blogging, photo and music sharing, gaming, video clips and email. In addition, Zing Me was the first social network in Vietnam that had the properties of a platform: it allows third parties to develop apps that use the common infrastructure and share users via an open API (Application Programming Interface) in order to diversify the system's content. In March 2010, Zing Me launched its first version for mobile phones. With those achievements, Zing Me won "Outstanding Value-added Service in 2009", awarded by the HCMC Department of Information and Communications on 27 March 2009, and the "Sao Khue" Award in 2010 for being rated 4-star in the group of value-added products and services on mobile/internet. Two years after release, Zing Me reached the 8.2 million-user milestone in October 2011. Not only does Zing Me allow users to share status updates and blog posts, it also allows them to communicate via photos, voice and emoticons. Zing Me has grown in popularity, with more than one million photos shared by users every day since June 2013. Zalo Zalo is a free messaging and calling application for mobile and desktop, released on 8 August 2012 for iOS, Android and Windows Phone. At the end of May, VTV used Zalo as a bridge for the community to share their feelings and thoughts and send messages of encouragement to the people and soldiers on duty in the Spratly Islands. Finance and payments 123Pay To meet internal needs as well as to develop the external market, in 2010 VNG invested in R&D for an online payment platform called 123Pay. The product inherits from and is built on the payment platform of ZingPay (a payment platform used for VNG's online games since 2005). ZaloPay ZaloPay is VNG's up-to-date digital payment and personal payment platform, a mobile wallet. Applications Laban Key Laban Key is a Vietnamese keyboard for mobile devices developed by VNG. It was introduced in September 2013 by the project manager Pham Kim Long, who is also the developer of the Windows input method editor UniKey. Software CSM Cyber Station Manager (CSM) is free software from VNG for managing Internet shops. The first version of CSM was published by VNG on 2 February 2006. CSM is the most popular software for managing internet shops in Vietnam, with over 60% market share and over 2 million downloads every day. There are currently 25,000 Internet shops nationwide using CSM to manage play time, extra services, game updates, and automatic applications. CSM is also certified against the national standard TCVN 8702:2011, under Certificate of Conformance no. B0001310314CS01A3 dated 31 March 2014. This Certificate of Conformance is awarded based on the most recent procedures issued in Decision no. 350/QĐ-CVT (new) on September 12, 2013 by the Vietnam Telecommunications Authority – Vietnam Ministry of Information and Communications, a substitute for Decision no 75/QĐ-QLCL. In addition, CSM won the Sao Khue Award in 2009, 2012 and 2014. Cloud services VNG provides full-stack cloud services for organizations and businesses, with smart tech solutions linked through Internet connectivity and cloud technology. Highlighted products: 123CS, Cloud server, IoT HUB, vCloudStack. VNG culture VNG's headquarters are located in Ho Chi Minh City. People and corporate culture are two key elements of VNG.
Guided by its three core values (Embracing Challenges, Advancing Partnership and Upholding Integrity), VNG staff ("VNG Starters") are expected to keep a dedicated spirit toward the development of VNG and the community. Scandal VNG in particular, and the online game industry in general, are closely watched by authorities because of the risk of social problems related to excessive gaming. Violence In December 2010, inspectors of the HCMC Department of Information and Communications required VinaGame to eliminate violence in its online games. The game "Sudden Attack" was closed in HCMC on 17 October 2011. Privacy The inspectors examined VNG's compliance with computer software copyright laws, leading to an official announcement fining VinaGame 10 million dong for copyright infringement. In addition, the inspectors ordered VinaGame to take down the infringing software and to commit to contacting the authors to discuss licensing. After receiving the decision from the Ministry of Culture, Sports and Tourism, VinaGame had to examine every computer currently in use, prepare a list, and pay copyright fees for the work-related software. Copyright infringement In August 2007, listeners accessing Zing MP3, a musical entertainment tool, could find songs by popular names such as Bao Thy, Dan Truong, Duy Manh and Cam Ly, and even Thu Phuong and Bang Kieu, neither of whom was allowed to publish songs in Vietnam. According to some lawyers, websites which enable searching, collecting and displaying songs, and allow online listening, infringe copyrights. On 23 and 24 February 2009, inspectors discovered software copyright infringement worth VND 5 billion (around US$295,000) during a surprise raid on VinaGame's headquarters in Ho Chi Minh City. In August 2020, VNG sued TikTok for $9.5 million on allegations of copyright infringement. Awards September 2013: Chairman of the HCMC People's Committee Le Hoang Quan awarded merits to VNG in recognition of its achievements in service business and its continuous participation in charities and social activities from 2008 to 2012. October 2013: At the "Awarding Ceremony of HCMC Entrepreneurs 2013", VNG stood among 105 recipients of the award for "Outstanding HCMC Entrepreneurs 2013". April 2014: VNG received the "Third-class Labor Medal" signed by the President of the Socialist Republic of Vietnam, and Merits were awarded by the Prime Minister to Lê Hồng Minh, Chairman and CEO of VNG. References External links Companies established in 2004 Online companies of Vietnam Vietnamese brands Companies based in Ho Chi Minh City Vietnamese companies established in 2004 Software companies established in 1996
35349168
https://en.wikipedia.org/wiki/Legend%20of%20Grimrock
Legend of Grimrock
Legend of Grimrock is an action role-playing video game developed and published by Almost Human. The title is a 3D grid-based, real-time dungeon crawler based on the 1987 game Dungeon Master. It was originally released for Microsoft Windows in April 2012, and later ported to OS X and Linux in December 2012 and to iOS in May 2015. Legend of Grimrock was the debut game of Almost Human, a four-man Finnish indie development team formed in February 2011, which self-financed the title's development. A sequel, Legend of Grimrock II, was released in October 2014. Gameplay Legend of Grimrock is a first-person action role-playing game with tile-based movement and real-time game mechanics. Players control a party of one to four characters which they move through a 3D-rendered grid-based dungeon, a style of gameplay popular in RPGs of the late 1980s and early 1990s such as Dungeon Master and Eye of the Beholder, from which Legend of Grimrock draws heavy inspiration. Gameplay consists of a combination of puzzle solving and combat. Characters within the party gain experience for slaying creatures and beasts within the dungeon, allowing them to increase in level and progress skills which enhance combat abilities and allow the casting of new spells. Equipment is obtained through exploration and the solving of puzzles throughout the dungeon. Many of the harder puzzles throughout the game are designed as bonuses, being optional to the progression through the dungeon but granting superior items and equipment for solving them. In reference to its classic roots, the player has the option to switch on "old-school mode" when beginning a new game. In this mode the game's map system is deactivated, leaving navigation through the dungeon's grid to the player alone; this references the older games on which Grimrock is based, which left remembering routes and paths through the dungeon entirely to the player. The game's digital manual contains a printable grid sheet which encourages players to chart their course through the game to this end. Plot On top of Mount Grimrock, an airship carries a group of prisoners escorted by armed knights. The prisoners, convicted by "the court" of crimes against the King, have been sentenced to be thrown into the pit of Mount Grimrock, at which point their crimes will be absolved. However, no prisoner pardoned in this manner has ever returned. On being sealed within the mountain, the prisoners make their way downwards through the levels of Grimrock Dungeon, guided by a disembodied voice which comes to them in their sleep, promising that a way of escape for both it and the party awaits at the bottom of the dungeon. The party also occasionally finds notes from a previous wanderer of the dungeon named Toorum, who, aside from offering clues to certain puzzles and hidden stashes of equipment, talks about his experiences of the dungeon's periodic tremors and the dungeon's design seemingly meant to be "traversed from the top down". Eventually the party reaches the bottom level, signposted as "Prison". Inside, the source of the voice guides the player to reconstruct a broken machine which will activate a portal out of the dungeon; however, upon assembling the parts and repairing it, the voice is revealed to be the machine itself, which manifests as a giant mechanical clockwork cube which attempts to crush the party.
Escaping through a portal, the party locates the tomb of the creators of Grimrock Dungeon, who left behind scrolls explaining the dungeon's purpose of containing the machine, which they refer to as "the Undying One", until "the gears of time finally come to rest". The tomb also contains a weapon designed to be used in the event of the Undying One's escape from confinement. Using the weapon, which temporarily stuns the machine, the party disassembles the parts which it used to repair the Undying One and deals a fatal blow to it using spells of lightning. The Undying One eventually explodes and falls apart, triggering another tremor which shakes the dungeon apart. The last scenes show the party running down a stone hallway of the dungeon, before a beam of pale blue light explodes from Mount Grimrock, ascending to the sky. A gigantic crater is shown to be all that remains of Mount Grimrock, the final fate of the prisoners seemingly unknown. Development and release In 2001, Legend of Grimrock began as a Dungeon Master clone hobby project called Dungeon Master 2000, created by a former Amiga demoscene coder. Later it was renamed Escape from Dragon Mountain and released in its final version in 2004. In early 2011 the developers decided to grow the project beyond being a simple clone and started aiming for commercial game quality. The developers also used other games as inspiration, including Eye of the Beholder and Ultima Underworld. The developers established an indie video game company called "Almost Human", located in Matinkylä, Espoo, in Finland. The four founding developers left the Finnish video game industry (Remedy Entertainment, Futuremark) and started working full-time on the game, now named Legend of Grimrock. A forum and development blog were set up and updated frequently. The game was first released for the Windows platform on 11 April 2012 in DRM-free versions on the developer's website and GOG.com, as well as in a Steam version. On 4 October 2012, with the release of patch 1.3.1, an editor was included which allowed the creation of user-generated dungeons and content and led to the development of a vibrant modding community. On 19 December 2012 the game was released as part of the Humble Indie Bundle 7, including newly developed ports of the game for Mac OS and Linux. While the Mac and Linux ports were released in December 2012, the iOS port did not materialize until May 2015. In 2014 a live-action series set in the Grimrock universe, from Wayside Creations (the makers of Fallout: Nuka Break), was funded through Kickstarter, but never produced. An announced sequel, Legend of Grimrock II, was released on 15 October 2014. Reception Legend of Grimrock was generally well received by both critics and gamers upon release, garnering an average critic score of 82 from 49 reviews on Metacritic. Reviewers generally praised the game for faithfully bringing the gameplay of old-school action RPGs into the modern era. GameSpy gave the game a 4.5/5 "Great" rating, saying that "the best aspect of Grimrock is its puzzles, which have been largely abandoned by RPG developers in this age of Internet hints and walkthroughs." Destructoid's Patrick Hancock awarded the game a score of 95/100, claiming "Grimrock takes an old-school feel and injects it seamlessly into the modern era."
Edge said that the game wasn't "a love letter to Dungeon Master" but "a near-facsimile", saying that "Legend Of Grimrock replicates a classic faithfully enough to massage the nostalgia glands of anyone who played the original, and it's a test of the timelessness of an almost universally loved game." Some reviewers commented negatively on the game's combat system. Jon Blyth of PC Gamer praised the game's revival of Dungeon Master's classic gameplay, but commented on the "exploitability" of the enemy AI, as "any single enemy, no matter how tough, can be dominated by a series of cowardly stab-retreats and sidesteps". In their 7.25/10 review, Game Informer also stated that while the game's modern presentation of an old-school game format was a "beautiful marriage", it failed in that "its lazy monster design encourages the worst kind of tedious, mechanically abusive player behavior, though, which is a grave offense in the world of party-based RPGs." In January 2013, Almost Human announced that the game had sold over 600,000 copies. In October 2014, Almost Human announced that the game had sold 900,000 copies. In 2017, the game was selected for a collection of 100 Finnish games, which were presented at the opening of the Finnish Museum of Games in Tampere. References External links 2012 video games First-person party-based dungeon crawler video games Fantasy video games Indie video games IOS games Linux games Lua (programming language)-scripted video games MacOS games Role-playing video games Single-player video games Video games with Steam Workshop support Video games developed in Finland Video games featuring protagonists of selectable gender Windows games
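The tile-based movement described above can be illustrated with a small sketch: the party occupies one cell of a grid and may step to an adjacent cell only if that cell is not a wall. This is a toy illustration of the general grid-crawler style, not code from Legend of Grimrock; the map, names and move set are invented for the example.

```python
# Directions for four-way, tile-based movement on a dungeon grid.
MOVES = {"north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0)}

# A tiny hand-made dungeon map: '#' is a wall, '.' is a walkable tile.
DUNGEON = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def try_move(position, direction):
    """Return the new (x, y) if the target tile is walkable, else stay put."""
    dx, dy = MOVES[direction]
    x, y = position[0] + dx, position[1] + dy
    if DUNGEON[y][x] == ".":
        return (x, y)
    return position

if __name__ == "__main__":
    party = (1, 1)
    for step in ["south", "south", "east", "north"]:
        party = try_move(party, step)
        print(f"after moving {step}: party at {party}")
```

The "old-school mode" mentioned in the gameplay section simply removes the automatic map of such a grid, leaving the player to record visited cells on paper.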
5929146
https://en.wikipedia.org/wiki/PikeOS
PikeOS
PikeOS is a commercial, hard real-time operating system (RTOS) that offers a separation-kernel-based hypervisor with multiple logical partition types for many other operating systems (OS), each called a GuestOS, and applications. It enables users to build certifiable smart devices for the Internet of things (IoT) according to the high quality, safety and security standards of different industries. For safety- and security-critical real-time applications on controller-based systems without a memory management unit (MMU) but with a memory protection unit (MPU), PikeOS for MPU is available. Overview PikeOS combines a real-time operating system (RTOS) with a virtualization platform and an Eclipse-based integrated development environment (IDE) for embedded systems. It is a commercial clone of the L4 microkernel family. PikeOS has been developed for safety- and security-critical applications with certification needs in the fields of aerospace, defense, automotive, transport, industrial automation, medical, network infrastructures, and consumer electronics. A key feature of PikeOS is its ability to safely execute applications with different safety and security levels concurrently on the same computing platform. This is done by strict spatial and temporal segregation of these applications via software partitions. A software partition can be seen as a container with pre-allocated privileges that can have access to memory, central processing unit (CPU) time, input/output (I/O), and a predefined list of OS services. With PikeOS, the term application refers to an executable linked against the PikeOS application programming interface (API) library and running as a process inside a partition. The nature of the PikeOS API allows applications to range from simple control loops up to full paravirtualized guest operating systems like Linux or hardware-virtualized guests. Software partitions are also called virtual machines (VMs), because it is possible to implement a complete guest operating system inside a partition which executes independently from other partitions and thus can address use cases with mixed criticality. PikeOS can be seen as a Type 1 hypervisor. Supported toolchain, IDE CODEO The Eclipse-based IDE CODEO supports system architects with graphical configuration tools, providing all the components that software engineers will need to develop embedded applications, as well as including comprehensive wizards to help embedded project development in a time-saving and cost-efficient way: Guided configuration Remote debugging (down to the hardware instruction level) Target monitoring Remote application software deployment Timing analysis Several dedicated graphical editing views help the system integrator keep an overview of important aspects of the PikeOS system configuration, showing partition types, scheduling, communication channels, shared memory and I/O device configuration within partitions. Projects can be easily defined with the help of reusable templates and distributed to the development groups. Users can configure predefined components for their project and can also define and add other components during the development process.
Key benefits Real-time operating system including a type 1 hypervisor designed for highly flexible configuration Supports fast or secure boot times Support for mixed criticality via a separation kernel in one system Configuration of partitions with time and hardware resources Kernel-space and user-space drivers supported Hardware independence between processor types and families Easy migration processes and high portability on single- and multi-core Developed to support certification according to multiple safety & security standards Reduced time to market via standard development and verification tools Wide range of supported GuestOS types and APIs No export restriction: European solution Certification standards Safety certification standards according to: Radio Technical Commission for Aeronautics (RTCA) – DO-178B/C International Organization for Standardization (ISO) – 26262 International Electrotechnical Commission (IEC) – 62304, 61508 EN – 50128, 50657 Security certification standards according to: Common Criteria Partner ecosystem SYSGO is committed to establishing the technology and business partnerships that will help software engineers to achieve their goals. SYSGO is working with about 100 partners globally. An excerpt of partners per category is given below: Board vendors: Curtiss-Wright Controls Embedded Computing, Kontron, MEN or ABACO Silicon vendors: NXP, Renesas, Texas Instruments (TI), Xilinx, Infineon, NVidia or Intel Software partners: CoreAVI, wolfSSL, Aicas, AdaCore, Esterel, RTI, PrismTech, Datalight, Systerel, Imagination Technologies or RAPITA Tool partners: Lauterbach, Vector Software, Rapita, iSYSTEM Supported architectures: ARM, PowerPC, x86, or SPARC (on request) Supported GuestOS types Linux or Android (ideally the SYSGO Linux distribution ELinOS) POSIX PSE51 with PSE52 extensions ARINC 653 RTEMS Java AUTOSAR Ada, including the Ravenscar profile and others References External links SYSGO PikeOS Official Product Site PikeOS Product Note (PDF) PikeOS Flyer (PDF) Real-time operating systems Microkernels Virtualization software Embedded operating systems ARM operating systems Microkernel-based operating systems
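As a rough illustration of the temporal segregation that a separation kernel enforces, the toy Python sketch below walks through a fixed time-partition schedule in which each partition has exclusive use of the processor during its window. The partition names, window lengths and the code itself are invented for illustration; this is neither PikeOS configuration syntax nor its API.

```python
# Hypothetical static time-partition schedule, loosely modelled on the fixed
# time slicing used by separation kernels. Illustrative only.
SCHEDULE = [
    ("flight_control_partition", 20),  # (partition name, window length in ms)
    ("display_partition",        10),
    ("linux_guest_partition",    20),
]

def run_major_frames(frames=2):
    """Simulate a few major frames of a static time-partition schedule."""
    clock_ms = 0
    for frame in range(frames):
        for name, window_ms in SCHEDULE:
            # Within its window a partition has exclusive use of the CPU;
            # other partitions cannot run, which bounds their interference.
            print(f"t={clock_ms:4d} ms  frame {frame}: {name} runs for {window_ms} ms")
            clock_ms += window_ms

if __name__ == "__main__":
    run_major_frames()
```

The point of such a fixed schedule is predictability: a fault or overrun inside one partition cannot consume time budgeted to another, which is what allows applications of different criticality to share one processor.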
868236
https://en.wikipedia.org/wiki/Calma
Calma
Calma Company, based in Sunnyvale, California, was, between 1965 and 1988, a vendor of digitizers and minicomputer-based graphics systems targeted at the cartographic and electronic, mechanical and architectural design markets. In the electronic area, the company's best known products were GDS (an abbreviation for "Graphic Design System" [GDS78]), introduced in 1971, and GDS II, introduced in 1978. By the end of the 1970s, Calma systems were installed in virtually every major semiconductor manufacturing company. The external format of the GDS II database, known as GDS II Stream Format, became a de facto standard for the interchange of IC mask information. The use of this format persisted into the 21st century, long after the demise of the GDS II computer system. In the integrated circuit industry jargon of 2008, "GDS II" no longer referred to the computer system, but to the format itself. Vendors of electronic design automation software often use the phrase "from RTL to GDSII" to imply that their system will take users from a high-level logic design to a completed integrated circuit layout ready for delivery to the mask vendor. In the mechanical area, the DDM (for "Design Drafting and Manufacturing") product was introduced in 1977. It was later extended, under the name "Dimension III", to address the architecture, engineering and construction (AEC) market. By 1983, these two products together accounted for 60% of Calma's revenue. [WEI08] Dimension III continued to be used as late as the late 1990s. History Calma Company was incorporated in California on November 13, 1963. Its initial business was as a product distributor, continuing the business of a previously existing partnership of the same name. [UT78] The company took its name from its founders, Calvin and Irma Louise Hefte. In 1965 Calma introduced the Calma Digitizer, a device consisting of a table-like surface with a constrained cursor, whereby an operator could enter coordinate data from a paper drawing and have it turned into computer-readable form. In about 1969, the company undertook to develop a minicomputer-based graphics system built around a digitizer. This effort was spurred by the arrival of Josef Sukonick, a recent MIT math PhD who had become aware of the market potential for such a system for integrated circuit (IC) design through his work at the CAD (computer-aided design) group of Fairchild Semiconductor in Sunnyvale, CA. The GDS software system was conceived and, in its initial implementation, almost single-handedly built by Dr. Sukonick. The first GDS system was shipped in late 1971 to Intel. The growth of sales of GDS paralleled that of the nascent integrated circuit industry. By August 1976 there were 121 GDS systems installed at 70 companies, including many Fortune 500 corporations such as Motorola, International Telephone & Telegraph, Fairchild Semiconductor, and others. Of these, 43 were installed outside the U.S. [INT76] In 1978, Calma, which never had a public stock offering, was acquired by United Telecommunications, Inc. (UTI) of Kansas City, Missouri, for $17 million in stock. Calma became part of UTI's United Computing Systems (UCS) operating unit. UTI took a hands-off approach to managing its acquisition, allowing Calma to continue largely unchanged on its growth path. In 1978, Calma introduced GDS II (pronounced "G-D-S two"), a modernized replacement for GDS. With its 32-bit database, GDS II met the need for greater capacity and resolution in IC designs.
GDS II quickly replaced GDS as the data entry system of choice for many IC design groups. By late 1980, there were 171 installed GDS II systems. [SCH81] In December 1980, the sale of Calma by UTI to General Electric (GE) was announced. The sale price was $110 million, with an additional $60 million contingent on Calma's profits over the next five years. The acquisition was completed on April 1, 1981. [BW80,SJM80] GE had grander designs for Calma than had UTI. In addition to the hope of maintaining dominance in the IC market, GE aimed for Calma to expand in the architectural, engineering, manufacturing and construction markets – "factory of the future" was a prominent slogan. Due partly to a mass exodus of talent after GE moved its own people into key management positions, partly due to excessive expectations, the changing nature of the market and the inherent difficulty of keeping up with rapidly changing technology, these ambitions went largely unrealized. Beginning in 1988, GE sold Calma. The electronic side of the business was sold to Valid Logic Systems in April 1988. (Valid in turn was acquired by Cadence Design Systems in 1991). The remainder of the business (mechanical/architectural) was acquired by Prime Computer in a sale completed in January 1989. [WEI08] Prime had just completed a hostile take-over of Computervision. Prime basically merged the Calma Mechanical and AEC product lines with Computervision. Computervision, including the Dimension III product, was acquired by Parametric Technology Corporation in 1998. Business and financial The following data on sales, earnings, and employee count are drawn from a number of sources. Financial data 1973–1977 are from [UT78]. At the time of the 1978 acquisition by UTI, the largest shareholder was Calma board chairman Ronald D. Cone. He held 321,706 of Calma's 635,266 outstanding shares. [UT78] Legal In February 1977, Computervision (CV) filed suit in federal court over Calma's hiring of a group of 5 employees from CV in San Diego. (This group developed Calma's DDM product.) The CV suit against Calma and the five employees alleged breach of competition, breach of non-competition agreements, and interference with contractual relations. This draining lawsuit was finally settled out of court in October 1979. In the UTI acquisition of Calma in 1978, 5% of the newly issued stock was held in escrow as a reserve pending the outcome of this litigation. [UT78] Buildings As early as 1970, Calma occupied a building at 707 Kifer Road in Sunnyvale. Roughly , the building consisted of a large warehouse/manufacturing area in the rear, with an office area of about 10 offices in the front. Somewhat later, an additional building to the rear (on San Gabriel Drive) was leased as a manufacturing/shipping area, bringing total square footage to 35,000. In February 1978, the company relocated to a single-story building at 527 Lakeside Drive in Sunnyvale, part of the newly developed Oakmead Village industrial park. Additional buildings were added as the employee count grew. In 1979, the R&D department moved to a building at 212 Gibraltar Drive (corner of Borregas Avenue) in the Moffett Park area of Sunnyvale. Other buildings were added in the area. In 1980 a new manufacturing facility was opened in Milpitas, California. [WEI08] In 1982 a new headquarters was opened in Santa Clara, California. [SJM82] In 1984 Calma bought a facility near Dublin, Ireland, that had originally been built for Trilogy Systems. 
[WEI08] Products General description (1978) The following is quoted from [UT78]: Calma's computer-aided design and drafting systems (also referred to as interactive graphics systems) are component hardware modules, electronic interfaces, and software programs. Most of the systems sold are constructed by combining available components to meet the requirements of the customers' specific design or drafting application. Calma's systems enable customers to automate a wide variety of design and manufacturing processes which have previously been performed manually. The primary hardware components of a system are a central processing unit, operator stations and plotter outputs. The GDS I and II software operated on Data General Corporation's Nova and Eclipse line of 16-bit minicomputers. Sketches or layouts of electronic systems were first manually drawn on mylar or paper to scale and were placed on large backlit 48 by 60 inch table digitizers. Using a moving stylus, these layouts were organized in layers, first placing the smaller common and custom circuits, created in a library, then manually tracing their interconnecting circuitry on further layers, with the completed layout then stored in computer files. Printed circuit boards (PCBs) and small-scale integrated circuits (SSICs) were manually traced by an operator, usually a draftsman or electronic engineer, then plotted on a large pen plotter (in later years on faster electrostatic plotters) to be visually inspected to confirm that the physical layout properly matched the schematic. Once the final edits to the layout and schematics were manually checked to confirm their accuracy, the multiple layers of the physical circuitry were sent to a film plotter to create masks for fabrication. The central processing unit consists of a minicomputer, a computer console and page printer, a magnetic tape transport and a magnetic disk memory unit. Other optional peripheral devices such as card readers and paper tape punches are also available. These components are interfaced with Calma-designed and manufactured controllers, and integrated into a single unit with system software designed and programmed by Calma. An operator station consists of a digitizing device, an interactive cathode ray tube (CRT) display unit, coordinate readouts and a keyboard. The main difference between stations is in the type of digitizing input station used. The Calma digitizer is a backlit 48 by 60 inch table. To digitize analog graphical data directly onto a computer-compatible medium, the operator of the digitizer manually traces graphical data with a moveable stylus. The graphic tablet has a smaller surface, is operated with an electromechanical graphic pen, and was used primarily to edit an electronic layout once it was digitized. The digitizing input station is linked by system software to the CRT display, which allows an almost instantaneous display of any segment of the source drawing or a graphic element from the library. The CRT display also has windowing and magnification capability. An alphanumeric keyboard is used for entering text, scaling information, dimensions and commands, and an optional functional keyboard is available for entering frequently used functions, symbols and macro commands. The output most commonly used in Calma's systems is a graphic plotter. Calma software supports both on- and off-line pen and photo plotting devices. Calma's computer-aided design systems are used in a wide variety of applications.
To date systems have been sold principally to electronics firms for use in the design of integrated circuits, printed circuit boards and electrical schematics; to governmental agencies and public utilities for use in cartographic applications; and to manufacturing companies for use in the design of mechanical parts and systems. Calmagraphics/CGI GDS GDS II For an overview of the GDS II system as it was 1981, see [SCH81]. Some scanned product documents can be found at [GDS78] and [GDS79]. The original GDS system used Data General mini-computers to digitize and assemble chip designs. The UI consisted of simple one or two letter commands, and a set of colored lights as a response. The operator typed a command, and if the green light came on, it was successful. GDSII introduced an actual Command Line Interface (CLI), where the user typed commands that were echoed back to the screen. GDS systems had no text screen at all, just a "green-screen" oscilloscope type display. GDSII introduced a text display, using a regular text terminal in addition to the 'scope screen, and also introduced the first color screens. This ushered in the new method of "online design", where the drafting employees actually sat at the screen and drew the chips. In the older GDS systems, an operator took the mylar drawings and digitized them in. A typical GDSII system in 1980 would have a 300mb disk, 1/2 mb of memory, using a 16bit DG minicomputer and up to 4 screens. This cost over $500,000 in 1980 dollars. DDM DDM (short for "Design, Drafting, Manufacturing") was a 3-dimensional wireframe computer-aided design application. In the mid-1980s, it was one of the top ten selling CAD packages on the market. By 2006, DDM continued to be supported by Parametric Technology Corporation as "Dimension III". The General Motors Central Foundry Division (GM-CFD) had applied DDM to the design of castings and tooling for automotive components such as engine blocks, cylinder heads and steering knuckles. DDM was run on Calma's proprietary dual monitor workstation hardware connected to Data General Nova and later Digital VAX 11/780-series computers. GM-CFD had DDM installations in Saginaw, MI, Pontiac, MI, Defiance, OH, Bedford, IN, Danville, IL and Massena, NY. People This section gives thumbnail sketches of people who had significant roles at Calma over the years. This includes managers, key technical contributors, and the like, with special emphasis on people who went on to achieve recognition in the EDA industry. The years after each name are years of employment at Calma. John Benbow 1981–1983 1981 VP of R&D, previously at Dataskil (UK) Robert Benders 1968–1983 1968 hired as chief engineer, from Lockheed 1971–1983 CEO of Calma Enrico F. Biondi BSEE, MSEE, and PhD,EE, Stanford University Died 2003, age 65 Lemuel D. Bishop 1972 VP of finance Arthur J. Collmeyer 1974–1981 1969 PhD Southern Methodist University. electrical engineering 1974 joined Calma from Xerox as VP of R&D 1981 co-founder of Weitek, CEO 1981–1988, Chairman 1992 Died 2011, age 70 Ronald D. Cone Michael L. Courtade 1973–1978 1974 manager of GDS software development Eugene W. Emmerich 1977 VP of marketing Gerry Devere 1976 DDM development team later head of DDM R&D Thomas S. Hedges 1973–1984 1972 BS Caltech engineering 1977 member GDS II development team 1991 co-founder and chairman of Fractal Design Corp. 1997–2000 chief systems architect Metacreations, Inc. 
Died 2007, age 57 Calvin Hefte and Irma Louise Hefte Calma's eponymous founders Cal is deceased [HOL90] Irma reported to have been running flower shop with daughters "Carousel of Flowers", Los Gatos, CA [HOL90]; died September 27, 1992 Andrew Hidalgo 1987–1988 eastern regional manager founder & CEO of WPCS International Incorporated Harvey C. Jones Jr. 1974–1981 hired 1974 as application engineer in Reston, VA office later manned the Boston sales office. 1980 named VP of MED (microelectronics division) Business Development 1981 co-founded Daisy Systems Corp 1987 first CEO of Synopsys (chairman of board 1994–1998) co-founder of Tensilica Daniel McGlaughlin PhD EE Case Western Reserve University veteran of IBM, GE president of Calma 1984–1989 president and CEO of Equifax 1996–1998 William Nickels 1973– 1973 manager of software development 1975 manager of customer support Thomas J. Schaefer 1972–1981 Math PhD UC Berkeley 1978 1977 member GDS II design team VLSI Technology Inc 1981–1990 Compass Design Automation 1990–1997 Synopsys 1997–2000 Carl Smith 1973– 1977 member GDS II design team Cadence Design Systems Simplex Solutions Robert Smuland 25-year GE veteran 1983–1984 president of Calma Roger Sturgeon BSEE & MSEE-CS UC Berkeley 1977 head of GDS II design team co-founder of Transcription Enterprises Limited (acquired in 2000 by Numerical Technologies, Inc) 2007 overall winner of Sydney to Hobart Yacht Race Josef S. Sukonick 1969 PhD MIT math creator of GDS Robert Young 1973– 1973 manager documentation & training 1975 manager of customer support Mark Zimmer member GDS II development team 1991 co-founder of Fractal Design Corp, CEO 1991–1997 In 2002, the industry organization SEMI presented the annual SEMI Award for North America to the team of Roger Sturgeon, Carl Smith, Tom Hedges, Tom Schaefer and Mark Zimmer for their contribution to the GDS II interchange format. References Notes [BEN73] Robert Benders, Calma memo dated December 18, 1973 [BIS81] Lemuel D. Bishop, article in the first issue of the yet-to-be-named "Viewport" Calma company employee newsletter, Feb 1981 [BW80] Business Week December 22, 1980, p 22 [FOR80] Forbes, September 29, 1980, p 146 (interview with Paul Henson, chair of UTI) [GDS78] Calma GDS II Graphic Design System User's Operating Manual First Edition 1978. (260 pages) Online at http://www.bitsavers.org/pdf/calma/GDS_II_Users_Operating_Manual_Nov78.pdf Retrieved Apr 21, 2020. [GDS79] Calma GDS II Product Specification Draft dated April 13, 1979. (45 pages)Online at http://www.bitsavers.org/pdf/calma/GDS_II_Product_Specification_Apr79.pdf Retrieved Apr 21, 2020. [HOL90] Bruce Holloway web post quoted in "Ray Tracing News", vol. 3 no. October 4, 1, 1990. https://web.archive.org/web/20180524215455/http://jedi.ks.uiuc.edu/~johns/raytracer/rtn/rtnv3n4.html [INT76] Calma internal document dated 8/06/76 [PH74] Calma company phone list dated 9/11/74 [PH76] Calma company phone list dated November 17, 1976 [SCH81] Thomas J. Schaefer, "GDS-II : An efficient and extensible VLSI design system", IEEE Compcon, Spring 1981, pp. 333–336 [SFC81] San Francisco Chronicle, 11/04/81 [SJM80] San Jose Mercury News, 12/09/80 [SJM82] San Jose Mercury News, April 19, 1982, p 3D [UT78] United Telecommunications prospectus dated August 16, 1978 and Calma Company proxy statement for special shareholder meeting August 31, 1978 to approve merger of Calma with UTI [WEI08] David E. 
Weisberg The Engineering Design Revolution: The People, Companies and Computer Systems That Changed Forever the Practice of Engineering (2008 – online book) http://www.cadhistory.net. Chapter 11 is devoted to Calma. American companies established in 1965 American companies disestablished in 1988 Companies based in Sunnyvale, California Computer companies established in 1965 Computer companies disestablished in 1988 Defunct companies based in California Defunct computer companies of the United States Electronic design automation companies
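Because the GDS II Stream Format discussed above is still exchanged between design tools, a minimal reader is easy to sketch: the file is a sequence of records, each beginning with a big-endian 16-bit total length, a record-type byte and a data-type byte. The Python sketch below iterates over those record headers; the record-type names are taken from public descriptions of the format, and the example file name is hypothetical.

```python
import struct

# Selected GDSII Stream Format record types (values from public descriptions
# of the format, which originated with Calma's GDS II system).
RECORD_NAMES = {
    0x00: "HEADER", 0x01: "BGNLIB", 0x02: "LIBNAME", 0x03: "UNITS",
    0x04: "ENDLIB", 0x05: "BGNSTR", 0x06: "STRNAME", 0x07: "ENDSTR",
    0x08: "BOUNDARY", 0x09: "PATH", 0x0A: "SREF", 0x0D: "LAYER",
    0x10: "XY", 0x11: "ENDEL",
}

def iter_gdsii_records(path):
    """Yield (record_name, payload_bytes) for each record in a GDSII stream file.

    Each record starts with a 4-byte header: a big-endian 16-bit total length
    (header included), a record-type byte and a data-type byte.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break  # end of file
            length, rec_type, _data_type = struct.unpack(">HBB", header)
            if length < 4:
                break  # null padding found at the end of some tape images
            payload = f.read(length - 4)
            yield RECORD_NAMES.get(rec_type, f"0x{rec_type:02X}"), payload

# Example usage with a hypothetical file name:
# for name, data in iter_gdsii_records("design.gds"):
#     print(name, len(data))
```

Interpreting the payloads (coordinates, strings, real numbers in GDSII's excess-64 floating-point encoding) requires further decoding, but the simple length-prefixed record structure is one reason the format has survived as an interchange standard.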
32399
https://en.wikipedia.org/wiki/Video%20game%20developer
Video game developer
A video game developer is a software developer specializing in video game development – the process and related disciplines of creating video games. A game developer can range from one person who undertakes all tasks to a large business with employee responsibilities split between individual disciplines, such as programming, design, art, testing, etc. Most game development companies have financial and usually marketing support from a video game publisher. Self-funded developers are known as independent or indie developers and usually make indie games. A developer may specialize in a certain video game console (such as Nintendo's Nintendo Switch, Microsoft's Xbox One, Sony's PlayStation 4), or may develop for a number of systems (including personal computers and mobile devices). Some video game developers specialize in certain types of games (such as role-playing video games or first-person shooters). Some focus on porting games from one system to another, or translating games from one language to another. Less commonly, some do software development work in addition to games. Most video game publishers maintain development studios (such as Electronic Arts's EA Canada, Square Enix's studios, Activision's Radical Entertainment, Nintendo EAD and Sony's Polyphony Digital and Naughty Dog). However, since publishing is still their primary activity, they are generally described as "publishers" rather than "developers". Developers may also be private, as Bungie was: the company developed the Halo series exclusively for Microsoft's Xbox. Types First-party developers In the video game industry, a first-party developer is part of a company which manufactures a video game console and develops exclusively for it. First-party developers may use the name of the company itself (such as Nintendo), have a specific division name (such as Sony's Polyphony Digital) or have been an independent studio before being acquired by the console manufacturer (such as Rare or Naughty Dog). Whether by purchasing an independent studio or by founding a new team, the acquisition of a first-party developer involves a huge financial investment on the part of the console manufacturer, which is wasted if the developer fails to produce a hit game in a timely manner. However, using first-party developers saves the cost of having to make royalty payments on a game's profits. Current examples of first-party studios include PlayStation Studios for Sony, and Xbox Game Studios for Microsoft. Second-party developers Second-party developer is a colloquial term often used by gaming enthusiasts and media to describe game studios that take development contracts from platform holders and develop games exclusive to that platform, i.e. a non-owned developer making games for a first-party company. As a balance to not being able to release their game for other platforms, second-party developers are usually offered higher royalty rates than third-party developers. These studios may have exclusive publishing agreements (or other business relationships) with the platform holder, but maintain independence, so that upon completion or termination of their contracts they are able to continue developing games for other publishers if they choose to. For example, while HAL Laboratory initially began developing games on personal computers like the MSX, it became one of the earliest second-party developers for Nintendo, developing exclusively for Nintendo's consoles starting with the Famicom.
Third-party developers A third-party developer may also publish games, or work for a video game publisher to develop a title. Both publisher and developer have considerable input in the game's design and content. However, the publisher's wishes generally override those of the developer. The business arrangement between the developer and publisher is governed by a contract, which specifies a list of milestones intended to be delivered over a period of time. By updating its milestones, the publisher verifies that work is progressing quickly enough to meet its deadline and can direct the developer if the game is not meeting expectations. When each milestone is completed (and accepted), the publisher pays the developer an advance on royalties. Successful developers may maintain several teams working on different games for different publishers. Generally, however, third-party developers tend to be small, close-knit teams. Third-party game development is a volatile sector, since small developers may be dependent on income from a single publisher; one canceled game may be devastating to a small developer. Because of this, many small development companies are short-lived. A common exit strategy for a successful video-game developer is to sell the company to a publisher, becoming an in-house developer. In-house development teams tend to have more freedom in the design and content of a game compared to third-party developers. One reason is that since the developers are employees of the publisher, their interests are aligned with those of the publisher; the publisher may spend less effort ensuring that the developer's decisions do not enrich the developer at the publisher's expense. Activision in 1979 became the first third-party video game developer. When four Atari, Inc. programmers left the company following its sale to Warner Communications, partially over the lack of respect that the new management gave to programmers, they used their knowledge of how Atari VCS game cartridges were programmed to create their own games for the system, founding Activision in 1979 to sell these. Atari took legal action to try to block sale of these games, but the companies ultimately settled, with Activision agreeing to pay a portion of their sales as a license fee to Atari for developing for the console. This established the use of licensing fees as a model for third-party development that persists into the present. The licensing fee approach was further enforced by Nintendo when it decided to allow other third-party developers to make games for the Famicom console, setting a 30% licensing fee that covered game cartridge manufacturing costs and development fees. The 30% licensing fee for third-party developers has also persisted to the present, being a de facto rate used for most digital storefronts for third-party developers to offer their games on the platform. In recent years, larger publishers have acquired several third-party developers. While these development teams are now technically "in-house", they often continue to operate in an autonomous manner (with their own culture and work practices). For example, Activision acquired Raven (1997); Neversoft (1999), which merged with Infinity Ward in 2014; Z-Axis (2001); Treyarch (2001); Luxoflux (2002); Shaba (2002); Infinity Ward (2003) and Vicarious Visions (2005). All these developers continue operating much as they did before acquisition, the primary differences being exclusivity and financial details. 
Publishers tend to be more forgiving of their own development teams going over budget (or missing deadlines) than of third-party developers. A developer may not be the primary entity creating a piece of software, instead providing an external software tool which helps organize (or use) information for the primary software product. Such tools may be a database, Voice over IP, or add-in interface software; this is also known as middleware. Examples of this include SpeedTree and Havok. Indie game developers Independents are software developers which are not owned by (or dependent on) a single publisher. Some of these developers self-publish their games, relying on the Internet and word of mouth for publicity. Without the large marketing budgets of mainstream publishers, their products may receive less recognition than those of larger publishers such as Sony, Microsoft or Nintendo. With the advent of digital distribution of inexpensive games on game consoles, it is now possible for indie game developers to forge agreements with console manufacturers for broad distribution of their games. Other indie game developers create game software for a number of video-game publishers on several gaming platforms. In recent years this model has been in decline; larger publishers, such as Electronic Arts and Activision, increasingly turn to internal studios (usually former independent developers acquired for their development needs). Quality of life Video game development is usually conducted in a casual business environment, with T-shirts and sandals common work attire. Many workers find this type of environment rewarding and pleasant professionally and personally. However, the industry also requires long working hours from its employees (sometimes to an extent seen as unsustainable). Employee burnout is not uncommon. An entry-level programmer can make, on average, over $66,000 annually only if they are successful in obtaining a position in a medium to large video game company. An experienced game-development employee, depending on their expertise and experience, averaged roughly $73,000 in 2007. Indie game developers may earn only between $10,000 and $50,000 a year, depending on how financially successful their titles are. In addition to being part of the software industry, game development is also within the entertainment industry; most sectors of the entertainment industry (such as films and television) require long working hours and dedication from their employees, such as a willingness to relocate and/or to develop games that do not appeal to their personal taste. The creative rewards of work in the entertainment business attract labor to the industry, creating a competitive labor market which demands a high level of commitment and performance from employees. Industry communities, such as the International Game Developers Association (IGDA), are conducting increasing discussions about the problem; they are concerned that working conditions in the industry cause significant deterioration in its employees' quality of life. Crunch Some video game developers and publishers have been accused of the excessive invocation of "crunch time". "Crunch time" is the point at which the team is thought to be failing to achieve the milestones needed to launch a game on schedule. The complexity of work flow, reliance on third-party deliverables, and the intangibles of artistic and aesthetic demands in video-game creation create difficulty in predicting milestones. 
The use of crunch time is also seen as exploitative of the younger, male-dominated workforce in video games, who have not yet had the time to establish a family and who are eager to advance within the industry by working long hours. Because crunch time tends to come from a combination of corporate practices as well as peer influence, the term "crunch culture" is often used to discuss video game development settings where crunch time may be seen as the norm rather than the exception. The use of crunch time as a workplace standard first gained attention in 2004, when Erin Hoffman exposed the use of crunch time at Electronic Arts, a situation known as the "EA Spouses" case. A similar "Rockstar Spouses" case gained further attention in 2010 over working conditions at Rockstar San Diego. Since then, there has generally been a negative perception of crunch time from most of the industry as well as from its consumers and other media. Discrimination and harassment Gender Game development has generally had a predominantly male workforce. In 1989, according to Variety, women constituted only 3% of the gaming industry, while a 2017 IGDA survey found that the female demographic in game development had risen to about 20%. Taking into account that a 2017 ESA survey found 41% of video game players were female, this represented a significant gender gap in game development. The largely male workforce, most of whom have grown up playing video games and are part of the video game culture, can create a culture of "toxic geek masculinity" within the workplace. In addition, the conditions behind crunch time are far more discriminating towards women, as crunch requires them to commit time exclusively to the company rather than to more personal activities like raising a family. These factors established conditions within some larger development studios where female developers have found themselves discriminated against in workplace hiring and promotion, as well as the target of sexual harassment. This can be coupled with similar harassment from external groups, such as during the 2014 Gamergate controversy. Major investigations into allegations of sexual harassment and misconduct that went unchecked by management, as well as discrimination by employers, have been brought against Riot Games, Ubisoft and Activision Blizzard in the late 2010s and early 2020s, alongside smaller studios and individual developers. However, while other entertainment industries have had similar exposure through the Me Too movement and have tried to address the symptoms of these problems industry-wide, the video game industry has yet to have its Me Too moment, even as late as 2021. There also tends to be pay-related discrimination against women in the industry. According to Gamasutra's Game Developer Salary Survey 2014, women in the United States made 86 cents for every dollar men made. Women in game design had the closest to pay equity, making 96 cents for every dollar men made in the same job, while women audio professionals had the largest gap, making 68% of what men in the same position made. Increasing the representation of women in the video game industry requires breaking a feedback loop of the apparent lack of female representation in the production of video games and in the content of video games. 
Efforts have been made to provide a strong STEM (science, technology, engineering, and mathematics) background for women at the secondary education level, but there are issues with tertiary education, such as at colleges and universities, where game development programs tend to reflect the male-dominated demographics of the industry, a factor that may lead women with strong STEM backgrounds to choose other career goals. Racial There is also a significant gap in racial minorities within the video game industry; a 2019 IGDA survey found only 2% of developers considered themselves to be of African descent and 7% Hispanic, while 81% were Caucasian; in contrast, 2018 estimates from the United States Census put the U.S. population at 13% of African descent and 18% Hispanic. In a 2014 and 2015 survey of job positions and salaries, the IGDA found that people of color were both underrepresented in senior management roles and underpaid in comparison to white developers. Further, because video game developers typically draw from personal experiences in building game characters, this diversity gap has led to few racial-minority characters being featured as main characters within video games. Minority developers have also been harassed by external groups due to the toxic nature of the video game culture. This racial diversity issue has similar ties to the gender one, and similar methods to resolve both have been suggested, such as improving grade-school education, developing games that appeal beyond the white, male gamer stereotype, and identifying toxic behavior in both video game workplaces and online communities that perpetuates discrimination against gender and race. LGBT In regard to LGBT people and other gender identities or sexual orientations, the video game industry typically shares the same demographics as the larger population, based on a 2005 IGDA survey. LGBT developers generally do not report workplace issues related to their identity, though they work to improve the representation of LGBT themes within video games in the same manner as with racial minorities. However, LGBT developers have also come under the same type of harassment from external groups as women and racial minorities, due to the nature of the video game culture. Age The industry is also recognized to have an ageism issue, discriminating against the hiring and retention of older developers. A 2016 IGDA survey found only 3% of developers were over 50 years old, while at least two-thirds were between 20 and 34; these numbers show a far lower average age than the U.S. national average of about 41.9 that same year. While discrimination by age in hiring practices is generally illegal, companies often target their oldest workers first during layoffs or other periods of reduction. Older, experienced developers may also find themselves overqualified for the types of positions that other game development companies seek to fill, given the salaries and compensation offered. Contract workers Some of the larger video game developers and publishers have also engaged contract workers through agencies to help add manpower in game development, in part to alleviate crunch time for employees. Contractors are brought on for a fixed period and generally work similar hours as full-time staff members, assisting across all areas of video game development, but as contractors they do not get any benefits such as paid time off or health care from the employer; they are also typically not credited on games that they work on for this reason. 
The practice itself is legal and common in other engineering and technology areas, and it is generally expected that such contracts lead either to a full-time position or to the end of the engagement. But more recently, its use in the video game industry has been compared to Microsoft's past use of "permatemp", contract workers that were continually renewed and treated for all purposes as employees but received no benefits. While Microsoft has moved away from the practice, the video game industry has adopted it more frequently. Around 10% of the workforce in video games is estimated to be from contract labor. Unionization Similar to other tech industries, video game developers are typically not unionized. This is a result of the industry being driven more by creativity and innovation than by production, the lack of distinction between management and employees in the white-collar area, and the pace at which the industry moves, which makes union actions difficult to plan out. However, when situations related to crunch time become prevalent in the news, there have typically been follow-up discussions about the potential to form a union. A survey performed by the International Game Developers Association in 2014 found that more than half of the 2,200 developers surveyed favored unionization. A similar survey of over 4,000 game developers run by the Game Developers Conference in early 2019 found that 47% of respondents felt the video game industry should unionize. In 2016, voice actors in the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) union doing work for video games struck several major publishers, demanding better royalty payments and provisions related to the safety of their vocal performances, when their union's standard contract was up for renewal. The voice actor strike lasted for over 300 days into 2017 before a new deal was made between SAG-AFTRA and the publishers. While this had some effects on a few games within the industry, it brought to the forefront the question of whether video game developers should unionize. A grassroots movement, Game Workers Unite, was established around 2017 to discuss and debate issues related to unionization of game developers. The group came to the forefront during the March 2018 Game Developers Conference by holding a roundtable discussion with the International Game Developers Association (IGDA), the professional association for developers. Statements made by the IGDA's then-executive director Jen MacLean relating to IGDA's activities had been seen by some as anti-union, and Game Workers Unite desired to start a conversation to lay out the need for developers to unionize. In the wake of the sudden near-closure of Telltale Games in September 2018, the movement again called for the industry to unionize. The movement argued that Telltale had not given any warning to the 250 employees it let go, having hired additional staff as recently as a week prior, and had left them without pensions or health-care options; it was further argued that the studio treated this as a closure rather than layoffs so as to get around the advance notification of layoffs required by the Worker Adjustment and Retraining Notification Act of 1988. The situation was argued to be "exploitive", as Telltale had been known to force its employees to frequently work under "crunch time" to deliver its games. 
By the end of 2018, a United Kingdom trade union, Game Workers Unite UK, an affiliate of the Game Workers Unite movement, had been legally established. Following Activision Blizzard's financial report for the previous quarter in February 2019, the company said that it would be laying off around 775 employees (about 8% of its workforce) despite having record profits for that quarter. Further calls for unionization came from this news, including the AFL-CIO writing an open letter to video game developers encouraging them to unionize. Game Workers Unite and the Communications Workers of America established a new campaign to push for unionization of video game developers, the Campaign to Organize Digital Employees (CODE), in January 2020. Initial efforts for CODE were aimed at determining what approach to unionization would be best suited for the video game industry. Whereas some video game employees believe they should follow the craft-based model used by SAG-AFTRA, which would unionize workers based on job function, others feel an industry-wide union, regardless of job position, would be better. Sweden presents a unique case, where nearly all parts of its labor force, including white-collar jobs such as video game development, may engage with labor unions under the Employment Protection Act, often through collective bargaining agreements. Developer DICE had reached its union agreements in 2004. Paradox Interactive became one of the first major publishers to support unionization efforts in June 2020, with its own agreements to cover its Swedish employees within two labor unions, Unionen and SACO. In Australia, video game developers could join other unions, but the first video game-specific union, Game Workers Unite Australia, was formed in December 2021 under Professionals Australia, becoming active in 2022. See also List of video game developers List of independent game developers Video game industry practices References Bibliography External links Breaking into the game industry from the IGDA "I Have A Game Idea!" and Design Career Preparation from game industry veteran Tom Sloper "Quality of Life in the Videogame Industry" Video game development Video game developers Tech sector trade unions
39209687
https://en.wikipedia.org/wiki/BetterCloud
BetterCloud
BetterCloud is an independent software vendor based in New York, NY, with engineering offices in Atlanta, GA, that builds unified SaaS management software. A venture-backed startup, BetterCloud has raised $187 million in total funding; the most recent round, a $75 million Series F, was led by Warburg Pincus. A previous round of funding, in April 2018, was led by Bain Capital Ventures. In December 2016, BetterCloud completed its pivot from G Suite to general SaaS management. History BetterCloud was founded in November 2011 in New York City by founder and CEO David Politis and former CTO David Hardwick. Soon after, the company launched DomainWatch, a security tool for Google Docs, Google Sites and Google Calendar. In May 2012, shortly after its launch, BetterCloud raised $2.2 million in seed funding from undisclosed angel investors. In January 2013, with its FlashPanel product reportedly serving 15,000 domains and 5.5 million end-users, the company raised a Series A round of $5 million in venture capital from Flybridge Capital Partners, Greycroft Partners and TriBeCa Venture Partners, bringing its total funding at the time to $7.2 million. On September 25, 2013, the company raised a Series B round of $6 million led by Flybridge Capital Partners with participation from Greycroft Partners, BLH Venture Partners, TriBeCa Venture Partners, Bear Creek Capital, and Hallett Capital. With total funding of $13.2 million, the company announced it was expanding its products to serve as the foundational layer for the cloud-based software space, serving companies such as Zendesk and Salesforce.com. In February 2015, the company rebranded its early product, FlashPanel, to BetterCloud for Google Apps, and also announced the launch of an additional product, BetterCloud for Office 365 [Beta]. The launch of an insights, monitoring, and alerting product for Microsoft Office 365 rounded out the company's offerings of solutions for the two major cloud messaging and collaboration platforms. In March 2015, BetterCloud raised an additional $25 million in funding, led by Accel Partners with participation from all existing investors. In December 2016, BetterCloud pivoted from G Suite to general SaaS management. As of September 2018, the company supported connections to 10 SaaS apps: G Suite, Atlassian, Box, Dropbox, Namely, Office 365, Okta, Salesforce, Slack, and Zendesk. BetterCloud closed a $60 million Series E funding round in April 2018, led by Bain Capital Ventures' Enrique Salem, former CEO of Symantec. This round brought the company's total funding to $107 million and doubled its valuation to $270 million. One month later, BetterCloud and Okta, an identity and access management provider, announced a partnership to connect their respective platforms. In March 2019, BetterCloud opened its platform's operations dashboard to any SaaS application. In September 2019, Dropbox announced a partnership with BetterCloud and an investment of $5 million. In the same month the company also launched the Integration Center. Corporate affairs Leadership BetterCloud is managed by CEO and founder David Politis. 
Other key executives are: Bart Hacking, Chief Financial Officer Rachel Orston, Chief Customer Officer Chris Jones, Chief Revenue Officer Jim Brennan, Chief Product Officer Shreyas Sadalgi, Chief Business Strategy Officer Customers and revenue As of 2018, the company reported a network of around 2,500 customers and 14,000 IT professionals, and operated in around 60 countries. Awards BetterCloud was ranked #11 on Crain's Best Places to Work in NYC 2018. References External links Software companies established in 2011 Companies based in New York City Software companies based in New York (state) Software companies of the United States American companies established in 2011
27584798
https://en.wikipedia.org/wiki/Java%20Desktop%20Integration%20Components
Java Desktop Integration Components
The Java Desktop Integration Components (JDIC) project provides components which give Java applications the same access to operating system services as native applications. For example, a Java application running on one user's desktop can open a web page using that user's default web browser (e.g. Firefox), but the same Java application running on a different user's desktop would open the page in Opera (the second user's default browser). Initially, the project supported features such as embedding the native HTML browser, programmatically opening the native mail client, using registered file-type viewers, and packaging JNLP applications as RPM, SVR4, and MSI installer packages. As a bonus, an SDK for developing platform-independent screensavers is included. Most of the features provided by JDIC were incorporated into the JDK starting with version 1.6 (see the short example following this entry). As a result, the development of the project has come to an end. Components The cross-platform JDIC package includes the following files, which are needed to use JDIC: jdic.jar: JAR file which contains all the Java classes needed for development. It must be on the user's classpath for compilation. jdic.dll and tray.dll: On Windows installations, these files need to be placed in the directory where the operating system is installed (normally, C:\Windows). They contain the "bridge" methods between the jdic.jar Java methods and the native OS methods. libjdic.so and libtray.so: On Solaris and Linux operating systems, these two files must be placed in a directory on the LD_LIBRARY_PATH. See also Java Desktop References External links The JDIC project home on java.net Understanding JDIC File-Type Associations Integrate native OS features in your desktop applications with JDIC Java platform software Free software programmed in Java (programming language)
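Example: a minimal sketch of the default-browser behaviour described above, using the java.awt.Desktop API into which this JDIC feature was folded from JDK 1.6 onward (JDIC's own org.jdesktop.jdic.desktop.Desktop class offered a similar static browse call; the URL below is purely illustrative):

import java.awt.Desktop;
import java.net.URI;

public class BrowseExample {
    public static void main(String[] args) throws Exception {
        // Opens the page in whatever browser the current user has set as the
        // system default, e.g. Firefox for one user and Opera for another.
        if (Desktop.isDesktopSupported()
                && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
            Desktop.getDesktop().browse(new URI("https://www.example.com"));
        }
    }
}

Run from a desktop session, the call delegates to the operating system's URL handler rather than bundling a browser with the application, which is the integration model JDIC demonstrated.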
57156122
https://en.wikipedia.org/wiki/Visopsys
Visopsys
Visopsys (Visual Operating System) is an operating system written by Andy McLaughlin. Development of the operating system began in 1997. The operating system is licensed under the GNU GPL, with the headers and libraries under the less restrictive LGPL license. It runs on the 32-bit IA-32 architecture. It features a multitasking kernel and supports asynchronous I/O and the FAT family of file systems. It requires a Pentium processor. History The development of Visopsys began in 1997, written by Andy McLaughlin. The first public release of the operating system, version 0.1, was on 2 March 2001. In this release, Visopsys was a 32-bit operating system supporting preemptive multitasking and virtual memory. System overview Visopsys uses a monolithic kernel, written in the C programming language, with elements of assembly language for certain interactions with the hardware. The operating system sports a graphical user interface and ships with a small C library. References External links Homepage Operating systems
51010782
https://en.wikipedia.org/wiki/Bob%20Albrecht
Bob Albrecht
Bob Albrecht is a key figure in the early history of microcomputers. He was one of the founders of the People's Computer Company and its associated newsletters, which turned into Dr. Dobb's Journal. He also brought the first Altair 8800 to the Homebrew Computer Club and was one of the main supporters of the effort to make Tiny BASIC a standard on many early machines. Albrecht has authored a number of books on BASIC and other computer topics. He is mentioned as one of the "who's who" in Steven Levy's Hackers: Heroes of the Computer Revolution. Career In 1955 Albrecht was studying for a master's degree when he quit for a job at the Minneapolis-Honeywell Aeronautical Division in Minneapolis, which had entered the computer market in April that year. He was working in a large room of engineers on flight control systems for high-speed jet aircraft using analog techniques. After a few months he was invited to join work on an IBM 650 drum computer, with the intention that he would then promote the use of the computer amongst his erstwhile analog-working co-workers. In 1962, while working for Control Data Corporation as a senior applications analyst, he was asked to give a talk at George Washington High School in Denver. This incident prompted a career change after his interest was sparked by the young learners' response. People's Computer Company After Albrecht left his job at Control Data Corporation, he became involved with an educational nonprofit organization called Portola Institute. Albrecht launched his project, called People's Computer Company, in October 1972. Despite the name, it was not a company but a newsletter, which took its name in honor of Janis Joplin's band, Big Brother and the Holding Company. The newsletter operated with a walk-in storefront to teach children "about having fun with computers". A spinoff newsletter was called Dr. Dobb's Journal of Computer Calisthenics and Orthodontia. Albrecht's computer-book publishing company Dymax also brought computing to the people by teaching young students to program. References Further reading Interview of Bob Albrecht at History of Computing in Learning and Education Virtual Museum, 2015 Year of birth missing (living people) Living people American computer specialists
28195
https://en.wikipedia.org/wiki/Symbolics
Symbolics
Symbolics is a defunct computer manufacturer Symbolics, Inc., and a privately held company that acquired the assets of the former company and continues to sell and maintain the Open Genera Lisp system and the Macsyma computer algebra system. The symbolics.com domain was originally registered on March 15, 1985, making it the first .com-domain in the world. In August 2009, it was sold to napkin.com (formerly XF.com) Investments. History Symbolics, Inc. was a computer manufacturer headquartered in Cambridge, Massachusetts, and later in Concord, Massachusetts, with manufacturing facilities in Chatsworth, California (a suburban section of Los Angeles). Its first CEO, chairman, and founder was Russell Noftsker. Symbolics designed and manufactured a line of Lisp machines, single-user computers optimized to run the programming language Lisp. Symbolics also made significant advances in software technology, and offered one of the premier software development environments of the 1980s and 1990s, now sold commercially as Open Genera for Tru64 UNIX on the Hewlett-Packard (HP) Alpha. The Lisp Machine was the first commercially available workstation, although that word had not yet been coined. Symbolics was a spinoff from the MIT AI Lab, one of two companies to be founded by AI Lab staffers and associated hackers for the purpose of manufacturing Lisp machines. The other was Lisp Machines, Inc., although Symbolics attracted most of the hackers, and more funding. Symbolics' initial product, the LM-2, introduced in 1981, was a repackaged version of the MIT CADR Lisp machine design. The operating system and software development environment, over 500,000 lines, was written in Lisp from the microcode up, based on MIT's Lisp Machine Lisp. The software bundle was later renamed ZetaLisp, to distinguish the Symbolics' product from other vendors who had also licensed the MIT software. Symbolics' Zmacs text editor, a variant of Emacs, was implemented in a text-processing package named ZWEI, an acronym for Zwei was Eine initially, with Eine being an acronym for Eine Is Not Emacs. Both are recursive acronyms and puns on the German words for one (eins, eine) and two (zwei). The Lisp Machine system software was then copyrighted by MIT, and was licensed to both Symbolics and LMI. Until 1981, Symbolics shared all its copyrighted enhancements to the source code with MIT and kept it on an MIT server. According to Richard Stallman, Symbolics engaged in a business tactic in which it forced MIT to make all Symbolics' copyrighted fixes and improvements to the Lisp Machine OS available only to Symbolics (and MIT but not to Symbolics competitors), and thereby choke off its competitor LMI, which at that time had insufficient resources to independently maintain or develop the OS and environment. Symbolics felt that they no longer had sufficient control over their product. At that point, Symbolics began using their own copy of the software, located on their company servers, while Stallman says that Symbolics did that to prevent its Lisp improvements from flowing to Lisp Machines, Inc. From that base, Symbolics made extensive improvements to every part of the software, and continued to deliver almost all the source code to their customers (including MIT). However, the policy prohibited MIT staff from distributing the Symbolics version of the software to others. With the end of open collaboration came the end of the MIT hacker community. As a reaction to this, Stallman initiated the GNU project to make a new community. 
Eventually, Copyleft and the GNU General Public License would ensure that a hacker's software could remain free software. In this way, Symbolics played a key, albeit adversarial, role in instigating the free software movement. The 3600 series In 1983, a year later than planned, Symbolics introduced the 3600 family of Lisp machines. Code-named the "L-machine" internally, the 3600 family was an innovative new design, inspired by the CADR architecture but sharing few of its implementation details. The main processor had a 36-bit word (divided up as 4 or 8 bits of tags, and 32 bits of data or 28 bits of memory address). Memory words were 44 bits, the additional 8 bits being used for error-correcting code (ECC). The instruction set was that of a stack machine. The 3600 architecture provided 4,096 hardware registers, of which half were used as a cache for the top of the control stack; the rest were used by the microcode and time-critical routines of the operating system and Lisp run-time environment. Hardware support was provided for virtual memory, which was common for machines in its class, and for garbage collection, which was unique. The original 3600 processor was a microprogrammed design like the CADR, and was built on several large circuit boards from standard TTL integrated circuits, both features being common for commercial computers in its class at the time. Central processing unit (CPU) clock speed varied depending on which instruction was being executed, but was typically around 5 MHz. Many Lisp primitives could be executed in a single clock cycle. Disk input/output (I/O) was handled by multitasking at the microcode level. A 68000 processor (termed the front-end processor, (FEP)) started the main computer up, and handled the slower peripherals during normal operation. An Ethernet interface was standard equipment, replacing the Chaosnet interface of the LM-2. The 3600 was roughly the size of a household refrigerator. This was partly due to the size of the processor (the cards were widely spaced to allow wire-wrap prototype cards to fit without interference) and partly due to the size of disk drive technology in the early 1980s. At the 3600's introduction, the smallest disk that could support the ZetaLisp software was wide (most 3600s shipped with the 10½-inch Fujitsu Eagle). The 3670 and 3675 were slightly shorter in height, but were essentially the same machine packed a little tighter. The advent of , and later , disk drives that could hold hundreds of megabytes led to the introduction of the 3640 and 3645, which were roughly the size of a two-drawer file cabinet. Later versions of the 3600 architecture were implemented on custom integrated circuits, reducing the five cards of the original processor design to two, at a large manufacturing cost savings and with performance slightly better than the old design. The 3650, first of the G machines, as they were known within the company, was housed in a cabinet derived from the 3640s. Denser memory and smaller disk drives enabled the introduction of the 3620, about the size of a modern full-size tower PC. The 3630 was a fat 3620 with room for more memory and video interface cards. The 3610 was a lower priced variant of the 3620, essentially identical in every way except that it was licensed for application deployment rather than general development. The various models of the 3600 family were popular for artificial intelligence (AI) research and commercial applications throughout the 1980s. 
The AI commercialization boom of the 1980s led directly to Symbolics' success during the decade. Symbolics computers were widely believed to be the best platform available for developing AI software. The LM-2 used a Symbolics-branded version of the complex space-cadet keyboard, while later models used a simplified version (at right), known simply as the . The Symbolics keyboard featured the many modifier keys used in Zmacs, notably Control/Meta/Super/Hyper in a block, but did not feature the complex symbol set of the space-cadet keyboard. Also contributing to the 3600 series' success was a line of bit-mapped graphics color video interfaces, combined with extremely powerful animation software. Symbolics' Graphics Division, headquartered in Westwood, Los Angeles, California, near to the major Hollywood movie and television studios, made its S-Render and S-Paint software into industry leaders in the animation business. Symbolics developed the first workstations able to process high-definition television (HDTV) quality video, which enjoyed a popular following in Japan. A 3600, with the standard black-and-white monitor, made a cameo appearance in the movie Real Genius. The company was also referenced in Michael Crichton's novel Jurassic Park. Symbolics' Graphics Division was sold to Nichimen Trading Company in the early 1990s, and the S-Graphics software suite (S-Paint, S-Geometry, S-Dynamics, S-Render) ported to Franz Allegro Common Lisp on Silicon Graphics (SGI) and PC computers running Windows NT. Today it is sold as Mirai by Izware LLC, and continues to be used in major motion pictures (most famously in New Line Cinema's The Lord of the Rings), video games, and military simulations. Symbolics' 3600-series computers were also used as the first front end controller computers for the Connection Machine massively parallel computers manufactured by Thinking Machines Corporation, another MIT spinoff based in Cambridge, Massachusetts. The Connection Machine ran a parallel variant of Lisp and, initially, was used primarily by the AI community, so the Symbolics Lisp machine was a particularly good fit as a front-end machine. For a long time, the operating system didn't have a name, but was finally named Genera around 1984. The system included several advanced dialects of Lisp. Its heritage was Maclisp on the PDP-10, but it included more data types, and multiple-inheritance object-oriented programming features. This Lisp dialect was called Lisp Machine Lisp at MIT. Symbolics used the name ZetaLisp. Symbolics later wrote new software in Symbolics Common Lisp, its version of the Common Lisp standard. Ivory and Open Genera In the late 1980s (2 years later than planned), the Ivory family of single-chip Lisp Machine processors superseded the G-Machine 3650, 3620, and 3630 systems. The Ivory 390k transistor VLSI implementation designed in Symbolics Common Lisp using NS, a custom Symbolics Hardware Design Language (HDL), addressed a 40-bit word (8 bits tag, 32 bits data/address). Since it only addressed full words and not bytes or half-words, this allowed addressing of 4 Gigawords (GW) or 16 gigabytes (GB) of memory; the increase in address space reflected the growth of programs and data as semiconductor memory and disk space became cheaper. The Ivory processor had 8 bits of ECC attached to each word, so each word fetched from external memory to the chip was actually 48 bits wide. 
Each Ivory instruction was 18 bits wide and two instructions plus a 2-bit CDR code and 2-bit Data Type were in each instruction word fetched from memory. Fetching two instruction words at a time from memory enhanced the Ivory's performance. Unlike the 3600's microprogrammed architecture, the Ivory instruction set was still microcoded, but was stored in a 1200 × 180-bit ROM inside the Ivory chip. The initial Ivory processors were fabricated by VLSI Technology Inc in San Jose, California, on a 2 µm CMOS process, with later generations fabricated by Hewlett Packard in Corvallis, Oregon, on 1.25 µm and 1 µm CMOS processes. The Ivory had a stack architecture and operated a 4-stage pipeline: Fetch, Decode, Execute and Write Back. Ivory processors were marketed in stand-alone Lisp Machines (the XL400, XL1200, and XL1201), headless Lisp Machines (NXP1000), and on add-in cards for Sun Microsystems (UX400, UX1200) and Apple Macintosh (MacIvory I, II, III) computers. The Lisp Machines with Ivory processors operated at speeds that were between two and six times faster than a 3600 depending on the model and the revision of the Ivory chip. The Ivory instruction set was later emulated in software for microprocessors implementing the 64-bit Alpha architecture. The "Virtual Lisp Machine" emulator, combined with the operating system and software development environment from the XL machines, is sold as Open Genera. Sunstone Sunstone was a processor similar to a reduced instruction set computer (RISC), that was to be released shortly after the Ivory. It was designed by Ron Lebel's group at the Symbolics Westwood office. However, the project was canceled the day it was supposed to tape out. Endgame As quickly as the commercial AI boom of the mid-1980s had propelled Symbolics to success, the AI Winter of the late 1980s and early 1990s, combined with the slowdown of the Ronald Reagan administration's Strategic Defense Initiative, popularly termed Star Wars, missile defense program, for which the Defense Advanced Research Projects Agency (DARPA) had invested heavily in AI solutions, severely damaged Symbolics. An internal war between Noftsker and the CEO the board had hired in 1986, Brian Sear, over whether to follow Sun's suggested lead and focus on selling their software, or to re-emphasize their superior hardware, and the ensuing lack of focus when both Noftsker and Sear were fired from the company caused sales to plummet. This, combined with some ill-advised real estate deals by company management during the boom years (they had entered into large long-term lease obligations in California), drove Symbolics into bankruptcy. Rapid evolution in mass market microprocessor technology (the PC revolution), advances in Lisp compiler technology, and the economics of manufacturing custom microprocessors severely diminished the commercial advantages of purpose-built Lisp machines. By 1995, the Lisp machine era had ended, and with it Symbolics' hopes for success. Symbolics continued as an enterprise with very limited revenues, supported mainly by service contracts on the remaining MacIvory, UX-1200, UX-1201, and other machines still used by commercial customers. Symbolics also sold Virtual Lisp Machine (VLM) software for DEC, Compaq, and HP Alpha-based workstations (AlphaStation) and servers (AlphaServer), refurbished MacIvory IIs, and Symbolics keyboards. In July 2005, Symbolics closed its Chatsworth, California, maintenance facility. The reclusive owner of the company, Andrew Topping, died that same year. 
The current legal status of Symbolics software is uncertain. An assortment of Symbolics hardware was still available for purchase . The United States Department of Defense (US DoD) is still paying Symbolics for regular maintenance work. First .com domain On March 15, 1985, symbolics.com became the first (and currently, since it is still registered, the oldest) registered domain of the Internet. The symbolics.com domain was purchased by XF.com in 2009. Networking Genera also featured the most extensive networking interoperability software seen to that point. A local area network system called Chaosnet had been invented for the Lisp Machine (predating the commercial availability of Ethernet). The Symbolics system supported Chaosnet, but also had one of the first TCP/IP implementations. It also supported DECnet and IBM's SNA network protocols. A Dialnet protocol used phone lines and modems. Genera would, using hints from its distributed namespace database (somewhat similar to Domain Name System (DNS), but more comprehensive, like parts of Xerox's Grapevine), automatically select the best protocol combination to use when connecting to network service. An application program (or a user command) would only specify the name of the host and the desired service. For example, a host name and a request for "Terminal Connection" might yield a connection over TCP/IP using the Telnet protocol (although there were many other possibilities). Likewise, requesting a file operation (such as a Copy File command) might pick NFS, FTP, NFILE (the Symbolics network file access protocol), or one of several others, and it might execute the request over TCP/IP, Chaosnet, or whatever other network was most suitable. Application programs The most popular application program for the Symbolics Lisp Machine was the ICAD computer-aided engineering system. One of the first networked multi-player video games, a version of Spacewar, was developed for the Symbolics Lisp Machine in 1983. Electronic CAD software on the Symbolics Lisp Machine was used to develop the first implementation of the Hewlett-Packard Precision Architecture (PA-RISC). Contributions to computer science Symbolics' research and development staff (first at MIT, and then later at the company) produced several major innovations in software technology: Flavors, one of the earliest object-oriented programming extensions to Lisp, was a message passing object system patterned after Smalltalk, but with multiple inheritance and several other enhancements. The Symbolics operating system made heavy use of Flavors objects. The experience gained with Flavors led to the design of New Flavors, a short-lived successor based on generic functions rather than message passing. Many of the concepts in New Flavors formed the basis of the CLOS (Common Lisp Object System) standard. Advances in garbage collection techniques by Henry Baker, David A. Moon and others, particularly the first commercial use of generational scavenging, allowed Symbolics computers to run large Lisp programs for months at a time. Symbolics staffers Dan Weinreb, David A. Moon, Neal Feinberg, Kent Pitman, Scott McKay, Sonya Keene, and others made significant contributions to the emerging Common Lisp language standard from the mid-1980s through the release of the American National Standards Institute (ANSI) Common Lisp standard in 1994. Symbolics introduced one of the first commercial object databases, Statice, in 1989. Its developers later went on to found Object Design, Inc. and create ObjectStore. 
Symbolics introduced in 1987 one of the first commercial microprocessors designed to support the execution of Lisp programs: the Symbolics Ivory. Symbolics also used its own CAD system (NS, New Schematic) for the development of the Ivory chip. Under contract from AT&T, Symbolics developed Minima, a real-time Lisp run-time environment and operating system for the Ivory processor. This was delivered in a small hardware configuration featuring much random-access memory (RAM), no disk, and dual network ports. It was used as the basis for a next-generation carrier class long-distance telephone switch. The Graphics Division's Craig Reynolds devised an algorithm that simulated the flocking behavior of birds in flight. Boids made their first appearance at SIGGRAPH in the 1987 animated short "Stanley and Stella in: Breaking the Ice", produced by the Graphics Division. Reynolds went on to win the Scientific And Engineering Award from The Academy of Motion Picture Arts and Sciences in 1998. The Symbolics Document Examiner hypertext system originally used for the Symbolics manuals- it was based on Zmacs following a design by Janet Walker, and proved influential in the evolution of hypertext. Symbolics was very active in the design and development of the Common Lisp Interface Manager (CLIM) presentation-based User Interface Management System. CLIM is a descendant of Dynamic Windows, Symbolics' own window system. CLIM was the result of the collaboration of several Lisp companies. Symbolics produced the first workstation which could genlock, the first to have real time video I/O, the first to support digital video I/O and the first to do HDTV. Symbolics Graphics Division The Symbolics Graphics Division (SGD, founded in 1982, sold to Nichimen Graphics in 1992) developed the S-Graphics software suite (S-Paint, S-Geometry, S-Dynamics, S-Render) for Symbolics Genera. Movies This software was also used to create a few computer animated movies and was used for some popular movies. 1984, graphics for the little screens on the bridges of the Enterprise and the Klingon ship in Star Trek III: The Search for Spock 1985, 3D animations for Real Genius 1987, Symbolics, Stanley and Stella in: Breaking the Ice 1989, Symbolics, The Little Death 1990, Symbolics, Ductile Flow, presented at SIGGRAPH 1990 1990, 3D animations for Jetsons: The Movie 1991, Symbolics, Virtually Yours 1993, 3D animation of the Orca for Free Willy References Further reading External links The Symbolics Museum Archives from the Symbolics Lisp Users Group (SLUG) Mailing List, 1986-1993 Archives from the Symbolics Lisp Users Group (SLUG) Mailing List, 1990-1999 Ralf Möller's Symbolics Lisp Machine Museum A page of screenshots of Genera "Genera Concepts" – Web copy of Symbolic's introduction to Genera A collection of press releases from Symbolics "Symbolics announces the first true Single-Chip Lisp CPU" – Symbolics press release announcing the Ivory chip Lisp machines timeline – A timeline of Symbolics' and others' Lisp machines Kalman Reti, the Last Symbolics Developer, Speaks of Lisp Machines. – Video of a talk from June 28, 2012 Computer workstations Defunct computer companies based in Massachusetts Lisp (programming language) Lisp (programming language) software companies Macintosh peripherals
63873848
https://en.wikipedia.org/wiki/Doas
Doas
doas (“dedicated openbsd application subexecutor”) is a program to execute commands as another user. The system administrator can configure it to give specified users privileges to execute specified commands. It is free and open-source under the ISC license and available in Unix and Unix-like operating systems. doas was developed by Ted Unangst for OpenBSD as a simpler and safer replacement for sudo; his own issues with the default sudo configuration motivated the work. It was released with OpenBSD 5.8 in October 2015, replacing sudo in the base system. However, OpenBSD still provides sudo as a package. Configuration Privileges are defined in the configuration file, /etc/doas.conf. The syntax used in the configuration file is inspired by the packet filter configuration file. Examples Allow user1 to execute procmap as root without a password: permit nopass user1 as root cmd /usr/sbin/procmap Allow members of the wheel group to run any command as root: permit :wheel as root A simpler version (equivalent, because the target user defaults to root): permit :wheel To allow members of the wheel group to run any command (as root by default) and remember that they entered the password: permit persist :wheel Commands are then invoked by prefixing them with doas (see the usage sketch following this entry). Ports and availability Jesse Smith’s port of doas is packaged for DragonFlyBSD, FreeBSD, and NetBSD. According to the author, it also works on illumos and macOS. OpenDoas, a Linux port, is packaged for Debian, Alpine, Arch, CRUX, Fedora, Gentoo, GNU Guix, Hyperbola, Manjaro, Parabola, NixOS, Ubuntu, and Void Linux. See also sudo runas References Computer security software Unix software
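Usage sketch (assuming the permit rules above are present in /etc/doas.conf; the pkg_add invocation is only an illustration):

doas /usr/sbin/procmap      # run by user1; matches the nopass rule, so no password prompt
doas pkg_add vim            # run by a wheel member; target user defaults to root
doas -u root /usr/sbin/procmap   # the target user can also be named explicitly with -u

If the invoking user and command do not match any permit rule, doas refuses to run the command.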
47908879
https://en.wikipedia.org/wiki/Jonny%20Holmstrom
Jonny Holmstrom
Jonny Holmstrom is a Swedish professor of Informatics at Umeå University and director and co-founder of the Swedish Center for Digital Innovation. Biography Holmstrom was born in Arvidsjaur in 1968. He received his Ph.D. from Umeå University in 2000, and has been a visiting scholar at Georgia State University and Florida International University. Holmstrom is a senior editor at Information and Organization and serves on the editorial boards of the European Journal of Information Systems and Communications of the AIS. Holmstrom has written two books and has published over 100 scholarly articles in professional journals and edited volumes. Career Holmstrom is a professor of Informatics at Umeå University and director and co-founder of the Swedish Center for Digital Innovation. His work has appeared in journals including Communications of the AIS, Convergence, Design Issues, European Journal of Information Systems, Industrial Management and Data Systems, Information and Organization, Information Resources Management Journal, Information Systems Journal, Information Technology and People, Journal of the AIS, Journal of Strategic Information Systems, Research Policy, and The Information Society. IT and Organizational Change Holmstrom's early work examined the interaction between information technology and organizations. Prior research often assumed information technology to be either an objective, external force with deterministic impacts on organizations, or the outcome of social action. Holmstrom's research suggested that either view alone is incomplete, and he proposed taking both perspectives into account. Drawing on actor-network theory as an analytical lens, Holmstrom has explored the role of IT in contexts such as municipal organizations, airports, and digital cash projects. In these studies, Holmstrom stressed the notion of non-human agency, in which processes, technological tools and other similar concepts can be viewed as non-human actors that acquire an identity of their own. Digital Innovation In more recent research, Holmstrom has addressed digital innovation and digital transformation, specifically how organizations are dealing with “the digitization of everything” as both a challenge and an opportunity. To deal with digital transformation, Holmstrom argues that organizations need to develop a comprehensive digital strategy. Holmstrom's research addresses how digital capabilities increasingly determine which companies create or lose value. Among these studies are studies of firms in the mining industry, the paper and pulp industry, and the publishing industry. He also published a Harvard Business School case focusing on the ways in which digitization brings challenges as well as opportunities to firms in the publishing industry. Awards Holmstrom was awarded a post-doctoral fellowship by the STINT Foundation (The Swedish Foundation for International Co-operation in Research and Higher Education) for post-doctoral research at Georgia State University, Atlanta, GA, in 2000/2001. He was also awarded a fellowship by the STINT Foundation to enable a visiting research position at the Decision Sciences and Information Systems Department, College of Business Administration, Florida International University, Miami, FL, USA, in 2004. Holmstrom received the Royal Skytteanska Samfundet's award for young researchers at the Social Sciences faculty, Umeå University, in 2005, and Nordea’s Scientific Award in 2007. 
In 2009 he received Umeå University's Young Researcher Award and the AIS senior scholars award for best IS journal paper of the year in 2010. Holmstrom was also awarded with the Pedagogical Award of the Year in the IS discipline in Sweden 2011 by the Swedish Association of Information Systems. References External links Homepage at Umeå University Homepage at Swedish Center for Digital Innovation Swedish computer scientists Umeå University faculty 1968 births People from Arvidsjaur Municipality Information science Living people
347379
https://en.wikipedia.org/wiki/Function%20key
Function key
A function key is a key on a computer or terminal keyboard which can be programmed so as to cause an operating system command interpreter or application program to perform certain actions, a form of soft key. On some keyboards/computers, function keys may have default actions, accessible on power-on. Function keys on a terminal may either generate short fixed sequences of characters, often beginning with the escape character (ASCII 27), or the characters they generate may be configured by sending special character sequences to the terminal. On a standard computer keyboard, the function keys may generate a fixed, single byte code, outside the normal ASCII range, which is translated into some other configurable sequence by the keyboard device driver or interpreted directly by the application program. Function keys may have abbreviations or pictographic representations of default actions printed on/besides them, or they may have the more common "F-number" designations. History The Singer/Friden 2201 Flexowriter Programatic, introduced in 1965, had a cluster of 13 function keys, labeled F1 to F13 to the right of the main keyboard. Although the Flexowriter could be used as a computer terminal, this electromechanical typewriter was primarily intended as a stand-alone word processing system. The interpretation of the function keys was determined by the programming of a plugboard inside the back of the machine. Soft keys date to avionics multi-function displays of military planes of the late 1960s/early 1970s, such as the Mark II avionics of the F-111D (first ordered 1967, delivered 1970–73). In computing use, they were found on the HP 9810A calculator (1971) and later models of the HP 9800 series, which featured 10 programmable keys in 5×2 block (2 rows of 5 keys) at the top left of the keyboard, with paper labels. The HP 9830A (1972) was an early desktop computer, and one of the earliest specifically computing uses. HP continued its use of function keys in the HP 2640 (1975), which used screen-labeled function keys, placing the keys close to the screen, where labels could be displayed for their function. NEC's PC-8001, introduced in 1979, featured five function keys at the top of the keyboard, along with a numeric keypad on the right-hand side of the keyboard. Their modern use may have been popularized by IBM keyboards: first the IBM 3270 terminals, then the IBM PC. IBM use of function keys dates to the IBM 3270 line of terminals, specifically the IBM 3277 (1972) with 78-key typewriter keyboard or operator console keyboard version, which both featured 12 programmed function (PF) keys in a 3×4 matrix at the right of the keyboard. Later models replaced this with a numeric keypad, and moved the function keys to 24 keys at the top of the keyboard. The original IBM PC keyboard (PC/XT, 1981) had 10 function keys (F1–F10) in a 2×5 matrix at the left of the keyboard; this was replaced by 12 keys in 3 blocks of 4 at the top of the keyboard in the Model M ("Enhanced", 1984). Schemes on various keyboards Apple Macintosh: The classic Mac OS supported system extensions known generally as FKEYS which could be installed in the System file and could be accessed with a Command-Shift-(number) keystroke combination (Command-Shift-3 was the screen capture function included with the system, and was installed as an FKEY); however, early Macintosh keyboards did not support numbered function keys in the normal sense. 
Since the introduction of the Apple Extended Keyboard with the Macintosh II, however, keyboards with function keys have been available, though they did not become standard until the mid-1990s. They have not traditionally been a major part of the Mac user interface, however, and are generally only used on cross-platform programs. According to the Macintosh Human Interface Guidelines, they are reserved for customization by the user. Current Mac keyboards include specialized function keys for controlling sound volume. The most recent Mac keyboards include 19 function keys, but keys F1–F4 and F7–F12 by default control features such as volume, media control, and Exposé. Former keyboards and Apple Keyboard with numeric keypad has the F1–F19 keys. Apple Macintosh notebooks: Function keys were not standard on Apple notebook hardware until the introduction of the PowerBook 5300 and the PowerBook 190. For the most part, Mac laptops have keys F1 through F12, with pre-defined actions for some, including controlling sound volume and screen brightness. Apricot PC/Xi: six unlabelled keys, each with an LED beside it which illuminates when the key can be used; above the keys is a liquid crystal display—the 'microscreen'—that is used by programs to display the action performed by the key. Atari 8-bit family (400/800/XL/XE): four dedicated keys (Reset, Option, Select, Start) at the right hand side or on the top of the keyboard; the XL models also had a Help key. Atari 1200XL had four additional keys labeled F1 through F4 with pre-defined actions, mainly related to cursor movement. Atari ST: ten parallelogram-shaped keys in a horizontal row across the top of the keyboard, inset into the keyboard frame instead of popping up like normal keys. BBC Micro: red/orange keys F0 to F9 in a horizontal row above the number keys on top of the computer/keyboard. The break, arrow, and copy keys could function as F10–F15. The case included a transparent plastic strip above them to hold a function key reference card. Coleco Adam: six dark brown keys in a horizontal row above the number keys, labeled with Roman numerals I–VI. Commodore VIC-20 and C64: F1/F2 to F7/F8 in a vertical row of four keys descending on the computer/keyboard's right hand side, odd-numbered functions accessed unshifted, even-numbered shifted; orange, beige/brown, or grey key color, depending on VIC/64 model/revision. Commodore 128: essentially same as VIC-20/C64, but with (grey) function keys placed in a horizontal row above the numeric keypad right of the main QWERTY-keyboard; also had Help key. Commodore Amiga: ten keys arranged in a row of two five-key groups across the top of the keyboard (flush with the ordinary keyboard top row); function keys are 1½ times the width of ordinary keys. Like the Commodore 128, this also had a Help key. Graphing calculators, particularly those from Texas Instruments, Hewlett-Packard and Casio, usually include a row of function keys with various preassigned functions (on a standard hand-held calculator, these would be the top row of buttons under the screen). On low-end models such as the TI-83-series, these function mainly as an extension of the main keyboard, but on high-end calculators the functions change with the mode, sometimes acting as menu navigation keys as well. HP 2640 series terminals (1975): first known instance—late 1970s—of screen-labeled function keys (where keys are placed in proximity or mapped to labels on CRT or LCD screen). 
HP 9830: F1–F8 on two rows of four in upper left with paper template label. An early use of function keys (1972). IBM 3270: probably the origin of function keys on keyboards, circa 1972. On this mainframe keyboard early models had 12 function keys in a 3×4 matrix at the right of the keyboard; later that changed to a numeric keypad, and the function keys moved to the top of the keyboard, and increased to 24 keys in two rows. IBM 5250: early models frequently had a "cmd" modifier key, by which the numeric row keys emulate function keys; later models have either 12 function keys in groups of 4 (with shifted keys acting as F13–F24), or 24 in two rows. These keys, along with "Enter", "Help", and several others, generate "AID codes", informing the host computer that user-entered data is ready to be read. IBM PC AT and PS/2 keyboard: F1 to F12 usually in three 4-key groups across the top of the keyboard. The original IBM PC and PC XT keyboards had function keys F1 through F10, in two adjacent vertical columns on the left hand side; F1|F2, F3|F4, ..., F9|F10, descending. Some IBM compatible keyboards, e.g., the Northgate OmniKey/102, also featured function keys on the left, which on examples with swapped left Alt and Caps Lock keys, facilitate fingers of a single hand simultaneously striking modifier key(s) and function keys swiftly and comfortably by touch even by those with small hands. Many modern PC keyboards also include specialized keys for multimedia and operating system functions. MCK-142 Pro: two sets of function keys: F1–F12 at the left side of the keyboard and additionally 24 user programmable PF keys located above QWERTY keys. NEC PC-8000 Series (1979): five function keys at the top of the keyboard, along with a numeric keypad on the right-hand side of the keyboard. Sharp MZ-700: blue keys F1 to F5 in a horizontal row across the top left side of the keyboard, the keys are vertically half the size of ordinary keys and twice the width; there is also a dedicated "slot" for changeable key legend overlays (paper/plastic) above the function key row. VT100 terminals: four function keys (PF1, Alt key; PF2, help; PF3, menu; PF4, escape to shell) above the numeric keypad. Action on various programs and operating systems Mac OS In the classic Mac OS, the function keys could be configured by the user, with the Function Keys control panel, to start a program or run an AppleScript. macOS assigns default functionality to (almost) all the function keys from to , but the actions assigned by default to these function keys has changed a couple of times over the history of Mac products and corresponding Mac OS X versions. As a consequence, the labels on Macintosh keyboards have changed over time to reflect the newer mappings of later Mac OS X versions : for instance, on a 2006 MacBook Pro, functions keys , and are labelled for volume down/volume up, whereas on later MacBook Pros (starting with the 2007 model), the volume controls are located on function keys to where they are mapped to various functions. Any recent version of Mac OS X or macOS is able to detect which generation of Apple keyboard is being used, and to assign proper default actions corresponding to the labels shown on this Apple keyboard (provided that this keyboard was manufactured BEFORE the release of the version of Mac OS X being used). As a result, default mappings are sometimes wrong (i.e. 
not matching the labels shown on the keyboard) when a recent USB Apple keyboard is used with an older version of Mac OS X that does not know about that keyboard's newer function-key mapping (for example, because Mission Control and Launchpad did not exist at the time, the corresponding labels on the keyboard cannot match the default actions assigned by older versions of Mac OS X, which were Exposé and Dashboard). It can be noted that: all function-key assignments have changed over time, with the exception of F1 and F2, which have always been mapped to brightness control. All Apple laptops after 2007 lack a Num Lock key, even though they have no numeric keypad (Num Lock was previously located on the F6 key of older Apple laptops). The dedicated disk-eject key (located to the right of the F12 key on older Apple keyboards) has been removed from Apple computers since they no longer have an internal optical disc drive, with the exception of the 2010 MacBook Air, which kept a disk-eject label on one of its keys (for use in combination with an external USB SuperDrive). Function keys F16 to F19 have no labels; they were only available on the full keyboards of desktop Apple computers (iMac, Mac Pro, or Mac Mini). Laptop computers have always lacked these extra keys, as does any recent desktop Apple computer equipped with a wireless Apple keyboard. On some macOS versions, certain function keys are reportedly mapped by default to decrease/increase contrast (although nothing is labelled on these keys on Macintosh keyboards). Under Boot Camp, function keys F13 to F15 are mapped to the IBM PC keys that occupy the same positions on the keyboard: Print Screen, Scroll Lock and Pause. On all versions of Mac OS X or macOS, the software functions can be used by holding down the Fn key while pressing the appropriate function key, and this scheme can be reversed by changing the macOS system preferences. As of 2016, Apple has replaced the individual function keys with the Touch Bar on certain models of the MacBook Pro. Windows/MS-DOS Under MS-DOS, individual programs could decide what each function key meant to them, and the command line had its own actions (e.g., F3 copied words from the previous command to the current command prompt). Following the IBM Common User Access guidelines, the F1 key gradually became universally associated with Help in most early Windows programs. To this day, Microsoft Office programs running in Windows list F1 as the key for Help in the Help menu. Internet Explorer in Windows does not list this keystroke in the help menu, but still responds with a help window. F3 is commonly used to activate a search function in applications, often cycling through results on successive presses of the key. Shift+F3 is often used to search backwards. Some applications such as Visual Studio support Ctrl+F3 as a means of searching for the currently highlighted text elsewhere in a document. F5 is also commonly used as a reload key in many web browsers and other applications, while F11 activates the full screen/kiosk mode on most browsers. Under the Windows environment, Alt+F4 is commonly used to quit an application; Ctrl+F4 will often close a portion of the application, such as a document or tab. F10 generally activates the menu bar, while Shift+F10 activates a context menu. F2 is used in many Windows applications such as Windows Explorer, Excel, Visual Studio and other programs to access file or field edit functions. F4 is used in some applications to make the window "fullscreen", like in 3D Pinball: Space Cadet.
In Microsoft IE, it is used to view the URL list of previously viewed websites. Other function key assignments common to all Microsoft Office applications are: F7 to check spelling, Alt+F8 to call the macros dialog, Alt+F11 to call the Visual Basic Editor and Alt+Shift+F11 to call the Script Editor. In Microsoft Word, Shift+F1 reveals formatting. In Microsoft PowerPoint, F5 starts the slide show, and F6 moves to the next pane. WordPerfect for DOS is an example of a program that made heavy use of function keys. In Internet Explorer 6 and Internet Explorer 7, F12 opens the Internet Explorer Developer Toolbar, and F6 highlights the URL in the address bar. BIOS/booting Function keys are also heavily used in the BIOS interface. Generally, during the power-on self-test, BIOS access can be gained by pressing either a function key or the Delete key. In the BIOS, keys can have different purposes depending on the BIOS; however, F10 is the de facto standard for "save and exit", which saves all changes and restarts the system. During Windows 10 startup, Shift+F8 is used to enter safe mode; in legacy versions of Microsoft Windows, the F8 key was used alone. References Computer keys
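As noted in the lead of this article, a function key pressed in a terminal usually reaches an application not as a single character but as a short sequence beginning with the escape character (ASCII 27). The sketch below, written in Python for a Unix-like system purely as an illustration, shows how a program might recognise a few such sequences. The specific byte sequences in the table (e.g. ESC O P for F1, ESC [ 1 5 ~ for F5) are the common xterm/VT-style encodings and are assumptions for the example; real terminals vary, so robust programs normally consult the terminfo database instead of a hard-coded table.

```python
import sys
import termios
import tty

# Common xterm/VT-style function-key sequences (illustrative assumption;
# the exact bytes differ between terminals).
FUNCTION_KEYS = {
    "\x1bOP": "F1",    # ESC O P
    "\x1bOQ": "F2",
    "\x1bOR": "F3",
    "\x1bOS": "F4",
    "\x1b[15~": "F5",  # ESC [ 1 5 ~
    "\x1b[17~": "F6",
    "\x1b[18~": "F7",
    "\x1b[19~": "F8",
}

def read_key(fd=0):
    """Read one keypress, gathering a whole escape sequence if one starts."""
    old = termios.tcgetattr(fd)
    tty.setraw(fd)  # raw mode: bytes arrive unbuffered and unechoed
    try:
        seq = sys.stdin.read(1)
        if seq == "\x1b":               # escape introduces a longer sequence
            seq += sys.stdin.read(1)
            if seq == "\x1bO":          # SS3 form, e.g. ESC O P for F1
                seq += sys.stdin.read(1)
            elif seq == "\x1b[":        # CSI form, e.g. ESC [ 1 5 ~ for F5
                ch = sys.stdin.read(1)
                seq += ch
                while ch.isdigit() or ch == ";":  # parameters, then final byte
                    ch = sys.stdin.read(1)
                    seq += ch
        return FUNCTION_KEYS.get(seq, seq)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

if __name__ == "__main__":
    print("Press function keys (q to quit)")
    while True:
        key = read_key()
        if key == "q":
            break
        print("received:", repr(key))
```

Run in an xterm-compatible terminal, pressing F1 would print "received: 'F1'", while a key whose sequence is not in the table is echoed back raw, which mirrors the behaviour the lead describes: the terminal only delivers bytes, and it is up to the application (or a library such as curses) to interpret them.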
38892252
https://en.wikipedia.org/wiki/TRS-80%20%28disambiguation%29
TRS-80 (disambiguation)
TRS-80 is the name of Tandy Corporation's original 1977 microcomputer system (also known as the Model I). The TRS-80 brand was also later applied to many different computers sold by Tandy, including several that were unrelated in design to the Model I. Computers TRS-80 Model I (1977), the original TRS-80 Micro Computer System TRS-80 Model III (1980), improved and compatible replacement for the Model I TRS-80 Model 4 (including Model 4P), successor to the Model III TRS-80 Model II (1979), small-business oriented microcomputer, not related to the original Model I TRS-80 Model 12 (1982), successor to the Model II TRS-80 Model 16 (including Model 16B), successor to the Model 12 TRS-80 Color Computer ("CoCo") (1980), a Motorola 6809-based line of computers TRS-80 MC-10 (1983), a short-lived, low-end hobbyist-oriented computer TRS-80 Model 100 (1983), an early portable computer TRS-80 Pocket Computer, a series of rebadged pocket computers manufactured by Sharp and Casio for Tandy TRS-80 Pocket Computer PC-1, the original model in the line (a rebadged Sharp PC-1211) TRS-80 Pocket Computer PC-2, a rebadged Sharp PC-1500 TRS-80 Pocket Computer PC-3, a rebadged Sharp PC-1251 Other TRS-80 (group), an electronic music group formed in Chicago in 1997 See also Video Genie, a line of TRS-80 Model I clones sold as the TRZ-80 in South Africa List of TRS-80 clones
1457953
https://en.wikipedia.org/wiki/Timeworks%20Publisher
Timeworks Publisher
Timeworks Publisher was a desktop publishing (DTP) program produced by GST Software in the United Kingdom. It is notable as the first affordable DTP program for the IBM PC. In appearance and operation, it was a Ventura Publisher clone, but it was possible to run it on a computer without a hard disk. Versions Timeworks Desktop Publisher Timeworks Publisher 1 for Atari TOS relied on the GDOS software components, which were available from Atari but were often distributed with applications that required them. GDOS provided TOS/GEM with a standardized method for installing printer drivers and additional fonts, although these were limited to bitmapped fonts in all but the later releases. GDOS had a reputation for being difficult to configure, used a lot of system resources and was fairly buggy, meaning that Timeworks could struggle to run on systems without a hard disk and less than 2 MB of memory - but it was possible, and for many users Timeworks was an inexpensive introduction to desktop publishing. For the IBM PC, Timeworks ran on Digital Research's GEM Desktop (supplied with the program) as a runtime system. Later versions ran on Microsoft Windows. Timeworks Publisher 2 included full WYSIWYG, paragraph tagging, manual control of kerning, text and graphics imports and more fonts. Timeworks Publisher 2.1 with GEM/5 is known to have supported Bézier curves already. Acorn Desktop Publisher In mid-1988, following on from the release of GST's word processor, First Word Plus, Acorn Computers announced that it had commissioned GST to port and enhance the Timeworks product for the Archimedes series. Being designed for use with RISC OS, using the anti-aliased font technology already demonstrated on the Archimedes, utilising the multi-tasking capabilities of the RISC OS desktop environment, and offering printed output support for laser and dot-matrix printers, availability was deferred until the release of RISC OS in April 1989. The delivered product, Acorn Desktop Publisher, introduced Acorn's outline font manager and bundled 14 scalable fonts plus upgraded printer drivers (for Postscript-compatible and Hewlett-Packard Laserjet-compatible printers, plus Integrex colour inkjet printers) to provide consistent, high-quality output on screen and paper. Despite being described as "streets ahead" of Timeworks on the Atari ST, offering "real desktop publishing, not the pale imitation possible with a Master 128 or model B", being comparable to "mid-priced DTP packages on the Mac or IBM PC", the software was regarded as barely usable on a machine with 1 MB of RAM and no hard disk (Acorn recommended 2 MB to use the software alongside other applications), and the limitations in editing and layout facilities led one reviewer to note that at the £150 price level and with other desktop publishing packages (notably Computer Concepts' Impression, Beebug's Ovation, and Clares' Tempest) announced if not yet available, purchasers would be advised to "wait and see" before making any decision. Nevertheless, with competitors still unavailable in early 1990, Acorn User deemed to name it as the platform's best desktop publishing package, noting that there was "little available yet for Archimedes DTP, although much is on the way soon". Ultimately, Acorn would promote Impression as part of its Publishing System package. 
Of the other anticipated competitors, Ovation was released later in 1990, and succeeded by Ovation Pro in 1996, having been previewed in 1995, whereas Tempest was apparently never released, being absent from Clares' software catalogue. Curiously, Tempest was itself described as being "based on the Acorn DTP package" but aiming to remedy deficiencies and provide enhancements such as multi-column frames, "text flow around regular shapes", and improved text editing support, along with memory management facilities. Developed by a freelance programmer for Clares, a pre-release version was demonstrated in late 1989, apparently requiring only 128 KB of RAM, with work underway to optimise the display routines. A price of £129.95 including VAT was announced. Initially destined for an autumn 1989 release, it was postponed to an unspecified point in time in September 1989 with the specification having changed, but hints of a 1990 release were subsequently made in early 1990. Although a demo disk was apparently available, and the product was widely advertised, the product does not seem to have been completed. Clares later took over development of another Acorn product, the spreadsheet Schema, in 1990. Publish-It! In the US, Timeworks Inc. marketed the program as Publish-It!. Released in 1987, there were versions available for IBM PC (running over the GEM environment), Apple Macintosh, and Apple II (Enhanced IIe or better) computers. Further versions were named KeyPublisher 1.0 (versions 1.19 and 1.21) and produced by Softkey Software Products Inc. in 1991 for PCs with GEM. Another version, aimed at the business market, was named DESKpress. A later CD-based multilingual version for Windows was named Press International. Other names The product was also sold under other names including NEBS PageMagic (changed after objections from Adobe), Macmillan Publisher, Canon Publisher, and many other brands, distinguished by use of the .DTP file extension. The latest version was sold as Greenstreet Publisher 4 and is downwards file compatible with earlier versions. Releases 1987 - Timeworks Publisher (IBM PC, Atari ST) 1987 - Timeworks Publish-It! 1.12 (IBM PC GEM-based) 19?? - Publish-It! 1.19 by GST 1987 - Publish-It! (Apple IIe) 1988 - Acorn Desktop Publisher 1990 - Publish-It! 1.20 (IBM PC) 1990 - Publish-It! Easy 2.0 (Macintosh) 1991 - KeyPublisher 1 by softkey (IBM PC) 1991 - Timeworks Publisher 2 (IBM PC, Atari ST) GEM-based 1991 - Timeworks Publish-It! PC 2.00 (IBM PC) 1991 - Publish-It! Easy 2.1 (Macintosh) 1992 - Publish-It! Easy 2.1.9 (Macintosh) 199? - Timeworks Publisher 2.1 (IBM PC - GEM/5-based) 1992 - Timeworks Publisher 3 (IBM PC for Windows) 1994 - Timeworks Publish-It! 4 (Windows 3.1) 2009 - Publisher 4.6 Home & Business (Windows XP, Vista) See also Fleet Street Publisher PagePlus References 1987 software Atari ST software Desktop publishing software Discontinued software GEM software
39657114
https://en.wikipedia.org/wiki/Morten%20Middelfart
Morten Middelfart
Dr. Morten Middelfart (born October 2, 1970) is a Danish-born, American serial entrepreneur, inventor, and technologist. He is best known for inventing the Lumina Analytics Radiance AI platform, as well as the TARGIT software for business intelligence and analytics. Dr. Middelfart is currently the founder and Chief Data Scientist of Lumina Analytics, Advisory CIO of Genomic Expression, and founder of Social Quant. With seven U.S. patents for his work in business intelligence and analytics software, Dr. Middelfart holds the most patents of any Danish person working in software. Lumina Analytics and Radiance In 2015, Dr. Middelfart cofounded Lumina Analytics, which uses artificial intelligence and machine learning technologies to uncover corporate risk by sifting through massive amounts of information. Lumina's Radiance platform is a disruptive search technology that automates internet searches to identify risks and threats through initial vetting and continuous monitoring of employee or customer behavior. In 2020, Lumina was recognized by Goldman Sachs as one of the top 100 Most Intriguing Entrepreneurs in the World. During the global COVID-19 pandemic, Lumina assisted by directing its Radiance platform to help identify future hot-spots of the virus. In March 2021, Dr. Middelfart and Lumina were featured in a Forbes article addressing the global digital revolution and various countries' roles within that transformation. Dr. Middelfart discussed the vision of Lumina and the need to have the "end game" in mind from the beginning. Early Entrepreneurship Dr. Middelfart founded an analytics company, Morton Systems, in 1996, and became CTO of TARGIT following TARGIT's acquisition of his business in 1997. After the acquisition, TARGIT transitioned from reselling enterprise software systems to developing and selling business intelligence and analytics software. In 2014, Dr. Middelfart left TARGIT and founded Social Quant, a social media optimization service. In 2015, Dr. Middelfart joined Genomic Expression, which uses data analytics to offer treatment ideas for cancer patients. Patents and Inventions Dr. Middelfart holds seven U.S. patents (and 25 patents worldwide) for developments within business intelligence and analytics software, placing him among the top 1.8% of all active inventors. His inventions pertain primarily to the graphical representation of OLAP-structured data within a business intelligence and analytics platform, methods of retrieving data from the platform, and the processing of natural language and multi-lingual queries to the data warehouse through the platform. Dr. Middelfart is the most prolific Danish inventor in the field of database technology. Dr. Middelfart has one patent application pending with the U.S. Patent and Trademark Office, focused on the intelligent processing of user queries in natural language into a business intelligence platform. In addition to his patents, Dr. Middelfart's work incorporates the OODA loop process within business intelligence and data warehousing. The concept, as first developed by USAF Colonel John Boyd, describes the way individuals and organizations gather information, decide on a course of action based on that information, and carry out the decision. Academic Path Dr. Middelfart holds a PhD from Aalborg University, a PhD from Rushmore University, and an MBA from Henley Management College.
His research concentrates on "human-computer synergy", Big Data, and how computing power and human interaction can be used to make faster, more effective decisions in general. Dr. Middelfart is the author of two books arising from this research, namely CALM: Computer Aided Leadership & Management and Sentinel Mining. In addition, he is the co-author of the books Enabling Real-Time Business Intelligence and Business Intelligence. Dr. Middelfart's work has been cited in research dedicated to exploring knowledge base inspection. He has also published several peer-reviewed articles that outline his philosophy and technological developments in detail. Dr. Middelfart has been a guest lecturer at the Information Technologies for Business Intelligence Doctoral College, the Third European Business Intelligence Summer School, and Aalborg University. Personal life Dr. Middelfart is an active skydiver, with more than 1,700 skydives and BASE jumps to his name. He incorporates the experiences from skydiving into his OODA research when dealing with extreme emotions and stress. In addition, Dr. Middelfart has been a supporter of entrepreneurship in the movie industry, joining the board of Folio and becoming an early investor in the movie Inheritance. References 21st-century Danish businesspeople Danish emigrants to the United States Danish computer programmers 21st-century Danish inventors Skydivers Aalborg University alumni People from Hjørring 1970 births Living people Chief information officers
23223470
https://en.wikipedia.org/wiki/Pyaar%20Impossible%21
Pyaar Impossible!
Pyaar Impossible! () is a 2010 Indian Hindi-language romantic comedy film directed by actor-turned-director Jugal Hansraj under the banner of Yash Raj Films. It features Priyanka Chopra and Uday Chopra. The film stars Anupam Kher and Dino Morea in supporting roles. It is based on the 1991 Malayalam film Kilukkampetti. Pyaar Impossible! was released on 8 January 2010. This film marked the first time Priyanka Chopra worked under the Yash Raj Films banner. Plot In Ankert University, California, Alisha (Priyanka Chopra) is the most beautiful girl on campus with plenty of admirers. Awkward, nerdy Abhay (Uday Chopra) is in love with her, although she is unaware of his existence. One night Alisha is partying with her friends and accidentally falls into a river. Abhay jumps in and rescues her from drowning, but her friends take her away before she regains consciousness. Abhay is further prevented from seeing Alisha the next day when her outraged father comes and removes her from college. Abhay nurses dreams of Alisha for seven years as he moves on with his life. He invents a revolutionary software program that cross integrates all operating systems. He meets with an investor to try to sell it, and he excuses himself to call his father and ask for advice. While he is gone, the investor copies the files onto a drive and steals them. Abhay discovers that this investor is the unscrupulous software salesman Siddharth 'Siddhu' Singh (Dino Morea), and he is now marketing the stolen software to a Singaporean firm as his own invention. Abhay goes to Singapore to confront Siddhu and sees Alisha at the company headquarters where she works as its PR representative. Still besotted, he follows her home. Due to a misunderstanding, she mistakes him for a nanny she was expecting from an employment agency. She is divorced with an unruly daughter named Tanya (Advika Yadav) and in search of another nanny, as Tanya drives every one away. Abhay decides to become Tanya's nanny and keeps his identity a secret to stay close to Alisha. Abhay takes care of the house mostly by paying contractors to clean it and eventually wins over Tanya when he buys her the Rockband video game so she can become a rockstar. Tanya nicknames him "Froggy" because of his nerdy looks. Things become complicated when Siddhu shows up trying to romance Alisha and sells the stolen software to her company. Abhay finds out that Siddharth's real name is Varun Sanghvi and tries to hide from Varun even while he grows closer to Alisha. She confides in Abhay, and he dresses her up in glasses and old clothes to show her how differently people are treated when they appear to be unattractive. Alisha feels sorry for Abhay. As the launch date for the software approaches, Abhay is unmasked by Varun who claims he is a delusional stalker and was never a nanny. Alisha is angry that he lied and orders Abhay out of the house without giving him a chance to explain. She finds out from her daughter that Abhay is the mysterious person who rescued her in college; she realizes that when Abhay told her about the girl he loved in college for seven years, he was talking about her. Alisha finds Abhay and apologizes to him saying that she has fallen in love with him. Abhay tells her that he created the software Varun is taking the credit for. They rush to the software launch to stop Varun who is easily discredited when he doesn't know the password to Abhay's software. Abhay is able to prove that it is his creation by entering the password. 
Abhay tells Alisha that she already knows the password: It is her name: A-L-I-S-H-A. Alisha and Abhay live happily with Tanya. Cast Priyanka Chopra as Alisha Merchant Uday Chopra as Abhay Sharma Dino Morea as Varun Sanghvi / Siddharth "Sidhu" Singh Anupam Kher as Mr. Sharma, Abhay's father Advika Yadav as Tanya Merchant, Alisha's daughter Rahul Vohra as C.P. Saidah Jules Jugal Hansraj (Cameo Appearance) Nataliya Kozhenova Release The film was promoted on the show Music Ka Maha Muqqabla on 10 January 2010 by Priyanka Chopra and Uday Chopra on STAR Plus and on Bigg Boss on 19 December 2009 by the same pair. Box office Pyaar Impossible! made about Rs. 5 crore and was declared as another box office failure for Uday Chopra. Soundtrack The soundtrack of Pyaar Impossible! was composed by Salim–Sulaiman. The music was released on 14 December 2009. The lyrics are penned by Anvita Dutt Guptan and the songs are remixed by Abhijit Vaghani. Track listing References External links 2010s Hindi-language films 2010 romantic comedy films 2010 films Indian films Yash Raj Films films Films shot in Singapore Films set in Singapore Hindi remakes of Malayalam films Indian romantic comedy films
13629727
https://en.wikipedia.org/wiki/Uswsusp
Uswsusp
uswsusp, abbreviated from userspace software suspend and stylized as µswsusp, is a set of userspace command-line utilities for Linux that act primarily as wrappers around the Linux kernel's hibernation functionality and implement sleep mode (the s2ram utility, referred to as "suspend to RAM"), hibernation (the s2disk utility, referred to as "suspend to disk"), and hybrid sleep (the s2both utility, referred to as "suspend to both"). It supports Linux kernel versions 2.6.17 and newer. uswsusp supports image checksumming, data compression, disk encryption, and integration with Splashy and fbsplash. References External links Launchpad page Linux Linux-only free software
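To make the division of labour described above concrete, here is a minimal sketch, written in Python purely for illustration, of how a script might select the uswsusp helper that corresponds to each suspend mode and invoke it. It assumes the s2ram, s2disk and s2both binaries named in the article are installed and on the PATH, and that the script is run as root, which these tools normally require; error handling is deliberately minimal.

```python
import os
import shutil
import subprocess
import sys

# Map the suspend modes described above to the uswsusp helper programs.
SUSPEND_TOOLS = {
    "ram": "s2ram",    # suspend to RAM (sleep)
    "disk": "s2disk",  # suspend to disk (hibernation)
    "both": "s2both",  # hybrid sleep
}

def suspend(mode: str) -> int:
    """Invoke the uswsusp utility for the given mode; returns its exit code."""
    tool = SUSPEND_TOOLS.get(mode)
    if tool is None:
        raise ValueError(f"unknown suspend mode: {mode!r}")
    if shutil.which(tool) is None:
        raise FileNotFoundError(f"{tool} not found; is uswsusp installed?")
    if os.geteuid() != 0:
        print("warning: the uswsusp tools normally need root", file=sys.stderr)
    return subprocess.call([tool])

if __name__ == "__main__":
    # e.g. `python suspend.py disk` to hibernate; defaults to suspend-to-disk.
    sys.exit(suspend(sys.argv[1] if len(sys.argv) > 1 else "disk"))
```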
22776814
https://en.wikipedia.org/wiki/Dan%20Galorath
Dan Galorath
Daniel D. Galorath is an American software developer, businessman and author. Galorath is the President and CEO of Galorath Incorporated and one of the chief developers of the project management software known as SEER-SEM. He is also the co-author of Software Sizing, Estimation, and Risk Management. Education Galorath graduated with a bachelor's degree from California State University, and in 1980, he also received an MBA in management from California State University. Career Following college, Galorath worked in software development with a focus on software management. He began working in the aerospace and defense industries. One of his earliest projects was working with Don Reifer on the creation of NASA's Jet Propulsion Laboratory's Softcost program for Robert Tauseworth. In 1979, Galorath founded Galorath, Inc. as a software development consulting organization. In 1984, Galorath began consulting for Computer Economics, Inc. It was in that consulting role that Galorath became familiar with Dr. Randall Jensen's modifications to the Putnam model. Because the work was not usable in a commercial environment, Galorath worked to design a more user-friendly software estimation program, which was known as CEI System-3. By 1988, his company Galorath Inc. had introduced what would come to be known as SEER-SEM. SEER-SEM built on the work Galorath had done on Jensen's model and added features such as a graphical user interface. These advances made SEER-SEM an application that project managers could use to better estimate the needs of their software applications. Since its inception, Galorath's SEER-SEM has been used by companies ranging from aircraft manufacturers Lockheed Martin and Northrop Grumman, to electronics manufacturer Siemens, Bell Helicopter, GKN Aerospace, and even the United States Department of Defense. In 2001, Galorath received the International Society of Parametric Analysts (ISPA) Freiman Award for lifetime achievement in parametric modeling. In 2006, Galorath and Michael W. Evans collaborated on Software Sizing, Estimation, and Risk Management, a book about software estimation. Galorath received a lifetime achievement award in 2009 from the Society of Cost Estimating and Analysis. Galorath continues to serve as the chief executive officer of Galorath Inc., which is headquartered in El Segundo, California. He is also a member of the Board of Directors of the non-profit organizations Book of Mormon Central, the John A. Widtsoe Foundation, and the ISBSG (International Software Benchmarking Standards Group). Galorath has also published several scholarly articles about software engineering and estimation. Personal Galorath is the father of five. He is married and lives in Ponte Vedra Beach, Florida. He places a large emphasis on being physically fit and health conscious. His diet and exercise regimen was featured in The Wall Street Journal's health section. Work JPL Softcost: software estimation model developed for NASA's Jet Propulsion Laboratory CEI System-3: software estimation model based on Dr. Randall Jensen's modifications to the Putnam model SEER-SEM: application used to estimate resources needed for developing software Galorath, D. D., Evans, M. W. (2006). Software Sizing, Estimation, and Risk Management: When Performance is Measured Performance Improves. United Kingdom: CRC Press. ISBN 9781420013122 References External links Galorath, Inc.
American technology chief executives Living people California State University alumni People from Los Angeles County, California Year of birth missing (living people)
773841
https://en.wikipedia.org/wiki/Quarterdeck%20Office%20Systems
Quarterdeck Office Systems
Quarterdeck Office Systems, later Quarterdeck Corporation (NASDAQ: QDEK), was an American computer software company. It was founded by Therese Myers and Gary Pope in 1981 and incorporated in 1982. Their offices were initially located at 150 Pico Boulevard in Santa Monica, California and later at 13160 Mindanao Way in Marina del Rey, California, as well as a sales and technical support unit located in Clearwater, Florida. In the 1990s, they had a European office in Dublin, Ireland. Their most famous products were the Quarterdeck Expanded Memory Manager, DESQview, CleanSweep, DESQview/X, Quarterdeck Mosaic, Manifest and Partition-It. On April 18, 1989, Quarterdeck was awarded a US software patent that allowed multiple windowed PC applications under MS-DOS. After sales and its stock plummeted in 1995, interim CEO King R. Lee hired Gaston Bastiaens as CEO. In order to diversify the company's product offerings, Bastiaens began an ultimately unsuccessful acquisition spree. In 1995, the company acquired Landmark Research International Corp. for 3.5 million shares of Quarterdeck (acquiring MagnaRAM and WinProbe) and then Inset Systems, Inc. of Brookfield, Connecticut in September of that year for 933,000 shares of Quarterdeck (acquiring HiJaak graphics software in the deal). In March 1996, Quarterdeck acquired Datastorm Technologies, Inc., publishers of PROCOMM and PROCOMM PLUS, and relocated its technical support and development operations from California and Florida, to Datastorm's Columbia, Missouri headquarters. In July 1996, Quarterdeck acquired Vertisoft Systems, publishers of the DoubleDisk and Fix-It utilities, also for 3.5 million shares of Quarterdeck. Both Landmark and Vertisoft had extensive revenues from direct-marketing of third party products through telemarketing and direct mail. Bastiaens resigned in August 1996, and Quarterdeck continued under acting Co-CEOs King R. Lee, and Anatoly Tikhman, the former CEO of Vertisoft. The company announced a restructuring and a loss, and in January 1997, Quarterdeck hired Curtis Hessler to run the company. In 1998, with its DOS utilities market all but collapsed, Quarterdeck was acquired for $0.52 per share by Symantec (the Norton Utilities company), which discontinued support of some Quarterdeck products, e.g., Mosaic, and integrated others into larger offerings, e.g., CleanSweep, which became part of Norton SystemWorks. List of software products CleanSweep DESQ, predecessor to DESQview DESQview (DESQ successor, with IBM TopView compatibility) DESQview 386 (DESQview that shipped bundled with QEMM 386) DESQview/X (an X based version of DESQview) GameRunner GlobalChat IRC client GlobalStage IRC server (run on irc.scifi.com until 2003) HiJaak Graphics Suite MagnaRAM Manifest Partition-It! Quarterdeck Expanded Memory Manager, formerly QEMM 386 Quarterdeck InternetSuite Quarterdeck Message Center Quarterdeck Mosaic Quarterdeck Sidebar QRAM, an Intel 80286-based expanded memory manager TotalWeb ViruSweep Quarterdeck WebAuthor for Word WebCompass, an early metasearch tool WebStar (via StarNine) WebTalk Files.com References External links Merger announcement Usenet group for DESQview SEC 10-K form for 9/30/96 Floor plan and Pictures of the Quarterdeck Building located in Columbia, Missouri that was purchased by the University of Missouri. Software companies established in 1982 Defunct software companies of the United States Defunct companies based in California NortonLifeLock acquisitions 1982 establishments in California
22179748
https://en.wikipedia.org/wiki/Emmelia
Emmelia
Emmelia is a genus of bird dropping moths in the family Noctuidae, found primarily in Africa and the Palearctic. Taxonomy Emmelia is a name available as a genus name for a group of butterflies of the subfamily Acontiinae (Noctuidae). However, its status is uncertain. The name Emmelia is also known as subgenus in the genus Acontia and it is sometimes considered a synonym of Acontia. Species These 36 species belong to the genus Emmelia: Emmelia albovittata Hacker, 2013 (Africa) Emmelia amarei Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia atripars Hampson, 1914 (Africa) Emmelia bellula (Hacker, 2010) Emmelia bethunebakeri Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia binominata (Butler, 1892) Emmelia callima Bethune-Baker, 1911 (Africa) Emmelia citrelinea Bethune-Baker, 1911 (Africa) Emmelia dichroa Hampson, 1914 (Africa and temperate Asia) Emmelia eburnea Hacker, 2010 Emmelia esperiana Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia fascialis (Hampson, 1894) Emmelia fastrei Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia fuscoalba (Hacker, 2010) Emmelia homonyma Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia karachiensis Swinhoe, 1889 (Africa and temperate Asia) Emmelia lanzai (Berio, 1985) (Africa) Emmelia manakhana Hacker, Legrain & Fibiger, 2010 (temperate Asia) Emmelia mascheriniae (Berio, 1985) (Africa) Emmelia mineti Hacker, 2011 (Africa) Emmelia notha Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia nubila Hampson, 1910 (Africa and temperate Asia) Emmelia obliqua Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia paraalba Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia philbyi (Wiltshire, 1988) (temperate Asia) Emmelia praealba Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia purpurata Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia robertbecki Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia saldaitis Hacker, Legrain & Fibiger, 2010 Emmelia schreieri Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia semialba Hampson, 1910 (Africa and temperate Asia) Emmelia stassarti (Hacker, 2010) Emmelia subnotha (Hacker, 2010) Emmelia szunyoghyi Hacker, Legrain & Fibiger, 2010 (Africa) Emmelia trabealis (Scopoli, 1763) (spotted sulphur) (temperate Asia and Europe) Emmelia veroxanthia Hacker, Legrain & Fibiger, 2010 (Africa) References Emmelia at Markku Savela's Lepidoptera and Some Other Life Forms Natural History Museum Lepidoptera genus database Acontiinae
289724
https://en.wikipedia.org/wiki/Interplay%20Entertainment
Interplay Entertainment
Interplay Entertainment Corp. is an American video game developer and publisher based in Los Angeles. The company was founded in 1983 as Interplay Productions by developers Brian Fargo, Jay Patel, Troy Worrell, and Rebecca Heineman, as well as investor Chris Wells. As a developer, Interplay is best known as the creator of the Fallout series and as a publisher for the Baldur's Gate and Descent series. History Interplay Productions Prior to Interplay, the company's founding developers—Brian Fargo, Troy Worrell, Jay Patel, and Rebecca Heineman—worked for Boone Corporation, a video game developer based in California. When Boone eventually folded, the four got together with investor Chris Wells and, believing they could create a company that was better than Boone, founded Interplay in October 1983. The first projects were non-original and consisted of software conversions and even some military work for Loral Corporation. After negotiations with Activision, Interplay entered a US$100,000 contract to produce three illustrated text adventures for them. Published in 1984, Mindshadow is loosely based on Robert Ludlum's Bourne Identity while The Tracer Sanction puts the player in the role of an interplanetary secret agent. Borrowed Time which features a script by Arnie Katz' Subway Software followed in 1985. These adventures built upon work previously done by Fargo: his first game was the 1981-published Demon's Forge. The same year, Interplay Productions, then contracted out by Electronic Arts, ported EA's Racing Destruction Set to the Atari 8-bit family of computers. The conversion, entirely coded by Rebecca Heineman, was released in 1986 via Electronic Arts for the United States and Ariolasoft for the European market. Interplay's parser was developed by Fargo and an associate and in one version understands about 250 nouns and 200 verbs as well as prepositions and indirect objects. In 1986, Tass Times in Tonetown followed. Interplay made a name for itself as a quality developer of role-playing video games with the three-part series The Bard's Tale (1985–1988), critically acclaimed Wasteland (1988) and Dragon Wars (1989). All of them were published by Electronic Arts. Interplay started publishing its own games, beginning with Neuromancer and Battle Chess, in 1988, and then moved on to publish and distribute games from other companies, while continuing internal game development. In 1995, Interplay published the hit game Descent, developed by startup Parallax Software. Interplay published several Star Trek video games, including Star Trek: 25th Anniversary for computers and for Nintendo Entertainment System and Star Trek: Judgment Rites. These games had later CD-ROM editions released with the original Star Trek cast providing voices. Interplay also published Starfleet Academy and Klingon Academy games, and Starfleet Command series, beginning with Star Trek: Starfleet Command. Another game, Star Trek: Secret of Vulcan Fury, was in development in the late 1990s but was never completed and much of its staff laid off due to budgetary cuts prompted by various factors. In 1995, after several years of delays, Interplay finally published its role-playing game Stonekeep. Other PC games released during the mid- to late 1990s included Carmageddon, Fragile Allegiance, Hardwar and Redneck Rampage. In 1997, Interplay developed and released Fallout, a successful and critically acclaimed role-playing video game set in a retro-futuristic post-apocalyptic setting. 
Black Isle Studios, a newly created in-house developer, followed with the sequel, Fallout 2, in 1998. Another successful subsequent Interplay franchise was Baldur's Gate, a Dungeons & Dragons game that was developed by BioWare and which spawned a successful expansion, sequel and spin-off series. The spin-off series started with Baldur's Gate: Dark Alliance; the game's success forged a sequel as well. Aside from Dark Alliance, Interplay published a few notable console series such as Loaded and the fighting game series ClayFighter and the games by Shiny Entertainment, MDK and Wild 9. Interplay Entertainment By 1998, the financial situation at Interplay was dire and the company was in bankruptcy court. To avert bankruptcy, Interplay went public on the NASDAQ stock exchange under the name Interplay Entertainment. Interplay continued to endure losses under Brian Fargo due to increased competition, less than stellar returns on Interplay's sports division and the lack of console titles. This forced Interplay to seek additional funding two years later with an investment from Titus Software, a Paris-based game company. Titus agreed to invest 25 million dollars in Interplay and a few months later this was followed up by an additional 10 million investment. Interplay also acquired a 49.9% ownership in publisher Virgin Interactive in February 1999. With this, Interplay would be able to distribute Virgin's games in North America, while Virgin would distribute Interplay's games in Europe. By 2001, Titus Interactive completed its acquisition of majority control of Interplay. Immediately afterwards, they shed most of Interplay's publisher functions and signed a long-term agreement under which Vivendi Universal Games would distribute Interplay's games. Eventually, Interplay founder Brian Fargo departed at the start of 2002 to found InXile Entertainment as Fargo's plan to change Interplay's main focus from PC gaming to console gaming failed. Herve Caen took over the role of CEO to perform triage and made several unpopular but arguably necessary decisions to cancel various projects, in order to save the company. Interplay sold Shiny Entertainment to Infogrames and several game properties while closing BlueSky Software. Due to a low share price, Interplay's shares were delisted from the NASDAQ in 2002 and now trade on the over the counter (OTC) market. Interplay's European operations were completely sold to Titus Interactive, which included their share of Virgin Interactive, which Titus renamed to Avalon Interactive in August 2003. With this, Titus had complete control over publishing and distributing Interplay's games in Europe under the Avalon Interactive name. On September 29, 2003, Interplay announced it had canceled its distribution deal with Vivendi Universal Games, due to Vivendi suing them for alleged breaches of the working agreement and failure of payment. On December 8, 2003, Interplay laid off the entire Black Isle Studios staff. The company was also involved in issues including debt. Feargus Urquhart later left Black Isle Studios and Interplay suffered a loss of US$20 million in that year. In 2005, Titus Interactive, S.A. filed for bankruptcy and closed down all their assets parts of which Interplay acquired. The bankruptcy of Titus led to Interplay being burdened with debt. Interplay faced bankruptcy again and was brought to bankruptcy court in 2006. 
To pay off creditors, the company altered its licensing agreement with Bethesda Software and then sold the Fallout IP to Bethesda Softworks in 2007. In September 2008, several games from Interplay's catalog were re-released on the digital distribution service GOG.com after being unavailable in retail distribution for years. In August 2013, Interplay acquired the remaining rights to the FreeSpace franchise for $7,500 after THQ went to bankruptcy court. In September 2016, Interplay announced its intent to sell off its intellectual property, composed of 70 games, working together with Wedbush Securities. Interplay is co-publishing, with 3D Realms, a remaster of Xatrix Entertainment's 1999 game Kingpin: Life of Crime, which was originally published by Interplay. Known as Kingpin: Reloaded, the game will be developed by Slipgate Ironworks. This was announced on January 17, 2020. In 2021 Interplay, via Black Isle Studios, re-released Baldur's Gate: Dark Alliance on modern consoles, and later that year also released a port of it on PC for the first time. Litigation In 2003 and 2004 Snowblind Studios and Interplay Entertainment were engaged in a dispute regarding the Dark Alliance Engine for Fallout: Brotherhood of Steel, Baldur's Gate: Dark Alliance II, and the GameCube version of the original Dark Alliance. The dispute was resolved and Interplay would be allowed to work with materials already using the Dark Alliance Engine. Bethesda Softworks sued Interplay in 2009, regarding the Fallout Online license and selling of Fallout Trilogy and sought an injunction to stop development of Fallout Online and sales of Fallout Trilogy. After several trials spanning almost three years, and in exchange for $2 million dollars, Interplay gave Bethesda the full rights for Fallout Online. Interplay's rights to sell and merchandise Fallout, Fallout 2, and Fallout Tactics: Brotherhood of Steel expired on December 31, 2013. In 2010, TopWare Interactive revealed that they were developing Battle vs. Chess to be published by SouthPeak Games. Interplay sued them and won an injunction to stop sales in the United States. In 2012, Interplay won the case via default and a settlement for $200,000 plus interest was agreed upon on November 15, 2012. Games Studios Interplay Discovery; a subdivision founded in 2010 aimed at publishing games made by independent video game developers. Black Isle Studios in Orange County, California, started in 1996. Defunct studios 14 Degrees East, the strategy division of Interplay, located in Beverly Hills and founded in 1999. BlueSky Software in California, started in 1988, closed in 2001. Brainstorm in Irvine, California. Digital Mayhem, an Interplay development studio that ported Giants: Citizen Kabuto to the PS2 and developed Run Like Hell. FlatCat Interplay Films, a division of Interplay Entertainment, was formed in 1998 and was supposed to develop seven of the company's most popular video game titles into movies, including Descent, Redneck Rampage, and Fallout. Its president was Tom Reed. Interplay Sports located in Beverly Hills was the internal sports division at Interplay. The division was founded in 1995 as VR Sports, but changed its name in 1998. MacPlay, ported games to Mac OS from 1990–1997. The brand was licensed to United Developers, LLC in 2000. Shiny Entertainment in Laguna Beach, California, founded in 1993, acquired in 1995, sold to Atari in 2002. It later merged with The Collective to form Double Helix Games in 2007. 
Tantrum Entertainment References External links American companies established in 1983 1983 establishments in California Brentwood, Los Angeles Companies based in Los Angeles Companies traded over-the-counter in the United States Video game companies based in California Video game companies established in 1983 Video game companies of the United States Video game development companies Video game publishers
36918646
https://en.wikipedia.org/wiki/London%203%20South%20West
London 3 South West
London 3 South West is an English rugby union league at the eighth level of club rugby union in England involving sides based in Hampshire, Surrey and south-west London. Promoted clubs move into London 2 South West. Relegated clubs move into either Surrey 1 or Hampshire Premier depending on their location, with sides coming up from these divisions, although only 1st XV clubs are allowed in London 3 South West. Each year all clubs in the division also take part in the RFU Senior Vase - a level 8 national competition. Teams for 2021–22 The teams competing in 2021-22 achieved their places in the league based on performances in 2019-20, the 'previous season' column in the table below refers to that season not 2020-21. Season 2020–21 On 30th October the RFU announced that a decision had been taken to cancel Adult Competitive Leagues (National League 1 and below) for the 2020/21 season meaning London 3 South West was not contested. Teams for 2019–20 United Services Portsmouth who finished 5th in 2018-19 were unable to fulfil their fixtures in and withdrew from the league in November 2019. Teams for 2018–19 Teams for 2017–18 Participating Clubs 2016-17 Battersea Ironsides Basingstoke (relegated from London 2 South West) Bognor (promoted from Hampshire 1) Eastleigh Farnham Milbrook (promoted from Hampshire 1) Old Cranleighans (promoted from Surrey 1) Old Tiffinians Teddington Trojans United Services Portsmouth Weybridge Vandals (relegated from London 2 South West) Participating Clubs 2015-16 Battersea Ironsides (promoted from Surrey 1) Camberley (promoted from Surrey 1) Eastleigh Farnham (relegated from London 2 South West) New Milton & District Old Tiffinians Old Mid-Whitgiftian Old Tonbridgians Purley John Fisher Teddington Trojans United Services Portsmouth (promoted from Hampshire 1 (winners)) Participating Clubs 2014-15 Andover (promoted from Hampshire 1) Eastleigh Ellingham & Ringwood London Exiles New Milton & District Old Tiffinians (promoted from Surrey 1) Old Mid-Whitgiftian Old Tonbridgians (promoted from Surrey 1) Purley John Fisher Sandown & Shanklin Teddington (relegated from London 2 South West) Trojans (relegated from London 2 South West) Participating Clubs 2013-14 Camberley (relegated from London 2 South West) Eastleigh Ellingham & Ringwood Farnham (promoted from Surrey 1 (winners)) KCS Old Boys (relegated from London 2 South West) London Exiles New Milton & District (promoted from Hampshire 1 (winners)) Old Cranleighans (promoted from Surrey 1 (play-off winners) Old Mid-Whitgiftian Old Wellingtonian Purley John Fisher Sandown & Shanklin Participating Clubs 2012-13 Bognor (relegated from London 2 South West) Eastleigh (promoted from Hampshire 1 (winners)) Ellingham & Ringwood London Exiles (promoted from Surrey 1 (winners)) Old Blues Old Mid-Whitgiftian (relegated from London 2 South East) Old Paulines (promoted from Surrey 1 (play-off winners)) Old Wellingtonians Purley John Fisher Sandown & Shanklin Weybridge Vandals Winchester Participating Clubs 2011-12 Alton Andover (promoted from Hampshire 1 (winners) Ellingham & Ringwood KCS Old Boys (relegated from London 2 South West) Old Alleynian Old Blues (promoted from Surrey 1 (play-off winners)) Old Freemans (promoted from Surrey 1 (winners)) Old Wellingtonians Purley John Fisher (relegated from London 2 South East) Sandown & Shanklin Weybridge Vandals (relegated from London 2 South West) Winchester Participating Clubs 2010-11 Alton (Promoted from Hampshire 1 (winners)) Camberley Ellingham & Ringwood Fordingbridge London South 
Africa (relegated from London 2 South West) Old Alleynian Old Wellingtonians Old Wimbledonians Petersfield (Promoted from Hampshire 1 (play-off winners)) Sandown & Shanklin Teddington (Promoted from Surrey 1 (winners)) Winchester (relegated from London 2 South West) Participating Clubs 2009-10 Andover Camberley Ellingham & Ringwood Fordingbridge (promoted from Hampshire 1 (winners)) Gosport & Fareham Kingston Old Alleynian Old Mid-Whitgiftian Old Paulines (promoted from Surrey 1 (winners) Old Wellingtonians Old Wimbledonians Sandown & Shanklin (promoted from Hampshire 1 (play-off winners)) Original teams When this division was introduced in 2000 (as London 4 South West) it contained the following teams: Barnes - relegated from London 3 South West (8th) Chobham - promoted from Surrey 1 (champions) Cobham - relegated from London 3 South West (7th) Cranleigh - relegated from London 3 South West (9th) Fawley - relegated from London 3 South West (14th) Old Alleynians - relegated from London 3 South West (15th) Purley John Fisher - relegated from London 3 South West (13th) Reeds Weybridge - relegated from London 3 South West (10th) Southampton - relegated from London 3 South West (11th) Tottonians - relegated from London 3 South West (12th) United Services Portsmouth - promoted from Hampshire 1 (champions) London 3 South West honours London 4 South West (2000–2009) Originally known as London 4 South West, this division was a tier 8 league with promotion up to London 3 South West and relegation down to either Hampshire 1 or Surrey 1. London 3 South West (2009–present) League restructuring by the RFU ahead of the 2009–10 season saw London 4 South West renamed as London 3 South West. Remaining as a tier 8 league promotion was to London 2 South West (formerly London 3 South West), while relegation continued to either Hampshire 1 or Surrey 1. Number of league titles Farnham (2) Winchester (2) Camberley (1) Chobham (1) Cobham (1) Dorking (1) Gosport & Fareham (1) London Exiles (1) Old Alleynians (1) Old Cranleighans (1) Old Reigatian (1) Purley John Fisher (1) Reeds Weybridge (1) Richmond (1) Teddington (1) Tottonians (1) Warlingham (1) Weybridge Vandals (1) Notes See also English rugby union system Rugby union in England References 8 4 Rugby union in Surrey
220355
https://en.wikipedia.org/wiki/Vaio
Vaio
VAIO is a brand of personal computers and consumer electronics, currently developed by Japanese manufacturer , headquartered in Azumino, Nagano Prefecture. VAIO () was originally a brand of Sony, introduced in 1996. In February 2014, Sony created VAIO Corporation Inc., a special purpose company with investment firm Japan Industrial Partners, as part of its restructuring effort to focus on mobile devices. Sony maintains a minority stake in the new, independent company, which currently sells computers in the United States, Japan, India, and Brazil, and maintains exclusive marketing agreements in other regions. Sony still holds the intellectual property rights for the VAIO brand and logo. Etymology Originally an acronym of Video Audio Integrated Operation, this was amended to Visual Audio Intelligent Organizer in 2008 to celebrate the brand's 10th anniversary. The logo, along with the first of the VAIO computers, were designed by Teiyu Goto, supervisor of product design from the Sony Creative Center in Tokyo. He incorporated many meanings into the logo and acronym: the pronunciation in both English (VAIO) and Japanese () is similar to "bio", which is symbolic of life and the product's future evolution; it's also near "violet", which is why most early Vaios were purple or included purple components. Additionally, the logo is stylized to make the "VA" look like a sine wave and the "IO" like binary digits 1 and 0, the combination representing the merging of analog and digital signals. The sound some Vaio models make when starting up is derived from the melody created when pressing a telephone keypad to spell the letters V-A-I-O. History As part of Sony Although Sony made computers in the 1980s, such as MSX-based HitBit computers mainly for the Japanese market, the company withdrew from the computer business around the beginning of the 1990s. Under the then-new VAIO brand, Sony's re-entry into the global computer market began in 1996. Sony's then-president Nobuyuki Idei thought "there was no point making an ordinary PC", so the VAIO lineup was to focus on Audio Visual (as the VAIO name suggests), portability, and design. The PCV-90 was the first series of desktops introduced in 1996, and designed with a 3D graphical interface as a novelty for new users. The first VAIO laptop computers followed in 1997 with the US$2,000 PCG-505 "SuperSlim" model, constructed out of a four-panel magnesium body. Over the years, many audio visual technologies and interfaces pioneered by Sony became a key focus for its VAIO computers, including Memory Stick, i.Link, and even MiniDisc. In 2001, Steve Jobs presented a Vaio PC running MacOS to Sony executives, suggesting the possibility of collaboration. Sony's Vaio team ultimately turned down the proposal they regarded a "diversion of resources", as the popularity of the Windows-based premium PC brand was growing. Sony Vaio's later designs were released during a period of low PC sales and included models with innovations such as magnetized stands and the Vaio Tap, which was designed with a completely separate keyboard. The latest models were complemented by the Windows 10 operating system. Spin-off from Sony On 4 February 2014, Sony announced that it would sell its Vaio PC business due to poor sales. In March 2014, it was announced that Japan Industrial Partners had purchased a 95% stake in the VAIO division. The sale was closed on 1 July 2014; on the same day, the company announced refreshed entries in the VAIO Fit and Pro lines. 
The re-launched products initially distributed in Japan, then later in Brazil. In August 2015, Vaio announced plans to re-enter international markets, beginning with Brazil and the United States. Vaio CEO Yoshimi Ota stated that the company planned to focus more on high-end products in niche segments (such as the creative industries), as they felt Sony was somewhat too focused on attempting to garner a large market share in its PC business. The Canvas Z tablet was released in the United States on 5 October 2015, through Microsoft Store and the Vaio website. On 16 October 2015, Vaio agreed to introduce their products in Brazil through a partnership with a local manufacturer Positivo Informática. On 2 February 2016, Vaio announced that it would unveil a Windows 10 smartphone. Also that month, it was also reported that Vaio was negotiating with Toshiba and Fujitsu Technology Solutions to consolidate their personal computer businesses together. On 4 June 2018, Nexstgo Company Limited announced that they will be licensed by VAIO Corporation to oversee the business in Asia. This license agreement between Hong Kong-based Nexstgo and the Japan-based VAIO Corporation will include manufacturing, sales and marketing as well as servicing of VAIO laptops under the VAIO trademark in the Hong Kong, Macau, Malaysia, Singapore and Taiwan markets. Currently in the US, VAIO business products are sold by Trans Cosmos America, Inc. Products Sony VAIO (1996 to 2014) Sony's VAIO brand included product lines across notebooks, subnotebooks, desktops, media centres, and even Network media solutions. Computers Sony's VAIO range of computers consisted of the following lineups: Desktops Desktops PCV series (1996-2005) Multimedia Desktops M series (1998-1999) MX series (2000, built-in FM radio, MiniDisc player and amplifier) Tablet PC Desktops LX series (2000-2008) Media Center PCs VGX-XL series (2005, audio receiver form factor) VGX-TP series (2007, cylindrical disc form factor) All-In-One Computers VGC series W series (2002-2006) VA series (2005-2006, 20", integrated TV tuner) L series (2006-2013, 15.4" or 19" touchscreen display, integrated TV tuner, Sony's Living Room PC) Vaio Tap 20 (2013, 20" touchscreen display) Vaio Tap 21 (2014, 21.5" 1920x1080 touchscreen display) Notebooks Ultraportable Premium 505 series (1997-2004, 10.4" or 12.1" display, external floppy and CD drives, originally called SuperSlim) 700 series (1997-1998, 12.1" display, external floppy and CD drives) 800 series (1998-1999, 13.3" display, external floppy and CD drives) TX series (2005-2007, 11.1" 1366x768 display, first laptop with 16:9 LED backlit display) TZ series (2007-2008, 11.1" 1366x768 display) TT series (2008-2010, 11.1" 1366x768 display) SZ series (2006-2008, 13.3" 1280x800 display, switchable graphics) Z series (2008-2014, 13.1" display, switchable graphics) Ultraportable Mainstream SR series (2001, 10.4” SVGA display, circular trackpad) SRX series (2001, 10.4” 1024x768 display, circular trackpad) TR series (2003, 10.6" 1280x768 display) VX series (2002, 10.4" or 12.1" display) SR series (2008-2010, 13.3" 1280x800 display) S series (2010-2013, 13.3" 1600x900 display) T series (2012-2014, 13.3" 1366x768 display) Y series (13.3" 1366x768 display, no optical drive) Ultraportable Netbooks G series (2007, 12.1" 1024x768 display, Intel Core processor) M series (2008, 10.1" 1024x600 display, Intel Atom processor) W series (2009, 10.1" 1366x768 display, Intel Atom processor) X series (11.1" 1366x768 display, Intel Atom processor) Consumer, 
Home & Work F series (1999-2000, 13.0" or 14.1" 1024x768 display, desktop replacement) FX/FXA series (2001-2003, 14.1" display, desktop replacement) XG/XE/XR series (1999-2001, 13.3" or 14.1" 1024x768 display, modular DVD/CD-RW/Floppy/2nd battery/2nd hard drive bay) QR series (2001, 13.3” 1024x768 display) FRV series (2003, 15" 1024x768 display, desktop replacement) GRX series (2002, 15” 1024x768 or 16.1" 1600x1200 display, desktop replacement) GRZ series (2003, 15” 1024x768 display, desktop replacement) NV/NVR series (2002-2005, 15" 1024x768 or 1440x1050 display, modular Floppy/MiniDisc/Numeric Keypad/Compact Subwoofer bay) B series (2004) BX series (2005, 14.1" display) FJ series (2005, 14.1" display) C series (13.3" 1280x800 display, choice of colors) NR series (2007, 15" 1280x800 display) E series (2010, 15.5" or 17.3" display, choice of colors) XE series (2011, 15.5" 1920x1080 display) Vaio Fit 14 & 15 (2013, 14" or 15" touchscreen laptop, SVF) Vaio Duo (2013, 13.3" hybrid touchscreen laptop, SVD) Vaio Tap 11 (2013, 11.6" touchscreen convertible, SVT) Multimedia A series (2004, 17" 1920x1200 display) AX series (2005, 17" 1440x900 display) AR series (2006, 17" 1440x900 or 1920x1200 display, first with BD-R drive) AW series (2008, 18.4" 1680x945 or 1920x1200 display) Portable Entertainment FS series (2005-2006, 15.4" 1280x800 display) FE series (2006-2007, 15.4" 1280x800 display) FZ series (2007-2008, 15.4" 1280x800 display) FW series (2008-2010, 16.4" 1920x1080 display) F series (2010, 16.4" 1920x1080 display) NW series (2009, 15.4" 1366x768 display) Lifestyle & UMPC Subnotebooks C1 series (1998-2003, 8.9" 1024x480 display, branded as PictureBook) GT series (2001, Japan only, 6.4" display, built-in digital camera) U series (2002-2004, 6.4" or 7.1" 1024x768 display) UX series (2006, 4.5" 1024x600 display, Sony's first UMPC) P series (2009-2010, 8" 1600x768 display) Experience Included as part of the out-of-box experience are prompts to register at Club Vaio, an online community for Vaio owners and enthusiasts, which also provides automatic driver updates and technical support via email, along with exclusive desktop wallpapers and promotional offers. From 1997 to 2001 in Japan, the SAPARi program was also pre-installed on Vaio machines. On later models, the customer is also prompted to register the installed trial versions of Microsoft Office 2010 and the antivirus software (Norton AntiVirus on older models, and McAfee VirusScan or TrendMicro on newer ones) upon initial boot. Vaio computers come with components from companies such as Intel processors, Seagate Technology, Hitachi, Fujitsu or Toshiba hard drives, Infineon or Elpida RAM, Atheros and Intel wireless chipsets, Sony (usually made by Hitachi) or Matsushita optical drives, Intel, NVIDIA or AMD graphics cards and Sony speakers. Recent laptops have been shipped with Qimonda RAM, HP speakers with Realtek High Definition Audio Systems, and optional Dolby Sound Room technology. A selection of media centres were added to the Vaio range in 2006. These monitorless units (identified by a product code prefixed by VGX rather than VGN) are designed to form part of a home entertainment system. They typically take input from a TV tuner card, and output video via HDMI or composite video connection to an ideally high-definition television. The range included the XL and TP lines. 
The VGX-TP line is visually unique, featuring a circular, 'biscuit-tin' style design with most features obscured behind panels, rather than the traditional set-top box design. In 2013, Sony Vaio's range comprised seven products. The most basic were the E, T and S series, while the high-end models, the F and Z Series, were discontinued. Sony also had a range of hybrid tablet computers, with models called Vaio Duo 11/13, Vaio Tap 11/20 and Vaio Fit multi-flip, as well as a desktop computer under the L series. These models use Windows systems and Intel processors, as described above. Portable music players Sony released some of their early digital audio players (DAP) under the Vaio line. The first model, the "VAIO Music Clip", was released in 1999, powered by an AA battery and featuring 64 MB of internal memory. It differed from Sony's players in the "Network Walkman" line, which at the time used external Memory Stick media instead. Succeeding models were also released, but the line was mainly sold domestically, with Walkman-branded players more widespread internationally. In 2004 the brand made a comeback with the VAIO Pocket (model VGF-AP1L), featuring a 40 GB hard disk drive for up to 26,000 songs, and a 2.0-inch color LCD display. Like Walkman DAPs it used SonicStage software. Music streamers Sony had also released several other products under the VAIO lineup, including the VAIO WA1 wireless digital music streamer, essentially a portable radio and speaker. VAIO (2014 to present) The current lineup of Vaio computers, developed by VAIO Corporation, continues the same product line naming and currently includes: Vaio Z Vaio SX14 Vaio SX12 Vaio FH14 Z Canvas The first new VAIO computer developed by VAIO Corporation was the Vaio Z Canvas 2-in-1 PC, which began sales on 23 September 2015 starting from $2,199 in the USA. The Z Canvas targets creative professionals such as graphic artists, illustrators and animators. With a 12.3-inch LCD WQXGA+ 2560 x 1704 IPS multi-touch display with digitizer stylus (pen) capability, the Z Canvas looks similar in design to the Microsoft Surface Pro 3, but comes with Windows 10 Pro and is available as a Microsoft Signature PC. It has an Intel Core i7 processor, Intel Iris Pro Graphics 5200, a 2nd generation PCIe SSD with PCIe Gen.3 compatibility (up to 1 TB) or SATA/M.2 for the 256 GB model, and up to 16 GB of memory. Smartphones In February 2016, Vaio announced the Vaio Phone Biz, a premium-built mid-range Windows 10 Mobile device and Vaio's first Windows smartphone. In March 2017, Vaio announced the Vaio Phone A, which shares the design of the Vaio Phone Biz but runs the Android operating system instead. Technology Innovations Over the years, the Sony VAIO lineup has been responsible for many 'firsts' in desktops and laptops, as well as for setting trends for what would now be considered standard equipment. Integrated webcam The Sony VAIO C1 PictureBook subnotebook, first released in 1998, was among the first to feature a built-in webcam; the 0.27-megapixel camera could swivel around to capture photos on both sides. Chiclet keyboards The Sony VAIO X505 laptop, released in 2004, popularized the chiclet keyboard in laptops. Displays Some Sony Vaio models come with Sony's proprietary XBRITE (known as ClearBright in Japan and the Asia-Pacific region) displays. The first model to introduce this feature was the Vaio TR series, which was also the first consumer product to utilize such technology. 
It is a combination of a smooth screen, an anti-reflection (AR) coating and a high-efficiency lens sheet. Sony claims that the smooth finish provides a sharper screen display, the AR coating prevents external light from scattering when it hits the screen, and the high-efficiency lens sheet provides 1.5 times the brightness of traditional LCD designs. Battery life is also extended through reduced usage of the LCD backlight. The technology was pioneered by Sony engineer Masaaki Nakagawa, who was in charge of the Vaio TR development. The TX series, introduced in September 2005, was the first notebook to implement an LED-backlit screen, which provides lower power consumption and greater color reproduction. This technology has since been widely adopted by many other notebook manufacturers. The TX series was also the first to use a 16:9 aspect ratio screen with 1366x768 resolution. The successor to the TX series was the TZ series in May 2007. This new design featured an optional 32 or 64 GB Solid State Drive (SSD) for rapid boot-up times, quicker application launches and greater durability. Alternatively, a 250 GB hard drive could be selected in place of the built-in CD/DVD drive to provide room for additional storage. For security, this model included a biometric fingerprint sensor and a Trusted Platform Module. The TZ offered a highly miniaturized Motion Eye camera built into the LCD panel for video conferencing. Additional features included the XBRITE LCD, integrated Wireless Wide Area Network (WWAN) technology and Bluetooth technology. Switchable graphics The SZ series was the first to use switchable graphics – the motherboard contained an Intel GMCH (Graphics Memory Controller Hub) featuring its own in-built graphics controller (complete memory hub controller and graphics accelerator on the one die) and a separate NVIDIA graphics accelerator chipset directly interfaced with the GMCH. The GMCH could be used to reduce power consumption and extend battery life, whereas the NVIDIA chipset would be used when greater graphics processing power was needed. A switch was used to toggle between the graphics options, but it required the user to preselect the mode to be used before the motherboard could initialize. The Z series, which replaced the SZ series, can change graphics modes "on the fly" on Windows Vista and does not require a restart of the system. This feature has subsequently been used by other manufacturers, including Apple, Asus and Alienware. Blu-ray The AR Series was the first to incorporate a Blu-ray Disc burner, at the height of the Blu-ray vs. HD DVD format war. This series was designed to be the epitome of high-definition products, including a 1080p-capable WUXGA (1920 × 1200 pixels) screen, HDMI output and the aforementioned Blu-ray burner. The AR series also includes an illuminated logo below the screen. Blu-ray/HDMI capable models have been the subject of intense promotion since mid-2007, selling with a variety of bundled Blu-ray Discs. The AR series was subsequently replaced by the AW series and, in 2011, by the F Series, which incorporates all of these features in a 16.4" 16:9 display. Startup Chime The chime heard when a VAIO computer boots consists of the DTMF notes corresponding to V-A-I-O (8-2-4-6) dialed on a telephone keypad. Bundled Software Sony has been criticized for loading its Vaio laptops with bloatware, or ineffective and unrequested software that supposedly allows the user to immediately use the laptop for multimedia purposes. 
This includes trial versions of Adobe Premiere Elements & Adobe Photoshop Elements with Vaio Media Gate and XMB. Sony later offered a "Fresh start" option in some regions with several of their business models. With this option, the computer is shipped only with a basic Windows operating system and very little trial software already installed. The default webcam software in Vaio notebooks is ArcSoft WebCam Companion. It offers a set of special effects called Magic-i visual effects, through which users can enhance the images and videos taken through the webcam. It also includes a face detection feature. Certain other Sony proprietary software, such as Click to Disc Editor, Vaio Music Box, Vaio Movie Story and Vaio Media Plus, is also included with recent models. Those shipped with ATI Radeon video cards feature the Catalyst Control Centre, which enables control of brightness, contrast, resolution etc., and also enables connection to an external display. Recovery Media Early Sony VAIO models included recovery media in the form of CDs and/or DVDs. Beginning in mid-2005, a hidden partition on the hard drive, accessible at boot via the BIOS or within Windows via a utility, was used instead. Pressing [F10] at the Vaio logo during boot-up causes the notebook to boot from the recovery partition, where the user has the choice of either running hardware diagnostics without affecting the installed system, or restoring (re-imaging) the hard drive to factory condition – an option that destroys all user-installed applications and data. The first time a new VAIO PC is started up, users are prompted to create their own recovery media. This physical media would be required in case of hard disk failure and/or replacement. In cases where the system comes with Windows 7 64-bit pre-installed, the provided recovery media restores the system to Windows 7 32- or 64-bit. Explanatory notes See also Sony NEWS Splashtop covers Vaio Quick Web Access References External links Official Website (Japan) Official Website (United States) Sony US VAIO Product Support Sony VAIO UK Sony VAIO India Official Website (Hong Kong) Official Website (Taiwan) Official Website (Singapore) Official Website (Malaysia) Sony products Consumer electronics brands Computer-related introductions in 1996 Electronics companies established in 2014 Computer companies established in 2014 Japanese companies established in 2014 Companies based in Nagano Prefecture Corporate spin-offs Japanese brands
16092632
https://en.wikipedia.org/wiki/Rhizome%20%28organization%29
Rhizome (organization)
Rhizome is an American not-for-profit arts organization that supports and provides a platform for new media art. History Artist and curator Mark Tribe founded Rhizome as an email list in 1996 while living in Berlin. The list included a number of people Tribe had met at Ars Electronica. By August, Rhizome had launched its website, which by 1998 had developed a significant readership within the Internet art community. Originally designated a business, Rhizome became a nonprofit organization in 1998, switching to the domain-name suffix ".org". In an interview with Laurel Ptak for the Bard Center for Curatorial Studies and Art in Contemporary Culture Archive, Tribe explained: "I thought of it as Artforum meets AltaVista (AltaVista was one of the first web search engines), as a kind of bottom-up alternative to the top-down hierarchies of the art world." Rhizome established an online archive called the ArtBase in 1999. The ArtBase was initially conceived exclusively as a database of net art works. Today, the scope of the ArtBase has expanded to include other forms of art engaged with technology, including games, software, and interdisciplinary projects with online elements. The works are submitted by the artists themselves. In addition to hosting archived work, Rhizome's digital preservation work includes conservation of digital art and updating obsolete code. In 2003, Rhizome became affiliated with the New Museum of Contemporary Art in New York City. Today, Rhizome's programs include events, exhibitions at the New Museum and elsewhere, an active website, and an archive of more than 2,000 new media artworks. This relationship has been contentious at times, with Rhizome members citing the museum's toxic working environment, including verbal harassment and abuse. The organization has published one book with Link Editions, "The Best of Rhizome 2012", edited by former editor Joanne McNeil. In 2015, the organization relaunched rhizome.org with a new design created by Wieden+Kennedy. Digital Preservation Program Rhizome operates a digital preservation program, led by Dragan Espenschied, which is focused on the creation of free, open source software tools to decentralize web archiving and software preservation practices and ensure continuing access to its collections of born-digital art. ArtBase Founded in 1999, the Rhizome ArtBase is an online archive of new media art containing some 2,110 art works. The ArtBase encompasses a vast range of projects by artists all over the world that employ materials including software, code, websites, moving image, games and browsers to aesthetic and critical ends. Web archiving In response to the needs of the ArtBase—as well as to the increasing number of artists creating works on social media platforms and as interactive websites—in 2014 Rhizome began a program to develop open source web archiving tools that could both serve its mission and a broader community of users. Rhizome launched the social media archiving tool Colloq in 2014, which works by replicating the interface of social media platforms. Amalia Ulman's Instagram project "Excellences and Perfections" (2014) was the first social media artwork archived with Colloq. Colloq pays special attention to the way a user interacts with the social media interface at the time of creation, using a technique called "web capturing" to store website behaviors. The tool was developed by Ilya Kremer and Rhizome's Digital Conservator Dragan Espenschied. 
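Colloq's internal implementation is not described in detail here; as a rough, hedged illustration of the record-and-replay idea behind such web archiving tools, the following Python sketch writes a single captured HTTP response into a WARC file using the open-source warcio library (a general-purpose tool, not Rhizome's own software). The URL and response body are placeholders standing in for whatever a real capture session observed.

    from io import BytesIO
    from warcio.warcwriter import WARCWriter
    from warcio.statusandheaders import StatusAndHeaders

    # Placeholder capture data: a real session would record the URL, status
    # line, headers and body seen while a user browsed the page.
    url = 'http://example.com/'
    body = b'<html><body>archived page</body></html>'

    with open('capture.warc.gz', 'wb') as output:
        writer = WARCWriter(output, gzip=True)
        http_headers = StatusAndHeaders('200 OK',
                                        [('Content-Type', 'text/html')],
                                        protocol='HTTP/1.0')
        record = writer.create_warc_record(url, 'response',
                                           payload=BytesIO(body),
                                           http_headers=http_headers)
        writer.write_record(record)

On replay, an archive viewer serves the stored response whenever the browser asks for that URL, instead of contacting the live web.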
In 2015, Rhizome unveiled its archive of the influential art blog VVORK, marking the first time Colloq was used to archive an entire website. Archiving VVORK allowed Rhizome to tackle the challenge of archiving embedded video content, which is often hosted on a third-party site. The website had been previously archived by the Internet Archive, but this recording did not include embedded media like videos, which Colloq was built to capture. Of the tool, Jon Ippolito, professor of new media at the University of Maine, said it brings archives "as close as possible" to the original: "you’re going to get the experience of interacting with the actual site." In 2015, Rhizome folded the Colloq project into a more expansive Webrecorder initiative. In August 2016, the organization launched the public release of a more fully realized Webrecorder, a free web archiving tool that allows users to create their own archives of the dynamic web. Funded by the Andrew W. Mellon Foundation, Webrecorder is targeted towards archiving social media, video content, and other dynamic content, rather than static webpages. Webrecorder is an attempt to place web archiving tools in the hands of individual users and communities. It uses a "symmetrical web archiving" approach, meaning the same software is used to record and play back the website. While other web archiving tools run a web crawler to capture sites, Webrecorder takes a different approach, actually recording a user browsing the site to capture its interactive features. Oldweb.today In December 2015, Rhizome launched oldweb.today, a project that allows users to view archived webpages within emulated versions of legacy web browsers. Users are given the option of browsing the site of their choice within versions of Mosaic, Netscape Navigator, Internet Explorer, Mozilla Firefox, and Google Chrome. The project gives users a deeper understanding of web history and the way browsing environments alter one's experience of the internet. It is an example of "Emulation as a Service" technology, imitating old software programs so that they can run on new computers. Conifer In 2020, Rhizome renamed its Webrecorder.io project to Conifer. Conifer lets its users “create high-fidelity, interactive captures of any web site you browse and a platform to make those captured websites accessible.” Conifer is powered by its users and gives them the power to “create, curate, and share their own collections of web materials. This can even include items that would be only revealed after logging in or performing complicated actions on a web site.” This tool also lets users save items with “complex scripting, such as embedded videos, fancy navigation, or 3D graphics,” which “have a much higher success rate for capture with Conifer than with traditional web archives.” According to their user guide, Conifer works by putting web pages into “sessions.” During a session, “requests sent by the browser and responses from the web are captured while you are interacting with sites.” Conifer defines a collection as a series of these sessions. When someone wants to view the sessions, Conifer “makes the browser request resources from the collection instead of the live web. 
Viewers of a collection should be able to repeat any action during access that were performed during capture.” Conifer allows users to upload data in multiple formats, including WARC files created with any web archiving tool (WARC is an ISO standard for web archiving), ARC files (the predecessor of WARC) and HAR files (a browser and web site debugging protocol format). Conifer offers different approaches to capturing with the software: via your local browser, via a remote browser, or via the ArchiveWeb.page desktop app. The choice of browser affects how the data will be captured. Conifer states that “There are four factors in a capture session: the browser that is operated by the user, its connection to the web archiving backend that writes the data, network location, and user identity.” The browser performs the network requests, and anything that is not requested cannot be captured. Ad blocker and privacy features can affect these requests. Also, webpages could appear differently due to the capabilities of each different browser. Conifer also gives users the option of using a remote browser, which lets them “use the exact same browser for both capture and access.” These browsers run in the cloud and are pre-configured by Conifer for use in capturing websites. There are also different ways that the browser can connect to Conifer: through “rewriting mode” or “proxy mode.” In rewriting mode, “all resources the browser requests are changed on the fly so that instead of reaching out to the original URL on the live web, everything goes through the conifer.rhizome.org web archiving server.” Proxy mode “has the web archiving backend connected to the browser via a web proxy…The browser can make requests as usual and the web archiving backend will have access to all of them, with almost no rewriting required. This makes proxy mode generally a more stable and reliable capture method that doesn’t require constant updating.” Conifer can also capture content when a user is logged into a website, but notes that a website may look different depending on whether the user is logged in. However, Conifer warns that “You may log in to websites during a session, however, do note that your credentials may be captured as data within your collection…If you need to capture a site that requires login, consider creating a throwaway account just for the purpose of capturing.” Although Adobe Flash Player was discontinued at the end of 2020, Conifer can still capture sites that used Flash, saying “As long as a Flash site remains online it will still be accessible and able to be archived … even after the deadline.” Nicola Jayne Bingham and Helena Byrne have stated that programs such as Conifer offer “potential for collecting and creating much more heritage; in practice however, ‘recording’ websites is a manual, extremely time-consuming process and can only be used very selectively due to resource constraints.” However, Byrne and Bingham also state that “Conifer has great potential to democratise the web archiving process as websites archived by individuals external to the LDLs can be added to UKWA, creating possibilities for more diversity within the archive.” The Conifer tool also has been suggested for use in Special Interest Archival groups, such as the group for art. 
According to Sumitra Duncan, founder of the Web Archiving Special Interest Group, “For the last few years we have been toying with the idea of using … Conifer service … to create a SIG web archive collection that we can use as a teaching tool for members who are new to web archiving. Unfortunately, a lack of ‘staff’ time and funds for long-term data storage has prevented us from enacting this idea in the past and still applies.” Rhizome Commissions Program Founded in 2001 to support artists working with technology, the Rhizome Commissions Program has awarded more than 100 commissions as of 2016. In 2008, Rhizome expanded the scope of the commissions from strictly Internet-based art to the broad range of forms and practices that fall under the category of new media art. This includes projects that creatively engage new and networked technologies or reflect on the impact of these tools and media. With this expanded format, commissioned works can take the final form of online works, performance, video, installation or sound art. Projects can be made for the context of the gallery, the public, the web or networked devices. Among the artists awarded a Rhizome commission: Heba Amin, Aleksandra Domanović, Aram Bartholl, Knifeandfork (Brian House and Sue Huang), Mendi & Keith Obadike, Trevor Paglen, Jon Rafman, Tao Lin, Tristan Perich, Angelo Plessas, Brody Condon, Jona Bechtolt, Kristin Lucas, Evan Roth, Rafaël Rozendaal, eteam, Steve Lambert, Zach Lieberman, Porpentine (game designer). Exhibition Program In its two decades of activity, Rhizome has presented exhibitions online and in gallery spaces. ArtBase 101 In 2005 at the New Museum, Rhizome presented this exhibition of 40 selections from its online archive of new media art, the ArtBase. Cocurated by then-director Lauren Cornell and former director Rachel Greene, the exhibition addressed dirt style, net cinema, games, e-commerce, data visualization and databases, online celebrity, public space, software, cyberfeminism, and early net.art. Selected artists included John F. Simon Jr., M. River and T. Whid Art Associates, 0100101110101101.org, Young-Hae Chang Heavy Industries, and Cory Arcangel. Sarah Boxer, reviewing the exhibition for the New York Times, called ArtBase 101 "an ambitious and risky thing to do." New York New York Happy Happy (NY NY HP HP) In 2013, the organization presented an experiential artwork by artist Ed Fornieles, which sent up art world and high society debauchery with "forced undressing," eating salami slices from nude bodies, the exploitation of unpaid performance artists, and male strippers. Writing for Noisy, Zach Sokol said of the event: "Fornieles may be tinkering with the idea that we force imagined social archetypes and social spaces into existence... We all become sociopaths when there are beautiful people, fancy spaces, exclusivity, and of course documentation with iPhones, cameras, and video cameras." Net Art Anthology In October 2016, Rhizome launched Net Art Anthology, a two-year online exhibition devoted to restaging 100 key artworks from the history of net art. One project per week will be restaged and conceptualized through an online exhibition page. Devised in tandem with Rhizome's digital conservation department, Net Art Anthology makes use of the tools Rhizome has developed for preserving dynamic web-based artworks. The project was launched with an artists' panel at the New Museum on October 27, 2016, featuring Olia Lialina, Martha Wilson, Mark Tribe, and Ricardo Dominguez. 
Seven on Seven Since 2010, Rhizome has held an annual conference at the New Museum pairing leading technologists and contemporary artists to create something new—art, apps, often arguments about digital culture. The program has led to many influential projects such as a start-up called Monegraph; a short documentary film for The New York Times by Laura Poitras; and artworks later shown at major art institutions, like Image Atlas by Taryn Simon and Aaron Swartz. Artists that have participated in Seven on Seven: Evan Roth, Aaron Koblin, Monica Narula, Ryan Trecartin, Tauba Auerbach, Marc Andre Robinson, Kristin Lucas, Michael Bell-Smith, Ricardo Cabello (mr.doob), Liz Magic Laser, Zach Lieberman, Rashaad Newsome, Ryder Ripps, Camille Utterback, Emily Royston, Aram Bartholl, Xavier Cha, LaToya Ruby Frazier, Naeem Mohaiemen, Jon Rafman, Taryn Simon and Stephanie Syjuco. Technologists who have participated in Seven on Seven: Jeff Hammerbacher, Joshua Schachter, Matt Mullenweg, Andrew Kortina, Hilary Mason, Ayah Bdeir, David Karp, Andy Baio, Ben Cerveny, Jeri Ellsworth, Kellan Elliott-McCrea, Bre Pettis, Chris Poole (moot), Erica Sadun, Jeremy Ashkenas, Blaine Cook, Michael Herf, Charles Forman, Aaron Swartz, Grant Olney Passmore, Khoi Vinh and Anthony Volodkin. Previous Seven on Sevens have been supported by Ace Hotel and HTC. See also Digital Preservation List of digital preservation initiatives Digital art Digital curation Net.art Surfing club Internet art New media art References Further reading (describes Colloq, a "tool that records all the content you experience on a website as you click around, then uses that information to create a simulation of the website") External links Official website ArtBase Webrecorder Oldweb.Today Net Art Anthology Search the Rhizome server resources using the (full) URL Repository created by the Webrecorder project that contains a socially constructed experimental list of publicly available archives Internet art Computer art Non-profit organizations based in New York City Culture of Manhattan Arts organizations based in New York City Digital preservation
983754
https://en.wikipedia.org/wiki/MIFARE
MIFARE
MIFARE is the NXP Semiconductors-owned trademark of a series of integrated circuit (IC) chips used in contactless smart cards and proximity cards. The brand name covers proprietary solutions based upon various levels of the ISO/IEC 14443 Type A 13.56 MHz contactless smart card standard. It uses AES and DES/Triple-DES encryption standards, as well as an older proprietary encryption algorithm, Crypto-1. According to NXP, 10 billion of their smart card chips and over 150 million reader modules have been sold. MIFARE is owned by NXP Semiconductors, which was spun off from Philips Electronics in 2006. Variants MIFARE products are embedded in contactless and contact smart cards, smart paper tickets, wearables and phones. The MIFARE brand name (derived from the term MIKRON FARE Collection and created by the company Mikron) covers four families of contactless cards: MIFARE Classic Employs a proprietary protocol compliant to parts 1–3 of ISO/IEC 14443 Type A, with an NXP proprietary security protocol for authentication and ciphering. Subtype: MIFARE Classic EV1 (other subtypes are no longer in use). MIFARE Plus Drop-in replacement for MIFARE Classic with certified security level (AES-128 based) and is fully backward compatible with MIFARE Classic. Subtypes MIFARE Plus S, MIFARE Plus X and MIFARE Plus SE. MIFARE Ultralight Low-cost ICs that are useful for high volume applications such as public transport, loyalty cards and event ticketing. Subtypes: MIFARE Ultralight C, MIFARE Ultralight EV1 and MIFARE Ultralight Nano. MIFARE DESFire Contactless ICs that comply with parts 3 and 4 of ISO/IEC 14443-4 Type A with a mask-ROM operating system from NXP. The DES in the name refers to the use of a DES, two-key 3DES, three-key 3DES and AES encryption; while Fire is an acronym for Fast, innovative, reliable, and enhanced. Subtypes: MIFARE DESFire EV1, MIFARE DESFire EV2, MIFARE DESFire EV3. There is also the MIFARE SAM AV2 contact smart card. This can be used to handle the encryption in communicating with the contactless cards. The SAM (Secure Access Module) provides the secure storage of cryptographic keys and cryptographic functions. MIFARE Classic family The MIFARE Classic IC is just a memory storage device, where the memory is divided into segments and blocks with simple security mechanisms for access control. They are ASIC-based and have limited computational power. Due to their reliability and low cost, those cards are widely used for electronic wallets, access control, corporate ID cards, transportation or stadium ticketing. The MIFARE Classic with 1K memory offers 1,024 bytes of data storage, split into 16 sectors; each sector is protected by two different keys, called A and B. Each key can be programmed to allow operations such as reading, writing, increasing value blocks, etc. MIFARE Classic with 4K memory offers 4,096 bytes split into forty sectors, of which 32 are the same size as in the 1K with eight more that are quadruple size sectors. MIFARE Classic Mini offers 320 bytes split into five sectors. For each of these IC types, 16 bytes per sector are reserved for the keys and access conditions and can not normally be used for user data. Also, the very first 16 bytes contain the serial number of the card and certain other manufacturer data and are read-only. That brings the net storage capacity of these cards down to 752 bytes for MIFARE Classic with 1K memory, 3,440 bytes for MIFARE Classic with 4K memory, and 224 bytes for MIFARE Mini. 
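The net-capacity figures quoted above follow directly from this layout: each sector gives up one 16-byte block for its keys and access conditions, and the first 16-byte block of the card holds the read-only manufacturer data. A minimal Python sketch reproducing the numbers, assuming the standard MIFARE Classic layout of 16-byte blocks with four blocks per sector and sixteen blocks per quadruple-size sector:

    BLOCK_SIZE = 16  # bytes per block

    def net_capacity(sector_blocks):
        # One trailer block per sector is reserved for keys A/B and access bits;
        # one further block (the very first) holds the manufacturer data.
        total_blocks = sum(sector_blocks)
        trailer_blocks = len(sector_blocks)
        return (total_blocks - trailer_blocks - 1) * BLOCK_SIZE

    classic_1k = [4] * 16              # 16 sectors of 4 blocks
    classic_4k = [4] * 32 + [16] * 8   # 32 small sectors plus 8 quadruple-size sectors
    classic_mini = [4] * 5             # 5 sectors of 4 blocks

    print(net_capacity(classic_1k))    # 752
    print(net_capacity(classic_4k))    # 3440
    print(net_capacity(classic_mini))  # 224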
It uses an NXP proprietary security protocol (Crypto-1) for authentication and ciphering. The Samsung TecTile NFC tag stickers use MIFARE Classic chips. This means only devices with an NXP NFC controller chip can read or write these tags. At the moment BlackBerry phones, the Nokia Lumia 610 (August 2012), the Google Nexus 4, Google Nexus 7 LTE and Nexus 10 (October 2013) can't read/write TecTile stickers. MIFARE Classic encryption has been compromised; see below for details. MIFARE Plus family MIFARE Plus is a replacement IC solution for the MIFARE Classic. Key applications: Public transportation Access management; e.g., employee, school, or campus cards Electronic toll collection Car parking Loyalty programs It is less flexible than a MIFARE DESFire EV1 contactless IC. MIFARE Plus was publicly announced in March 2008 with first samples in Q1 2009. MIFARE Plus, when used in older transportation systems that do not yet support AES on the reader side, still leaves an open door to attacks. Though it helps to mitigate threats from attacks that broke the Crypto-1 cipher through the weak random number generator, it does not help against brute force attacks and cryptoanalytic attacks. During the transition period from MIFARE Classic to MIFARE Plus where only a few readers might support AES in the first place, it offers an optional AES authentication in Security Level 1 (which is in fact MIFARE Classic operation). This does not prevent the attacks mentioned above but enables a secure mutual authentication between the reader and the card to prove that the card belongs to the system and is not fake. In its highest security level SL3, using 128-bit AES encryption, MIFARE Plus is secured from attacks. MIFARE Plus EV1 MIFARE Plus EV1 was announced in April 2016. New features compared to MIFARE Plus X include: Sector-wise security-level switching The choice of crypto algorithm used in the authentication protocol can be set separately for each sector. This makes it possible to use the same card with both readers that can read MIFARE Classic products (with sectors protected by 48-bit CRYPTO1 keys, "Security Level 1") and readers that can read MIFARE Plus products (with sectors protected by 128-bit AES keys, "Security Level 3"). This feature is intended to make it easier to gradually migrate existing MIFARE Classic product-based installations to MIFARE Plus, without having to replace all readers at the same time. ISO 7816-4 wrapping The card can now be accessed in either the protocol for MIFARE (which is not compliant with the ISO 7816-4 APDU format), or using a new protocol variant that runs on top of ISO 7816-4. This way the cards become compatible with NFC reader APIs that can only exchange messages in ISO 7816-4 APDU format, with a maximum transfer data buffer size of 256 bytes. Proximity check While the protocol for MIFARE Classic tolerated message delays of several seconds, and was therefore vulnerable to relay attacks, MIFARE Plus EV1 now implements a basic "ISO compliant" distance-bounding protocol. This puts tighter timing constraints on the permitted round-trip delay during authentication, to make it harder to forward messages to far-away cards or readers via computer networks. Secure end-2-end channel Permits AES-protected over-the-air updates even to Crypto1 application sectors (SL1SL3 mix mode). 
Transaction MAC The card can produce an additional message-authentication code over a transaction that can be verified by a remote clearing service, independent of the keys used by the local reader during the transaction. MIFARE Plus EV2 The MIFARE Plus EV2 was introduced to the market on 23 June 2020. It comes with enhanced read performance and transaction speed compared to MIFARE Plus EV1. New features compared to MIFARE Plus EV1 include: Transaction Timer To help mitigate man-in-the-middle attacks, the Transaction Timer feature, which is also available on NXP’s MIFARE DESFire EV3 IC, makes it possible to set a maximum time per transaction, so it’s harder for an attacker to interfere with the transaction. MIFARE Ultralight family The MIFARE Ultralight has only 512 bits of memory (i.e. 64 bytes), without cryptographic security. The memory is provided in 16 pages of 4 bytes. Cards based on these chips are so inexpensive that they are often used for disposable tickets for events such as the Football World Cup 2006. The chip provides only basic security features such as one-time-programmable (OTP) bits and a write-lock feature to prevent re-writing of memory pages, but does not include cryptography as applied in other MIFARE product-based cards. MIFARE Ultralight EV1 MIFARE Ultralight EV1, introduced in November 2012, is the next generation of paper ticketing smart card ICs for limited-use applications, offering additional security options for ticketing schemes. It comes with several enhancements over the original MIFARE Ultralight: 384- and 1024-bit user memory product variants OTP, lock bits, configurable counters for improved security Three independent 24-bit one-way counters to stop reloading Protected data access through a 32-bit password NXP Semiconductors originality signature function, an integrated originality checker that provides effective cloning protection and helps prevent counterfeiting of tickets. However, this protection applies only to "mass penetration of non NXP originated chips and does not prevent hardware copy or emulation of a single existing valid chip" Applications: Limited-use tickets in public transport Event ticketing (stadiums, exhibitions, leisure parks) Loyalty MIFARE Ultralight C Introduced at the Cartes industry trade show in 2008, the MIFARE Ultralight C IC is part of NXP's low-cost MIFARE product offering (disposable ticket). With Triple DES, MIFARE Ultralight C uses a widely adopted standard, enabling easy integration in existing infrastructures. The integrated Triple DES authentication provides an effective countermeasure against cloning. Key applications for MIFARE Ultralight C are public transportation, event ticketing, loyalty and NFC Forum tag type 2. MIFARE DESFire family The MIFARE DESFire (MF3ICD40) was introduced in 2002 and is based on a core similar to SmartMX, with more hardware and software security features than MIFARE Classic. It comes pre-programmed with the general-purpose MIFARE DESFire operating system, which offers a simple directory structure and files. They are sold in four variants: one with Triple-DES only and 4 kiB of storage, and three with AES (2, 4, or 8 kiB; see MIFARE DESFire EV1). The AES variants have additional security features; e.g., CMAC. MIFARE DESFire uses a protocol compliant with ISO/IEC 14443-4. The contactless IC is based on an 8051 processor with a 3DES/AES cryptographic accelerator, making very fast transactions possible. 
The maximum read/write distance between card and reader is typically up to about 10 cm, but the actual distance depends on the field power generated by the reader and its antenna size. In 2010, NXP announced the discontinuation of the MIFARE DESFire (MF3ICD40) after it had introduced its successor, MIFARE DESFire EV1 (MF3ICD41), in late 2008. In October 2011 researchers of Ruhr University Bochum announced that they had broken the security of MIFARE DESFire (MF3ICD40), which was acknowledged by NXP (see MIFARE DESFire attacks). MIFARE DESFire EV1 First evolution of the MIFARE DESFire contactless IC, broadly backwards compatible. Available with 2 kiB, 4 kiB, and 8 kiB non-volatile memory. Other features include: Support for random ID. Support for 128-bit AES Hardware and operating system are Common Criteria certified at level EAL 4+. MIFARE DESFire EV1 was publicly announced in November 2006. Key applications: Advanced public transportation Access management Loyalty Micropayment MIFARE DESFire EV2 The second evolution of the MIFARE DESFire contactless IC family, broadly backwards compatible. New features include: MIsmartApp, enabling memory space to be offered or sold for additional third-party applications without the need to share secret keys Transaction MAC to authenticate transactions by third parties Virtual Card Architecture for privacy protection Proximity check against relay attacks MIFARE DESFire EV2 was publicly announced in March 2016 at the IT-TRANS event in Karlsruhe, Germany. MIFARE DESFire EV3 The latest evolution of the MIFARE DESFire contactless IC family, broadly backward compatible. New features include: ISO/IEC 14443 A 1–4 and ISO/IEC 7816-4 compliant Common Criteria EAL5+ certified for IC hardware and software NFC Forum Tag Type 4 compliant SUN message authentication for advanced data protection within standard NDEF read operation Choice of open DES/2K3DES/3K3DES/AES crypto algorithms Flexible file structure hosts as many applications as the memory size supports Proof of transaction with card generated MAC Transaction Timer mitigates risk of man-in-the-middle attacks MIFARE DESFire EV3 was publicly announced on 2 June 2020. MIFARE SAM AV2 MIFARE SAMs are not contactless smart cards. They are secure access modules designed to provide secure storage of cryptographic keys and cryptographic functions for terminals to access the MIFARE products securely and to enable secure communication between terminals and host (backend). MIFARE SAMs are available from NXP in the contact-only module (PCM 1.1) as defined in ISO/IEC 7816-2 and in the HVQFN32 format. Integrating a MIFARE SAM AV2 in a contactless smart card reader enables a design that integrates high-end cryptography features and supports cryptographic authentication and data encryption/decryption. Like any SAM, it offers functionality to store keys securely and to perform authentication and encryption of data between the contactless card and the SAM, and between the SAM and the backend. In addition to a classical SAM architecture, the MIFARE SAM AV2 supports X-mode, which allows fast and convenient contactless terminal development by connecting the SAM to the microcontroller and reader IC simultaneously. MIFARE SAM AV2 offers AV1 mode and AV2 mode; in comparison to the SAM AV1, the AV2 version includes public key infrastructure (PKI) support and hash functions like SHA-1, SHA-224, and SHA-256. It supports MIFARE Plus and secure host communication. 
Both modes provide the same communication interfaces, cryptographic algorithms (Triple-DES 112-bit and 168-bit key, MIFARE products using Crypto1, AES-128 and AES-192, RSA with up to 2048-bit keys), and X-mode functionalities. The MIFARE SAM AV3 is the third generation of NXP’s Secure Access Module, and it supports MIFARE ICs as well as NXP’s UCODE DNA, ICODE DNA and NTAG DNA ICs. MIFARE 2GO A cloud-based platform that digitizes MIFARE product-based smart cards and makes them available on NFC-enabled smartphones and wearables. With this, new Smart City use cases such as mobile transit ticketing, mobile access and mobile micropayments are being enabled. Applications MIFARE products can be used in different applications: Automated fare collection system Identification cards Access management Campus cards Loyalty cards (reward points) Tourist cards Micropayment (mobile wallet, contactless payment, cashless payment) Road tolling Transport ticketing Event ticketing Mobile ticketing Citizen card Membership cards Parking Library cards Fuel cards Hotel key cards NFC Tag (NFC apps, MIFARE4Mobile) Taxi cards Smart meter Museum access cards Product authentication Production control Health cards Ferry Cards Car rentals Fleet management Amusement parks Bike rentals Blood donor cards Information services Interactive exhibits Interactive lotteries Password storage Smart advertising Social welfare Waste management Formerly most access systems used MIFARE Classic, but today these systems have switched to MIFARE DESFire because this product has more security than MIFARE Classic. Byte layout History 1994 – MIFARE Classic IC with 1K user memory introduced. 1996 – First transport scheme in Seoul using MIFARE Classic with 1K memory. 1997 – MIFARE PRO with Triple DES coprocessor introduced. 1999 – MIFARE PROX with PKI coprocessor introduced. 2001 – MIFARE Ultralight introduced. 2002 – MIFARE DESFire introduced, microprocessor based product. 2004 – MIFARE SAM introduced, secure infrastructure counterpart of MIFARE DESFire. 2006 – MIFARE DESFire EV1 is announced as the first product to support 128-bit AES. 2008 – MIFARE4Mobile industry Group is created, consisting of leading players in the Near Field Communication (NFC) ecosystem. 2008 – MIFARE Plus is announced as a drop-in replacement for MIFARE Classic based on 128-bit AES. 2008 – MIFARE Ultralight C is introduced as a smart paper ticketing IC featuring Triple DES Authentication. 2010 – MIFARE SAM AV2 is introduced as secure key storage for readers AES, Triple DES, PKI Authentication. 2012 – MIFARE Ultralight EV1 introduced, backward compatible to MIFARE Ultralight but with extra security. 2014 – MIFARE SDK was introduced, allowing developers to create and develop their own NFC Android applications. 2014 – NXP Smart MX2 the world's first secure smart card platform supporting MIFARE Plus and MIFARE DESFire EV1 with EAL 50 was released. 2015 – MIFARE Plus SE, the entry-level version of NXP's proven and reliable MIFARE Plus product family, was introduced. 2016 – MIFARE Plus EV1 was introduced, the proven mainstream smart card product compatible with MIFARE Classic in its backward compatible security level. 2016 – MIFARE DESFire EV2 is announced with improved performance, security, privacy and multi-application support. 2016 – MIFARE SDK is rebranded to TapLinx, with additional supported products. 
2018 – MIFARE 2GO cloud service was introduced, allowing MIFARE DESFire and MIFARE Plus (in SL3) product-based credentials to be managed on NFC-enabled mobile and wearable devices. 2020 – MIFARE DESFire EV3 was announced. 2020 – MIFARE Plus EV2 was introduced, adding SL3 to support MIFARE 2GO, EAL5+ certification and a Transaction Timer to help mitigate man-in-the-middle attacks. The MIFARE product portfolio was originally developed by Mikron in Gratkorn, Austria. Mikron was acquired by Philips in 1995. Mikron sourced silicon from Atmel in the US, Philips in the Netherlands, and Siemens in Germany. Infineon Technologies (then Siemens) licensed MIFARE Classic from Mikron in 1994 and developed both stand-alone and integrated designs with MIFARE product functions. Infineon currently produces various derivatives based on MIFARE Classic, including 1K memory (SLE66R35) and various microcontrollers (8-bit (SLE66 series), 16-bit (SLE7x series), and 32-bit (SLE97 series)) with MIFARE implementations, including devices for use in USIM with Near Field Communication. Motorola tried to develop MIFARE product-like chips for the wired-logic version but finally gave up. The project initially expected one million cards per month, but the forecast fell to 100,000 per month just before the project was abandoned. In 1998 Philips licensed MIFARE Classic to Hitachi. Hitachi licensed MIFARE products for the development of the contactless smart card solution for NTT's IC telephone card, which started in 1999 and finished in 2006. In the NTT contactless IC telephone card project, three parties joined: Tokin-Tamura-Siemens, Hitachi (Philips contract for technical support), and Denso (Motorola-only production). NTT asked for two versions of the chip, i.e. a wired-logic chip (like MIFARE Classic) in small-memory and big-memory variants. Hitachi developed only the big-memory version and disabled part of the memory to provide the small-memory version. The deal with Hitachi was upgraded in 2008 by NXP (by then no longer part of Philips) to include MIFARE Plus and MIFARE DESFire for Renesas Technology, the renamed semiconductor division of Hitachi. In 2010 NXP licensed MIFARE products to Gemalto. In 2011 NXP licensed Oberthur to use MIFARE products on SIM cards. In 2012 NXP signed an agreement with Giesecke & Devrient to integrate MIFARE product-based applications on their secure SIM products. These licensees are developing Near Field Communication products. Security MIFARE Classic The encryption used by the MIFARE Classic IC uses a 48-bit key. A presentation by Henryk Plötz and Karsten Nohl at the Chaos Communication Congress in December 2007 described a partial reverse-engineering of the algorithm used in the MIFARE Classic chip. The abstract and slides are available online. A paper that describes the process of reverse engineering this chip was published at the August 2008 USENIX security conference. In March 2008 the Digital Security research group of the Radboud University Nijmegen made public that they had performed a complete reverse-engineering and were able to clone and manipulate the contents of an OV-Chipkaart, which uses a MIFARE Classic chip. For demonstration they used the Proxmark3 device, a 125 kHz / 13.56 MHz research instrument whose schematics and software were released under the free GNU General Public License by Jonathan Westhues in 2007. They demonstrated that it is even possible to perform card-only attacks using just an ordinary stock commercial NFC reader in combination with the libnfc library. 
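To put the 48-bit Crypto-1 key length mentioned above in perspective, the short Python calculation below compares an exhaustive search of the MIFARE Classic key space with that of AES-128; the assumed testing rate is purely illustrative and is not taken from any of the published attacks, which are cryptanalytic rather than brute force.

    # Rough key-space comparison (the trial rate is an assumption for illustration).
    crypto1_keys = 2 ** 48           # MIFARE Classic Crypto-1 key space
    aes128_keys = 2 ** 128           # AES-128 key space, for comparison

    rate = 10 ** 9                   # assumed: one billion key trials per second
    seconds_per_year = 365 * 24 * 3600

    print(f"Crypto-1 exhaustive search: {crypto1_keys / rate / 3600:.1f} hours")
    print(f"AES-128 exhaustive search: {aes128_keys / rate / seconds_per_year:.2e} years")

Even under this naive brute-force model a Crypto-1 key falls within a few days of searching, and the published attacks described below are far faster still because they exploit structural weaknesses in the cipher rather than searching the key space.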
The Radboud University published four scientific papers concerning the security of the MIFARE Classic: "A Practical Attack on the MIFARE Classic", "Dismantling MIFARE Classic", "Wirelessly Pickpocketing a MIFARE Classic Card" and "Ciphertext-only Cryptanalysis on Hardened MIFARE Classic Cards". In response to these attacks, the Dutch Minister of the Interior and Kingdom Relations stated that they would investigate whether the introduction of the Dutch Rijkspas could be brought forward from Q4 of 2008. NXP tried to stop the publication of the second article by requesting a preliminary injunction. However, the injunction was denied, with the court noting that, "It should be considered that the publication of scientific studies carries a lot of weight in a democratic society, as does inform society about serious issues in the chip because it allows for mitigating of the risks." Both independent research results were confirmed by the manufacturer NXP. These attacks did not stop the further introduction of the OV-chipkaart as the only accepted card for all Dutch public transport, which continued as if nothing had happened, but in October 2011 the company TLS, responsible for the OV-chipkaart, announced that a new version of the card would be better protected against fraud. The MIFARE Classic encryption Crypto-1 can be broken in about 200 seconds on a laptop from 2008, if approximately 50 bits of known (or chosen) keystream are available. This attack reveals the key from sniffed transactions under certain (common) circumstances and/or allows an attacker to learn the key by challenging the reader device. Another published attack recovers the secret key in about 40 ms on a laptop, requiring just one (partial) authentication attempt with a legitimate reader. Additionally, there are a number of attacks that work directly on a card and without the help of a valid reader device. These attacks have been acknowledged by NXP. In April 2009 a new and better card-only attack on MIFARE Classic was found. It was first announced at the rump session of Eurocrypt 2009 and presented at SECRYPT 2009. The full description of this latest and fastest attack to date can also be found in the IACR preprint archive. The new attack improves by a factor of more than 10 all previous card-only attacks on MIFARE Classic, has instant running time, and does not require a costly precomputation. The new attack allows recovering the secret key of any sector of the MIFARE Classic card via wireless interaction, within about 300 queries to the card. It can then be combined with the nested authentication attack in the Nijmegen Oakland paper to recover subsequent keys almost instantly. With both attacks combined and the right hardware, such as the Proxmark3, one should be able to clone any MIFARE Classic card in 10 seconds or less. This is much faster than previously thought. In an attempt to counter these card-only attacks, new "hardened" cards were released in and around 2011, such as the MIFARE Classic EV1. These variants are not susceptible to any of the card-only attacks publicly known until then, while remaining backward compatible with the original MIFARE Classic. In 2015, a new card-only attack was discovered that is also able to recover the secret keys from such hardened variants. Since the discovery of this attack, NXP has officially recommended migrating from MIFARE Classic product-based systems to higher-security products. 
MIFARE DESFire In November 2010, security researchers from the Ruhr University released a paper detailing a side-channel attack against MIFARE product-based cards. The paper demonstrated that MIFARE DESFire product-based cards could be easily emulated at a cost of approximately $25 in "off the shelf" hardware. The authors asserted that this side-channel attack allowed cards to be cloned in approximately 100 ms. Furthermore, the paper's authors included hardware schematics for their original cloning device, and have since made corresponding software, firmware and improved hardware schematics publicly available on GitHub. In October 2011 David Oswald and Christof Paar of the Ruhr University in Bochum, Germany, detailed how they were able to conduct a successful "side-channel" attack against the card using equipment that can be built for nearly $3,000. In the paper, called "Breaking MIFARE DESFire MF3ICD40: Power Analysis and Templates in the Real World", they stated that system integrators should be aware of the new security risks that arise from the presented attacks and can no longer rely on the mathematical security of the 3DES cipher used. Hence, to avoid, for example, manipulation or cloning of smart cards used in payment or access control solutions, proper actions have to be taken; multi-level countermeasures in the back end help to minimize the threat even if the underlying RFID platform is insecure. In a statement NXP said that the attack would be difficult to replicate and that they had already planned to discontinue the product at the end of 2011. NXP also stated "Also, the impact of a successful attack depends on the end-to-end system security design of each individual infrastructure and whether diversified keys – recommended by NXP – are being used. If this is the case, a stolen or lost card can be disabled simply by the operator detecting the fraud and blacklisting the card, however, this operation assumes that the operator has those mechanisms implemented. This will make it even harder to replicate the attack with a commercial purpose." MIFARE Ultralight In September 2012, the security consultancy Intrepidus demonstrated at the EU SecWest event in Amsterdam, in a talk entitled "NFC For Free Rides and Rooms (on your phone)", that MIFARE Ultralight product-based fare cards in the New Jersey and San Francisco transit systems could be manipulated using an Android application, enabling travelers to reset their card balance and travel for free. Although not a direct attack on the chip but rather the reloading of an unprotected register on the device, it allows hackers to replace value and show that the card is valid for use. This can be overcome by having a copy of the register online so that values can be analysed and suspect cards hot-listed. NXP responded by pointing out that it had introduced the MIFARE Ultralight C in 2008 with 3DES protection and in November 2012 introduced the MIFARE Ultralight EV1 with three decrement-only counters to foil such reloading attacks. Considerations for systems integration For systems based on contactless smartcards (e.g. public transportation), security against fraud relies on many components, of which the card is just one. Typically, to minimize costs, systems integrators will choose a relatively cheap card such as a MIFARE Classic and concentrate security efforts in the back office. 
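One system-level measure referenced in the NXP statement above is key diversification, in which each card carries its own key derived from a master key and the card's UID, so that extracting one card's key does not compromise the rest of the fleet. The Python sketch below is only a simplified stand-in that uses HMAC-SHA-256 from the standard library; NXP's actual recommendation is based on AES-CMAC and differs in detail.

    import hashlib
    import hmac

    def diversify_key(master_key: bytes, card_uid: bytes, app_id: bytes = b"") -> bytes:
        # Derive a 16-byte per-card key; HMAC-SHA-256 truncated to 128 bits
        # stands in here for the CMAC-based derivation used in real deployments.
        return hmac.new(master_key, card_uid + app_id, hashlib.sha256).digest()[:16]

    master_key = bytes(16)                       # placeholder master key (all zeros)
    card_uid = bytes.fromhex("04A1B2C3D4E5F6")   # placeholder 7-byte UID
    print(diversify_key(master_key, card_uid).hex())

With diversified keys, a lost or compromised card can be blacklisted individually, as the statement notes, without rekeying every other card in the system.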
Additional encryption on the card, transaction counters, and other methods known in cryptography are then employed to make cloned cards useless, or at least to enable the back office to detect a fraudulent card and put it on a blacklist. Systems that work with online readers only (i.e., readers with a permanent link to the back office) are easier to protect than systems that have offline readers as well, for which real-time checks are not possible and blacklists cannot be updated as frequently. Certification Another aspect of fraud prevention and compatibility assurance is a certification scheme, established in 1998, that ensures the compatibility of certified MIFARE product-based cards with multiple readers. With this certification, the main focus was placed on the contactless communication of the wireless interface, as well as on ensuring proper implementation of all the commands of MIFARE product-based cards. The certification process was developed and carried out by the Austrian laboratory Arsenal Research. Today, independent test houses such as Arsenal Testhouse, UL and LSI-TEC perform the certification tests and list the certified products in an online database. Places that use MIFARE products Transportation Application references Institutions Northwest University, South Africa – Student/staff ID, access control, library, student meals, sport applications, payments Linköping University, Sweden – Student/staff ID, access control, library, copy/print, student discount, payments London School of Economics – Access control (Unprotected MIFARE Classic 1K) New College School in Oxford – Building access. Imperial College London – Staff and student ID access card in London, UK. Cambridge University – Student/Staff ID and access card, library card, canteen payments in some colleges University of Warwick – Staff and student ID card and separate Eating at Warwick stored value card in Coventry, UK. Regent's College, London – Staff and student ID access card in London, UK. University of New South Wales – Student ID access card. The University of Queensland – Staff and student ID, access control, library, copy/print, building access (MIFARE DESFire EV1) University of Alberta – Staff OneCard trial currently underway. Northumbria University – Student/staff building and printer access. City University of Hong Kong – Student/staff building, library, amenities building. Hong Kong Institute of Vocational Education – Student ID card, attendance, library, printers and computers access. The Chinese University of Hong Kong – Student ID card, attendance, library, printers and door access control University of Bayreuth – Student ID card and canteen card for paying. 
University of Ibadan, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Bowen University, Iwo, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Afe Babalola University, Ado-Ekiti, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Achievers University, Owo, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Adekunle Ajasin University, Akungba, Ondo State, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Auchi Polytechnic, Auchi, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
University College Hospital, Ibadan (UCH), Nigeria – Student ID card and staff attendance (Solutions Colony Ltd)
Federal University of Technology, Minna, Niger State (FUTM), Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Benson Idahosa University, Benin City, Edo State (BIU), Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Federal University of Technology, Akure, Ondo State (FUTA), Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Covenant University, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Lead City University, Nigeria – Student ID card and examination verification and attendance (Solutions Colony Ltd)
Hogeschool-Universiteit Brussel, Belgium – Student ID card, canteen card for paying, library and building access
Southampton University – Student ID card, library and building access – MIFARE Classic 4K
Delft University of Technology, Netherlands – Student/staff ID card, staff coffee machines, lockers, printers and building access
Eindhoven University of Technology, Netherlands – Student/staff ID card, staff coffee machines, lockers, printers and building access; currently (2016) rolling out DESFire EV1
Dresden University of Technology, Germany – Building access, canteen card for payment
Chemnitz University of Technology, Germany – Student ID card
Leipzig University, Germany – Student ID card, canteen card for payment
Freiberg University of Mining and Technology, Germany – Student/staff ID card, building access, canteen card for payment
University of Jena, Germany – Student/staff ID card, building access, canteen card for payment
University of Würzburg, Germany – Student/staff ID card, building access, library access and fee payment, canteen card for payment
Technical University of Denmark, Denmark – Student ID card, building access
University of Duisburg-Essen, Germany – Student/staff ID card, library access, canteen card for payment
Walt Disney World Resort – Used for tickets, Disney Dining Plan, and room key access
University of Northampton – Car park access, building access – MIFARE Classic 1K
Assumption University (Thailand), Thailand – Student/staff ID card, library and computers access, canteen, transportation and parking payment, election verification – MIFARE Classic 4K
Claude Bernard University Lyon 1 – Student ID, access control, library (MIFARE 1K)
University of Strasbourg – Student ID, access control (MIFARE 1K)
Aberystwyth University – Student/staff ID, access control, library, copy/print, student discount, payments, building access (MIFARE Classic 4K)
University of Nottingham – Student ID, access control, library, payments, building access (MIFARE Classic 1K)
See also
RFID
Physical security
NFC
Smart card
References
Further reading
Dayal, Geeta, "How they hacked it: The MiFare RFID crack explained; A look at the research behind the chip compromise", Computerworld, 19 March 2008.
External links
Comparison Table MIFARE DESFire EV1 / EV2 / EV3
NXP in eGovernment
24C3 Talk about MIFARE Classic
Video of the 24C3 Talk presenting the results of reverse engineering the MIFARE Classic family, raising serious security concerns
Presentation of 24th Chaos Computer Congress in Berlin claiming that the MIFARE Classic chip is possibly not safe
Demonstration of an actual attack on MIFARE Classic (a building access control system) by the Radboud University Nijmegen
Contactless smart cards
Near-field communication
NXP Semiconductors
34089314
https://en.wikipedia.org/wiki/Pinguy%20OS
Pinguy OS
Pinguy OS is a free computer operating system (a Linux distribution) for x86-based PCs, based on Ubuntu Linux.
General info
Pinguy OS is an Ubuntu-based distribution with many applications and tweaks installed by default. Such software includes ZRam and Preload. According to DistroWatch.com, Pinguy OS "features numerous user-friendly enhancements, out-of-the-box support for multimedia codecs and browser plugins, a heavily tweaked GNOME user interface with enhanced menus, panels and dockbars, and a careful selection of popular desktop applications for many common computing tasks." Because the distribution comes with many pre-installed applications, browser plugins, multimedia codecs and system utilities, many users may perceive it as unnecessarily bloated, given that Ubuntu and many of its descendants already include a good selection of useful software with a smaller memory and storage footprint.
Features
The following features are found in the Pinguy OS distribution:
Installation: Graphical (GUI)
Default Desktop: modified GNOME 3 (as of 14.04)
Package Management: DEB (Ubuntu Software Center and Synaptic Package Manager installed)
Processor Architecture: i686, x86-64
Journaled File Systems: ext3, ext4, JFS, ReiserFS, XFS
Multilingual: Yes
Release history
The 6-month Pinguy OS releases may be missing features that will be in the final LTS, but each release is intended to be very usable.
Availability
Pinguy OS is available in both 32-bit and 64-bit versions.
See also
Linux distribution
Ubuntu Linux
References
External links
Ubuntu derivatives
X86-64 Linux distributions
Linux distributions
202311
https://en.wikipedia.org/wiki/E-services
E-services
E-services (electronic services) are services which make use of information and communication technologies (ICTs). The three main components of e-services are: the service provider, the service receiver, and the channels of service delivery (i.e., technology). For example, with respect to public e-services, public agencies are the service provider and citizens as well as businesses are the service receiver. For public e-services the internet is the main channel of delivery, while other classic channels (e.g. telephone, call center, public kiosk, mobile phone, television) are also considered.
Since its inception in the late 1980s in Europe and its formal introduction in 1993 by the US Government, the term "E-Government" has become one of the recognized research domains, especially in the context of public policy, and has been rapidly gaining strategic importance in public sector modernization. E-service is one of the branches of this domain, and it has been attracting growing attention among practitioners and researchers.
E-service (or eservice) is a highly generic term, usually referring to "The provision of services via the Internet (the prefix 'e' standing for 'electronic', as it does in many other usages), thus e-Service may also include e-Commerce, although it may also include non-commercial services (online), which is usually provided by the government." (Irma Buntantan & G. David Garson, 2004: 169-170; Muhammad Rais & Nazariah, 2003: 59, 70-71). "E-Service constitutes the online services available on the Internet, whereby a valid transaction of buying and selling (procurement) is possible, as opposed to the traditional websites, whereby only descriptive information are available, and no online transaction is made possible." (Jeong, 2007).
Importance of E-service
Lu (2001) identifies a number of benefits of e-services, some of which are:
Accessing a greater customer base
Broadening market reach
Lowering of entry barriers to new markets and the cost of acquiring new customers
Alternative communication channel to customers
Increasing services to customers
Enhancing perceived company image
Gaining competitive advantages
Enhancing transparency
Potential for increasing customer knowledge
Importance and advantages of E-shopping
E-shops are open 24 hours a day.
There is no need to travel to the malls or wait at the checkout counters.
There is usually a wide selection of goods and services.
It is easy to compare prices and quality by using the e-shopping tool.
Price reductions and discounts are electronically conveyed.
E-service domain
The term "e-service" has many applications and can be found in many disciplines. The two dominant application areas of e-services are:
E-business (or e-commerce): e-services mostly provided by businesses or non-government organizations (NGOs) (the private sector).
E-government: e-services provided by government to citizens or businesses (the public sector is the supply side).
The use and description of e-services in this page will be limited to the context of e-government, where the e-service is usually associated with the prefix "public", as in "public e-services".
In some cases, we will have to describe aspects that are related to both fields, like some conferences or journals which cover the concept of e-service in both the e-government and e-business domains.
Architecture
Depending on the types of services, certain functionalities are required in certain layers of the e-service architectural framework. These include, but are not limited to: the data layer (data sources), processing layers (customer service systems, management systems, data warehouse systems, integrated customer content systems), the exchange layer (Enterprise Application Integration – EAI), the interaction layer (integrating e-services), and the presentation layer (the customer interface through which the web pages and e-services are linked).
E-service quality
Measuring service quality and service excellence is important in a competitive organizational environment. The SERVQUAL service quality model is one of the widely used tools for measuring quality of service on various aspects. The five attributes of this model are: reliability, responsiveness, assurance, tangibles, and empathy. Several studies have measured e-service quality along these dimensions; the LIRNEasia study [Alawattegama & Wattegama (2008)] focuses more on content than on accessibility and ease of use, unlike the other studies. Websites are increasingly important portals to government agencies, especially in the context of information society reforms. Stakeholders, including businesses, investors and even the general public, are interested in information produced by government agencies, and websites can help to increase their transparency and accountability. The quality of its website also demonstrates how advanced a regulatory agency is.
E-service cost factors
Some major cost factors are (Lu, 2001):
Expense of setting up applications
Maintaining applications
Internet connection
Hardware/software
Security concerns
Legal issues
Training
Rapid technology changes
Practical examples of e-services in the Developing World
Information technology is a powerful tool for accelerating economic development. Developing countries have focused on the development of ICT during the last two decades, and as a result it has been recognized that ICT is critical to the economy and acts as a catalyst of economic development. So, in recent years there have been efforts to provide various e-services in many developing countries, since ICT is believed to offer considerable potential for the sustainable development of e-government and, as a result, of e-services. Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens in the promotion and management of the territory (Graham and Aurigi, 1997). But the potential for e-government in developing countries remains largely unexploited, even though ICT is believed to offer considerable potential for its sustainable development. Different human, organizational and technological factors, issues and problems pertain in these countries, requiring focused studies and appropriate approaches. ICT, in general, is referred to as an "enabler", but on the other hand it should also be regarded as a challenge and a peril in itself.
The organizations, public or private, which ignore the potential value and use of ICT may suffer pivotal competitive disadvantages. Nevertheless, some e-government initiatives have flourished in developing countries too, e.g. in Brazil, India and Chile. What the experience in these countries shows is that governments in the developing world can effectively exploit and appropriate the benefits of ICT, but e-government success entails the accommodation of certain unique conditions, needs and obstacles. The adaptive challenges of e-government go far beyond technology; they call for organizational structures and skills, new forms of leadership, and the transformation of public-private partnerships (Allen et al., 2001). Following are a few examples regarding e-services in some developing countries:
E-services in Bangladesh
Bangladesh's first e-service system is the National E-Service System (NESS).
E-services and e-commerce in Rwanda
Only a decade after emerging from the fastest genocide of the 20th century, Rwanda, a small country in Eastern Central Africa, has become one of the continent's leaders in, and a model for, bridging the digital divide through e-government. Rwanda has undergone a rapid turnaround from one of the most technologically deficient countries only a decade ago to a country where legislative business is conducted online and wireless access to the Internet is available anywhere in the country. This is puzzling when viewed against the limited progress made in other comparable developing countries, especially those located in the same region, sub-Saharan Africa, where the structural and institutional constraints to e-government diffusion are similar.
E-services in South Africa
In South Africa, there continue to be high expectations of government with respect to improved delivery of services and closer consultation with citizens. Such expectations are not unique to this country, and in this regard there is a need for governments to recognise that the implementation of e-government systems and e-services affords them the opportunity to enhance service delivery and good governance. The implementation of e-government has been widely acclaimed in that it provides new impetus to deliver services quickly and efficiently (Evans & Yen, 2006:208). In recognition of these benefits, various arms of the South African government have embarked on a number of e-government programmes, for example the Batho Pele portal, SARS e-filing, the e-Natis system, electronic processing of grant applications from remote sites, and a large number of departmental information websites. Despite a number of well-publicised e-government ventures such as the latter, analysts and researchers consider the state of e-government in South Africa to be at a rudimentary stage. There are various factors which collectively contribute to such an assessment. Amongst these, key factors relate to the lack of a clear strategy to facilitate uptake and adoption of e-government services, as well as of evaluation frameworks to assess the expectations of citizens, who are one of the primary user groups of these services.
E-services in Malaysia
E-Services is one of the pilot projects under the Electronic Government Flagship within the Multimedia Super Corridor (MSC) initiative.
With E-Services, one can now conduct transactions with government agencies, such as the Road Transport Department (RTD), and with private utility companies such as Tenaga Nasional Berhad (TNB) and Telekom Malaysia Berhad (TM), through various convenient channels such as the eServices kiosks and the internet. There is no more queuing, traffic jams or bureaucratic hassle, and one can now conduct transactions at one's own convenience. Also, the Electronic Labour Exchange (ELX) is a one-stop centre for labour market information, supervised by the Ministry of Human Resources (MOHR), enabling employers and job seekers to communicate on the same platform. e-Syariah, the seventh project under the Electronic Government flagship application of the Multimedia Super Corridor (MSC), is a case management system that integrates the processes related to the management of cases for the Syariah Courts.
Examples of e-services in developed countries
E-services in the United States of America
In America, citizens have many options and opportunities to follow and understand government actions through e-government. Government 2.0 (Gov. 2.0) is currently in place to bring people and governments together to learn new information, increase government transparency, and provide better means of communicating with one another. Gov. 2.0 offers increased citizen participation through online applications such as social media and other apps. Through the internet and websites such as USA.gov, an individual can perform actions such as contacting elected officials; finding information about the workforce, such as retirement plans and labor laws; learning about money and consumer issues such as taxes, loans, and welfare; learning about citizenship and obtaining a visa or passport; and exploring other topics such as health and welfare, education, and environmental issues. E-commerce is another growing e-service in the United States for both big and small businesses. E-commerce sales are projected to grow 10 to 12 percent annually. Amazon.com is the largest online marketplace in the country, with annual sales of $79 billion. Wal-Mart is also a widely popular retailer that has grown its business by offering electronic services; Wal-Mart's e-commerce sales in 2015 were roughly $13 billion. Apple develops and sells a wide variety of technological goods and services such as cell phones, music players, and computers; Apple's e-commerce sales in 2015 were $12 billion. E-services allow businesses to reach new clientele and offer new services. Companies such as eBay and Etsy have achieved great success, with eBay posting a net income in 2016 of nearly $9 billion and Etsy claiming roughly $200 million in profits from nearly $2 billion in sales. The majority of eBay's business is conducted in the United States, but it does a great deal of international business, including in the United Kingdom and Germany. The global reach of Etsy is seen in nearly every country in the world, with 31% of gross merchandise sales occurring outside of the United States.
E-services in China
China's recent realization of the continuing growth of internet usage has caused the government to recognize the need to expand its e-government services. Some steps the government wants to take in order to increase its e-government services are to develop more online functions, use government sites to integrate online services, make supplementary open data available to citizens to further government transparency, and combine services from local and country-wide governments for convenience.
China's plan of action to incorporate the internet into everyday business and grow the economy is known as "Internet Plus". The government plans to have this plan in full effect by 2025 as the main driving force for economic and social improvements. Internet Plus will help to grow the job market, as the government plans to use local citizens for development, and to generate more areas dedicated to technological growth, such as Zhongguancun. Because of its large population, China has the most internet and cell phone users in the world. This creates a need for technological growth and a demand for increased e-services. In 2016, Chinese consumers spent more money on online goods and services than those in the United States and United Kingdom combined. There are a wide variety of reasons why e-commerce flourishes in China, including easy access to mobile internet, the low cost of shipping, and a vast selection of cheap, unbranded products. Alibaba is China's largest online marketplace, with an annual revenue stream of $16 billion. Its services are also available internationally, for example in Russia and Brazil, through AliExpress. Tencent is another internet company, with an annual revenue of $16 billion. Tencent is used mainly for instant messaging but has other applications as well, including mobile games and other digital content. By the end of 2015, Tencent's WeChat messaging app had reached around 700 million users. The biggest competitor to Tencent is Facebook's WhatsApp. Baidu is the most visited website in the country; it is used as a search engine and has an annual revenue of $10 billion. In March 2016, it had roughly 663 million users. Google challenges Baidu as a major internet search engine worldwide. Huawei is a tech company that produces phones and tablets and develops the equipment used in fixed-line networks. Huawei has an annual revenue of $61 billion. It currently operates in 100 countries worldwide, and in 2015 it filed 3,898 patent applications, more than any other company in the world. The biggest competitors to Huawei are Apple and Samsung.
Challenges to e-services in the Developing World
The future of e-services is bright, but some challenges remain. Some of the challenges in e-services, as Sheth & Sharma (2007) identify them, are:
Low penetration of ICT, especially in developing countries;
Fraud on the internet, which is estimated at around US$2.8 billion;
Privacy, due to the emergence of various types of spyware and security holes; and
Intrusive characteristics of the service (e.g. mobile phone based services), as customers may not want to be contacted by service providers at any time and in any place.
The first challenge and primary obstacle to the e-service platform will be the penetration of the internet. In some developing countries, access to the internet is limited and speeds are also limited. In these cases, firms and customers will continue to use traditional platforms. The second issue of concern is fraud on the internet. It is anticipated that fraud in the e-commerce internet space costs $2.8 billion. The possibility of fraud will continue to reduce the utilization of the internet. The third issue is privacy. Due to both spyware and security holes in operating systems, there is a concern that the transactions that consumers undertake have privacy limitations. For example, by stealthily following online activities, firms can develop fairly accurate descriptions of customer profiles.
The possibility of privacy violations will reduce the utilization of the internet. The final issue is that e-services can also become intrusive, as they reduce the time and location barriers of other forms of contact. For example, firms can contact people through mobile devices at any time and at any place. Customers do not like intrusive behavior and may not use the e-service platform as a result (Heiner and Iyer, 2007). However, in recent years one can observe the appearance of different e-services and related initiatives in developing countries, such as Project Nemmadi, the MCA21 Mission Mode Project and, more recently, Digital India in India; the Electronic Government Directorate in Pakistan; the E-government citizen program in Iraq; and the E-government Development Center in Azerbaijan.
Major e-service keywords
A considerable amount of research effort already exists on the subject, exploring different aspects of e-service and e-service delivery; one noteworthy effort is Rowley's (2006) review of the e-service literature. The key finding of his study is that there is a need to explore dimensions of e-service delivery beyond service quality alone: "In order to understand e-service experiences it is necessary to go beyond studies of e-service quality dimensions and to also take into account the inherent characteristics of e-service delivery and the factors that differentiate one service experience from another." Some of the major keywords of e-service as found in the e-government research are as follows:
Acceptance
User acceptance of technology is defined according to Morris (1996, referred to by Wu 2005, p. 1) as "the demonstrable willingness within a user group to employ information technology for the tasks it is designed to support". This definition can be brought into the context of e-services, where acceptance can be defined as the users' willingness to use the e-service or the willingness to decide when and how to use it.
Accessibility
Users' ability to access the e-service is an important theme in the previous literature. For example, Huang (2003) finds that most websites in general fail to serve users with disabilities. Recommendations to improve accessibility are evident in the previous literature, including Jaeger (2006), who suggests, among other measures: designing for accessibility from the outset of website development, involving users with disabilities in the testing of the site, and focusing on the benefits of an accessible Web site to all users.
Administrative literacy
According to Grönlund et al. (2007), for a simple e-service the needs for knowledge and skills, content and procedures are considerably less. However, complicated services require changes to some established skills, such as replacing verbal skills with skill in searching for information online.
Benchmarking
This theme is concerned with establishing standards for measuring e-services, or the best practices within the field. It also includes the international benchmarking of e-government services (UN reports, EU reports); much criticism has targeted these reports as being incomprehensive and of limited use. According to Bannister (2007), "… benchmarks are not a reliable tool for measuring real e-government progress.
Furthermore, if they are poorly designed, they risk distorting government policies as countries may chase the benchmark rather than looking at real local and national needs."
Digital divide
The digital divide is considered one of the main barriers to implementing e-services; some people do not have the means to access e-services and some others do not know how to use the technology (or the e-service). According to Helbig et al. (2009), "we suggest E-Government and the digital divide should be seen as complementary social phenomena (i.e., demand and supply). Moreover, a serious e-government digital divide is that services are mostly used by social elites."
E-readiness
Most of the reports and the established criteria focus on assessing the services in terms of infrastructure and public policies, ignoring citizen participation or e-readiness. According to Shalini (2009), "the results of the research project reveal that a high index may be only indicating that a country is e-ready in terms of ICT infrastructure and info-structure, institutions, policies, and political commitment, but it is a very poor measure of the e-readiness of citizens. To summarize the findings, it can be said that Mauritius is ready but the Mauritians are not."
E-readiness, as the Economist Intelligence Unit defines it, is the measure of a country's ability to leverage digital channels for communication, commerce and government in order to further economic and social development. Implied in this measure is the extent to which the usage of communications devices and Internet services creates efficiencies for business and citizens, and the extent to which this usage is leveraged in the development of information and communications technology (ICT) industries. In general terms, the definition of e-readiness is relative, depending for instance on the priorities and perspective of the country in question.
Efficiency
As opposed to effectiveness, efficiency is focused on the internal competence within government departments when delivering e-services. There is a complaint that researchers focus more on effectiveness: "There is an emerging trend seemingly moving away from the efficiency target and focusing on users and governance outcome. While the latter is worthwhile, efficiency must still remain a key priority for eGovernment given the budget constraints compounded in the future by the costs of an ageing population. Moreover, efficiency gains are those that can be most likely proven empirically through robust methodologies."
Security
Security is the most important challenge facing the implementation of e-services, because without a guarantee of privacy and security citizens will not be willing to take up e-government services. These security concerns, such as hacker attacks and the theft of credit card information, make governments hesitant to provide public online services. According to the GAO report of 2002, "security concerns present one of the toughest challenges to extending the reach of e-government. The rash of hacker attacks, Web page defacing, and credit card information being posted on electronic bulletin boards can make many federal agency officials—as well as the general public—reluctant to conduct sensitive government transactions involving personal or financial data over the Internet." By and large, security is one of the major challenges facing the implementation and development of electronic services.
People want to be assured that they are safe when they are conducting online services and that their information will remain secure and confidential.
Stakeholders
Axelsson et al. (2009) argue that the stakeholder concept – which was originally used in private firms – can be used in public settings and in the context of e-government. According to them, several scholars have discussed the use of the stakeholder theory in public settings. The stakeholder theory suggests the need to focus on all the involved stakeholders when designing the e-service, not only on the government and citizens.
Usability
Compared to accessibility, there is sufficient literature that addresses the issue of usability; researchers have developed different models and methods to measure the usability and effectiveness of e-government websites. However, there are still calls to improve these measures and to make them more comprehensive: "The word usability has cropped up a few times already in this unit. In the context of biometric identification, usability referred to the smoothness of enrollment and other tasks associated with setting up an identification system. A system that produced few false matches during enrollment of applicants was described as usable. Another meaning of usability is related to the ease of use of an interface. Although this meaning of the term is often used in the context of computer interfaces, there is no reason to confine it to computers."
Social, cultural and ethical implications of e-services
The perceived effectiveness of e-services can be influenced by the public's view of the social and cultural implications of e-technologies and e-services.
Impacts on individuals' rights and privacy – As more and more companies and government agencies use technology to collect, store, and make accessible data on individuals, privacy concerns have grown. Some companies monitor their employees' computer usage patterns in order to assess individual or workgroup performance. Technological advancements are also making it much easier for businesses, government and other individuals to obtain a great deal of information about an individual without their knowledge. There is a growing concern that access to a wide range of information can be dangerous within politically corrupt government agencies.
Impact on jobs and workplaces – In the early days of computers, management scientists anticipated that computers would replace human decision-makers. However, despite significant technological advances, this prediction is no longer a mainstream concern. At the current time, one of the concerns associated with computer usage in any organization (including governments) is the health risk, such as injuries related to working continuously on a computer keyboard. Government agencies are expected to work with regulatory groups in order to avoid these problems.
Potential impacts on society – Despite some economic benefits of ICT to individuals, there is evidence that the computer literacy and access gap between the haves and have-nots may be increasing. Education and information access are more than ever the keys to economic prosperity, yet access by individuals in different countries is not equal; this social inequity has become known as the digital divide.
Impact on social interaction – Advancements in ICT and e-technology solutions have enabled many government functions to become automated and information to be made available online. This is a concern to those who place a high value on social interaction.
Information security – Technological advancements allow government agencies to collect, store and make data available online to individuals and organizations. Citizens and businesses expect to be allowed to access data in a flexible manner (at any time and from any location). Meeting these expectations comes at a price to government agencies where it concerns managing information – more specifically, ease of access; data integrity and accuracy; capacity planning to ensure the timely delivery of data to remote (possibly mobile) sites; and managing the security of corporate and public information.
E-service awards
The benefits of e-services in advancing business efficiency and in promoting good governance are considerable; recognition of these benefits has resulted in a number of international awards dedicated to recognizing the best-designed e-services. This section describes some of these international awards.
Best online e-service in Europe
The European eGovernment Awards program started in 2003 to recognize the best online public services in Europe. The aim of the Awards is to encourage the deployment of e-services and to bring attention to best practices in the field. The winners of the 4th European eGovernment Awards were announced at the award ceremony that took place at the 5th Ministerial eGovernment Conference on 19 November 2009 (Sweden); the winners in their respective categories are:
Category 1. eGovernment supporting the Single Market: EU-OPA, the European Order for Payment Application
Category 2a. eGovernment empowering citizens: Genvej
Category 2b. eGovernment empowering businesses: MEPA, the Public Administration eMarketplace
Category 3. eGovernment enabling administrative efficiency and effectiveness: Licensing of Hunters via the "Multibanco" ATM Network
Public prize: SMS Information System
Other awards
Sultan Qaboos Award for Excellence in eGovernance (started 2009) – The award has five categories: Best eContent, Best eService, Best eProject, eEconomy, eReadiness.
eGovernment Excellence Awards (started 2007) – The program has three groups of categories: Government Awards (Best eContent, Best eService, Best eProject, eEconomy, eEducation, eMaturity), Business Awards (Best ICT Solution Provider, eEconomy, eEducation), and Citizen Awards (Best eContent, eCitizen).
Philippines e-Service Awards (started 2001) – Categories: Outstanding Client Application of the Year, Outstanding Customer Application of the Year, Groundbreaking Technology of the Year, Most Progressive Homegrown Company of the Year.
Major journals focusing on e-services
Some journals are particularly focused on e-services. These include:
International Journal of E-Services and Mobile Applications
eService Journal
European Journal of Information Systems
MIS Quarterly
Information & Management
Information Systems Journal
International Journal of Electronic Government
Electronic Journal of e-Government
International Journal of Electronic Commerce
Internet Research
Journal of Information Technology
Journal of Strategic Information Systems
Journal of the Association for Information Systems
Government Information Quarterly
Public Administration Review
See also
Electronic services delivery
Customer knowledge
References
External links
E-services delivery
The Best E-Government Sites
The World Bank (InfoDev) e-Government toolkit
Digital divide
E-commerce
Information technology
Knowledge representation
E-government
49925967
https://en.wikipedia.org/wiki/ShimmerCat
ShimmerCat
ShimmerCat is a web server designed from the ground up for HTTP/2 and written in Haskell. The purported purpose of the server is to take full advantage of HTTP/2 features, including HTTP/2 PUSH, to enhance the perceived page load speed of served websites. ShimmerCat uses machine learning to accelerate asset delivery to the browser.
Overview
As of September 2016, ShimmerCat is at version 1.5.0 and runs on Linux and OS X. The software can be used for development of web applications through its SOCKS5 and HTTP/2 implementations, making it possible to develop web applications without having to modify /etc/hosts or use different sets of URLs for development and production.
References
Web server software for Linux
Reverse proxy
Application layer protocols
Proxy server software for Linux
Internet Protocol based network software
Unix network-related software
14968255
https://en.wikipedia.org/wiki/Digital%20television%20in%20the%20Netherlands
Digital television in the Netherlands
The Netherlands now has three major forms of broadcast digital television: terrestrial (DVB-T), cable (DVB-C), and satellite (DVB-S). In addition, IPTV services are available. At the end of the first quarter of 2013 almost 84% of the households in the Netherlands had some form of digital television.
Terrestrial
The Netherlands was the second European country to complete the move to digital terrestrial broadcasting, on December 11, 2006. The switch-off was helped greatly by the fact that about 90% of the households have cable, which continues to use analog distribution. Due to the very extensive penetration of cable systems, usage of terrestrial television in the Netherlands is largely confined to remote rural areas and to portable televisions in caravans, etc. Since the switch-off, all terrestrial television broadcasts in the Netherlands are digital. The national public television channels NPO 1, NPO 2 and NPO 3 and the regional public television channels are free-to-air. DVB-T2 transmissions in the Netherlands are provided commercially by the KPN subsidiary Digitenne. They offer 25 TV channels and 16 radio channels, including the free-to-air channels. The Digitenne service uses Conax encryption.
Handheld
KPN launched a DVB-H service, MobileTV, on Thursday, June 5, 2008 with a bouquet of ten channels: NPO 1, NPO 3, RTL 4, RTL 24, SBS 6, Disney XD (Netherlands)/Veronica, MTV, Discovery Channel, Xite and Nick Toons. RTL 24 is a made-for-mobile channel with news and current affairs. Xite is a new Dutch music channel. In November 2008, a new dedicated mobile TV channel was added: Nu.tv, from Ilse Media and the nu.nl news web site. The service was closed on June 1, 2011; KPN is now using the freed-up capacity for adding new channels to its Digitenne DTT platform.
Cable
Over 90% of the households in the Netherlands receive their television signal by cable, making it one of the countries with the highest cable penetration. Some cable viewers still watch analogue because no set-top box is necessary. But with the uptake of LCD and plasma televisions, customers are looking for the better picture quality of digital cable. In addition, digital cable offers hundreds of channels compared to the roughly thirty channels of analogue cable. All the major cable companies in the Netherlands offer a digital television service. They all use the DVB-C standard for their digital signal but use different encryption techniques, the most used being Irdeto 2 and Nagravision. The largest cable company, Ziggo, supports the CI+ standard, making it possible for their customers to use televisions with an integrated digital tuner without the need for an additional set-top box. All cable companies offer a number of high-definition channels. The most watched channels are transmitted in the clear on the Ziggo and Caiway digital cable networks. The three largest cable companies in the Netherlands are:
Ziggo
Caiway
DELTA
Satellite
Digital satellite television in the Netherlands is available via CanalDigitaal, using SES' Astra satellites at 19.2° east and 23.5° east. Services from both satellite positions can be received using a single dish with a Duo LNB, specifically designed for this purpose. It is only possible to register as a customer of CanalDigitaal using a Dutch postal address, due to copyright restrictions. A standard DVB-S receiver is used, which can also receive other free-to-air broadcasts. CanalDigitaal uses the Mediaguard/Nagravision encryption.
In 2017 a second provider of digital satellite television, named Joyne, began its services, using Eutelsat's Eurobird satellites at 9° east. Joyne uses the Conax encryption.
IPTV
Since May 1, 2006, KPN has offered Mine TV, an IPTV service based on their DSL service, with the ability to receive video on demand and replay missed TV episodes besides regular TV programming. During 2007, the KPN service was renamed KPN Interactieve TV. Tele2 also offers an IPTV service called Tele2Vision. Since mid-2008 XMSNET has also started the rollout of IPTV over their FTTH (Fiber To The Home) network in several cities in the Netherlands.
High definition
In the Netherlands customers can receive high-definition television channels by cable, satellite and DVB-T2.
History
The first trials with high-definition television in the Netherlands began in the summer of 2006 with the broadcast of the 2006 World Cup in HD. The games were broadcast by the Netherlands Public Broadcasting (NPO) broadcaster NOS on a temporary 720p HD version of the NPO 2 channel. Only the live games were broadcast in HD; images from the studio and interviews were still SD. The NPO 2 HD channel went off-air after the World Cup. The larger cable companies continued an HD service with a small number of general interest channels like Discovery HD and National Geographic Channel HD. But because no Dutch network had made the move to HD, channels were already broadcast in widescreen, and the quality of the standard-definition PAL signal was good enough for most people, demand was low. After the 2006 trials none of the main Dutch networks made the move to HD until the summer of 2008, when from June 1 until August 24, 2008 the NPO made their primary channel, NPO 1, temporarily available in HD. This made it possible to broadcast Euro 2008, the 2008 Tour de France, and the 2008 Summer Olympics in HD and additionally allowed them to test their systems before the scheduled launch of their permanent HD service. Technicolor Netherlands, the company responsible for the technical realisation of the broadcasts for all the NPO's television and radio channels, began the summer 2008 test broadcast of NPO 1 HD in 720p, thereby following the European Broadcasting Union (EBU) recommendations for HD broadcasting. During the test period an additional 1080i version of the channel was made available to the cable companies because of quality complaints from viewers. On July 4, 2009 the NPO started their permanent HD service when all three channels, NPO 1, NPO 2, and NPO 3, began simulcasting in 1080i high-definition. Most programming in the early stages was upscaled, with more programs becoming available in native HD over time. On October 15, 2009 RTL Nederland started simulcasting their RTL 7 and RTL 8 channels in 1080i high-definition. RTL Nederland then also announced plans for HD versions of their two other channels, RTL 4 and RTL 5, for 2010. These have also been available in HD since then.
Current
The Netherlands has ten main television channels, three public and seven commercial. All main television channels are simulcast in high-definition. Furthermore, other general interest high-definition channels are available with Dutch audio or subtitles.
Main Dutch channels that broadcast in HD: NPO 1 (started 4 July 2009) NPO 2 (started 4 July 2009) NPO 3 (started 4 July 2009) RTL 4 (started 2010) RTL 5 (started 2010) RTL 7 (started 15 October 2009) RTL 8 (started 15 October 2009) RTL Z (started 7 September 2015) SBS 6 (started 2010) Veronica (started 2010) NET 5 (started 2010) SBS 9 (started 2014) Fox (started 2013) Other HD channels available in the Nederlands: 24Kitchen HD Al Jazeera English HD Animal Planet HD BBC First HD BBC World News HD Boomerang HD Cartoon Network HD CNN HD Comedy Central HD DanceTelevision HD Discovery HD Discovery Science HD Disney Channel HD Disney Junior HD Disney XD HD E! Entertainment HD Euronews HD Eurosport 1 HD Eurosport 2 HD Family 7 HD Fashion TV HD Film1 Premiere HD Film1 Action HD Film1 Comedy & KidsFamily HD Film1 Drama HD Fox Sports 1/2/3 HD Fox Sports 4/5/6 HD History HD Horse & Country TV HD Investigation Discovery HD Love Nature HD MTV HD MTV Live HD myZen.tv HD National Geographic HD National Geographic Wild HD Nickelodeon HD OutTV HD RTL Crime HD RTL Lounge HD RTL Telekids HD ShortsTV HD Spike HD Stingray Classica HD Stingray Djazz HD Stingray iConcerts HD TLC HD TV 538 HD Viceland HD Xite HD Ziggo Sport HD Ziggo Sport Select HD Ziggo Sport Voetbal HD Ziggo Sport Racing HD Ziggo Sport Golf HD Ziggo Sport Docu HD Ziggo Sport Extra HD Also available on most platforms: BBC One HD BBC Two HD VRT één HD VRT Canvas HD Ketnet HD Das Erste HD ZDF HD Arte HD TV5 Monde HD Satellite viewers can receive a number of additional HD channels from the surrounding countries when broadcasting free-to-air. But most of these channels are not part of HD services offered in the Netherlands nor broadcast programming aimed at the Dutch market. See also Digital television transition Television in the Netherlands List of cable companies in the Netherlands SES satellite operator Astra satellite family Digital television High-definition television References External links SES fleet information and map Dutch satellite channels on Lyngsat Television in the Netherlands Netherlands Science and technology in the Netherlands
167596
https://en.wikipedia.org/wiki/Protected%20mode
Protected mode
In computing, protected mode, also called protected virtual address mode, is an operational mode of x86-compatible central processing units (CPUs). It allows system software to use features such as virtual memory, paging and safe multi-tasking designed to increase an operating system's control over application software. When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode, in order to maintain backward compatibility with earlier x86 processors. Protected mode may only be entered after the system software sets up one descriptor table and enables the Protection Enable (PE) bit in control register 0 (CR0). Protected mode was first added to the x86 architecture in 1982, with the release of Intel's 80286 (286) processor, and later extended with the release of the 80386 (386) in 1985. Due to the enhancements added by protected mode, it has become widely adopted and has become the foundation for all subsequent enhancements to the x86 architecture, although many of those enhancements, such as added instructions and new registers, also brought benefits to real mode.
History
The Intel 8086, the predecessor to the 286, was originally designed with a 20-bit address bus for its memory. This allowed the processor to access 2^20 bytes of memory, equivalent to 1 megabyte. At the time, 1 megabyte was considered a relatively large amount of memory, so the designers of the IBM Personal Computer reserved the first 640 kilobytes for use by applications and the operating system and the remaining 384 kilobytes for the BIOS (Basic Input/Output System) and memory for add-on devices. As the cost of memory decreased and memory use increased, the 1 MB limitation became a significant problem. Intel intended to solve this limitation along with others with the release of the 286.
The 286
The initial protected mode, released with the 286, was not widely used; for example, it was used by Coherent (from 1982), Microsoft Xenix (around 1984) and Minix. Several shortcomings, such as the inability to access the BIOS or make DOS calls due to the inability to switch back to real mode without resetting the processor, prevented widespread usage. Acceptance was additionally hampered by the fact that the 286 only allowed memory access in 16-bit segments via each of four segment registers, meaning only 4 × 2^16 bytes, equivalent to 256 kilobytes, could be accessed at a time. Because changing a segment register in protected mode caused a 6-byte segment descriptor to be loaded into the CPU from memory, the segment register load instruction took many tens of processor cycles, making it much slower than on the 8086; therefore, the strategy of computing segment addresses on the fly in order to access data structures larger than 128 kilobytes (the combined size of the two data segments) became impractical, even for those few programmers who had mastered it on the 8086/8088. The 286 maintained backwards compatibility with its precursor, the 8086, by initially entering real mode on power-up. Real mode functioned virtually identically to the 8086, allowing the vast majority of existing 8086 software to run unmodified on the newer 286. Real mode also served as a more basic mode in which protected mode could be set up, solving a sort of chicken-and-egg problem.
To access the extended functionality of the 286, the operating system would set up some tables in memory that controlled memory access in protected mode, set the addresses of those tables into some special registers of the processor, and then set the processor into protected mode. This enabled 24-bit addressing, which allowed the processor to access 2^24 bytes of memory, equivalent to 16 megabytes.
The 386
With the release of the 386 in 1985, many of the issues preventing widespread adoption of the previous protected mode were addressed. The 386 was released with an address bus size of 32 bits, which allows 2^32 bytes of memory to be accessed, equivalent to 4 gigabytes. The segment sizes were also increased to 32 bits, meaning that the full address space of 4 gigabytes could be accessed without the need to switch between multiple segments. In addition to the increased size of the address bus and segment registers, many other new features were added with the intention of increasing operational security and stability. Protected mode is now used in virtually all modern operating systems which run on the x86 architecture, such as Microsoft Windows, Linux, and many others. Furthermore, learning from the failures of the 286 protected mode to satisfy the needs of multiuser DOS, Intel added a separate virtual 8086 mode, which allowed multiple virtualized 8086 processors to be emulated on the 386. Hardware x86 virtualization required for virtualizing protected mode itself, however, had to wait for another 20 years.
386 additions to protected mode
With the release of the 386, the following additional features were added to protected mode:
Paging
32-bit physical and virtual address space (the 32-bit physical address space is not present on the 80386SX and other 386 processor variants which use the older 286 bus)
32-bit segment offsets
Ability to switch back to real mode without resetting
Virtual 8086 mode
Entering and exiting protected mode
Until the release of the 386, protected mode did not offer a direct method to switch back into real mode once protected mode was entered. IBM devised a workaround (implemented in the IBM AT) which involved resetting the CPU via the keyboard controller and saving the system registers, stack pointer and often the interrupt mask in the real-time clock chip's RAM. This allowed the BIOS to restore the CPU to a similar state and begin executing code before the reset. Later, a triple fault was used to reset the 286 CPU, which was a lot faster and cleaner than the keyboard controller method (and does not depend on IBM AT-compatible hardware, but will work on any 80286 CPU in any system). To enter protected mode, the Global Descriptor Table (GDT) must first be created with a minimum of three entries: a null descriptor, a code segment descriptor and a data segment descriptor. In an IBM-compatible machine, the A20 line (21st address line) also must be enabled to allow the use of all the address lines so that the CPU can access beyond 1 megabyte of memory (only the first 20 are allowed to be used after power-up, to guarantee compatibility with older software written for the Intel 8088-based IBM PC and PC/XT models). After performing those two steps, the PE bit must be set in the CR0 register and a far jump must be made to clear the prefetch input queue.
; enter protected mode (set PE bit)
mov EBX, CR0
or EBX, PE_BIT
mov CR0, EBX
; clear prefetch queue (using far jump instruction jmp)
jmp CLEAR_LABEL
CLEAR_LABEL:
With the release of the 386, protected mode could be exited by loading the segment registers with real mode values, disabling the A20 line and clearing the PE bit in the CR0 register, without the need to perform the initial setup steps required with the 286.
Features
Protected mode has a number of features designed to enhance an operating system's control over application software, in order to increase security and system stability. These additions allow the operating system to function in a way that would be significantly more difficult or even impossible without proper hardware support.
Privilege levels
In protected mode, there are four privilege levels or rings, numbered from 0 to 3, with ring 0 being the most privileged and 3 being the least. The use of rings allows system software to restrict tasks from accessing data, call gates or executing privileged instructions. In most environments, the operating system and some device drivers run in ring 0 and applications run in ring 3.
Real mode application compatibility
According to the Intel 80286 Programmer's Reference Manual, for the most part, the binary compatibility with real-mode code, the ability to access up to 16 MB of physical memory, and 1 GB of virtual memory were the most apparent changes to application programmers. This was not without its limitations. If an application utilized or relied on any of the techniques below, it would not run:
Segment arithmetic
Privileged instructions
Direct hardware access
Writing to a code segment
Executing data
Overlapping segments
Use of BIOS functions, due to the BIOS interrupts being reserved by Intel
In reality, almost all DOS application programs violated these rules. Due to these limitations, virtual 8086 mode was introduced with the 386. Despite such potential setbacks, Windows 3.0 and its successors can take advantage of the binary compatibility with real mode to run many Windows 2.x (Windows 2.0 and Windows 2.1x) applications in protected mode, which ran in real mode in Windows 2.x.
Virtual 8086 mode
With the release of the 386, protected mode offers what the Intel manuals call virtual 8086 mode. Virtual 8086 mode is designed to allow code previously written for the 8086 to run unmodified and concurrently with other tasks, without compromising security or system stability. Virtual 8086 mode, however, is not completely backwards compatible with all programs. Programs that require segment manipulation, privileged instructions, direct hardware access, or use self-modifying code will generate an exception that must be served by the operating system. In addition, applications running in virtual 8086 mode generate a trap with the use of instructions that involve input/output (I/O), which can negatively impact performance. Due to these limitations, some programs originally designed to run on the 8086 cannot be run in virtual 8086 mode. As a result, system software is forced to compromise either system security or backwards compatibility when dealing with legacy software. An example of such a compromise can be seen with the release of Windows NT, which dropped backwards compatibility for "ill-behaved" DOS applications.
Segment addressing
Real mode
In real mode each logical address points directly into a physical memory location. Every logical address consists of two 16-bit parts: the segment part of the logical address contains the base address of a segment with a granularity of 16 bytes, i.e. a segment may start at physical address 0, 16, 32, ..., 2^20 − 16. The offset part of the logical address contains an offset inside the segment, i.e. the physical address can be calculated as physical_address := segment_part × 16 + offset (if the address line A20 is enabled), respectively (segment_part × 16 + offset) mod 2^20 (if A20 is off). Every segment has a size of 2^16 bytes.
Protected mode
In protected mode, the segment part is replaced by a 16-bit selector, in which the 13 upper bits (bit 3 to bit 15) contain the index of an entry inside a descriptor table. The next bit (bit 2) specifies whether the operation is used with the GDT or the LDT. The lowest two bits (bit 1 and bit 0) of the selector are combined to define the privilege of the request, where the values of 0 and 3 represent the highest and the lowest privilege, respectively. This means that the byte offset of descriptors in the descriptor table is the same as the 16-bit selector, provided the lower three bits are zeroed. The descriptor table entry defines the real linear address of the segment, a limit value for the segment size, and some attribute bits (flags).
286
The segment address inside the descriptor table entry has a length of 24 bits, so every byte of the physical memory can be defined as the bound of a segment. The limit value inside the descriptor table entry has a length of 16 bits, so the segment length can be between 1 byte and 2^16 bytes. The calculated linear address equals the physical memory address.
386
The segment address inside the descriptor table entry is expanded to 32 bits, so every byte of the physical memory can be defined as the bound of a segment. The limit value inside the descriptor table entry is expanded to 20 bits and completed with a granularity flag (G-bit, for short):
If the G-bit is zero, the limit has a granularity of 1 byte, i.e. the segment size may be 1, 2, ..., 2^20 bytes.
If the G-bit is one, the limit has a granularity of 2^12 bytes, i.e. the segment size may be 1 × 2^12, 2 × 2^12, ..., 2^20 × 2^12 bytes.
If paging is off, the calculated linear address equals the physical memory address. If paging is on, the calculated linear address is used as the input of paging. The 386 processor also uses 32-bit values for the address offset. For maintaining compatibility with 286 protected mode, a new default flag (D-bit, for short) was added. If the D-bit of a code segment is off (0), all commands inside this segment will be interpreted as 16-bit commands by default; if it is on (1), they will be interpreted as 32-bit commands.
Structure of segment descriptor entry
The fields of a segment descriptor entry are as follows:
A is the Accessed bit;
R is the Readable bit;
C (bit 42) depends on X: if X = 1 then C is the Conforming bit, and determines which privilege levels can far-jump to this segment (without changing the privilege level): if C = 0 then only code with the same privilege level as DPL may jump here; if C = 1 then code with the same or a lower privilege level relative to DPL may jump here. If X = 0 then C is the direction bit: if C = 0 then the segment grows up; if C = 1 then the segment grows down.
X is the Executable bit: if X = 1 then the segment is a code segment; if X = 0 then the segment is a data segment.
S is the Segment type bit, which should generally be cleared for system segments;
DPL is the Descriptor Privilege Level;
P is the Present bit;
D is the Default operand size;
G is the Granularity bit;
Bit 52 of the 80386 descriptor is not used by the hardware.
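As an illustration of the fields just listed, the following Python sketch packs an 8-byte 80386-style segment descriptor from a base, a limit and the attribute bits, using the standard byte layout of x86 descriptors. The helper and the example values (a flat 4 GiB ring-0 code segment) are purely illustrative and not taken from any particular operating system.

# Pack an 8-byte 80386 segment descriptor (illustrative helper, not from any OS).
import struct

def make_descriptor(base: int, limit: int, access: int, flags: int) -> bytes:
    """base: 32-bit linear base; limit: 20-bit limit; access: P/DPL/S/type byte;
    flags: nibble holding the G (granularity) and D (default operand size) bits."""
    return struct.pack(
        "<HHBBBB",
        limit & 0xFFFF,                                 # limit bits 0..15
        base & 0xFFFF,                                  # base bits 0..15
        (base >> 16) & 0xFF,                            # base bits 16..23
        access,                                         # P, DPL, S, type (X/C/R/A)
        ((flags & 0xF) << 4) | ((limit >> 16) & 0xF),   # G/D nibble and limit bits 16..19
        (base >> 24) & 0xFF,                            # base bits 24..31
    )

# A flat 4 GiB ring-0 code segment: base 0, limit 0xFFFFF with 4 KiB granularity.
# access 0x9A = present, DPL 0, code segment, readable; flags 0xC = G=1, D=1 (32-bit).
flat_code = make_descriptor(0x00000000, 0xFFFFF, 0x9A, 0xC)
assert flat_code.hex() == "ffff0000009acf00"

Loading a selector whose index points at such an entry in the GDT yields the flat 32-bit code segment commonly used by protected-mode operating systems.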
Paging In addition to adding virtual 8086 mode, the 386 also added paging to protected mode. Through paging, system software can restrict and control a task's access to pages, which are sections of memory. In many operating systems, paging is used to create an independent virtual address space for each task, preventing one task from manipulating the memory of another. Paging also allows for pages to be moved out of primary storage and onto a slower and larger secondary storage, such as a hard disk drive. This allows for more memory to be used than is physically available in primary storage. The x86 architecture allows control of pages through two arrays: page directories and page tables. Originally, a page directory was the size of one page, four kilobytes, and contained 1,024 page directory entries (PDE), although subsequent enhancements to the x86 architecture have added the ability to use larger page sizes. Each PDE contained a pointer to a page table. A page table was also originally four kilobytes in size and contained 1,024 page table entries (PTE). Each PTE contained a pointer to the actual page's physical address; PTEs are only used when four-kilobyte pages are in use. At any given time, only one page directory may be in active use.
Multitasking Through the use of the rings, privileged call gates, and the Task State Segment (TSS), introduced with the 286, preemptive multitasking was made possible on the x86 architecture. The TSS allows general-purpose registers, segment selector fields, and stacks to all be modified without affecting those of another task. The TSS also allows a task's privilege level and I/O port permissions to be independent of another task's. In many operating systems, the full features of the TSS are not used. This is commonly due to portability concerns or to the performance issues created by hardware task switches. As a result, many operating systems use both hardware and software to create a multitasking system.
Operating systems Operating systems like OS/2 1.x try to switch the processor between protected and real modes. This is both slow and unsafe, because a real mode program can easily crash a computer. OS/2 1.x defines restrictive programming rules allowing a Family API or bound program to run in either real or protected mode. Some early Unix operating systems, OS/2 1.x, and Windows used this mode. Windows 3.0 was able to run real mode programs in 16-bit protected mode; when switching to protected mode, it decided to preserve the single privilege level model that was used in real mode, which is why Windows applications and DLLs can hook interrupts and do direct hardware access. That lasted through the Windows 9x series. If a Windows 1.x or 2.x program is written properly and avoids segment arithmetic, it will run the same way in both real and protected modes. Windows programs generally avoid segment arithmetic because Windows implements a software virtual memory scheme, moving program code and data in memory when programs are not running, so manipulating absolute addresses is dangerous; programs should only keep handles to memory blocks when not running.
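Returning to the paging structures described in the Paging section above, the index arithmetic of the original two-level, 4 KiB scheme can be sketched as follows. The snippet only shows how a 32-bit linear address is split into a page-directory index, a page-table index and a byte offset (1,024 entries per level); it does not perform the actual walk of the in-memory tables.

#include <stdio.h>
#include <stdint.h>

struct paging_indices { uint32_t pde; uint32_t pte; uint32_t offset; };

static struct paging_indices split_linear(uint32_t linear)
{
    struct paging_indices ix;
    ix.pde    = (linear >> 22) & 0x3FF;  /* bits 22..31: page directory index */
    ix.pte    = (linear >> 12) & 0x3FF;  /* bits 12..21: page table index     */
    ix.offset =  linear        & 0xFFF;  /* bits  0..11: offset within page   */
    return ix;
}

int main(void)
{
    struct paging_indices ix = split_linear(0xC0101234);
    printf("PDE %u, PTE %u, offset 0x%03X\n", ix.pde, ix.pte, ix.offset);
    return 0;
}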
Starting an old program while Windows 3.0 is running in protected mode triggers a warning dialog, suggesting to either run Windows in real mode or to obtain an updated version of the application. Updating well-behaved programs using the MARK utility with the MEMORY parameter avoids this dialog. It is not possible to have some GUI programs running in 16-bit protected mode and other GUI programs running in real mode. In Windows 3.1, real mode was no longer supported and could not be accessed. In modern 32-bit operating systems, virtual 8086 mode is still used for running applications, e.g. DPMI compatible DOS extender programs (through virtual DOS machines) or Windows 3.x applications (through the Windows on Windows subsystem) and certain classes of device drivers (e.g. for changing the screen-resolution using BIOS functionality) in OS/2 2.0 (and later OS/2) and 32-bit Windows NT, all under control of a 32-bit kernel. However, 64-bit operating systems (which run in long mode) no longer use this, since virtual 8086 mode has been removed from long mode. See also Long mode Assembly language Intel Ring (computer security) x86 assembly language References External links Protected Mode Basics Introduction to Protected-Mode Overview of the Protected Mode Operations of the Intel Architecture TurboIRC.COM tutorial to enter protected mode from DOS Protected Mode Overview and Tutorial Code Project Protected Mode Tutorial Akernelloader switching from real mode to protected mode Programming language implementation X86 operating modes
34319676
https://en.wikipedia.org/wiki/NGP%20VAN
NGP VAN
NGP VAN, Inc. is an American privately owned voter database and web hosting service provider used by the Democratic Party, Democratic campaigns, and other non-profit organizations authorized by the Democratic Party. The platform or service is used by political and social campaigns for fundraising, campaign finance compliance, field organizing, and digital organizing. NGP VAN, Inc. was formerly known as Voter Activation Network, Inc. and changed its name to NGP VAN, Inc. in January 2011. The company was founded in 2001 and is based in Washington, District of Columbia, with an additional location in Somerville, Massachusetts. In 2009, the company was the largest partisan provider of campaign compliance software, used by most Democratic members of Congress. The company's services have been utilized by clients such as the Obama 2008 presidential campaign, the Obama 2012 presidential campaign, the Hillary Rodham Clinton 2016 presidential campaign, the Bernie Sanders 2016 presidential campaign, the British Liberal Democrats, and the Liberal Party of Canada. History NGP VAN was created in November 2010 by the merger of its two predecessor companies, NGP Software, founded in 1997 by Nathaniel Pearlman, who later served as chief technology officer for Hillary Clinton's 2008 presidential campaign, in his attic in Washington, DC, and Voter Activation Network, founded in 2001 by Mark Sullivan, in his study in Cambridge, Massachusetts. In October 2014, NGP VAN launched their EveryAction fundraising management platform for non-profits. There are occasional accusations that the Democratic Party has restricted access to Votebuilder to hold off a challenge to an incumbent office holder in a primary. For example, Rachel Ventura, running against an incumbent Democrat in IL-11, was told "I've heard from our Executive Director. Your request for Votebuilder for Illinois' 11th Congressional District through the Democratic Party of Illinois has been denied due to our regulations that we don't issue subscriptions to candidates challenging an incumbent." In 2019, the company made several major acquisitions, including ActionKit, BSD Tools, and DonorTrends. In 2021, NGP VAN's parent company, EveryAction, Inc., was acquired by London-based private equity firm Apax Partners. The company also named Amanda Coulombe President of NGP VAN. Products MiniVAN – A mobile canvassing application that allows for campaigns and organizations to contact voters or supporters, collect data, and sync the information back to their VAN or EveryAction database in real time. 71% of progressive voter contact attempts were made on MiniVAN instead of paper lists in 2018. VoteBuilder – A web-based service used by the Democratic Party and associated campaigns to track interactions with potential voters. Votebuilder stores information like phone calls and other methods of contact with voters in the system. It is used as part of campaign voter persuasion and "get out the vote" operations. The software was created in 2006 to bridge a perceived gap in microtargeting abilities between the Republican and Democratic parties. On Wednesday December 16, 2015, NGP VAN released a code update to their Votebuilder application which contained a bug that allowed two campaigns to see proprietary analytical scores. On the evening of Thursday, December 17 the DNC revoked the Sanders campaign's access to the national voter file, after the campaign accessed and saved data collected by the Clinton campaign. 
The Sanders campaign sued the DNC in District Court and concurrently fired Josh Uretsky, the staffer who managed 3 other members of the Sanders campaign who improperly accessed the data. On December 19, the DNC restored the Sanders campaign's access after the campaign agreed to cooperate with their investigation. NGP – A web-based service for digital engagement, fundraising, and compliance reporting used by most federal Democratic campaigns. NGP is also sometimes used by state and local campaigns. In August 2017, the company released NGP 8, an updated version of the service. Innovation Platform – A series of APIs and integrations that was rolled out in 2014. Several notable integrations include apps and services such as self-serve online advertising, broadcast and peer-to-peer text messaging tools, live calls, and do-it-yourself direct mail. Mobilize – A web-based service for event management and volunteer recruitment that connects campaigns with supporters. Mobilize emerged from the 2016 election and grew to become a vital piece of Democratic and progressive tech infrastructure, before being acquired in 2021. References External links NGP VAN Privately held companies based in Washington, D.C. Software companies based in Massachusetts Software companies based in Washington, D.C. Web design companies Political software Software companies of the United States
487361
https://en.wikipedia.org/wiki/Military%20citadels%20under%20London
Military citadels under London
A number of military citadels are known to have been constructed underground in central London, dating mostly from the Second World War and the Cold War. Unlike traditional above-ground citadels, these sites are primarily secure centres for defence co-ordination. A large network of tunnels exists below London for a variety of communications, civil defence and military purposes, however it is unclear how these tunnels, and the various facilities linked to them, fit together, if at all. Even the number and nature of these facilities is unclear; only a few have been officially admitted to. Pindar The most important military citadel in central London is Pindar, or the Defence Crisis Management Centre. The bunker is deep beneath the Ministry of Defence on Whitehall. Construction took ten years and cost £126.3 million. Pindar became operational in 1992, two years before construction was complete. Computer equipment was much more expensive to install than originally estimated as there was very little physical access to the site. Pindar's main function is to be a crisis management and communications centre, principally between the MOD headquarters and the actual centre of military operations, the Permanent Joint Headquarters in Northwood. It is reported to be connected to Downing Street and the Cabinet Office by a tunnel under Whitehall. Despite rumours, Armed Forces Minister Jeremy Hanley told the House of Commons on 29 April 1994 that "the facility is not connected to any transport system." Although the facility is not open to the public, it has had some public exposure. In the 2003 BBC documentary on the Iraq conflict, Fighting the War, BBC cameras were allowed into the facility to film a small part of a teleconference between ministers and military commanders. Also, in 2008 the British photographer David Moore published his series of photographs, The Last Things, widely believed to be an extensive photographic survey of Pindar. Photographs taken of the facility in 2008 show that it has stores including toothpaste, toothbrushes, and mouthwashes. It has bunks for up to 100 military officers, politicians and civilians as well as communication facilities, a medical centre and maps. The name Pindar is taken from the ancient Greek poet, whose house alone was left standing after Thebes was razed in 335 BC. Admiralty Citadel The Admiralty Citadel, London's most visible military citadel, is located just behind the Admiralty building on Horse Guards Parade. It was constructed in 1940–1941 as a bomb-proof operations centre for the Admiralty, with foundations deep and a thick concrete roof. It is also linked by tunnels to government buildings in Whitehall. Sir Winston Churchill described it in his memoirs as a "vast monstrosity which weighs upon the Horse Guards Parade" – and Boston Ivy has been encouraged to cover it in an apparent attempt to soften its harsh appearance. Its brutal functionality speaks of a very practical purpose; in the event of a German invasion, it was intended that the building would become a fortress, with loopholed firing positions provided to fend off attackers. In 1992 the Admiralty communications centre was established here as the stone frigate HMS St Vincent, which became MARCOMM COMCEN (St Vincent) in 1998. The Admiralty Citadel is still used today by the Ministry of Defence. Cabinet War Rooms The only central London citadel currently open to the public is the Cabinet War Rooms, located in Horse Guards Road in the basement of what is now HM Treasury. 
This was not a purpose-built citadel but was instead a reinforced adaptation of an existing basement built many years before. The War Rooms were constructed in 1938 and were regularly used by Winston Churchill during World War II. However, the Cabinet War Rooms were vulnerable to a direct hit and were abandoned not long after the war. The Cabinet War Rooms were a secret to all civilians until their opening to the public in 1984. They are now a popular tourist attraction maintained by the Imperial War Museum. The section of the War Rooms open to the public is in fact only a portion of a much larger facility. They originally covered three acres (1.2 hectares) and housed a staff of up to 528 people, with facilities including a canteen, hospital, shooting range and dormitories. The centrepiece of the War Rooms is the Cabinet Room itself, where Churchill's War Cabinet met. The Map Room is adjacent, from where the course of the war was directed. It is still in much the same condition as when it was abandoned, with the original maps still on the walls and telephones and other original artefacts on the desks. Churchill slept in a small bedroom nearby. There is a small telephone room (disguised as a toilet) down the corridor that provided a direct line to the White House in Washington DC, via a special scrambler in an annexe basement of Selfridges department store in Oxford Street. Q-Whitehall Q-Whitehall is the name given to a communications facility under Whitehall. The facility was built in a 12 ft (3.7 m) diameter tunnel during World War II, and extends under Whitehall. A similar facility was constructed in a tunnel that ran parallel to the Aldwych branch of the Piccadilly Line and was known as Trunks Kingsway (Kingsway Telephone Exchange). The project was known as 'Post Office scheme 2845'. A detailed description, with photographs, was published just after the war in the January 1946 edition of The Post Office Electrical Engineers' Journal. Sites equipped with unusual amounts of GPO/BT telecommunications plant are given a BT site engineering code. This site's code was L/QWHI. The site provided protected accommodation for the lines and terminal equipment serving the most important government departments, civil and military, to ensure the command and control of the war could continue despite heavy bombing of London. At the northern end, a tunnel connects to a shaft up to the former Trafalgar Square tube station (now merged with Charing Cross station), and to the BT deep level cable tunnels which were built under much of London during the Cold War. At the southern end, an 8 ft (2.4 m) diameter extension (Scheme 2845A) connects to a shaft under Court 6 of the Treasury Building: this provided the protected route from the Cabinet War Room. This was known as Y-Whitehall. The tunnel was further extended (Scheme 2845B) to the Marsham Street Rotundas. This extension housed the 'Federal' telephone exchange which had a dialling code of 333 from the public network. In the 1980s it housed Horseferry Tandem which provided a unified communications system for all government departments as well as the Palace of Westminster. Access to the tunnel is gained via an 8 ft (2.4 m) lateral tunnel and a lift shaft in the nearby Whitehall telephone exchange in Craig's Court. A further entrance is via the deep level portion of the Admiralty. Spur tunnels, 5 ft (1.5 m) in diameter, were built to provide protected cable routes to the major service buildings either side of Whitehall. 
The Whitehall tunnels appear to have been extended in the early 1950s. Some official documents refer to a Scheme 3245: this is the only numbered tunnel scheme that has never been officially revealed or located by researchers. Files in the National Archives which may relate to this have been closed for 75 years and will not be opened until the 2020s. The journalist Duncan Campbell managed to get into the BT deep level cable tunnels below London, and described his adventure in a New Statesman article in 1980. He found a (closed) entrance to Q-Whitehall below Trafalgar Square, and created a number of tunnel maps based on his investigation. See also Fortifications of London Central Government War Headquarters Subterranean London Civil defence centres in London Paddock (war rooms) References External links Churchill War Rooms on the Imperial War Museum website Infrastructure in London Local government in London Subterranean London Buildings and structures in the City of Westminster Military installations of the United Kingdom Fortifications of London United Kingdom nuclear command and control Military command and control installations
2786155
https://en.wikipedia.org/wiki/Apple%20Computer%2C%20Inc.%20v.%20Mackintosh%20Computers%20Ltd.
Apple Computer, Inc. v. Mackintosh Computers Ltd.
Apple Computer, Inc. v. Mackintosh Computers Ltd. [1990] 2 S.C.R. 209, is a Supreme Court of Canada case on copyright law regarding the copyrightability of software. The Court found that programs within ROM silicon chips (in this case, the Autostart ROM and Applesoft in Apple II+ systems) are protected under the Copyright Act, and that the conversion from the source code into object code was a reproduction that did not alter the copyright protection of the original work.
Background The defendant Mackintosh Computers Ltd. was a manufacturer of unlicensed Apple II+ clones that were capable of running software designed for Apple II+ computers. At issue in this case were the Autostart ROM and Applesoft programs embedded in the computer chips of Apple's computers. At trial, the defendants conceded that they copied the chips in question by burning the contents of Apple's ROM chips into their own EPROMs. They further conceded that software written in assembly code was copyrightable under the Copyright Act as a literary work. However, the defendants argued that they had not infringed Apple's copyright in the assembly code because they had copied only the contents of the ROMs in question. The trial judge found that the software burned into Apple's ROMs was both a translation and a reproduction of the assembly language source code, and thus was protected by s. 3(1) of the Copyright Act. The Federal Court of Appeal dismissed the appeal. Two of the appellate judges held that the object code was a reproduction of the assembly code, while the third held that the object code could be considered either a translation or a reproduction, both protected by copyright.
Ruling The Supreme Court held that the machine code embedded in the Apple ROM chips was an exact reproduction of the written assembly code, and as such was protected by s. 3(1) of the Copyright Act. The court further rejected the argument that the machine code fell under the merger doctrine, holding that the programs were a form of expression. The Supreme Court declined to follow the case of Computer Edge Pty. Ltd. v. Apple Computer, Inc. decided by the High Court of Australia, which had virtually identical facts. In that case, the court held that the chips contained a "sequence of electrical impulses" which could not be subject to copyright.
Aftermath Not long after the case, the Copyright Act of Canada was amended to explicitly include software as a "literary work" within the Act.
See also
Apple Inc. litigation
Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983), a similar case heard by the United States Court of Appeals for the Third Circuit
Computer Edge Pty. Ltd. v. Apple Computer, Inc. (1986), 65 A.L.R. 33, a similar case heard by the High Court of Australia
International Business Machines Corporation v. Computer Imports Ltd., [1989] 2 NZLR 395, a similar case heard by the High Court of New Zealand
References External links Canadian copyright case law Supreme Court of Canada cases Apple Inc. litigation 1990 in Canadian case law
6096872
https://en.wikipedia.org/wiki/Bilateral%20key%20exchange
Bilateral key exchange
Bilateral key exchange (BKE) was an encryption scheme used by the Society for Worldwide Interbank Financial Telecommunication (SWIFT). The scheme was retired on January 1, 2009, and has since been replaced by the Relationship Management Application (RMA). All key management is now based on the SWIFT PKI that was implemented in SWIFT phase two.
A bilateral key allowed secure communication across the SWIFT Network. The text of a SWIFT message and the authentication key were used to generate a message authentication code, or MAC. The MAC ensured the origin of a message and the authenticity of the message contents. This key establishment was normally accomplished by the exchange of various SWIFT messages used specifically for establishing a communicating key pair. BKE keys were generated either manually inside the SWIFT software or automatically with the use of a secure card reader (SCR). Since 1994, the keys used in the card reader and the authentication keys themselves were 1,024-bit RSA.
Cryptographic protocols Society for Worldwide Interbank Financial Telecommunication
25976022
https://en.wikipedia.org/wiki/Synex%20Systems%20Corporation
Synex Systems Corporation
Synex Systems Corporation, a subsidiary of Synex International Inc. (Symbol SXI, TSX), was formed in 1983 in an effort to develop software for the microcomputer market and was run by Synex International Vice President Murray Hendren until 1992. In 2002, Synex Systems was acquired by privately owned Lasata Software of Perth, Australia. In 2005, Lasata was acquired by UK-based Systems Union. In 2007, Systems Union was acquired by privately held Infor Global Solutions, a U.S. company that specializes in enterprise software. What was Synex Systems Corporation now operates as an independent business unit within Infor Global Solutions called F9 and continues to develop and partner with new and existing ERP and Accounting Software solutions. It is located in Vancouver, British Columbia.
Products Synex Systems products were diverse and targeted the accounting, civil engineering, minicomputer thin client, and file compression utility markets. By 2001 the concentration was only on the accounting reporting product F9 and all other products were discontinued or sold.
PK Harmony PK Harmony was Synex Systems' first software offering. It is a terminal emulation package that would now be termed a thin client. PK Harmony provides an interface from a PC to a Pick host by emulating a terminal. Connectivity is achieved by serial port-to-port cabling or a modem. PK Harmony allows users to transfer data to and from a Pick operating system legacy host and a DOS- or Windows-based PC. Development started in 1983. PK Harmony was purchased by Pierre Bourbonnais, a former senior Synex marketing team member, and is now marketed as PK Harmony - PK Term Plus through TechnoDroids CyberCorp.
F9 F9 Financial Reporting was developed starting in 1986 to allow a non-technical user, typically an accountant, to create a dynamic, customized general ledger financial report using a spreadsheet that is 'hot-linked' to an accounting system's general ledger. This product is currently in wide use, is still being updated, and is the longest-lasting and most profitable of the products developed by Synex Systems.
SQZ! In the late 1980s microcomputer (PC) use in offices was ubiquitous and the most used program was the spreadsheet. Hard drive space was limited and expensive, and many businesses found that the plethora of spreadsheets they came to rely upon exceeded the space on their drives. SQZ! was the first automatic file compression utility when it was introduced as Symantec's second product offering in the mid-1980s through its Turner Hall Publishing division, the first being Note-It, a notation utility for Lotus 1-2-3. SQZ! initially sold for US$79.95, was a major part of Symantec's early success, and helped form the basis of the 1990s acquisitions Symantec grew from. Several people working at Synex involved with SQZ! were hired by Symantec as a result of this success. SQZ! was a compression utility specifically designed to compress spreadsheet files and was marketed starting in 1986. SQZ! was available as an add-in for 1-2-3 Release 2/2.01, and as a terminate-and-stay-resident (TSR) utility for any version of 1-2-3, Symphony or any other programs that read or write 1-2-3 R2 format worksheets. In 1988 an improved version, SQZ! Plus, was released. This was available as an upgrade priced at US$30. SQZ! was best at compressing complex business spreadsheets and could achieve compressions of between 78% and 90% on large business-analysis worksheets, compared to 60% to 65% for competitors like The Worksheet Utilities 123 add-in Fileworks (Funk Software).
At one time in the early 1990s SQZ! was one of the most used utility software programs in the world. SQZ! was also incorporated into Quattro Pro as a built-in utility. A Macintosh version of SQZ! called Mac SQZ! was available for Microsoft Excel. The primary architect of SQZ! was Dale White, who developed the highly efficient spreadsheet formula-specific compression algorithms. He also developed a sophisticated corrupt spreadsheet recovery system that was built into SQZ! Plus. SQZ! Plus was released in 1988 as a replacement for Sqz! Version 1.5. With the huge leaps in hard drive capacity and lower costs for these ever larger drives, and the building of compression into late versions of DOS and Windows, the need for SQZ! and other file compression utilities disappeared and the SQZ! product was discontinued by the mid 1990s. WaterWorks Analysis Program Synex's Waterworks is a water distribution network analyzer based on F9 computer spreadsheet add-in technology. Waterworks enhances municipal water distribution design by combining both graphical and numerical results in a single interface and allows data exchange with CAD packages. Waterworks core functionality was a Fortran coded algorithm developed at the University of British Columbia school of engineering. Much of the parent company Synex International Inc. was concerned with civil engineering and specialized in small hydroelectric projects and a synergy developed between the software and engineering divisions. Waterworks saw some success being used by various water districts across the United States for water distribution analysis including the Greater Los Angeles Water District. @trieve @trieve (pronounced at-reev) was a Synex Systems product developed in 1991 from the F9 project code. It was at first written and used as an internal utility that allowed easy examination of Btrieve database file contents and helped in designing new Btrieve based accounting package interfaces for F9. @trieve allowed the user to read and write the contents of Btrieve files to and from a spreadsheet. Many accounting products marketed at the time used Btrieve as the underlying database engine and developing an F9 interface to these accounting products required highly detailed access to the data structures which easy examination of the data greatly aided. Because Btrieve is not a true database in that it does not contain a definition of the data being stored @trieve used a data dictionary that was defined in a spreadsheet to determine the structure of the data records. Although limited in user appeal, @trieve was a useful utility for experts that required detailed access to Btrieve-based data. A Great Plains Accounting edition of @trieve was developed and marketed by Great Plains which was later purchased by Microsoft and became Microsoft Dynamics GP. See also List of companies of Canada Companies listed on the Toronto Stock Exchange (S) List of ERP software packages Synex International References Software companies of Canada
18933304
https://en.wikipedia.org/wiki/AmigaOS
AmigaOS
AmigaOS is a family of proprietary native operating systems of the Amiga and AmigaOne personal computers. It was developed first by Commodore International and introduced with the launch of the first Amiga, the Amiga 1000, in 1985. Early versions of AmigaOS required the Motorola 68000 series of 16-bit and 32-bit microprocessors. Later versions were developed by Haage & Partner (AmigaOS 3.5 and 3.9) and then Hyperion Entertainment (AmigaOS 4.0-4.1). A PowerPC microprocessor is required for the most recent release, AmigaOS 4. AmigaOS is a single-user operating system based on a preemptive multitasking kernel, called Exec. It includes an abstraction of the Amiga's hardware, a disk operating system called AmigaDOS, a windowing system API called Intuition and a desktop file manager called Workbench. The Amiga intellectual property is fragmented between Amiga Inc., Cloanto, and Hyperion Entertainment. The copyrights for works created up to 1993 are owned by Cloanto. In 2001, Amiga Inc. contracted AmigaOS 4 development to Hyperion Entertainment and, in 2009 they granted Hyperion an exclusive, perpetual, worldwide license to AmigaOS 3.1 in order to develop and market AmigaOS 4 and subsequent versions. On December 29, 2015, the AmigaOS 3.1 source code leaked to the web; this was confirmed by the licensee, Hyperion Entertainment. Components AmigaOS is a single-user operating system based on a preemptive multitasking kernel, called Exec. AmigaOS provides an abstraction of the Amiga's hardware, a disk operating system called AmigaDOS, a windowing system API called Intuition and a desktop file manager called Workbench. A command-line interface (CLI), called AmigaShell, is also integrated into the system, though it also is entirely window-based. The CLI and Workbench components share the same privileges. Notably, AmigaOS lacks any built-in memory protection. AmigaOS is formed from two parts, namely, a firmware component called Kickstart and a software portion usually referred to as Workbench. Up until AmigaOS 3.1, matching versions of Kickstart and Workbench were typically released together. However, since AmigaOS 3.5, the first release after Commodore's demise, only the software component has been updated and the role of Kickstart has been diminished somewhat. Firmware updates may still be applied by patching at system boot. That was until 2018 when Hyperion Entertainment (license holder to AmigaOS 3.1) released AmigaOS 3.1.4 with an updated Kickstart ROM to go with it. Firmware and bootloader Kickstart is the bootstrap firmware, usually stored in ROM. Kickstart contains the code needed to boot standard Amiga hardware and many of the core components of AmigaOS. The function of Kickstart is comparable to the BIOS plus the main operating system kernel in IBM PC compatibles. However, Kickstart provides more functionality available at boot time than would typically be expected on PC, for example, the full windowing environment. Kickstart contains many core parts of the Amiga's operating system, such as Exec, Intuition, the core of AmigaDOS and functionality to initialize Autoconfig-compliant expansion hardware. Later versions of the Kickstart contained drivers for IDE and SCSI controllers, PC card ports and other built-in hardware. Upon start-up or reset the Kickstart performs a number of diagnostic and system checks and then initializes the Amiga chipset and some core OS components. It will then examine connected boot devices and attempt to boot from the one with the highest boot priority. 
If no boot device is present a screen will be displayed asking the user to insert a boot disk, typically a floppy disk. At start-up Kickstart attempts to boot from a bootable device (typically, a floppy disk or hard disk drive). In the case of a floppy, the system reads the first two sectors of the disk (the bootblock), and executes any boot instructions stored there. Normally this code passes control back to the OS (invoking AmigaDOS and the GUI) and using the disk as the system boot volume. Any such disk, regardless of the other contents of the disk, was referred to as a "Boot disk" or "bootable disk". A bootblock could be added to a blank disk by use of the install command. Some games and demos on floppy disk used custom bootblocks, which allowed them to take over the boot sequence and manage the Amiga's hardware without AmigaOS. The bootblock became an obvious target for virus writers. Some games or demos that used a custom bootblock would not work if infected with a bootblock virus, as the code of the virus replaced the original. The first such virus was the SCA virus. Anti-virus attempts included custom bootblocks. These amended bootblock advertised the presence of the virus checker while checking the system for tell-tale signs of memory-resident viruses and then passed control back to the system. Unfortunately these could not be used on disks that already relied on a custom bootblock, but did alert users to potential trouble. Several of them also replicated themselves across other disks, becoming little more than viruses in their own right. Kernel Exec is the multi-tasking kernel of AmigaOS. Exec provides functionality for multi-tasking, memory allocation, interrupt handling and handling of dynamic shared libraries. It acts as a scheduler for tasks running on the system, providing pre-emptive multitasking with prioritized round-robin scheduling. Exec also provides access to other libraries and high-level inter-process communication via message passing. Other comparable microkernels have had performance problems because of the need to copy messages between address spaces. Since the Amiga has only one address space, Exec message passing is quite efficient. AmigaDOS AmigaDOS provides the disk operating system portion of the AmigaOS. This includes file systems, file and directory manipulation, the command-line interface, file redirection, console windows, and so on. Its interfaces offer facilities such as command redirection, piping, scripting with structured programming primitives, and a system of global and local variables. In AmigaOS 1.x, the AmigaDOS portion was based on TRIPOS, which is written in BCPL. Interfacing with it from other languages proved a difficult and error-prone task, and the port of TRIPOS was not very efficient. From AmigaOS 2.x onwards, AmigaDOS was rewritten in C and Assembler, retaining 1.x BCPL program compatibility, and it incorporated parts of the third-party AmigaDOS Resource Project, which had already written replacements for many of the BCPL utilities and interfaces. ARP also provided one of the first standardized file requesters for the Amiga, and introduced the use of more friendly UNIX-style wildcard (globbing) functions in command-line parameters. Other innovations were an improvement in the range of date formats accepted by commands and the facility to make a command resident, so that it only needs to be loaded into memory once and remains in memory to reduce the cost of loading in subsequent uses. 
In AmigaOS 4.0, the DOS abandoned the BCPL legacy completely and, starting from AmigaOS 4.1, it has been rewritten with full 64-bit support. File extensions are often used in AmigaOS, but they are not mandatory and they are not handled specially by the DOS, being instead just a conventional part of the file names. Executable programs are recognized using a magic number. Graphical user interface The native Amiga windowing system is called Intuition, which handles input from the keyboard and mouse and rendering of screens, windows and widgets. Prior to AmigaOS 2.0, there was no standardized look and feel, application developers had to write their own non-standard widgets. Commodore added the GadTools library and BOOPSI in AmigaOS 2.0, both of which provided standardized widgets. Commodore also published the Amiga User Interface Style Guide, which explained how applications should be laid out for consistency. Stefan Stuntz created a popular third-party widget library, based on BOOPSI, called Magic User Interface, or MUI. MorphOS uses MUI as its official toolkit, while AROS uses a MUI clone called Zune. AmigaOS 3.5 added another widget set, ReAction, also based on BOOPSI. An unusual feature of AmigaOS is the use of multiple screens shown on the same display. Each screen may have a different video resolution or color depth. AmigaOS 2.0 added support for public screens, allowing applications to open windows on other applications' screens. Prior to AmigaOS 2.0, only the Workbench screen was shared. A widget in the top-right corner of every screen allows screens to be cycled through. Screens can be overlaid by dragging each up or down by their title bars. AmigaOS 4 introduced screens that are draggable in any direction. File manager Workbench is the native graphical file manager and desktop environment of AmigaOS. Though the term Workbench was originally used to refer to the entire operating system, with the release of AmigaOS 3.1 the operating system was renamed AmigaOS and subsequently Workbench refers to the desktop manager only. As the name suggests, the metaphor of a workbench is used, rather than that of a desktop; directories are depicted as drawers, executable files are tools, data files are projects and GUI widgets are gadgets. In many other aspects the interface resembles Mac OS, with the main desktop showing icons of inserted disks and hard drive partitions, and a single menu bar at the top of every screen. Unlike the Macintosh mouse available at the time, the standard Amiga mouse has two buttons – the right mouse button operates the pull-down menus, with a "release to select" mechanism. Features Graphics Until the release of version 3, AmigaOS only natively supported the native Amiga graphics chipset, via graphics.library, which provides an API for geometric primitives, raster graphic operations and handling of sprites. As this API could be bypassed, some developers chose to avoid OS functionality for rendering and directly program the underlying hardware for gains in efficiency. Third-party graphics cards were initially supported via proprietary unofficial solutions. A later solution where AmigaOS could directly support any graphics system, was termed retargetable graphics (RTG). With AmigaOS 3.5, some RTG systems were bundled with the OS, allowing the use of common hardware cards other than the native Amiga chipsets. The main RTG systems are CyberGraphX, Picasso 96 and EGS. Some vector graphic libraries, like Cairo and Anti-Grain Geometry, are also available. 
Modern systems can use cross-platform SDL (simple DirectMedia Layer) engine for games and other multimedia programs. The Amiga did not have any inbuilt 3D graphics capability, and so had no standard 3D graphics API. Later, graphics card manufacturers and third-party developers provided their own standards, which included MiniGL, Warp3D, StormMesa (agl.library) and CyberGL. The Amiga was launched at a time when there was little support for 3D graphics libraries to enhance desktop GUIs and computer rendering capabilities. However, the Amiga became one of the first widespread 3D development platforms. VideoScape 3D was one of the earliest 3D rendering and animation systems, and Silver/TurboSilver was one of the first ray-tracing 3D programs. Then Amiga boasted many influential applications in 3D software, such as Imagine, maxon's Cinema 4D, Realsoft 3D, VistaPro, Aladdin 4D and NewTek's Lightwave (used to render movies and television shows like Babylon 5). Likewise, while the Amiga is well known for its ability to easily genlock with video, it has no built-in video capture interface. The Amiga supported a vast number of third-party interfaces for video capture from American and European manufacturers. There were internal and external hardware solutions, called frame-grabbers, for capturing individual or sequences of video frames, including: Newtronic Videon, Newtek DigiView, Graffiti external framebuffer, the Digilab, the Videocruncher, Firecracker 24, Vidi Amiga 12, Vidi Amiga 24-bit and 24RT (Real Time), Newtek Video Toaster, GVP Impact Vision IV24, MacroSystem VLab Motion and VLab PAR, DPS PAR (Personal Animation Recorder), VHI (Video Hardware Interface) by IOSPIRIT GmbH, DVE-10, etc. Some solutions were hardware plug-ins for Amiga graphics cards like the Merlin XCalibur module, or the DV module built for the Amiga clone Draco from the German firm Macrosystem. Modern PCI bus TV expansion cards and their capture interfaces are supported through tv.library by Elbox Computer and tvcard.library by Guido Mersmann. Following modern trends in evolution of graphical interfaces, AmigaOS 4.1 uses the 3D hardware-accelerated Porter-Duff image composition engine. Audio Prior to version 3.5, AmigaOS only officially supported the Amiga's native sound chip, via audio.device. This facilitates playback of sound samples on four DMA-driven 8-bit PCM sound channels. The only supported hardware sample format is signed linear 8-bit two's complement. Support for third-party audio cards was vendor-dependent, until the creation and adoption of AHI as a de facto standard. AHI offers improved functionality, such as seamless audio playback from a user-selected audio device, standardized functionality for audio recording and efficient software mixing routines for combining multiple sound channels, thus overcoming the four-channel hardware limit of the original Amiga chipset. AHI can be installed separately on AmigaOS v2.0 and later. AmigaOS itself did not support MIDI until version 3.1, when Roger Dannenberg's camd.library was adapted as the standard MIDI API. Commodore's version of camd.library also included a built-in driver for the serial port. The later open source version of camd.library by Kjetil Matheussen did not provide a built-in driver for the serial port, but provided an external driver instead. AmigaOS was one of the first operating systems to feature speech synthesis with software developed by SoftVoice, Inc., which allowed text-to-speech conversion of American English. 
This had three main components: narrator.device, which modulates the phonemes used in American English, translator.library, which translates English text to American English phonemes using a set of rules, and a high-level SPEAK: handler, which allows command-line users to redirect text output to speech. A utility called Say was included with the OS, which allowed text-to-speech synthesis with some control of voice and speech parameters. A demo was also included with AmigaBASIC programming examples. Speech synthesis was occasionally used in third-party programs, particularly educational software. For example, the word processors Prowrite and Excellence! could read out documents using the synthesizer. These speech synthesis components remained largely unchanged in later OS releases and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward because of licensing restrictions. Despite the American English limitation of the narrator.devices phonemes, Francesco Devitt developed an unofficial version with multilingual speech synthesis. This made use of an enhanced version of the translator.library which could translate a number of languages into phonemes, given a set of rules for each language. Storage The AmigaOS has a dynamically sized RAM disk, which resizes itself automatically to accommodate its contents. Starting with AmigaOS 2.x, operating system configuration files were loaded into the RAM disk on boot, greatly speeding operating system usage. Other files could be copied to the RAM disk like any standard device for quick modification and retrieval. Also beginning in AmigaOS 2.x, the RAM disk supported file-change notification, which was mostly used to monitor configuration files for changes. Starting with AmigaOS 1.3, there is also a fixed-capacity recoverable RAM disk, which functions as a standard RAM disk but can maintain its contents on soft restart. It is commonly called the RAD disk after its default device name, and it can be used as a boot disk (with boot sector). Previously, a recoverable RAM disk, commonly called the ASDG RRD or VD0, was introduced in 1987; at first, it was locked to ASDG expansion memory products. Later, the ASDG RRD was added to the Fred Fish series of freeware, shareware, and public domain software (disks 58 and 241). Scripting The AmigaOS has support for the Rexx language, called ARexx (short for "Amiga Rexx"), and is a script language which allows for full OS scripting, similar to AppleScript; intra-application scripting, similar to VBA in Microsoft Office; as well as inter-program communication. Having a single scripting language for any application on the operating system is beneficial to users, instead of having to learn a new language for each application. Programs can listen on an "ARexx port" for string messages. These messages can then be interpreted by the program in a similar fashion to a user pushing buttons. For example, an ARexx script run in an e-mail program could save the currently displayed email, invoke an external program which could extract and process information, and then invoke a viewer program. This allows applications to control other applications by sending data back and forth directly with memory handles, instead of saving files to disk and then reloading them. Since AmigaOS 4, the Python language is included with the operating system. Technical overview John C. 
Dvorak stated in 1996: Libraries and devices AmigaOS provides a modular set of system functions through dynamically-loaded shared libraries, either stored as a file on disk with a ".library" filename extension, or stored in the Kickstart firmware. All library functions are accessed via an indirect jump table, which is a negative offset to the library base pointer. That way, every library function can be patched or hooked at run-time, even if the library is stored in ROM. The core library of AmigaOS is the exec.library (Exec), which provides an interface to functions of the Amiga's microkernel. Device drivers are also libraries, but they implement a standardized interface. Applications do not usually call devices directly as libraries, but use the exec.library I/O functions to indirectly access them. Like libraries, devices are either files on disk (with the ".device" extension), or stored in the Kickstart ROM. Handlers, AmigaDOS and filesystems The higher-level part of device and resource management is controlled by handlers, which are not libraries, but tasks, and communicate by passing messages. One type of handler is a filesystem handler. The AmigaOS can make use of any filesystem for which a handler has been written, a possibility that has been exploited by programs like CrossDOS and by a few "alternative" file systems to the standard OFS and FFS. These file systems allow one to add new features like journaling or file privileges, which are not found in the standard operating system. Handlers typically expose a device name to the DOS, which can be used to access the peripheral (if any) associated with the handler. As an example of these concepts is the SPEAK: handler which could have text redirected to spoken speech, through the speech synthesis system. Device names are case insensitive (uppercase by convention) strings followed by a colon. After the colon a specifier can be added, which gives the handler additional information about what is being accessed and how. In the case of filesystem, the specifier usually consists of a path to a file in the filesystem; for other handlers, specifiers usually set characteristics of the desired input/output channel (for the SER: serial port driver, for example, the specifier will contain bit rate, start and stop bits, etc.). Filesystems expose drive names as their device names. For example, DF0: by default refers to the first floppy drive in the system. On many systems DH0: is used to refer to the first hard drive. Filesystems also expose volume names, following the same syntax as device names: these identify the specific medium in the file system-managed drive. If DF0: contains a disk named "Workbench", then Workbench: will be a volume name that can be used to access files in DF0:. If one wanted to access a file named "Bar" located in directory "Foo" of the disk with name "Work" in drive DF0:, one could write "DF0:Foo/Bar" or "Work:Foo/Bar". However, these are not completely equivalent, since when the latter form is used, the system knows that the wanted volume is''' "Work" and not just any volume in DF0:. Therefore, whenever a requested file on "Work" is being accessed without volume "Work" being present in any drive, it will say something to the effect of: Please insert volume Work in any drive. Programs often need to access files without knowing their physical location (either the drive or the volume): they only know the "logical path" of the file, i.e. 
whether the file is a library, a documentation file, a translation of the program's messages, and so on. This is solved in AmigaOS by the use of assigns. An assign follows, again, the same syntax as a device name; however, it already points to a directory inside the filesystem. The place an assign points to can be changed at any time by the user (this behavior is similar to, but nevertheless distinct from, the subst command in MS-DOS, for example). Assigns were also convenient because one logical assign could point to more than one different physical location at the same time, thereby allowing an assign′s contents to expand logically, while still maintaining a separate physical organization. Standard assigns that are generally present in an AmigaOS system include: SYS:, which points to the boot drive's root directory. C:, which points to a directory containing shell commands. At boot time, this is SYS:C, if it exists, otherwise SYS:. The command path defaults to C: and the current working directory, so putting executables in C: allows them to be executed simply by typing their name. DEVS:, which points to a directory containing the system's devices. At boot time, this is SYS:Devs if that directory exists, otherwise SYS:. L:, which points to a directory containing AmigaDOS handlers and filesystems. At boot time, this is SYS:L if it exists, otherwise L: is not automatically created. LIBS:, which points to a directory containing the system's libraries. At boot time, this is SYS:Libs if that directory exists, otherwise SYS:. S:, which points to a directory with scripts, including the startup-sequence which is executed automatically at boot time, if it exists. At boot time, this is SYS:S if it exists, otherwise S: is not automatically created. T:, which points to a temporary folder. PROGDIR:, a special assign that always points to the directory containing the currently running executable. So, if you run "SYS:Tools/Multiview" and "SYS:System/Format", PROGDIR: points at SYS:Tools for Multiview while simultaneously pointing at SYS:System for the Format command. This feature was introduced in Workbench 2.0. Memory paging and a swap partition in later versions AmigaOS 4 introduced new system for allocating RAM and defragmenting it "on the fly" during system inactivities. It is based on slab allocation method and there is also present a memory pager that arbitrates paging memory and allows the swapping of large portions of physical RAM on mass storage devices as a sort of virtual memory. Co-operative paging was finally implemented in AmigaOS 4.1. Versions Since the introduction of AmigaOS in 1985 there have been four major versions and several minor revisions. Up until release 3.1 of the Amiga's operating system, Commodore used Workbench to refer to the entire Amiga operating system. As a consequence Workbench was commonly used to refer to both the operating system and the file manager component. For end users Workbench was often synonymous with AmigaOS. From version 3.5 the OS was renamed "AmigaOS" and pre-3.5 versions were also retroactively referred to as "AmigaOS" (rather than Workbench). Subsequently, "Workbench" refers to the native graphical file manager only. From its inception, Workbench offered a highly customizable interface. The user could change the aspect of program icons replacing it with newer ones with different color combinations. 
Users could also take a "snapshot" of icons and windows so the icons will remain on the desktop at coordinates chosen by user and windows will open at the desired size. AmigaOS 1.0 – 1.4 AmigaOS 1.0 was released with the first Amiga, the Amiga 1000, in 1985. The 1.x versions of AmigaOS by default used a blue and orange color scheme, designed to give high contrast on even the worst of television screens (the colors can be changed by the user). Version 1.1 consists mostly of bug fixes and, like version 1.0, was distributed for the Amiga 1000 only. The display was highly customizable for the era. The user was free to create and modify system and user icons, which could be of arbitrary size and design and can have two image states to produce a pseudo-animated effect when selected. Users could customize four display colors and choose from two resolutions: or (interlaced) on NTSC, or or on PAL systems. In later revisions, the TV or monitor overscan could be adjusted. Several features were deprecated in later versions. For example, the gauge meter showing the free space on a file system was replaced with a percentage in AmigaOS 2.0 before being restored in 3.5. The default "busy" pointer (a comic balloon showing "Zzz...") was replaced with a stopwatch in later versions. AmigaOS 2.0, 2.1 AmigaOS 2.0 was released with the launch of the Amiga 3000 in 1990. Until AmigaOS 2.0 there was no unified look and feel design standard and application developers had to write their own widgets (both buttons and menus) if they wished to enhance the already-meager selection of standard basic widgets provided by Intuition. With AmigaOS 2.0 gadtools.library was created, which provided standard widget sets. The Amiga User Interface Style Guide, was published which explained how applications should be laid out for consistency. Intuition was improved with BOOPSI (Basic Object Oriented Programming System for Intuition) which enhanced the system with an object-oriented interface to define a system of classes in which every class individuates a single widget or describes an interface event. It can be used to program object oriented interfaces into Amiga at any level. AmigaOS 2.0 also added support for public screens. Instead of the AmigaOS screen being the only shareable screen, applications could create their own named screens to share with other applications. AmigaOS 2.0 rectified the problem of applications hooking directly into the input-events stream to capture keyboard and mouse movements, sometimes locking up the whole system. AmigaOS 2.0 provided Commodities, a standard interface for modifying or scanning input events. This included a standard method for specifying global "hotkey" key-sequences, and a Commodities Exchange registry for the user to see which commodities were running. AmigaOS 2.1 introduced AmigaGuide, a simple text-only hypertext markup scheme and browser, for providing online help inside applications. It also introduced Installer, a standard software installation program, driven by a LISP-like scripting language. AmigaOS 2.1 introduced multi-lingual locale support through locale.library and for the first time AmigaOS was translated to different languages. AmigaOS 3.0, 3.1 Version 3.0 was originally shipped with the Amiga 1200 and Amiga 4000 computers. Version 3.0 added datatypes support which allowed any application that supported datatypes to load any file format supported by datatypes. Workbench could load any background image in any format if the required datatype was installed. 
A tiny application called Multiview was included that could open and display any supported file. Its capabilities were directly related to the datatypes installed in Devs:Datatypes. The established AmigaGuide hypertext system gained more usability by using document links pointing to media files, for example pictures or sounds, all recognized by the datatypes. AmigaOS 3.5, 3.9 Around six years after AmigaOS 3.1 was released, following Commodore's demise, Haage & Partner were granted a license to update AmigaOS, which was released in 1999 as a software-only update for existing systems running at least a 68(EC)020 processor. The AmigaOS look and feel, though still largely based on the earlier 3.1 release, was revised somewhat, with an improved user interface based on ReAction, improved icon rendering and official support for true color backdrops. These releases included support for existing third-party GUI enhancements, such as NewIcons, by integrating these patches into the system. The 3.5 and 3.9 releases included a new set of 256 color icons and a choice of desktop wallpaper. These replaced the default all-metal gray 4/8 color scheme used on AmigaOS from release 2.0 to 3.1. The 3.9 release of AmigaOS was again developed by Haage & Partner and released in 2000. The main improvements were the introduction of a program start bar called AmiDock, revised user interfaces for system settings and improved utility programs. AmigaOS 3.1.4, 3.2 In September 2018, Hyperion Entertainment released AmigaOS 3.1.4; this was both a software and hardware update for all Amigas. In 2019, AmigaOS 3.1.4.1 was released as a software-only update to AmigaOS 3.1.4, mainly as a bug fix. It includes many fixes, modernizes several system components previously upgraded in OS 3.9, introduces support for larger hard drives (including at bootup), supports the entire line of Motorola 680x0 CPUs up to (and including) the Motorola 68060, and includes a modernized Workbench with a new, optional icon set. Unlike AmigaOS 3.5 / 3.9, AmigaOS 3.1.4 still supports the Motorola 68000 CPU. In May 2021, Hyperion Entertainment released AmigaOS 3.2, which includes all features of the previous version (3.1.4.1) and adds several new improvements such as support for ReAction GUI, management of Amiga Disk File images, a help system and improved datatypes. AmigaOS 4.0, 4.1 This new AmigaOS, called AmigaOS 4.0, has been rewritten to become fully PowerPC compatible. It was initially developed on the Cyberstorm PPC, as making it independent of the old Amiga chipsets was nontrivial. Since the fourth Developer Pre-Release Update a new technique was adopted and the screens are draggable in any direction. Drag and drop of Workbench icons between different screens is possible too. Also in AmigaOS 4.0 were a new version of Amidock, TrueType/OpenType fonts, and a movie player with DivX and MPEG-4 support. In AmigaOS 4.1, a new Start-up preferences feature was added which replaced the old WBStartup drawer. Additional enhancements were a new icon set to complement higher screen resolutions, new window themes including drop shadows, a new version of AmiDock with true transparency, scalable icons, and an auto-update feature. Influence on other operating systems AROS Research Operating System (AROS) implements the AmigaOS API in a portable open-source operating system. Although not binary-compatible with AmigaOS (unless running on 68k), users have reported it to be highly source-code-compatible. 
MorphOS is a PowerPC-native operating system which also runs on some Amiga hardware. It implements the AmigaOS API and provides binary compatibility with "OS-friendly" AmigaOS applications (that is, those applications which do not access any native, legacy Amiga hardware directly, just as under AmigaOS 4.x unless executed on real Amiga models). pOS was a multiplatform closed-source operating system with source code-level compatibility with existing Amiga software. BeOS also features a centralized datatype structure similar to MacOS Easy Open, adopted after old Amiga developers requested that Be adopt the Amiga datatype service. It allows the entire OS to recognize all kinds of files (text, music, videos, documents, etc.) with standard file descriptors. The datatype system provides the entire system and any productivity tools with standard loaders and savers for these files, without the need to embed multiple file-loading capabilities into any single program. AtheOS was inspired by AmigaOS, and originally intended to be a clone of AmigaOS. Syllable is a fork of AtheOS, and includes some AmigaOS- and BeOS-like qualities. FriendUP is a cloud-based meta operating system. It has many former Commodore and Amiga developers and employees working on the project. The operating system retains several AmigaOS-like features, including DOS Drivers, mount lists, a TRIPOS-based CLI and screen dragging. Finally, the operating system of the 3DO Interactive Multiplayer bore a very strong resemblance to AmigaOS and was developed by RJ Mical, the creator of the Amiga's Intuition user interface. See also Comparison of operating systems References External links CBM software Microkernel-based operating systems Microkernels Assembly language software 1985 software
8198246
https://en.wikipedia.org/wiki/Kernel%20Patch%20Protection
Kernel Patch Protection
Kernel Patch Protection (KPP), informally known as PatchGuard, is a feature of 64-bit (x64) editions of Microsoft Windows that prevents patching the kernel. It was first introduced in 2005 with the x64 editions of Windows XP and Windows Server 2003 Service Pack 1. "Patching the kernel" refers to unsupported modification of the central component or kernel of the Windows operating system. Such modification has never been supported by Microsoft because, according to Microsoft, it can greatly reduce system security, reliability, and performance. Although Microsoft does not recommend it, it is possible to patch the kernel on x86 editions of Windows; however, with the x64 editions of Windows, Microsoft chose to implement additional protection and technical barriers to kernel patching. Since patching the kernel is possible in 32-bit (x86) editions of Windows, several antivirus software developers use kernel patching to implement antivirus and other security services. These techniques will not work on computers running x64 editions of Windows. Because of this, Kernel Patch Protection resulted in antivirus makers having to redesign their software without using kernel patching techniques. However, because of the design of the Windows kernel, Kernel Patch Protection cannot completely prevent kernel patching. This has led to criticism that since KPP is an imperfect defense, the problems caused to antivirus vendors outweigh the benefits because authors of malicious software will simply find ways around its defenses. Nevertheless, Kernel Patch Protection can still prevent problems of system stability, reliability, and performance caused by legitimate software patching the kernel in unsupported ways. Technical overview The Windows kernel is designed so that device drivers have the same privilege level as the kernel itself. Device drivers are expected not to modify or patch core system structures within the kernel. However, in x86 editions of Windows, this expectation is not enforced. As a result, some x86 software, notably certain security and antivirus programs, was designed to perform needed tasks through loading drivers that modify core kernel structures. In x64 editions of Windows, Microsoft began to enforce restrictions on what structures drivers can and cannot modify. Kernel Patch Protection is the technology that enforces these restrictions. It works by periodically checking to make sure that protected system structures in the kernel have not been modified. If a modification is detected, then Windows will initiate a bug check and shut down the system, with a blue screen and/or reboot. The corresponding bug check number is 0x109, and the bug check code is CRITICAL_STRUCTURE_CORRUPTION (a conceptual sketch of this check-and-halt cycle appears below). Prohibited modifications include:
Modifying system service descriptor tables
Modifying the interrupt descriptor table
Modifying the global descriptor table
Using kernel stacks not allocated by the kernel
Modifying or patching code contained within the kernel itself, or the HAL or NDIS kernel libraries
Kernel Patch Protection only defends against device drivers modifying the kernel. It does not offer any protection against one device driver patching another. Ultimately, since device drivers have the same privilege level as the kernel itself, it is impossible to completely prevent drivers from bypassing Kernel Patch Protection and then patching the kernel. KPP does, however, present a significant obstacle to successful kernel patching. 
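The check-and-halt cycle described in the technical overview can be illustrated with a small, purely conceptual sketch. This is not Windows kernel code: the protected-region names, the hash-based comparison, the check interval, and the halt() stand-in for a bug check are all invented here for illustration, and the real KPP deliberately obscures how and when it performs its checks.

```python
import hashlib
import time

# Hypothetical snapshots of protected structures, keyed by name.
# In this sketch they are just byte strings; the real mechanism inspects
# in-memory kernel structures such as the descriptor tables.
protected_regions = {
    "service_descriptor_table": b"\x01\x02\x03\x04",
    "interrupt_descriptor_table": b"\x10\x20\x30\x40",
    "kernel_code_segment": b"\xaa\xbb\xcc\xdd",
}

# Known-good hashes taken when the system is assumed to be unmodified.
baseline = {name: hashlib.sha256(data).digest()
            for name, data in protected_regions.items()}

def halt(reason: str) -> None:
    """Stand-in for a bug check (0x109, CRITICAL_STRUCTURE_CORRUPTION)."""
    raise SystemExit(f"bug check: CRITICAL_STRUCTURE_CORRUPTION ({reason})")

def periodic_check(interval_seconds: float = 1.0, rounds: int = 3) -> None:
    """Periodically re-hash each protected region and halt on any mismatch."""
    for _ in range(rounds):
        for name, data in protected_regions.items():
            if hashlib.sha256(data).digest() != baseline[name]:
                halt(name)
        time.sleep(interval_seconds)

periodic_check()
```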
With highly obfuscated code and misleading symbol names, KPP employs security through obscurity to hinder attempts to bypass it. Periodic updates to KPP also make it a "moving target", as bypass techniques that may work for a while are likely to break with the next update. Since its creation in 2005, Microsoft has so far released two major updates to KPP, each designed to break known bypass techniques in previous versions. Advantages Patching the kernel has never been supported by Microsoft because it can cause a number of negative effects. Kernel Patch Protection protects against these negative effects, which include: Serious errors in the kernel. Reliability issues resulting from multiple programs attempting to patch the same parts of the kernel. Compromised system security. Rootkits can use kernel access to embed themselves in an operating system, becoming nearly impossible to remove. Microsoft's Kernel Patch Protection FAQ further explains: Criticisms Third-party applications Some computer security software, such as McAfee's McAfee VirusScan and Symantec's Norton AntiVirus, worked by patching the kernel on x86 systems. Anti-virus software authored by Kaspersky Lab has been known to make extensive use of kernel code patching on x86 editions of Windows. This kind of antivirus software will not work on computers running x64 editions of Windows because of Kernel Patch Protection. Because of this, McAfee called for Microsoft to either remove KPP from Windows entirely or make exceptions for software made by "trusted companies" such as themselves. Symantec's corporate antivirus software and Norton 2010 range and beyond worked on x64 editions of Windows despite KPP's restrictions, although with less ability to provide protection against zero-day malware. Antivirus software made by competitors ESET, Trend Micro, Grisoft AVG, avast!, Avira Anti-Vir and Sophos do not patch the kernel in default configurations, but may patch the kernel when features such as "advanced process protection" or "prevent unauthorized termination of processes" are enabled. Microsoft does not weaken Kernel Patch Protection by making exceptions to it, though Microsoft has been known to relax its restrictions from time to time, such as for the benefit of hypervisor virtualization software. Instead, Microsoft worked with third-party companies to create new Application Programming Interfaces that help security software perform needed tasks without patching the kernel. These new interfaces were included in Windows Vista Service Pack 1. Weaknesses Because of the design of the Windows kernel, Kernel Patch Protection cannot completely prevent kernel patching. This led the computer security providers McAfee and Symantec to say that since KPP is an imperfect defense, the problems caused to security providers outweigh the benefits, because malicious software will simply find ways around KPP's defenses and third-party security software will have less freedom of action to defend the system. In January 2006, security researchers known by the pseudonyms "skape" and "Skywing" published a report that describes methods, some theoretical, through which Kernel Patch Protection might be bypassed. Skywing went on to publish a second report in January 2007 on bypassing KPP version 2, and a third report in September 2007 on KPP version 3. Also, in October 2006 security company Authentium developed a working method to bypass KPP. 
Nevertheless, Microsoft has stated that it is committed to removing any flaws that allow KPP to be bypassed as part of its standard Security Response Center process. In keeping with this statement, Microsoft has so far released two major updates to KPP, each designed to break known bypass techniques in previous versions. Antitrust behavior In 2006, the European Commission expressed concern over Kernel Patch Protection, saying it was anticompetitive. However, Microsoft's own antivirus product, Windows Live OneCare, had no special exception to KPP. Instead, Windows Live OneCare used (and had always used) methods other than patching the kernel to provide virus protection services. Still, for other reasons an x64 edition of Windows Live OneCare was not available until November 15, 2007. References External links The Truth About PatchGuard: Why Symantec Keeps Complaining An Introduction to Kernel Patch Protection Microsoft executive clarifies recent market confusion about Windows Vista Security Kernel Patch Protection: Frequently Asked Questions Windows Vista x64 Security – Pt 2 – Patchguard Uninformed.org articles: Bypassing PatchGuard on Windows x64 Subverting PatchGuard Version 2 PatchGuard Reloaded: A Brief Analysis of PatchGuard Version 3 Working bypass approaches KPP Destroyer (including source code) - 2015 A working driver to bypass PatchGuard 3 (including source code) - 2008 Bypassing PatchGuard with a hex editor - 2009 Microsoft security advisories: June 13, 2006 update to Kernel Patch Protection August 14, 2007 update to Kernel Patch Protection Microsoft Windows security technology Windows NT kernel
30433
https://en.wikipedia.org/wiki/Transaction%20Processing%20Facility
Transaction Processing Facility
Transaction Processing Facility (TPF) is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9. TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks. While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times. For example, it handles VISA credit card transaction processing during the peak holiday shopping season. The TPF passenger reservation application PARS, or its international version IPARS, is used by many airlines. PARS is an application program; TPF is an operating system. One of TPF's major optional components is a high performance, specialized database facility called TPF Database Facility (TPFDF). A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS. History TPF evolved from the Airlines Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP — and as a priced software product. The new name suggests its greater scope and evolution into non-airline related entities. TPF was traditionally an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language called SabreTalk was born and died on TPF. IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools. The GCC compiler and the DIGNUS Systems/C++ and Systems/C are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF. Users Current users include Sabre (reservations), VISA Inc. (authorizations), American Airlines, American Express (authorizations), DXC Technology SHARES (reservations), Holiday Inn (central reservations), Amtrak, Marriott International, Travelport (Galileo, Apollo, Worldspan, Axess Japan GDS), Citibank, Air Canada, Trenitalia (reservations), Delta Air Lines (reservations and operations) and Japan Airlines. Operating environment Tightly coupled Although IBM's 3083 was aimed at running TPF on a "fast... uniprocessor", TPF is capable of running on a multiprocessor, that is, on systems in which there is more than one CPU. Within an LPAR, the CPUs are referred to as instruction streams or simply I-streams. When running on an LPAR with more than one I-stream, TPF is said to be running tightly coupled. TPF adheres to SMP concepts; no concept of NUMA-based distinctions between memory addresses exists. The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued for the I-stream with the lowest demand, thus maintaining continuous load balancing among available processors. In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complex, i.e. 
the physical machine packaged in one system cabinet), SMP takes place within the CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below. In the TPF architecture, all memory (except for a 4KB-sized prefix area) is shared among all I-streams. In instances where memory-resident data must or should be kept separated by I-stream, the programmer typically allocates a storage area into a number of subsections equal to the number of I-streams, then accesses the desired I-stream associated area by taking the base address of the allocated area, and adding to it the product of the I-stream relative number times the size of each subsection. Loosely coupled TPF is capable of supporting multiple mainframes (of any size themselves — be it single I-stream to multiple I-stream) connecting to and operating on a common database. Currently, 32 IBM mainframes may share the TPF database; if such a system were in operation, it would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the control program would be equally loaded into memory and each program or record on DASD could be potentially accessed by either mainframe. In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and communicate to the requesting processors that they are waiting. Within any tightly coupled system, this is easy to manage between I-streams via the use of the Record Hold Table. However, when the lock is obtained offboard of the TPF processor in the DASD control unit, an external process must be used. Historically, the record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (extended). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run, clustered (loosely coupled) z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility. Processor shared records Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by using record type and ordinal. So if you had defined a record type in the TPF system of 'FRED' and gave it 100 records or ordinals, then in a processor shared scheme, record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD — clearly necessitating the use of a record locking mechanism. All processor shared records on a TPF system will be accessed via exactly the same file address which will resolve to exactly the same location. Processor unique records A processor unique record is one that is defined such that each processor expected to be in the loosely coupled complex has a record type of 'FRED' and perhaps 100 ordinals. However, if a user on any 2 or more processors examines the file address that record type 'FRED', ordinal '5' resolves to, they will note a different physical address is used. TPF attributes What TPF is not TPF is not a general-purpose operating system. TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits. 
TPF has no built-in graphical user interface functionality, and TPF has never offered direct graphical display facilities: to implement it on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS (Computer room agent set — which is best thought of as the "operator's console"). Character messages are intended to be the mode of communication with human users; all work is accomplished via the use of the command line, similar to UNIX without X. There are several products available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see Screen scrape) and convert the message to/from the desired graphical form, depending on its context. Being a special-purpose operating system, TPF does not host a compiler/assembler or text editor, nor does it implement the concept of a desktop as one might expect to find in a GPOS. TPF application source code is commonly stored in external systems, and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux. Using TPF requires a knowledge of its Command Guide, since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages"—commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands. TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counter-productive. Debugger packages have been developed by 3rd party vendors who took very different approaches to the "break/continue" operations required at the TPF host, implementing unique communications protocols used in traffic between the human developer running the debugger client and the server-side debug controller, as well as the form and function of debugger program operations at the client side. Two examples of 3rd party debugger packages are Step by Step Trace from Bedford Associates and CMSTPF, TPF/GI, & zTPFGI from TPF Software, Inc. Neither package is wholly compatible with the other, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE called IBM TPF Toolkit. What TPF is TPF is highly optimized to permit messages from the supported network either to be switched out to another location, routed to an application (a specific set of programs), or to permit extremely efficient accesses to database records. Data records Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by freeing the operating system from breaking large data entities into smaller ones during file operations, and from reassembling them during read operations. 
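The fixed-size convention just described can be shown with a tiny, purely illustrative sketch: an application asks for the smallest standard block that can hold its data, and the system never splits or reassembles records behind the scenes. The sizes are the ones quoted in the text; the function name and error handling are invented for illustration and are not TPF interfaces.

```python
# Standard TPF record/block sizes quoted above, in bytes.
STANDARD_SIZES = (381, 1055, 4096)

def pick_block_size(payload_length: int) -> int:
    """Return the smallest standard block size that fits the payload."""
    for size in STANDARD_SIZES:
        if payload_length <= size:
            return size
    raise ValueError("payload exceeds the largest standard block size")

# Example: a 900-byte record is stored in a 1055-byte block.
print(pick_block_size(900))   # 1055
print(pick_block_size(4000))  # 4096
```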
Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O — all in the name of speed. Since the early days also placed a premium on the size of storage media, whether memory or disk, TPF applications evolved to do very powerful things while using very little resource. Today, most of these limitations have been removed. In fact, only because of legacy support are smaller-than-4K DASD records still used. With the advances made in DASD technology, a read/write of a 4K record is just as efficient as that of a 1055-byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on the ability to pack data into as small a space as possible. Programs and residency TPF also had its program segments allocated as 381, 1055 and 4K byte-sized records at different points in its history. Each segment consisted of a single record, with a typically comprehensive application requiring perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (direct output from the assembler) was laid out in memory, had its internally (self-referential) relocatable symbols resolved, then the entire image was written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented as the ENTER/BACK system service. In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs—only the most frequently used application programs were written into memory and never removed (core-residency); the rest were stored on file and read in on demand, with their backing memory buffers released post-execution. The C language was first introduced to TPF at version 3.0 in a manner conformant to segment conventions, including the absence of linkage editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced to TPF. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format, then lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher level languages (HLL). At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language—including Assembler—may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object. 
One benefit is that critical legacy applications can gain improved efficiency through simple repackaging—calls made between members of a single shared object module now have a much shorter pathlength at run time as compared to calling the system's ENTER/BACK service. Members of the same shared object may now share writeable data regions directly thanks to copy-on-write functionality also introduced at z/TPF 1.1, which coincidentally reinforces TPF's reentrancy requirements. The concepts of file- and memory-residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times. Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can ease memory pressure and ease recursive programming. All z/TPF executable programs are now packaged as ELF shared objects. Memory usage Historically, and in step with the above, core blocks (memory) were also 381, 1055 and 4 K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of blocks in use and simply hand out the first block on the available list. Physical memory was divided into sections reserved for each size, so a 1055-byte block always came from a section and was returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or data collection was required. As applications got more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames, 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted. References Bibliography Transaction Processing Facility: A Guide for Application Programmers (Yourdon Press Computing Series) by R. Jason Martin (Hardcover - April 1990) External links z/TPF (IBM) TPF User Group (TPF User Group) Real-time operating systems IBM mainframe operating systems Transaction processing facility
1286017
https://en.wikipedia.org/wiki/Slugs%20%28autopilot%20system%29
Slugs (autopilot system)
Slugs is an open-source autopilot system oriented toward inexpensive autonomous aircraft. Low cost and wide availability enable hobbyist use in small remotely piloted aircraft. The project started in 2009 and is being further developed and used at the Autonomous Systems Lab of the University of California, Santa Cruz. Several vendors are currently producing Slugs autopilots and accessories. Overview An autopilot allows a remotely piloted aircraft to be flown out of sight. All hardware and software is open-source and freely available to anyone under the MIT licensing agreement. Free software autopilots provide more flexible hardware and software. Users can modify the autopilot based on their own special requirements, such as forest fire evaluation. The free software approach from Slugs is similar to that of the Paparazzi Project, PX4 autopilot, ArduCopter and OpenPilot, where low cost and availability enable hobbyist use in small remotely piloted aircraft such as micro air vehicles and miniature UAVs. Such frameworks are common in open-source robotics. Software The open-source software suite contains everything needed to let airborne systems fly. See also Crowdsourcing Micro air vehicle References External links Slugs Homepage Avionics Aircraft instruments Unmanned aerial vehicles Free software
17188506
https://en.wikipedia.org/wiki/Disklavier
Disklavier
Disklavier is the brand name for a family of high-tech reproducing pianos made by Yamaha Corporation. The first Disklavier was introduced in the United States in 1987. The typical Disklavier is a real acoustic piano outfitted with electronic sensors for recording and electromechanical solenoids for player piano-style playback. Sensors record the movements of the keys, hammers, and pedals during a performance, and the system saves the performance data as a Standard MIDI File (SMF). On playback, the solenoids move the keys and pedals and thus reproduce the original performance. Modern Disklaviers typically include an array of electronic features, such as a built-in tone generator for playing back MIDI accompaniment tracks, speakers, MIDI connectivity that supports communication with computing devices and external MIDI instruments, additional ports for audio and SMPTE I/O, and Internet connectivity. Historically, a variety of devices have been used to control or operate the instrument, including buttons on a control box mounted on the piano, infrared handheld controllers, handheld wi-fi controllers, a Java application that runs on a personal computer, and apps that run on iOS-based portable devices. Disklaviers have been manufactured in the form of upright, baby grand, and grand piano styles (including a nine-foot concert grand). Reproducing systems have ranged from relatively simple, playback-only models to the PRO models, which record performance data at resolutions that exceed the limits of normal MIDI data. From the late 1990s into the early 2000s, Yamaha also produced a GranTouch series of Disklaviers that were digital pianos with a grand piano action. In addition to recording, the GranTouch instruments were capable of playing back performances with moving keys, although the moving keys were not necessary for the electronic reproduction of sound. Models History Early models Prior to the introduction of the Disklavier in the United States, Yamaha Corporation of Japan debuted an upright reproducing instrument in 1982 called "Piano Player". It featured a record-and-playback system, floppy disk storage of performance data, and the ability to play back multi-track performance files that included instrumental tracks whose sound was reproduced by a tone generator. There was also an upright model sold in Japan in 1985 known as the MX100R. The first model introduced in the United States was the studio model upright MX100A in 1987 (the easiest way to identify this model is that the LED display on the front of the piano is red, whereas all later models changed to green or, as in the case of the current E3, a white display). Shortly thereafter, it was slightly modified and renamed MX100B. This early upright was followed by the first grand piano model in 1989. This early grand piano version of the Disklavier lacked an official model designation and has become known as the Wagon Grand by virtue of the fact that the control unit was built into a 30" tall cabinet on wheels; in Japan this model does have a model designation of PPG-10R, and it has also been called the DKW10. A third, early model series was introduced in the early 1990s in small uprights and was known as the MX80 series. Like the MX100A, MX100B, and Wagon Grand, the MX80 recorded on 3.5" double-density floppy disks and recorded performances in a Yamaha-proprietary file format called E-SEQ, a forerunner of the subsequent industry-standard file format known as Standard MIDI Files. All of these instruments featured ports for MIDI input and output. 
Technical innovations found on these early model instruments included hammer sensors for recording (MX100A, MX100B, and Wagon Grand), recording and playback of incremental pedal data (Wagon Grand), and moving pedals during playback (all models). Mark II, Mark IIXG The next generation of Disklaviers began with the Mark II in 1992. Standard features included hammer sensors for recording, support for recording and playback of incremental pedal data, and support for the emerging industry-standard file format called Standard MIDI Files. Within two years of the introduction of the Mark II, the Mark IIXG system became available, which included support for 3.5" high-density floppy disks, built-in non-volatile memory for song storage, multi-track recording, and an on-board tone generator which supported several sound sets including General MIDI (GM), Roland's General Standard (GS), and Yamaha's XG. Upgrade kits became available to update Mark II pianos to include the Mark IIXG features. This included the DSR1 module, which gave Wagon Grand, MX100A/B and Mark II Disklavier owners most of the features of the Mark IIXG; however, it did not change the fundamental recording and playback accuracy of the solenoids or sensors of those early systems. During the Mark II and Mark IIXG era, various models of uprights were introduced that included a silent system. When the silent system was engaged, the hammers were prevented from hitting the strings and the instrument produced no sound acoustically. The player was able to wear a headset and hear themselves play as though they were playing a digital piano with the sound of a nine-foot concert grand. Some Disklavier uprights with this system also contained a Celeste or practice pedal which, when engaged, brought a rail with a curtain of felt between the hammers and the strings, thus significantly reducing the volume of the acoustic piano. This feature could also be used while the Disklavier system was being used; however, it was a very rare option for pianos with a silent system fitted. Mark III The Mark III system followed in 2000. The Mark III included a variety of underlying technical improvements to the record and playback system. An especially noteworthy improvement was its ability to play back performances at very low volume levels. Additional user features included recording and playback of synchronous audio tracks, playback of specially encoded CD-ROM disks from a built-in CD player, and the SmartKey system that provided a play-along feature in which the user is prompted to press silently wiggling keys. The Mark III also introduced support for video-sync recording and playback based on the generation and reception of MIDI Time Code. Another upgrade, known as the DCD1, was available that could provide early Disklavier owners with a CD drive for reading CDs like the Mark III. Pro In 1999, near the end of the Mark IIXG model series, Yamaha introduced the Disklavier PRO. A key selling feature of this model was the claim of greater recording and playback accuracy than had been possible with previously available models. These instruments recorded not only hammer velocity (as MIDI note-on velocity) but key down velocity and key up velocity (MIDI note-off velocity) as well. The instrument was also capable of recording and reproducing key movements that resulted in no audible sound. 
Before the PRO, Disklaviers were limited by design, like all MIDI keyboard instruments, to working within a 0–127 range of values for note-on velocity, note-off velocity, and incremental pedal movement. To break this accuracy limit, Yamaha's Disklavier engineers pioneered a unique use of normally undefined MIDI controllers for the purpose of substantially extending the range of values for note-on/note-off to 0–1023 and for pedal movement to 0–255. In Disklavier lingo, this "extended precision" data was referred to as "XP" data. The recording and reproduction quality of the PRO have been validated by the International Piano-e-Competition, formerly known as the Minnesota International Piano-e-Competition. In 2002, the Piano-e-Competition used the Disklavier PRO on two continents to enable Yefim Bronfman to participate as a member of the competition jury from Hamamatsu, Japan, 6,000 miles from where the competition was taking place in St. Paul, MN. Following each solo performance, synchronized MIDI and video files were transmitted over the Internet, and Bronfman was able to watch performances on a large screen while the local piano reproduced the playing. Since that time, the Disklavier PRO has been used by the competition to enable pianists to participate in a screening-round of the competition ("virtual auditions") by submitting a video-synchronized performance recorded on a Disklavier PRO. All rounds of the competition are recorded on the PRO and made available as downloadable files from the competition's website. The original PRO was the first model Disklavier grand to include the silent system. Ever since the instrument's introduction during the Mark IIXG model era, newer versions of the PRO have been available in subsequent model series and have been known as Mark III PRO, Mark IV PRO, and E3 PRO. Disklavier PRO 2000 In celebration of its 100th year of piano manufacturing, Yamaha debuted a concept piano called the Disklavier PRO 2000. The instrument's unusual physical design featured cherry wood, aluminum chassis material, a clear split lid, and a built-in Windows computer with a touch-screen monitor. Internally, this piano with a AAA–c′′′′′ (88 keys) compass was based on the Mark III PRO Disklavier system. The instrument offered a glimpse into the future of Disklavier and piano manufacturing. This was the first Disklavier to support playback of video-synchronized recordings. There was a performance mode that enabled a player to layer a variety of independently zoned sounds on top of their playing, and the built-in computer offered a program called Home Concert 2000 from TimeWarp Technologies that was capable of displaying music on the screen, tracking the performer, turning the pages automatically, and outputting a coordinated accompaniment. Only nine of these pianos were built. The suggested retail price was $333,000, which made the instrument the most expensive Disklavier ever produced. Mark IV Introduced in 2004, the Mark IV series of Disklaviers was available in grand pianos only. The Mark IV series overlapped the Mark III model era. The control system for the Mark IV was built on an embedded Linux operating system, and it offered a wi-fi-based PDA-style controller (PRC100) as well as an optional tablet-style controller. The instrument had an Ethernet port which enabled it to be connected to a local area network. 
There was also an embedded Java application known as the Virtual PRC which could be accessed and run on Mac and Windows computers that were on the same network as the piano. In January 2011, Yamaha expanded the control features of the instrument by offering a free iOS application that was able to control the instrument over the local network. Other features of the instrument included an 80-gigabyte hard drive, an unobtrusive console, located under the left side of the keyboard, an expanded array of audio ports, support for USB storage devices, and support for USB MIDI communications. Another enhancement of the Disklavier system was the support for SMPTE time code generation and reception, enabling the recording and playback of video-synchronized performance without additional hardware. Although firmware updates had been available occasionally for earlier models of Disklavier, the Mark IV's Linux-based system was capable of being updated over the Internet. As of 2014, the Mark IV is using the 4th generation of its operating system. Along with system updates to the Mark IV, Yamaha expanded the functionality of the instrument via the Internet. In 2006, the 2.0 system update was accompanied by the additional ability to purchase recorded performances using the remote controller of the instrument as well as the opportunity to subscribe to a new cloud-based service called DisklavierRadio. DisklavierRadio (sometimes known as Piano Radio) offers a number of "channels" that can be received as performance data streams that are reproduced by the instrument itself. In 2013, Yamaha combined the built-in technologies of video-synchronized playback and the streaming capabilities of DisklavierRadio and offered customers an additional service called DisklavierTV. DisklavierTV is powered by Yamaha's RemoteLive technology and enables the reception of broadcasts that include video and audio as well as performance data that drives the playback of the piano itself. Yamaha has offered a large number of DisklavierTV concerts to its Mark IV and E3 customers, including performances by Elton John and Sarah McLachlan, performances from the Monterey Jazz Festival, Newport Music Festival, and the International Piano-e-Competition. Much of this content is also made available on-demand, allowing customers to receive these concerts whenever they would like. E3 The E3 Disklavier system was introduced in 2009 while the Mark IV system was still in production, and in the United States, both systems were offered at the same time. Although there was some system overlap in several piano models, the E3 system was only available in smaller grand pianos (5′ 8″ and smaller). In 2012, Yamaha ended production of the Mark IV system, and in the U.S., the E3 became available in virtually all Yamaha grand pianos and a studio model upright piano (DU1E3). During the time that the Mark IV was still in production, the available E3 models had a less sophisticated and less costly record-and-playback system. When the E3 series was expanded to include the larger model Disklaviers, Yamaha added the PRO features to the instruments that are 6′ 1″ and larger. In the US, these larger models are only available with the PRO system, and today, the E3 PRO represents the most advanced Disklavier to date. The control unit for the E3 more closely resembles the control unit of the Mark II, Mark IIXG, and Mark III systems although it is the first Disklavier system that does not include an internal floppy drive. 
The instrument is controlled by an infrared, handheld remote. Like the Mark IV, the E3 can be connected to a local area network via Ethernet cable and then be controlled by a wireless app running on an iOS device. Like the Mark IV, the E3 enjoys the same cloud-based services such as firmware updates, DisklavierRadio, and DisklavierTV. In order to bring many older model Disklaviers up to the same or similar feature set as the E3, Yamaha introduced the DKC-850 replacement control unit for Mark IIXG and Mark III Disklaviers in 2010. Outwardly, the control unit looks and functions identically to the E3 control unit and provides access to the same cloud-based services, though it does not upgrade the tone generator and has substantially fewer performance/editing features compared with the original control units. The DKC-850 can also update earlier model Disklaviers by connecting to the old control unit via MIDI cables. In this context, the DKC-850 does not support the reception of streaming performances. Disklavier ENSPIRE In January 2016, Yamaha introduced its seventh-generation Disklavier, the Disklavier ENSPIRE. Replacing the Disklavier E3, the ENSPIRE remains the only fully integrated, factory-installed reproducing piano available that can both natively play and record a piano performance. The ENSPIRE is available in 14 models ranging from 48” upright pianos to a 9’ concert grand and is offered in three system variations – CL, ST and PRO. The CL introduces a “playback only” model omitting the recording and Silent System functionality that are offered in ST and PRO models. Currently, the CL type ENSPIRE is only offered in Yamaha's entry-level grand piano, the GB1K, and is only sold in certain markets. ST models include a non-contact optical sensing system, featuring continuous grayscale shutters for each key and optical window style shutters on each hammer. Optical sensors are also used for the damper, soft and sostenuto pedals. This sensor system allows the user to natively capture their own performance in standard MIDI format, without the need for external or special software. In addition, a “Silent System” that does not require special installation or instrument modification is added to allow for headphone connectivity and access to the instrument's digital sounds, which include a special binaurally captured CFX Concert Grand sample. Because piano components and solenoids can be affected by environmental changes, a patented DSP servo drive system that monitors and controls key and pedal movement to ensure accurate performance reproduction is active during playback. This DSP system provides feedback to the instrument's processor effectively making the system a “closed-loop”. If the system detects any physical movement that does not correlate with the provided performance data, it will automatically adjust itself to correct any deviation in real-time. PRO models are high-resolution systems equipped with non-contact optical sensors as well, but also incorporate continuous grayscale shutters on each hammer to measure their speed and distance. The addition of continuous grayscale shutters for each hammer allows for even greater recording and performance accuracy allowing the user to natively record and playback high-resolution performances with 1024 levels of key and hammer velocity as well as 256 increments of positional pedaling using Yamaha's proprietary XP format. 
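The 0–1023 "XP" resolution mentioned above has to travel over MIDI, whose note messages only carry 7-bit (0–127) values. The sketch below shows one purely hypothetical way such a value could be split across a standard note-on plus an auxiliary controller message; Yamaha's actual controller assignments and byte layout are proprietary and are not documented here, so the controller number, the bit split, and the function names are all invented for illustration.

```python
def encode_xp_velocity(channel: int, note: int, xp_velocity: int):
    """Split a 10-bit (0-1023) velocity into a 7-bit MIDI note-on velocity
    plus a hypothetical controller message carrying the remaining 3 bits."""
    assert 0 <= xp_velocity <= 1023
    coarse = xp_velocity >> 3          # top 7 bits -> ordinary note-on velocity
    fine = xp_velocity & 0b111         # low 3 bits -> extension value
    note_on = (0x90 | channel, note, coarse)
    # Controller 88 is an arbitrary, illustrative choice, not Yamaha's mapping.
    extension = (0xB0 | channel, 88, fine)
    return [extension, note_on]

def decode_xp_velocity(coarse: int, fine: int) -> int:
    """Recombine the two message values into the original 0-1023 value."""
    return (coarse << 3) | fine

msgs = encode_xp_velocity(channel=0, note=60, xp_velocity=873)
print(msgs)                                   # [(176, 88, 1), (144, 60, 109)]
print(decode_xp_velocity(873 >> 3, 873 & 7))  # 873
```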
ENSPIRE PRO models also utilize Yamaha's AccuPlay technology, an advanced DSP servo drive system that monitors the important mechanical elements of the piano during performance reproduction. In PRO type models, AccuPlay will monitor the movements of the keys, hammers, pedals and solenoids. As in the ST type, data fed back to the playback processor from the instrument's sensing system is used to ensure accurate reproduction of the original performance. Currently, no other system on the market utilizes this type of technology. Aesthetic changes have been made to the Disklavier ENSPIRE, including the removal of the "box" style user interface featured in past generations. While tactile functionality and controls still exist on the instrument itself, the control panel is almost invisible to the user. Operationally, all functions and features can be accessed by any compatible HTML5 browser; however, Yamaha recommends using an Apple iOS device or Android device. The instrument comes with 500 built-in songs, many of which are in Yamaha's PianoSoft Audio format. The PianoSoft Audio format, currently only compatible with the Disklavier ENSPIRE, features stereo audio recordings that play in sync with piano performances. The main difference between this format and the older PianoSoft Plus Audio format, or competitors' offerings, is that the audio recordings are in true stereo, not mono. Included in the built-in song library are performances by Yamaha artists such as Sarah McLachlan, Bob James, Jamie Cullum and Frederic Chiu. In addition to the built-in songs, users have access to over 6,000 additional titles for purchase through the Yamaha MusicSoft online store, directly accessible through the instrument's user interface. The Disklavier ENSPIRE also offers Internet streaming services including the Pandora-style Disklavier Radio, which currently provides users with over 30 channels of streaming piano music 24 hours a day, seven days a week. Along with Disklavier Radio, users can also access DisklavierTV, a video streaming service that allows users to view live and on-demand musical performances that play in sync with their piano. Additional Disklavier ENSPIRE features include:
An included USB Wi-Fi adaptor (UD-WL01) that allows for peer-to-peer connectivity with a mobile device or connectivity to a network via WPS
Automatic system calibration and troubleshooting
Instruments do not require special maintenance or piano action regulation to play properly
Digital tone generator with 16 playable voices and 480 ensemble voices (256-note polyphony)
Direct-to-USB audio recording function
V-sync technology, which allows users to create video recordings that sync to recorded piano performances using a standard camcorder or mobile device
USB storage connectivity
MIDI connectivity via standard MIDI ports or USB
Coaxial digital output
Specialized uses In 2006, Matthew Teeter and Chris Dobrian, researchers at the University of California, Irvine, developed a third-party Disklavier software controller running on Windows, Mac and Linux operating systems, which replicated the functionality provided by the PDA/Tablet PC remotes. The software and its source code were made freely available. In November 2007, Kevin Goroway used that example code to create DKVBrowser, which is an open source project. This software is also multiplatform, and has provided features that are not available on the proprietary interfaces provided by Yamaha, such as wildcard searching. 
The software running on the Disklavier Mark IV and Mark IV PRO onboard Linux control computer continues to undergo development, and the manufacturer makes firmware updates available to users. As with other MIDI instruments, one potential benefit of the readily edited MIDI data output by a Disklavier is in the professional recording domain, where a recorded performance could be edited, allowing the correction of minor errors after a take. Artistic uses At the end of the 1980s, the researcher and composer Jean-Claude Risset became interested in the Disklavier for his research and compositions. He composed Duet For One Pianist in 1989, a series of interactive piano pieces in which the notes played by the pianist are transformed by the computer and sent back to the Disklavier in real time. With his team at the Laboratoire d'Acoustique Musicale de Marseille, he developed the programs that served as elementary operations for his pieces, such as symmetries, note arpeggiation and many others. In his album Music For Choking Disklavier, released in 2015, the musician and composer Hans Tammen focuses on the sound qualities of the automation of the Disklavier, namely the sounds of the mechanism. He uses notes with such low dynamics that the hammer of the piano does not hit the string, leaving only the noises of the mechanism, and places microphones near the hammers and the motor-driven keyboard. Sometimes the MIDI signal processor stops for a few seconds on a chord due to data overload, hence the title "Choking Disklavier". The Texan composer Kyle Gann used the Disklavier for many of his compositions, to explore complex rhythms unplayable by a pianist, and to compose microtonal music for piano. In Hyperchromatica, released in 2018, he uses three Disklaviers tuned with intervals smaller than a semitone. This allows him to compose on 243 piano notes instead of 88. In 2018, the pianist Dan Tepfer released the video album Natural Machines. Dan Tepfer processed MIDI data in real time while improvising on the Disklavier. In the same way that Jean-Claude Risset used the Disklavier, what is played can, for example, be repeated in mirror form or turned into tremolos that would be unplayable by hand. His performances are accompanied by abstract visualization programs, evoking the algorithmic principles used in his performances. Educational and professional applications The Disklavier has been used extensively in music education, including colleges, universities, conservatories, community music schools, K-12 institutions, and private studios. Applications include:
record/playback of student performances, which enables a student to listen critically to their own playing (see the sketch after this section)
interactive play-along with pre-recorded, pedagogical accompaniment files
practice of piano concerto repertoire using score-following software, such as Home Concert Xtreme, developed by TimeWarp Technologies
use of the instrument as a MIDI input device with compositional software
algorithmic compositions that involve interactivity between performer and computer using software such as Max/MSP from Cycling '74
multimedia performance using VJ-style software such as Arkaos Grand VJ from Arkaos
piano accompaniments for singers and instrumentalists
In recognition of its contributions to the field of piano pedagogy, the Music Teachers National Association and the Frances Clark Institute awarded the Disklavier the MTNA Frances Clark Keyboard Pedagogy Award in 2006. 
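Because the instrument saves performances as Standard MIDI Files, they can be inspected with ordinary MIDI tooling. The sketch below uses the third-party Python library mido, which is not mentioned in the article and is only one of many possible choices; the file name is illustrative, and the file is assumed to have been recorded on the piano and copied off via USB or the network.

```python
import mido  # third-party library: pip install mido

# Illustrative file name; assume it was recorded on the Disklavier as an SMF.
performance = mido.MidiFile("student_performance.mid")

note_count = 0
velocity_total = 0
sustain_presses = 0

for message in performance:              # iterates all tracks, merged in time order
    if message.type == "note_on" and message.velocity > 0:
        note_count += 1
        velocity_total += message.velocity
    elif message.type == "control_change" and message.control == 64 and message.value >= 64:
        sustain_presses += 1             # sustain pedal (CC 64) pressed

print(f"duration: {performance.length:.1f} s")
print(f"notes played: {note_count}")
if note_count:
    print(f"average MIDI velocity: {velocity_total / note_count:.1f}")
print(f"sustain pedal presses: {sustain_presses}")
```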
In 1997, Yamaha undertook a successful, large-scale experiment that connected MIDI instruments together over the Internet, enabling Ryuichi Sakamoto to transmit a keyboard performance to thousands of locations simultaneously. The next year, Yamaha announced a new technology called MidLive RS that developed this concept further, incorporating MIDI data into the RealSystem G2 video/audio SDK provided by RealNetworks. This technology enabled a Disklavier performance in one part of the world to be accurately reproduced in near real time on a similar instrument elsewhere in the world. Although those early efforts did not directly result in a commercial product, Yamaha continued to explore real-time transmission of Disklavier performances over the Internet. In 2007, Yamaha introduced "Remote Lesson" at the Winter NAMM show. Since then, educators at schools all over the U.S. have undertaken long distance lessons and master classes using the Remote Lesson technology. Remote Lesson is a feature that is available exclusively in Mark IV and E3 pianos and is available to select educators and institutions. Similar capability is available in a software program called Internet MIDI that was developed by TimeWarp Technologies. Internet MIDI will connect Disklaviers with other Disklaviers as well as with other MIDI keyboard instruments. When Disklavier pianos are connected over the Internet, there is some amount of delay that is introduced by virtue of the routing of Internet communications as well as the normal buffering of real-time data. In addition, the instrument itself introduces a mechanical delay of about a quarter of a second between the time that MIDI data is received and the moment when the hammers audibly impact the strings. Although the delay is generally too great for the purpose of performing a traditional piano duet, the delay is adjustable to match the delay that is experienced during a video conference, using software such as Skype. In that context, the back-and-forth playing that takes place during a typical lesson is not impeded by the delay. References External links Yamaha Disklavier webpage MIPeC Hi-Def and e-SEQ midibank at Minnesota International Piano-e-Competition Disklavier World Webpage and blog Disklavier Software Webpage – Isadar – original new-age styled solo piano music in e-SEQ Disklavier format Livecoding a Disklavier – playing a Disklavier with a computer program. Lexikon-Sonate – interactive realtime composition for computer-controlled piano (Yamaha Disklavier) by Karlheinz Essl PianoSoft Audio Format webpage Mechanical musical instruments Piano Yamaha music products
24190824
https://en.wikipedia.org/wiki/SVFlux
SVFlux
SVFLUX is a finite element seepage analysis program developed by SoilVision Systems Ltd. The software is designed to analyze both saturated and unsaturated flow through the ground by solving the Richards equation. The program is used in the fields of civil engineering and hydrology to analyze seepage and regional groundwater flow, and for the calculation of flow rates, pore-water pressures, and pumping rates associated with regional groundwater flow. The software can be coupled with CHEMFLUX in order to calculate diffusion, advection, and decay rates, or with SVHEAT in order to calculate thermal gradients and freeze/thaw fronts. Methodology SVFLUX makes use of a general finite element solver to solve the Richards equation for both saturated and unsaturated flow. The finite element solver uses automatic mesh generation and automatic mesh refinement to aid in problem solution. The software has been used on large projects, including the Questa Weathering Study, which examined the flow regime through waste rock piles. Several forms of the governing flow equation are implemented in the software, providing greater flexibility in solving unusual flow situations. The user enters geometry, material properties, and analysis constraints through a CAD-type graphical user interface (GUI), and the results may also be viewed in a graphical user interface. The geometry is entered as regions, which may be drawn, pasted in from Excel, or imported from AutoCAD DXF files. The factor of safety for a specific failure surface is computed as the forces driving failure along the surface divided by the shear resistance of the soils along the surface. A library of benchmark models is distributed with the software. Features The developers of SVFLUX have implemented the classic features traditionally found in seepage analysis software as well as a number of newer features. The more distinctive features of SVFLUX include: probabilistic analysis; unsaturated analysis with improved convergence; coupled climatic boundary conditions and calculation of actual evaporation (AE); automatic mesh generation; automatic mesh refinement; support for parallel processing; a large library of example models; and a simple, intuitive graphical user interface. Classic features also supported by the software include: right-click application of boundary conditions and properties; a help system and tutorial manual; solution of saturated and unsaturated flow; regional groundwater analysis; plotting of flowlines and streamtraces; and reporting of fluxes. References External links SoilVision Systems Ltd. Geotechnical engineering software
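For reference, a commonly quoted one-dimensional (vertical) form of the Richards equation governing the saturated/unsaturated flow discussed above is given below; this is a textbook form written here for illustration, and the exact formulations implemented in SVFLUX may differ (the article notes that several forms of the governing equation are available).

    \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]

where θ is the volumetric water content, t is time, z is the elevation (taken positive upward), h is the pressure head, and K(h) is the pressure-dependent hydraulic conductivity.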
34793825
https://en.wikipedia.org/wiki/Schema%20migration
Schema migration
In software engineering, schema migration (also database migration, database change management) refers to the management of incremental, reversible changes and version control to relational database schemas. A schema migration is performed on a database whenever it is necessary to update or revert that database's schema to some newer or older version. Migrations are performed programmatically by using a schema migration tool. When invoked with a specified desired schema version, the tool automates the successive application or reversal of an appropriate sequence of schema changes until the database schema is brought to the desired state (see the sketch below). Most schema migration tools aim to minimize the impact of schema changes on any existing data in the database. Despite this, preservation of data in general is not guaranteed, because schema changes such as the deletion of a database column can destroy data (i.e. all values stored under that column for all rows in that table are deleted). Instead, the tools help to preserve the meaning of the data or to reorganize existing data to meet new requirements. Since the meaning of the data often cannot be encoded, the configuration of the tools usually needs manual intervention. Risks and benefits Schema migrations allow for fixing mistakes and adapting the data as requirements change. They are an essential part of software evolution, especially in agile environments (see below). Applying a schema migration to a production database is always a risk. Development and test databases tend to be smaller and cleaner; the data in them is better understood or, if everything else fails, the amount of data is small enough for a human to process. Production databases are usually huge, old and full of surprises. The surprises can come from many sources: corrupt data that was written by old versions of the software and never cleaned up properly; implied dependencies in the data which no one knows about anymore; people directly changing the database without using the designated tools; bugs in the schema migration tools; and mistaken assumptions about how data should be migrated. For these reasons, the migration process needs a high level of discipline, thorough testing and a sound backup strategy. Schema migrations may take a long time to complete, and for systems that operate 24/7 it is important to be able to perform database migrations without downtime; this is usually achieved with the help of feature flags and continuous delivery. Schema migration in agile software development When developing software applications backed by a database, developers typically develop the application source code in tandem with an evolving database schema. The code has rigid expectations of what columns, tables and constraints are present in the database schema whenever it needs to interact with one, so only the version of the database schema against which the code was developed is considered fully compatible with that version of source code. In software testing, developers may mock the presence of a compatible database system for unit testing, but at any higher level of testing (e.g. integration testing or system testing) it is common for developers to test the application against a local or remote test database that is schematically compatible with the version of source code under test. In advanced applications, the migration itself can be subject to migration testing. 
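To make the version-driven application and reversal of schema changes concrete, the following is a minimal sketch of a migration runner in Python against SQLite; the table, the column names, and the use of SQLite's user_version pragma as the schema version counter are illustrative assumptions rather than the behaviour of any particular migration tool.

    # Minimal sketch of a schema migration runner: walks the database schema from its
    # current version to the requested target by applying or reversing steps in order.
    import sqlite3

    MIGRATIONS = {
        # version: (upgrade SQL, downgrade SQL) -- hypothetical example schema
        1: ("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
            "DROP TABLE customer"),
        2: ("ALTER TABLE customer ADD COLUMN email TEXT",
            "ALTER TABLE customer DROP COLUMN email"),   # DROP COLUMN needs SQLite >= 3.35
    }

    def migrate(conn, target):
        current = conn.execute("PRAGMA user_version").fetchone()[0]
        while current < target:                          # apply pending upgrades in order
            conn.execute(MIGRATIONS[current + 1][0])
            current += 1
            conn.execute(f"PRAGMA user_version = {current}")
        while current > target:                          # or reverse them to roll back
            conn.execute(MIGRATIONS[current][1])
            current -= 1
            conn.execute(f"PRAGMA user_version = {current}")
        conn.commit()

    migrate(sqlite3.connect("app.db"), target=2)         # bring the schema up to version 2

Production migration tools add safeguards this sketch omits, such as applying each step inside a transaction, recording which migrations have already run, and refusing to apply steps out of order.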
With schema migration technology, data models no longer need to be fully designed up-front, and are more capable of being adapted with changing project requirements throughout the software development lifecycle. Relation to revision control systems Teams of software developers usually use version control systems to manage and collaborate on changes made to versions of source code. Different developers can develop on divergent, relatively older or newer branches of the same source code to make changes and additions during development. Supposing that the software under development interacts with a database, every version of the source code can be associated with at least one database schema with which it is compatible. Under good software testing practice, schema migrations can be performed on test databases to ensure that their schema is compatible to the source code. To streamline this process, a schema migration tool is usually invoked as a part of an automated software build as a prerequisite of the automated testing phase. Schema migration tools can be said to solve versioning problems for database schemas just as version control systems solve versioning problems for source code. In practice, many schema migration tools actually rely on a textual representation of schema changes (such as files containing SQL statements) such that the version history of schema changes can effectively be stored alongside program source code within VCS. This approach ensures that the information necessary to recover a compatible database schema for a particular code branch is recoverable from the source tree itself. Another benefit of this approach is the handling of concurrent conflicting schema changes; developers may simply use their usual text-based conflict resolution tools to reconcile differences. Relation to schema evolution Schema migration tooling could be seen as a facility to track the history of an evolving schema. Advantages Developers no longer need to remove the entire test database in order to create a new test database from scratch (e.g. using schema creation scripts from DDL generation tools). Further, if generation of test data costs a lot of time, developers can avoid regenerating test data for small, non-destructive changes to the schema. References Links Martin Fowler: Evolutionary Database Design Active Record Migrations Databases Software maintenance Agile software development
38328989
https://en.wikipedia.org/wiki/Rocket%20U2
Rocket U2
Rocket U2 is a suite of database management (DBMS) and supporting software now owned by Rocket Software. It includes two MultiValue database platforms: UniData and UniVerse. Both of these products are operating environments which run on current Unix, Linux and Windows operating systems. They are both derivatives of the Pick operating system. The family also includes developer and web-enabling technologies including SystemBuilder/SB+, SB/XA, U2 Web Development Environment (WebDE), UniObjects and wIntegrate. History UniVerse was originally developed by VMark Software and UniData was originally developed by the Unidata Corporation. Both Universe and Unidata are used for vertical application development and are embedded into the vertical software applications. In 1997, the Unidata Corporation merged with VMark Systems to form Ardent Software. In March 2000, Ardent Software was acquired by Informix. IBM subsequently acquired the database division of Informix in April 2001, making UniVerse and UniData part of IBM's DB2 product family. IBM subsequently created the Information Management group of which Data Management is one of the sub-areas under which the IBM U2 family comprised UniData and UniVerse along with the tools, SystemBuilder Extensible Architecture (SB/XA), U2 Web Development Environment (U2 Web DE) and wIntegrate. On 1 October 2009 it was announced that Rocket Software had purchased the entire U2 portfolio from IBM. The U2 portfolio is grouped under the name RocketU2. System structure Accounts Systems are made of one or more accounts. Accounts are directories stored on the host operating system that initially contain the set of files needed for the system to function properly. This includes the system's VOC (vocabulary) file that contains every command, filename, keyword, alias, script, and other pointers. Each of these classes of VOC entries can also be created by a user. Files Files are similar to tables in a relational database in that each file has a unique name to distinguish it from other files and zero to multiple unique records that are logically related to each other. Files are made of two parts: a data file and a file dictionary (DICT). The data file contains records that store the actual data. The file dictionary may contain metadata to describe the contents or to output the contents of a file. Hashed files For hashed files, a U2 system uses a hashing algorithm to allocate the file's records into groups based on the record IDs. When searching for data in a hashed file, the system only searches the group where the record ID is stored, making the search process more efficient and quicker than searching through the whole file. Nonhashed files Nonhashed files are used to store data with little or no logical structure such as program source code, XML or plain text. This type of file is stored as a subdirectory within the account directory on the host operating system and may be read or edited using appropriate tools. Records Files are made of records, which are similar to rows within tables of a relational database. Each record has a unique key (called a "record ID") to distinguish it from other records in the file. These record IDs are typically hashed so that data can be retrieved quickly and efficiently. Records (including record IDs) store the actual data as pure ASCII strings; there is no binary data stored in U2. For example, the hardware representation of a floating-point number would be converted to its ASCII equivalent before being stored. 
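The group allocation used by hashed files, as described above, can be illustrated with a small sketch in Python; the modulus-based hash and the in-memory dictionaries below are simplified stand-ins (assumptions) for the actual hashing and storage used by UniVerse and UniData, which the article does not specify.

    # Simplified sketch of a hashed file: records are assigned to groups by hashing
    # the record ID, so a lookup only searches one group instead of the whole file.
    GROUPS = 7                                   # number of groups chosen for this toy file

    def group_of(record_id):
        return sum(record_id.encode("ascii")) % GROUPS    # illustrative hash, not the real algorithm

    hashed_file = [dict() for _ in range(GROUPS)]         # one bucket of records per group

    def write(record_id, record):
        hashed_file[group_of(record_id)][record_id] = record

    def read(record_id):
        return hashed_file[group_of(record_id)].get(record_id)   # only this group is searched

    write("123-45-6789", "JOHN JONES")
    print(read("123-45-6789"))                   # -> JOHN JONES

Keeping each group small is what makes lookups fast; a real hashed file is sized so that each group holds only a handful of records for the expected data volume.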
Usually these records are divided into fields (which are sometimes called "attributes" in U2). Each field is separated by a "field mark" (hexadecimal character FE). Thus this string: might represent a record in the EMPLOYEE file with 123-45-6789 as the Record ID, JOHN JONES as the first field, [email protected] as the second field and $4321.00 as a monthly salary stored in the third field. (The up-arrow (^) above is the standard Pick notation of a field mark; that is, xFE). Thus the first three fields of this record, including the record ID and trailing field mark, would use 49 bytes of storage. A given value uses only as many bytes as needed. For example, in another record of the same file, JOHN JONES (10 bytes) may be replaced by MARJORIE Q. HUMPERDINK (21 bytes) yet each name uses only as much storage as it needs, plus one for the field mark. Fields may be broken down into values and even subvalues. Values are separated by value marks (character xFD); subvalues are separated by subvalue marks (character xFC). Thus, if John Jones happened to get a second email address, the record may be updated to: where the close bracket (]) represents a value mark. Since each email address can be the ID of a record in separate file (in SQL terms, an outer join; in U2 terms, a "translate"), this provides the reason why U2 may be classified as a MultiValued database. Data Raw information is called Data. A record is a set of logical grouped data. e.g. an employee record will have data stored in the form of fields/attributes like his name, address etc. Programmability Both UniVerse and UniData have a structured BASIC language (UniVerse Basic and UniBasic, respectively), similar to Pick/BASIC which naturally operates on the structures of the MultiValue database. They also have a structured database query language (RetrieVe and UniQuery) used to select records for further processing and for ad hoc queries and reports. RocketU2 provides a set of Client Tools to allow software developers to access U2 databases from other software languages. Client Tool interfaces include: ODBC / JDBC Intercall (C/C++) UniOLEDB - OLEDB Driver UniObjects (COM) UniObjects (.NET) UniObjects (Java) Native XML U2 Web Services JSON (JavaScript Object Notation) Python (available as of UniVerse 11.3 and UniData 8.2) Security Both UniVerse and UniData support TLS transport level data encryption and record and file level encryption of data at rest using OpenSSL. Additional API encryption functionality is also available to allow custom solutions or meet specific regulatory requirements. Professional certification RocketU2 offers three professional certification designations related to the U2 product family. Rocket U2 Application Developer Rocket UniVerse Administration Rocket UniData Administration Web-based applications for U2 data Rocket Software Universe and Unidata have limited ability to create web-based front-ends to Universe/UniData content. Since Rocket Software provides SQL access to its database products, a SQL-based product can be used to build a web-based UI to the databases; regardless of using Files or Tables in U2. A third-party application framework, can be used to build such web interfaces. See also Pick operating system OpenInsight Reality Notes External links U2UG, a recognized international user group Proprietary database management systems 1990s software NoSQL companies Big data companies Database companies Data companies NoSQL Divested IBM products
62070770
https://en.wikipedia.org/wiki/Disco%20Elysium
Disco Elysium
Disco Elysium is a role-playing video game developed and published by ZA/UM. The game takes place in a large city still recovering from a war decades prior to the game's start, with players taking the role of an amnesiac detective who has been tasked with solving a murder mystery. During the investigation, he comes to recall events about his own past as well as current forces trying to affect the city. Inspired by Infinity Engine–era role-playing games, particularly Planescape: Torment, Disco Elysium was written and designed by Estonian novelist Robert Kurvitz. It features a distinctive oil painting art style, and music by the band Sea Power. It was released for Microsoft Windows in October 2019 and macOS in April 2020. An expanded version of the game, subtitled The Final Cut, featuring full voice acting and new content, was released for consoles in 2021 alongside a free update for the PC versions. Disco Elysium is a non-traditional role-playing game featuring no combat. Instead, events are resolved through skill checks and dialog trees via a system of 24 skills that represent different aspects of the protagonist, such as his perception and pain threshold. In addition, a system called the Thought Cabinet represents his other ideologies and personality traits, with players having the ability to freely support or suppress them. The game is based on a tabletop role-playing game setting that Kurvitz had previously created, with him forming ZA/UM in 2016 to work on the game. Disco Elysium was critically acclaimed, being named as a game of the year by several publications, along with numerous other awards for its narrative and art. A television series adaptation was announced in 2022. Gameplay Disco Elysium is a role-playing video game that features an open world and dialogue-heavy gameplay mechanics. The game is presented in an isometric perspective in which the player character is controlled. The player takes the role of a detective, who suffers from alcohol and drug-induced amnesia, on a murder case. The player can move the detective about the current screen to interact with non-player characters (NPC) and highlighted objects or move onto other screens. Early in the game they gain a partner, Kim Kitsuragi, another detective who acts as the protagonist's voice of professionalism and who may be able to offer advice or support in certain dialog options. Other NPCs may be influenced to become temporary companions that join the group and provide similar support. The gameplay features no combat in the traditional sense; instead, it is handled through skill checks and dialogue trees. There are four primary abilities in the game: Intellect, Psyche, Physique, and Motorics, and each ability has six distinct secondary skills for a total of 24. The player improves these skills through skill points earned from leveling up. The choice of clothing that the player equips on the player-character can impart both positive and negative effects on certain skills. Upgrading these skills help the player character pass skill checks, based on a random dice roll, but potentially result in negative effects and character quirks, discouraging minmaxing. For instance, a player character with high Drama may be able to detect and fabricate lies effectively, but may also become prone to hysterics and paranoia. Likewise, high Electrochemistry shields the player character from the negative effects of drugs and provides knowledge on them, but may also lead to substance abuse and other self-destructive behaviors. 
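As a rough illustration of the dice-based skill checks described above, the sketch below resolves a check by adding a two-dice roll to the relevant skill and comparing the total with a difficulty threshold; the use of two six-sided dice and the specific numbers are assumptions made for the example, not a statement of the game's actual internal rules.

    # Illustrative sketch of a skill check: dice roll + skill level versus a difficulty.
    import random

    def skill_check(skill_level, difficulty):
        roll = random.randint(1, 6) + random.randint(1, 6)   # assumed two six-sided dice
        return roll + skill_level >= difficulty

    # e.g. a character with 4 points in a skill attempting a difficulty-10 check
    print(skill_check(skill_level=4, difficulty=10))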
Disco Elysium features a secondary inventory system known as the "Thought Cabinet". Thoughts are unlockable through conversations with other characters, as well as through internal dialogues within the mind of the player character himself. The player is then able to "internalize" a thought through a certain amount of in-game hours, which, once completed, grants the player character permanent benefits but also occasionally negative effects, a concept that ZA/UM compared to the trait system used in the Fallout series. A limited number of slots are available in the Thought Cabinet at the start, though more can be gained with experience levels. For example, an early possible option for the Thought Cabinet is the "Hobocop" thought, in which the character ponders the option of living on the streets to save money, which reduces the character's composure with other NPCs while the thought is internalized. When the character has completed the Hobocop thought, it allows them to find more junk on the streets that can be sold for money. The 24 skills also play into the dialogue trees, creating a situation where the player-character may have an internal debate with one aspect of their mind or body, creating the idea that the player is communicating with a fragmented persona. These internal conversations may provide suggestions or additional insight that can guide the player into actions or dialogue with the game's non-playable characters, depending on the skill points invested into the skill. For example, the Inland Empire, a subskill of the Psyche, is described by ZA/UM as a representation of the intensity of the soul, and may come into situations where the player-character may need to pass themselves off under a fake identity with the conviction behind that stance, should the player accept this suggestion when debating with Inland Empire. Synopsis Setting Disco Elysium takes place in the fantastic realist world of Elysium, developed by Kurvitz and his team in the years prior, which includes a fleshed-out six-thousand-year history of conflicts, with the game taking place during the setting's most modern period, known as "The Fifties". Elysium is made of "isolas", masses of land and sea that are separated from each other by the Pale, an inscrutable, mist-like "connective tissue" in which the laws of reality gradually break down. Prolonged exposure to the Pale can cause mental instability, and traversing the Pale, which is typically done with aerostatics, is considered highly dangerous. The setting's political and cultural history is also markedly different. Nations and people within Disco Elysium generally follow four main ideologies: communism, fascism, moralism, and ultraliberalism. Communism, also called Mazovianism, was founded by an economist and historical materialist named Kras Mazov, and rather than being associated with the color red and the hammer and sickle, the ideology is instead represented by the color white and a pentagram flanked by a pair of deer antlers. Moralism, despite being a centrist ideology, carries religious overtones due to its association with Elysium's largest religion, Dolorianism. One of Dolorianism's dominant features is its "Innocences", saint-like figures who are said to be "embodiments of history" and wield great religious and political power during their lives, akin to the position of pope. 
The greatest and most influential among the historical Innocences was Dolores Dei, a woman of mysterious origins, who allegedly had glowing lungs and founded many of the world's modern institutions. Due to Dolores Dei's influence, the symbol of love in Disco Elysium world is a set of lungs rather than a heart. Events in the game take place in the poverty-plagued district of Martinaise within the city of Revachol on the isola of Insulinde. Forty-nine years before the events of the game, a wave of communist revolutions swept multiple countries; the sclerotic monarchy of Revachol, which up to that point had been a powerful kingdom with colonies across Elysium, was overthrown and replaced by a commune. Six years later, the Revachol commune was toppled by an invading alliance of moralist-capitalist nations called "the Coalition". Revachol has since been designated a Special Administrative Region under the Coalition, which holds a strong grip over the city's local economy and keeps its autonomy at a minimum. One of the few governmental functions that Revachol is allowed to have is upholding day to day law and order, which is the task of the Revachol Citizens Militia (RCM). While starting as a voluntary citizens brigade, the RCM has since grown and evolved into a semi-professional police force. Plot The player character wakes up in a trashed hostel room in Martinaise with a severe hangover and no memory of his own identity. He meets Lieutenant Kim Kitsuragi, who informs him that they have been assigned to investigate the death of a hanged man in the cafeteria's backyard. His identity is unclear and initial investigation indicates that he was lynched by a group of people. The detectives explore the rest of the district, following up on leads while helping residents with a variety of tasks. The player character gradually learns that he is a decorated RCM detective, Lieutenant Double-Yefreitor (meaning he twice declined promotion from his current rank) Harrier "Harry" Du Bois. Harry experienced an event several years ago that began a mid-life crisis, and on the night he was assigned to the hanged man case he finally snapped and embarked on a self-destructive bender around Martinaise. Through Harry and Kim's work, they discover the killing is connected to an ongoing strike by the Martinaise's dockworkers union against the Wild Pines corporation. They seek out representatives of the dockworkers and the Wild Pines corporation, meeting up with union boss Evrart Claire and Wild Pines negotiator Joyce Messier. Joyce reveals that the hanged man, named Lely, was the commander of a squad of mercenaries sent by Wild Pines to break the strike and warns that the rest of the squad has gone rogue and will likely seek retribution. This leads them to discover that Lely was killed before the hanging. The Hardie Boys, a group of dockworkers who act as vigilantes, claim responsibility for the murder. They assert that Lely attempted to rape a cafeteria guest named Klaasje. They meet with Klaasje, who reveals that Lely was shot in the mouth while the two were having consensual sex. Unable to figure out the origin of the bullet and fearful of the authorities due to her past as a corporate spy, Klaasje enlisted the help of a truck driver and union sympathizer named Ruby, who staged Lely's death with the rest of the Hardie Boys. The detectives find Ruby hiding in an abandoned building, where she incapacitates them with a Pale device. She claims that the cover-up was Klaasje's idea and has no idea who shot Lely. 
The player manages to resist or disable the Pale device and tries to arrest her. Ruby, who believes Harry to be a corrupt cop, either escapes or kills herself. The detectives return to find themselves in a standoff between the mercenaries and the Hardie Boys, the former seeking revenge over Lely's death. A firefight breaks out and the player is wounded, blacking out and waking up a few days later. Most or all the mercenaries are killed and Kim may be hospitalized, in which case street urchin Cuno offers to take his place. The detectives begin chasing down their last leads, determining that the shot that killed Lely came from an old sea fort off the shore of Martinaise. The detectives explore the fort and find the shooter, a former Commissar from the Revachol communist army named Iosef Lilianovich Dros. Iosef reveals that he shot Lely in a fit of anger and jealousy; his motivations are born out of his bitterness towards the capitalist system Lely represented, as well as sexual envy for Klaasje. The detectives arrest him for the murder. At this point, an insectoid cryptid known as the Insulindian Phasmid appears from the reeds. The player may have a psychic conversation with the Phasmid, who tells Harry that it finds the notion of his unstable mind to be fearful, but is in awe at his ability to continue existing. It comforts Harry, telling him to move on from the wreck of his life. Harry and his partner are confronted by his old squad upon their return to Martinaise. They reflect on Harry's actions during the game, whether he has solved the case and how he handled the mercenaries. Harry's usual partner Lieutenant Jean Vicquemare confirms that Harry's emotional breakdown was the result of his ex-fiancé leaving him years ago. Depending on player choices, the squad expresses hope that Harry's state will improve in the future, and invites him and either Kim or Cuno to a special RCM unit. Development Disco Elysium was developed by ZA/UM, a company founded in 2016 by Estonian novelist Robert Kurvitz, who served as the game's lead writer and designer. Kurvitz since 2001 had been part of a band called Ultramelanhool, and in 2005, while in Tallinn, Estonia, with the group struggling for finances, conceived of a fictional world during a drunken evening while listening to Tiësto's "Adagio for Strings". Feeling they had a solid idea, the group created a collective of artists and musicians, which included oil painter Aleksander Rostov, to expand upon the work of that night and developed a tabletop RPG based on Dungeons & Dragons on this steampunk-like concept. During this period, Kurvitz met Estonian author Kaur Kender who helped him to write a novel set in this world, Sacred and Terrible Air, which was published in 2013 but only sold about one thousand copies. Kurvitz fell into a period of depression and alcoholism for about three years following the book's failing. Kurvitz eventually managed to overcome this period of alcoholism and helped Kender to also overcome his own alcoholism. As a sign of gratitude, Kender suggested to Kurvitz that instead of pursuing a novel, that he try capturing his world as a video game instead as to draw a larger interest. Kurvitz had no experience in video games before, but once he had seen artwork of the game's setting of Revachol as easily fitting into an isometric format, as well as Rostov's agreement that they might as well continue taking the risk of failing on a video game together, Kurvitz proceeded with the idea. 
Kurvitz wrote a concise description of what the game would be: "D&D meets '70s cop-show, in an original 'fantastic realist' setting, with swords, guns and motor-cars. Realised as an isometric CRPG – a modern advancement on the legendary Planescape: Torment and Baldur's Gate. Massive, reactive story. Exploring a vast, poverty-stricken ghetto. Deep, strategic combat." Kender was impressed by the strong statement, investing into the game's development, with additional investment coming from friends and family. The game was announced as an upcoming 2017 game under the title No Truce With the Furies, taken from the poem "Reflections" by R.S. Thomas and published in Thomas' No Truce with the Furies in 1995. Kurvitz established the ZA/UM team to create the game, using the name "za um", a reference to the Zaum constructed language created by Russian avant-garde poets in the early 1900s. Its name can be read in Russian as either "for the mind" or "from the mind", while the use of all-capitals and the slash to present the team as "something that definitely exists and weighs eight tonnes". Work on the game started around 2016, with the local team living in a squat in a former gallery in Tallinn. They were able to secure venture capital into the game during that first year which allowed Kurvitz to seek out the band British Sea Power for their music for the game's soundtrack. While in Birmingham to speak to the band, Kurvitz realised England was a better location for the main development team as there were more local resources for both development and for voice-overs. During development, some of the staff relocated from Estonia to London and Brighton, with other designers working out of Poland, Romania, and China. Overall, by the time of the game's release, ZA/UM had about 20 outside consultants and 35 in-house developers, with a team of eight writers assisting Kurvitz in the game's dialog. The majority of the game's funding was provided by Estonian businessman Margus Linnamäe. The game uses the Unity engine. As originally planned, the game was to focus on action in a single city location to make the 2017 release. However, as ZA/UM had indicated to investors that this was to be a game that spanned a larger world, they found the need to spread beyond that single location, forcing them to delay the game's release, along with the name change to Disco Elysium. This title plays on a few double meanings related to the word "disco"; in one sense, it refers to ideas that briefly gain the spotlight before burning out similar to the fad of disco music, and reflected in the protagonist's clothing style, while in a more literal sense, "disco" is Latin for "I learn", thus reflecting on the protagonist's overcoming his amnesia to learn about the world of Elysium. Kurvitz had always anticipated the No Truce title to be more of a working title and wanted to reserve it for when they had bundled Disco Elysium with a second planned game. Though ZA/UM had initially planned to publish the game through Humble Bundle, they ultimately chose to self-publish it. Design, voices, and influences The game's art, drawn mostly in a painterly style, was led by Aleksander Rostov, while the game's soundtrack was written by the English indie rock band British Sea Power. The voice-acting cast includes progressive metal musicians Mikee Goodman of SikTh and Mark Holcomb of Periphery. 
The original release also had voice-acting by Dasha Nekrasova of the cultural commentary podcast Red Scare and four of the hosts from the political satire podcast Chapo Trap House, but these would later be replaced in The Final Cut. ZA/UM cited several works that influenced the writing and style of Disco Elysium. One major influence is the 1999 video game Planescape: Torment, which, like Disco Elysium, features an amnesiac player character, heavily emphasises dialogue, and is rendered isometrically. The television show The Wire was also used as an influence for the game's working class setting, while Émile Zola's writings shared stories on the misery of human life that narrative writer Helen Hindpere said they felt resonated within the game. Other works that influenced Disco Elysium included: the video game Kentucky Route Zero; television shows True Detective and The Shield; the literary works of Dashiell Hammett, China Miéville, and the Strugatsky brothers; and artists Rembrandt, Ilya Repin, Jenny Saville, Alex Kanevsky, and Wassily Kandinsky. The creators have also said that their work owes a lot to the Estonian urbanist poet Arvi Siig: "Without his modernism, Elysium - the world the game is placed in - would not be half of what it is," Kurvitz said while accepting the Estonian President's Young Cultural Figure annual award for 2020, adding that Siig's vision of an international, radical and humanist Estonian culture lives on in "Disco Elysium". Kurvitz said that an aim was to have a full, complex depth of choices and outcomes, limited by the practicalities of game development. Knowing they could not realistically cover all possible choices, Kurvitz and his team instead focused more on what he called "microreactivity", small acts and decisions the player may make such as an embarrassing comment, and how that may propagate throughout events. The dialog of the player's various skills helped then to provide critique and internalization of how these small decisions had larger effects on the game world, so that the player would become more aware of such choices in the future. An additional factor in writing was the recognition that there was no real solution to the game; while the player may resolve some portions of the story, the primary case is nearly unworkable, similar to the rest of Revachol. They created the companion Kim as a no-nonsense character to help keep the player on track of resolving some part of the game and recognizing that there were some story threads they simply could not fix or resolve. The Final Cut An expanded and reworked edition of the game, subtitled The Final Cut, was announced in December 2020. According to lead writer Helen Hindpere, The Final Cut was directed based on input from players of the original game. It included complete voicework for the nearly 300 characters including the game's narration and the player-character skills, encompassing over 1.2 million words according to Hindpere. Because of the importance of the characters to the game, ZA/UM kept voice directing in-house rather than outsourcing the task as typically done with RPG games of this nature. It took about fourteen months to complete the global casting and recording processing for the additional voice overs. While they brought back some of the prior voice actors who had read introductory dialog lines in conversation trees for their respective characters, ZA/UM sought out new voice actors they felt were a better fit for many roles, especially for minor characters. 
They came upon jazz musician Lenval Brown for the voice of the narrator and of the player skills, representing nearly half of the game's dialog, and considered him essential to The Final Cut. Brown spent about eight months with the vocal directors in recording his lines, keeping his voice otherwise constant, slow and meticulous for all of the different characters skills since these were explaining things to the player, but including small nuances to try to distinguish the various facets of each skill's personality. The voice-acting by Nekrasova and the Chapo Trap House hosts was completely replaced. The Final Cut allows players the option to use a selection of voice acting for the game, such as only having the narrator's voiceover while the other characters presented as text. There are four quests that were cut from the original game but reworked to explore some of the political implications of the game's story, now called Political Vision Quests. These quests were designed to encourage the player to consider how they have developed their player-character and where their decisions have taken the character, and how committed they are to seeing that out, according to Hindpere. Additionally, the expansion includes new art and animations, including two additional songs from British Sea Power. Release Disco Elysium was released for Microsoft Windows on 15 October 2019. The macOS version was released on 27 April 2020. One of the first languages that ZA/UM had translated the game for was Chinese, which was released in March 2020. Its release had bypassed the typical approval process needed to release games in China as the virtue of its content, which included themes of communism, did not meet the Chinese governmental typical restrictions on content. After its release, reviews left by Chinese players had stated that they were drawn to the game as it reflected similar periods of communism that they had gone through. In May 2020, ZA/UM released an update that improved some of the game's performance on lower-end hardware, as well as adding support for additional language translations, which are being developed by the community and by the localization firm Testronic Labs. After its original release, Kurvitz announced plans for an expansion for the game as well a full sequel. In addition, a tabletop RPG based on the systems the game used, tentatively titled You Are Vapor, was also announced, with Kurvitz also announcing plans to translate his novel Sacred and Terrible Air in English, which narratively takes place 20 years after the events of Disco Elysium. ZA/UM launched a limited edition clothing line, Atelier, in March 2021, featuring pieces based on the game. The Final Cut The Final Cut was released on 30 March 2021 for PlayStation 4, PlayStation 5, and Stadia, and as a free update for existing copies of the game on PC. The Nintendo Switch, Xbox One and Xbox Series X/S versions were released digitally on 12 October 2021. Physical copies of the game for PlayStation 4 and Xbox One are set to arrive on 9 December 2021 with physical copies for Nintendo Switch set to arrive in early 2022. While the original game was not submitted for rating for the Australian Classification Board as it was only released digitally for personal computers, the planned console release of The Final Cut required a Board review. 
The game was refused classification by the Board, making it illegal to sell in the country, due to its depiction of sex, drug misuse or addiction, crime, cruelty, and violence, as well as showing "revolting or abhorrent phenomena in such a way that they offend against the standards of morality, decency, and propriety generally accepted by reasonable adults". The ban was appealed by ZA/UM then subsequently dropped, with the game reclassified to an adults-only R18+ rating and allowed to be sold, as the Board acknowledged that the game "does provide disincentives related to drug-taking behavior, to the point where regular drug use leads to negative consequences for the player's progression in the game". Reception Disco Elysium received "universal acclaim" according to review aggregator Metacritic, with it being praised for its narrative and conversational systems. PC Gamer praised the game for its depth, freedom, customization, and storytelling and called it one of the best RPGs on the PC. IGN praised the game's open world and compared it favorably to The Witcher 3 and Red Dead Redemption 2, despite being much smaller. The Washington Post said that the game is "conspicuously well written". GameSpot awarded it a 10 out of 10, their first perfect score since 2017. PCGamesN wrote that the game set new genre standards for exploration and conversation systems. Conversely, Eurogamer criticized the game for not offering enough choice in role-playing and for a distinct lack of focus. The Final Cut was re-reviewed by IGN and Game Informer, both which praised the addition of voice lines and new quests. The PlayStation releases were initially found to have game-breaking bugs that made some of the quests impossible to finish. In June 2020, ZA/UM and dj2 Entertainment announced that a television series based on the game was under development. The PC version of the Final Cut was placed #11 on the best games of all time list on Metacritic, with a score of 97. Awards The game was nominated for four awards at The Game Awards 2019 and won all of them, the most at the event. Slant Magazine, USGamer, PC Gamer, and Zero Punctuation chose it as their game of the year, while Time included it as one of their top 10 games of the 2010s. The game was also nominated for the 2020 Nebula Award for Best Game Writing. References External links 2019 video games Detective video games Interactive Achievement Award winners MacOS games Nintendo Switch games Open-world video games Organized crime video games PlayStation 4 games PlayStation 5 games Political video games Role-playing video games Single-player video games Stadia games Video games about amnesia Video games about police officers Video games developed in Estonia Video games developed in the United Kingdom Video games with isometric graphics Windows games Xbox One games Xbox Series X and Series S games
4511569
https://en.wikipedia.org/wiki/Atego%20%28company%29
Atego (company)
Atego was a software development corporation headquartered in the United States and the United Kingdom, with subsidiaries in France, Germany, and Italy. Formed from Interactive Development Environments, Inc. and Thomson Software Products, it was called Aonix from 1996 until 2010. It was acquired by PTC in 2014. History The company was the product of a long series of mergers and acquisitions stretching from the 1990s onward. Aonix was formed in November 1996 by merging two software development tools companies: Interactive Development Environments, a modelling, analysis and design tools developer, and Thomson Software Products (TSP). TSP was based in Norwalk, Connecticut, with engineering and support facilities in Norwalk and San Diego, California. TSP had been established in July 1995 as a United States subsidiary of the French firm Thomson-CSF, formed by merging the Thomson subsidiary Alsys of San Diego with Must Software International of Norwalk. The staff in Norwalk continued to provide client/server fourth-generation language (4GL) and middleware products, while the ex-Alsys staff worked on high-performance Ada programming language development environments and the TeleUSE family of graphical user interface development tools. Acquisition In December 1998, Aonix was acquired by the private equity firm The Gores Group (then known as Gores Technology Group). Aonix owned the product lines Nomad software, Ultraquest and Select Solution Factory until the two groups split in January 2003 in a management buy-out. The new company, based in Boulder, Colorado and still owned by Gores, was named Select Business Solutions. Aonix merged with the real-time and embedded Java tools vendor NewMonics of Tucson, Arizona, in 2003, acquiring the PERC product line. In January 2010, Aonix and Artisan Software Tools (based in Cheltenham, UK) agreed to merge, forming a new company called Atego. The combined company was headquartered in San Diego. In March 2010 Atego acquired BlueRiver Software, the Germany-based maker of the X32 C/C++ interactive development environment. In 2011 it acquired the ApexAda family of Ada compilers from the Rational Software division of IBM. PTC, Inc. announced on July 1, 2014, that it had acquired Atego for approximately $50 million in cash. Product lines included AdaWorld, Ameos, Architecture Component Development, ObjectAda (now PTC ObjectAda), PERC (now PTC Perc), RAVEN, SmartKernel, Software Through Pictures, and TeleUSE (now PTC TeleUSE). References External links Official Website Westech Website Tetrabyte Website Software companies based in California Software companies of the United Kingdom Technology companies based in San Diego Privately held companies based in California Software companies of the United States
160260
https://en.wikipedia.org/wiki/FASM
FASM
FASM (flat assembler) is an assembler for x86 processors. It supports Intel-style assembly language on the IA-32 and x86-64 computer architectures. It claims high speed, size optimizations, operating system (OS) portability, and macro abilities. It is a low-level assembler and intentionally uses very few command-line options. It is free and open-source software. All versions of FASM can directly output any of the following: flat "raw" binary (usable also as MS-DOS COM executable or SYS driver), objects: Executable and Linkable Format (ELF) or Common Object File Format (COFF) (classic or MS-specific), or executables in either MZ, ELF, or Portable Executable (PE) format (including WDM drivers, allows custom MZ DOS stub). An unofficial port targeting the ARM architecture (FASMARM) also exists. History The project was started in 1999 by Tomasz Grysztar, a.k.a. Privalov, at that time an undergraduate student of mathematics from Poland. It was released publicly in March 2000. FASM is completely written in assembly language and comes with full source. It is self-hosting and has been able to assemble itself since version 0.90 (May 4, 1999). FASM originally ran in 16-bit flat real mode. 32-bit support was added and then supplemented with optional DPMI support. Designed to be easy to port to any operating system with flat 32-bit addressing, it was ported to Windows, then Linux. Design FASM does not support as many high-level statements as MASM or TASM. It provides syntax features and macros, which make it possible to customize or create missing statements. Its memory-addressing syntax is similar to TASM's ideal mode and NASM. Brackets are used to denote memory operands as in both assemblers, but their size is placed outside the brackets, like in NASM. FASM is a multi-pass assembler. It makes extensive code-size optimization and allows unconstrained forward referencing. An unusual FASM construct is defining procedures only if they are used somewhere in the code, something that in most languages is done per-object by the linker. FASM is based on the "same source, same output" principle: the contents of the resulting file are not affected by the command line. Such an approach saves FASM sources from compiling problems often present in many assembly projects. On the other hand, it makes it harder to maintain a project that consists of multiple separately compiled source files or mixed-language projects. However, there exists a Win32 wrapper called FA, which mitigates this problem. FASM projects can be built from one source file directly into an executable file without a linking stage. IDE Fresh, an internet community supported project started by John Found, is an integrated development environment for FASM. Fresh currently supports Microsoft Windows and Linux. Use Operating systems written with FASM: MenuetOS – 32- and 64-bit GUI operating systems by Ville Turijanmaa KolibriOS Compilers that use FASM as a backend: PureBasic High Level Assembly (HLA) BlitzMax See also Comparison of assemblers References External links FASM project: FASMLIB 0.8.0 – portable 32-bit x86 asm lib for FASM/MASM/YASM/NASM/GASM FASMARM – FASM for ARM processors, v1.27, The Fresh IDE 2000 software Assemblers DOS software Free software primarily written in assembly language Linux programming tools Programming tools for Windows Self-hosting software Unix programming tools
26787800
https://en.wikipedia.org/wiki/ASi-Profile
ASi-Profile
ASi-Profile is a 3D-CAD add-on application for Autodesk Inventor developed by the company ITB Paul Schneider. Uses The software aims to extend the field of application of Autodesk Inc.'s CAD software Autodesk Inventor, which is typically used for mechanical design in mechanical engineering and plant engineering. ASi-Profile enables the user to create supporting structures, substructures, control and maintenance platforms, stairways, barriers and similar elements that are typical of structural steelwork. Use in mechanical engineering, plant engineering Conventional 3D CAD systems for mechanical engineering usually lack functions for creating steel structures. As an add-on application, the software aims to fill the gap between mechanical engineering and structural steelwork, enabling the designer, for example, to create a supporting frame for a machine or the required maintenance platform in the same software. Use in locksmithing, metal fabrication In locksmithing and metal fabrication, the task is often to plan and design stairways, platforms, balconies, railings and similar items, with the intention of creating a customized, individual structure with a particular focus on its design. ASi-Profile is used to simplify the construction process and to create realistic views of the structures. Distribution, Platforms The software is distributed worldwide by local dealers, who provide user training and support. Available for: Base software: Autodesk Inventor 5 to 2016. Operating systems: Windows 7 and Windows 8/8.1 (32- and 64-bit). Version history 2000: first version 1.0.0 for Inventor 5; 2009: version 9.0.x for Inventor 2010; 2010: version 10.0.x for Inventor 2011; 2011: version 12.0.x for Inventor 2012; 2012: version 13.2.n for Inventor 2013; 2013: version 14.1.n for Inventor 2014; 2014: version 15.0.n for Inventor 2015; 2015: version 16.0.n for Inventor 2016. Notes External links Official Website Aertist Website 3D graphics software
209935
https://en.wikipedia.org/wiki/University%20of%20Birmingham
University of Birmingham
The University of Birmingham (informally Birmingham University) is a public research university located in Edgbaston, Birmingham, United Kingdom. It received its royal charter in 1900 as a successor to Queen's College, Birmingham (founded in 1825 as the Birmingham School of Medicine and Surgery), and Mason Science College (established in 1875 by Sir Josiah Mason), making it the first English civic or 'red brick' university to receive its own royal charter. It is a founding member of both the Russell Group of British research universities and the international network of research universities, Universitas 21. The student population includes undergraduate and postgraduate students, which is the largest in the UK (out of ). The annual income of the university for 2020–21 was £774.1 million of which £168.3 million was from research grants and contracts, with an expenditure of £738.5 million. The university is home to the Barber Institute of Fine Arts, housing works by Van Gogh, Picasso and Monet; the Shakespeare Institute; the Cadbury Research Library, home to the Mingana Collection of Middle Eastern manuscripts; the Lapworth Museum of Geology; and the 100-metre Joseph Chamberlain Memorial Clock Tower, which is a prominent landmark visible from many parts of the city. Academics and alumni of the university include former British Prime Ministers Neville Chamberlain and Stanley Baldwin, the British composer Sir Edward Elgar and eleven Nobel laureates. History Queen's College The earliest beginnings of the university were originally traced back to the Queen's College, which is linked to William Sands Cox in his aim of creating a medical school along strictly Christian lines, unlike the contemporary London medical schools. Further research revealed the roots of the Birmingham Medical School in the medical education seminars of John Tomlinson, the first surgeon to the Birmingham Workhouse Infirmary, and later to the Birmingham General Hospital. These classes, held in the winter of 1767–68, were the first such lectures ever held in England or Wales. The first clinical teaching was undertaken by medical apprentices at the General Hospital, founded in 1779. The medical school which grew out of the Birmingham Workhouse Infirmary was founded in 1828, but Cox began teaching in December 1825. Queen Victoria granted her patronage to the Clinical Hospital in Birmingham and allowed it to be styled "The Queen's Hospital". It was the first provincial teaching hospital in England. In 1843, the medical college became known as Queen's College. Mason Science College In 1870, Sir Josiah Mason, the Birmingham industrialist and philanthropist, who made his fortune in making key rings, pens, pen nibs and electroplating, drew up the Foundation Deed for Mason Science College. The college was founded in 1875. It was this institution that would eventually form the nucleus of the University of Birmingham. In 1882, the Departments of Chemistry, Botany and Physiology were transferred to Mason Science College, soon followed by the Departments of Physics and Comparative Anatomy. The transfer of the Medical School to Mason Science College gave considerable impetus to the growing importance of that college and in 1896 a move to incorporate it as a university college was made. As the result of the Mason University College Act 1897 it became incorporated as Mason University College on 1 January 1898, with Joseph Chamberlain becoming the President of its Court of Governors. 
Royal charter It was largely due to Chamberlain's enthusiasm that the university was granted a royal charter by Queen Victoria on 24 March 1900. The Calthorpe family offered twenty-five acres (10 hectares) of land on the Bournbrook side of their estate in July. The Court of Governors received the Birmingham University Act 1900, which put the royal charter into effect on 31 May. The transfer of Mason University College to the new University of Birmingham, with Chamberlain as its first chancellor and Sir Oliver Lodge as the first principal, was complete. A remnant of Josiah Mason's legacy is the Mermaid from his coat-of-arms, which appears in the sinister chief of the university shield and of his college, the double-headed lion in the dexter. The commerce faculty was founded by Sir William Ashley in 1901, who from 1902 until 1923 served as first Professor of Commerce and Dean of the Faculty. From 1905 to 1908, Edward Elgar held the position of Peyton Professor of Music at the university. He was succeeded by his friend Granville Bantock. The university's own heritage archives are accessible for research through the university's Cadbury Research Library which is open to all interested researchers. During the First World War, the Great Hall in the Aston Webb Building was requisitioned by the War Office to create the 1st Southern General Hospital, a facility for the Royal Army Medical Corps to treat military casualties; it was equipped with 520 beds and treated 125,000 injured servicemen. In June 1921, the university appointed Linetta de Castelvecchio as Serena Professor of Italian: she was the first woman to hold a chair at the university and one of the first women professors in Great Britain. Expansion In 1939, the Barber Institute of Fine Arts, designed by Robert Atkinson, was opened. In 1956, the first MSc programme in Geotechnical Engineering commenced under the title of "Foundation Engineering", and has been run annually at the university since. The UK's longest-running MSc programme in Physics and Technology of Nuclear Reactors also started at the university in 1956, the same year that the world's first commercial nuclear power station was opened at Calder Hall in Cumbria. In 1957, Sir Hugh Casson and Neville Conder were asked by the university to prepare a masterplan on the site of the original 1900 buildings which were incomplete. The university drafted in other architects to amend the masterplan produced by the group. During the 1960s, the university constructed numerous large buildings, expanding the campus. In 1963, the university helped in the establishment of the faculty of medicine at the University of Rhodesia, now the University of Zimbabwe (UZ). UZ is now independent but both institutions maintain relations through student exchange programmes. Birmingham also supported the creation of Keele University (formerly University College of North Staffordshire) and the University of Warwick under the Vice-Chancellorship of Sir Robert Aitken who acted as 'godfather' to the University of Warwick. The initial plan was to establish a satellite university college in Coventry but Aitken advised an independent initiative to the University Grants Committee. Malcolm X, the Afro-American human rights activist, addressed the University Debating Society in 1965. Scientific discoveries and inventions The university has been involved in many scientific breakthroughs and inventions. From 1925 until 1948, Sir Norman Haworth was Professor and Director of the Department of Chemistry. 
He was appointed Dean of the Faculty of Science and acted as Vice-Principal from 1947 until 1948. His research focused predominantly on carbohydrate chemistry in which he confirmed a number of structures of optically active sugars. By 1928, he had deduced and confirmed the structures of maltose, cellobiose, lactose, gentiobiose, melibiose, gentianose, raffinose, as well as the glucoside ring tautomeric structure of aldose sugars. His research helped to define the basic features of the starch, cellulose, glycogen, inulin and xylan molecules. He also contributed towards solving the problems with bacterial polysaccharides. He was a recipient of the Nobel Prize in Chemistry in 1937. The cavity magnetron was developed in the Department of Physics by Sir John Randall, Harry Boot and James Sayers. This was vital to the Allied victory in World War II. In 1940, the Frisch–Peierls memorandum, a document which demonstrated that the atomic bomb was more than simply theoretically possible, was written in the Physics Department by Sir Rudolf Peierls and Otto Frisch. The university also hosted early work on gaseous diffusion in the Chemistry department when it was located in the Hills building. Physicist Sir Mark Oliphant made a proposal for the construction of a proton-synchrotron in 1943, however he made no assertion that the machine would work. In 1945, phase stability was discovered; consequently, the proposal was revived, and construction of a machine that could surpass proton energies of 1 GeV began at the university. However, because of lack of funds, the machine did not start until 1953. The Brookhaven National Laboratory managed to beat them; they started their Cosmotron in 1952, and had it entirely working in 1953, before the University of Birmingham. In 1947, Sir Peter Medawar was appointed Mason Professor of Zoology at the university. His work involved investigating the phenomenon of tolerance and transplantation immunity. He collaborated with Rupert E. Billingham and they did research on problems of pigmentation and skin grafting in cattle. They used skin grafting to differentiate between monozygotic and dizygotic twins in cattle. Taking the earlier research of R. D. Owen into consideration, they concluded that actively acquired tolerance of homografts could be artificially reproduced. For this research, Medawar was elected a Fellow of the Royal Society. He left Birmingham in 1951 and joined the faculty at University College London, where he continued his research on transplantation immunity. He was a recipient of the Nobel Prize in Physiology or Medicine in 1960. Recent history In 1999 talks commenced on the possibility of Aston University integrating itself into the University of Birmingham as the University of Birmingham, Aston Campus. This would have resulted in the University of Birmingham expanding to become one of the largest universities in the UK, with a student body of 30,000. Talks were halted in 2001 after Aston University determined the timing to be inopportune. While Aston University management was in favour of the integration, and reception among staff was generally positive, the Aston student union voted two-to-one against the integration. Despite this set back, the Vice Chancellor of the University of Birmingham said the door remained open to recommence talks when Aston University is ready. The final round of the first ever televised leaders' debates, hosted by the BBC, was held at the university during the 2010 British general election campaign on 29 April 2010. 
On 9 August 2010 the university announced that for the first time it would not enter the UCAS clearing process for 2010 admission, which matches under-subscribed courses to students who did not meet their firm or insurance choices, due to all places being taken. Largely a result of the financial crisis of 2007–2010, Birmingham joined fellow Russell Group universities including Oxford, Cambridge, Edinburgh and Bristol in not offering any clearing places. The university acted as a training camp for the Jamaican athletics team prior to the 2012 London Olympics. A new library was opened for the 2016/17 academic year, and a new sports centre opened in May 2017. The previous Main Library and the old Munrow Sports Centre, including the athletics track, have both since been demolished, with the demolition of the old library being completed in November 2017. Controversies The discipline of cultural studies was founded at the university and between 1964 and 2002 the campus was home to the Centre for Contemporary Cultural Studies, a research centre whose members' work came to be known as the Birmingham School of Cultural Studies. Despite being established by one of the key figures in the field, Richard Hoggart, and being later directed by the theorist Stuart Hall, the department was controversially closed down. Analysis showed that the university was fourth in a list of British universities that faced the most employment tribunal claims between 2008 and 2011. They were the second most likely to settle these before the hearing date. In 2011 a parliamentary early day motion was proposed, arguing against the Guild suspending the elected Sabbatical Vice President (Education), who was arrested while taking part in protest activity. In December 2011 it was announced that the university had obtained a 12-month-long injunction against a group of around 25 students, who occupied a residential building on campus from 23 to 26 November 2011, preventing them from engaging in further "occupational protest action" on the university's grounds without prior permission. It was misreported in the press that this injunction applied to all students, however the court order defines the defendants as: Persons unknown (including students of the University of Birmingham) entering or remaining upon the buildings known as No. 2 Lodge Pritchatts Road, Birmingham at the University of Birmingham for the purpose of protest action (without the consent of the University of Birmingham). The university and the Guild of Students also clarified the scope of the injunction in an e-mail sent to all students on 11 January 2012, stating: "The injunction applies only to those individuals who occupied the lodge". The university said that it sought this injunction as a safety precaution based on a previous occupation. Three separate human rights groups, including Amnesty International, condemned the move as restrictive on human rights. In 2019 several women said the university refused to investigate allegations of campus rape. One student who complained of rape in university accommodation was told by employees of the university that there were no specific procedures for handling rape complaints. In other cases students were told they would have to prove the alleged rapes occurred on university property. The university has been criticized by legal professionals for not adequately assessing the risk to students by refusing to investigate complaints of criminal conduct. 
Campuses Edgbaston campus Original buildings The main campus of the university occupies a site some south-west of Birmingham city centre, in Edgbaston. It is arranged around Joseph Chamberlain Memorial Clock Tower (affectionately known as 'Old Joe' or 'Big Joe'), a grand campanile which commemorates the university's first chancellor, Joseph Chamberlain. Chamberlain may be considered the founder of Birmingham University, and was largely responsible for the university gaining its Royal Charter in 1900 and for the development of the Edgbaston campus. The university's Great Hall is located in the domed Aston Webb Building, which is named after one of the architects – the other was Ingress Bell. The initial site was given to the university in 1900 by Lord Calthorpe. The grand buildings were an outcome of the £50,000 given by steel magnate and philanthropist Andrew Carnegie to establish a "first class modern scientific college" on the model of Cornell University in the United States. Funding was also provided by Sir Charles Holcroft. The original domed buildings, built in Accrington red brick, semicircle to form Chancellor's Court. This sits on a drop, so the architects placed their buildings on two tiers with a drop between them. The clock tower stands in the centre of the Court. The campanile itself draws its inspiration from the Torre del Mangia, a medieval clock tower that forms part of the Town Hall in Siena, Italy. When it was built, it was described as 'the intellectual beacon of the Midlands' by the Birmingham Post. The clock tower was Birmingham's tallest building from the date of its construction in 1908 until 1969; it is now the third highest in the city. It is one of the top 50 tallest buildings in the UK, and the tallest free-standing clock tower in the world, although there is some confusion about its actual height, with the university listing it both as and tall in different sources. The campus has a wide diversity in architectural types and architects. "What makes Birmingham so exceptional among the Red Brick universities is the deployment of so many other major Modernist practices: only Oxford and Cambridge boast greater selections". The Guild of Students original section was designed by Birmingham inter-war architect Holland Hobbiss who also designed the King Edward's School opposite. It was described as "Redbrick Tudorish" by Nikolaus Pevsner. The statue on horseback fronting the entrance to the university and Barber Institute of Fine Arts is a 1722 statue of George I rescued from Dublin in 1937. This was saved by Bodkin, a director of the National Gallery of Ireland and first director of the Barber Institute. The statue was commissioned by the Dublin Corporation from the Flemish sculptor John van Nost. Final negotiations for part of what is now the Vale were only completed in March 1947. By then, properties which would have their names used for halls of residences such as Wyddrington and Maple Bank were under discussion and more land was obtained from the Calthorpe estate in 1948 and 1949 providing the setting for the Vale. Landscape architect Mary Mitchell designed the layout of the campus and she included mature trees that were retained from the former gardens. Construction on the Vale started in 1962 with the creation of a artificial lake and the building of Ridge, High, Wyddrington and Lake Halls. The first, Ridge Hall, opened for 139 women in January 1964, with its counterpart High Hall admitting its first male residents the following October. 
1960s and modern expansion The university underwent a major expansion in the 1960s due to the production of a masterplan by Casson, Conder and Partners. The first of the major buildings to be constructed to a design by the firm was the Refectory and Staff House which was built in 1961 and 1962. The two buildings are connected by a bridge. The next major buildings to be constructed were the Wyddrington and Lake Halls and the Faculty of Commerce and Social Science, all completed in 1965. The Wyddrington and Lake Halls, on Edgbaston Park Road, were designed by H. T. Cadbury-Brown and contained three floors of student dwellings above a single floor of communal facilities. The Faculty of Commerce and Social Science, now known as the Ashley Building, was designed by Howell, Killick, Partridge and Amis and is a long, curving two-storey block linked to a five-storey whorl. The two-storey block follows the curve of the road, and has load-bearing brick cross walls. It is faced in specially-made concrete blocks. The spiral is faced with faceted pre-cast concrete cladding panels. It was statutorily listed in 1993 and a refurbishment by Berman Guedes Stretton was completed in 2006. Chamberlain, Powell and Bon were commissioned to design the Physical Education Centre which was built in 1966. The main characteristic of the building is the roof of the changing rooms and small gymnasium which has hyperbolic paraboloid roof light shells and is completely paved in quarry tiles. The roof of the sports hall consists of eight conoidal 2½-inch thick sprayed concrete shells springing from long pre-stressed valley beams. On the south elevation, the roof is supported on raking pre-cast columns and reversed shells form a cantilevered canopy. Also completed in 1966 was the Mining and Minerals Engineering and Physical Metallurgy Departments, which was designed by Philip Dowson of Arup Associates. This complex consisted of four similar three-storey blocks linked at the corners. The frame is of pre-cast reinforced concrete with columns in groups of four and the whole is planned as a tartan grid, allowing services to be carried vertically and horizontally so that at no point in a room are services more than ten feet away. The building received the 1966 RIBA Architecture Award for the West Midlands. It was statutorily listed in 1993. Taking the full five years from 1962 to 1967, Birmingham erected twelve buildings which each cost in excess of a quarter of a million pounds. In 1967, Lucas House, a new hall of residence designed by The John Madin Design Group, was completed, providing 150 study bedrooms. It was constructed in the garden of a large house. The Medical School was extended in 1967 to a design by Leonard J. Multon and Partners. The two-storey building was part of a complex which covers the southside of Metchley Fort, a Roman fort. In 1968, the Institute for Education in the Department for Education was opened. This was another Casson, Conder and Partners-designed building. The complex consisted of a group of buildings centred around an eight-storey block, containing study offices, laboratories and teaching rooms. The building has a reinforced concrete frame which is exposed internally and the external walls are of silver-grey rustic bricks. The roofs of the lecture halls, penthouse and Child Study wing are covered in copper. Arup Associates returned in the 1960s to design the Arts and Commerce Building, better known as Muirhead Tower and houses the Institute of Local Government Studies. 
This was completed in 1969. A £42 million refurbishment of the 16-storey tower was completed in 2009 and it now houses the Colleges of Social Sciences and the Cadbury Research Library, the new home for the university's Special Collections. The podium was remodelled around the existing Allardyce Nicol studio theatre, providing additional rehearsal spaces and changing and technical facilities. The ground floor lobby now incorporates a Starbucks coffee shop. The name, Muirhead Tower, came from that of the first philosophy professor of the university John Henry Muirhead. Recently completed is a 450-seat concert hall, called the Bramall Music Building, which completes the redbrick semicircle of the Aston Webb building designed by Glenn Howells Architects with venue design by Acoustic Dimensions. This auditorium, with its associated research, teaching and rehearsal facilities, houses the Department of Music. In August 2011 the university announced that architects Lifschutz Davidson Sandilands and S&P were appointed to develop a new Indoor Sports Centre as part of a £175 million investment in the campus. Other features In 1978, University station, on the Cross-City Line, was opened to serve the university and its hospital. It is the only university campus in mainland Britain with its own railway station. Nearby, the Steampipe Bridge, which was constructed in 2011, transports steam across the Cross-City Railway Line and Worcester & Birmingham Canal from the energy generation plant to the medical school as part of the university's sustainable energy strategy. Its laser-cut exterior is also a public art feature. Located within the Edgbaston site of the university is the Winterbourne Botanic Garden, a 24,000 square metre (258,000 square foot) Edwardian Arts and Crafts style garden. The large statue in the foreground was a gift to the university by its sculptor Sir Edward Paolozzi – the sculpture is named 'Faraday', and has an excerpt from the poem 'The Dry Salvages' by T. S. Eliot around its base. The University of Birmingham operates the Lapworth Museum of Geology in the Aston Webb Building in Edgbaston. It is named after Charles Lapworth, a geologist who worked at Mason Science College. Since November 2007, the university has been holding a farmers' market on the campus. Birmingham is the first university in the country to have an accredited farmers' market. The considerable extent of the estate meant that by the end of the 1990s it was valued at £536 million. University of Birmingham marked its grand ending of Green Heart Project at the start of 2019. Selly Oak campus The university's Selly Oak campus is a short distance to the south of the main campus. It was the home of a federation of nine colleges, known as Selly Oak Colleges, mainly focused on theology, social work, and teacher training. The Federation was for many years associated with the University of Birmingham. A new library, the Orchard Learning Resource Centre, was opened in 2001, shortly before the Federation ceased to exist. The OLRC is now one of Birmingham University's site libraries. Among the Selly Oak Colleges was Westhill College, (later the University of Birmingham, Westhill), which merged with the university's School of Education in 2001. In the following years most of the remaining colleges closed, leaving two colleges which continue today, Woodbrooke College, a study and conference centre for the Society of Friends, and Fircroft College, a small adult education college with residential provision. 
Woodbrooke College's Centre for Postgraduate Quaker Studies, established in 1998, works with the University of Birmingham to deliver research supervision for the degrees of MA by research and PhD. The Selly Oak campus is now home to the Department of Drama and Theatre Arts in the newly refurbished Selly Oak Colleges Old Library and George Cadbury Hall 200-seat theatre. The UK daytime television show Doctors is filmed on this campus. The University of Birmingham School occupies a brand new, purpose-built building located on the university's Selly Oak campus. The University of Birmingham School is sponsored by the University of Birmingham and managed by an Academy Trust. The University of Birmingham School opened in September 2015. Mason College and Queen's College campus The Victorian neo-gothic Mason College Building in Birmingham city centre housed Birmingham University's Faculties of Arts and Law for over 50 years after the founding of the university in 1900. The Faculty of Arts building on the Edgbaston campus was not constructed until 1959–61. The Faculties of Arts and Law then moved to the Edgbaston campus. The original Mason College Building was demolished in 1962 as part of the redevelopment within the inner ring road. The 1843 Gothic Revival building constructed opposite the Town Hall between Paradise Street (the main entrance) and Swallow Street served as Queen's College, one of the founder colleges of the university. In 1904 the building was given a new buff-coloured terracotta and brick front. The medical and scientific departments merged with Mason College in 1900 to form the University of Birmingham and sought new premises in Edgbaston. The theological department of Queen's College did not merge with Mason College, but later moved in 1923 to Somerset Road in Edgbaston, next to the University of Birmingham as the Queen's Foundation, maintaining a relationship with the University of Birmingham until a 2010 review. In the mid 1970s, the original Queen's College building was demolished, with the exception of the grade II listed façade. Organisation and administration Academic departments Birmingham has departments covering a wide range of subjects. On 1 August 2008, the university's system was restructured into five 'colleges', which are composed of numerous 'schools': Arts and Law (English, Drama and Creative Studies; History and Cultures; Languages, Cultures, Art History and Music; Birmingham Law School; Philosophy, Theology and Religion) Engineering and Physical Sciences (Chemistry; Chemical Engineering; Computer Science; Engineering (comprising the Departments of civil, Mechanical and Electrical, Electronic & Systems Engineering); Mathematics; Metallurgy and Materials; Physics and Astronomy) Life and Environmental Sciences (Biosciences; Geography, Earth and Environmental Sciences; Psychology; Sport and Exercise Sciences) Medical and Dental Sciences (Institute of Cancer and Genomic Sciences; Institute of Clinical Sciences; Institute of Inflammation and Ageing; Institute of Applied Health Research; Institute of Cardiovascular Science; Institute of Immunology and Immunotherapy; Institute of Metabolism and Systems Research; Institute of Microbiology and Infection). 
Social Sciences (Birmingham Business School; Education; Government and Society; Social Policy) Liberal Arts and Sciences The university is home to a number of research centres and schools, including the Birmingham Business School, the oldest business school in England, the University of Birmingham Medical School, the International Development Department, the Institute of Local Government Studies, the Centre of West African Studies, the Centre for Russian and East European Studies, the Centre of Excellence for Research in Computational Intelligence and Applications and the Shakespeare Institute. An Institute for Research into Superdiversity was established in 2013. Apart from traditional research and PhDs, under the department of Engineering and Physical Sciences, the university offers split-site PhD in Computer Science. The university is also home to the Birmingham Solar Oscillations Network (BiSON) which consists of a network of six remote solar observatories monitoring low-degree solar oscillation modes. It is operated by the High Resolution Optical Spectroscopy group of the School of Physics and Astronomy, funded by the Science and Technology Facilities Council (STFC). International Development Department The International Development Department (IDD) is a multi-disciplinary academic department focused on poverty reduction through developing effective governance systems. The department is one of the leading UK centres for the postgraduate study of international development. The department has been described as being a "highly regarded, long-established specialist unit" with a "global reputation" by The Independent. Careers Network The University of Birmingham careers advisory service has been called Careers Network since 2012. Key people include: Eluned Jones, Director of Student Employability; Sophie Miller, Deputy Director - Guidance & Information; Sue Welland, Deputy Director - External Engagement. Off-campus establishments A number of the university's centres, schools and institutes are located away from its two campuses in Edgbaston and Selly Oak: The Shakespeare Institute, in Stratford-upon-Avon, which is a centre for postgraduate study dedicated to the study of William Shakespeare and the literature of the English Renaissance. The Ironbridge Institute, in Ironbridge, which offers postgraduate and professional development courses in heritage. The School of Dentistry (the UK's oldest dental school), in Birmingham City Centre. The Raymond Priestley Centre, near Coniston in the Lake District, which is used for outdoor pursuits and field work. There is also a Masonic Lodge that has been associated with the university since 1938. University of Birmingham Observatory In the early 1980s, the University of Birmingham constructed an observatory next to the university playing fields, approximately south of the Edgbaston campus. The site was chosen because the night sky was ~100 times darker than the skies above campus. First light was on 8 December 1982, and the Observatory was officially opened by the Astronomer Royal, Francis Graham-Smith, on 13 June 1984. The observatory was upgraded in 2013. The Observatory is used primarily for undergraduate teaching. It has two main instruments, a 16" Cassegrain (working at f/19) and a 14" Meade LX200R (working at f/6.35). A third telescope is also present and is used exclusively for visual observations. Members of the public are given chance to visit the Observatory at regular Astronomy in the City events during the winter months. 
These events include a talk on the night sky from a member of the university's student Astronomical Society; a talk on current astrophysics research, such as exoplanets, galaxy clusters or gravitational-wave astronomy; a question-and-answer session; and the chance to observe using telescopes both on campus and at the Observatory. Branding The original coat of arms was designed in 1900. It features a double-headed lion (on the left) and a mermaid holding a mirror and comb (to the right). These symbols derive from the coat of arms of the institution's predecessor, Mason College. In 2005 the university began rebranding itself. A simplified edition of the shield, which had been introduced in the 1980s, was reverted to a detailed version based on how it appears on the university's original Royal Charter. Academic profile Libraries and collections Library Services operates six libraries. They are the Barber Fine Art Library, Barnes Library, Main Library, Orchard Learning Resource Centre, Dental Library, and the Shakespeare Institute Library. Library Services also operates the Cadbury Research Library. The Shakespeare Institute's library is a major United Kingdom resource for the study of English Renaissance literature. The Cadbury Research Library is home to the University of Birmingham's historic collections of rare books, manuscripts, archives, photographs and associated artefacts. The collections, which have been built up over a period of 120 years, consist of over 200,000 rare printed books, including significant incunabula, as well as over 4 million unique archive and manuscript collections. The Cadbury Research Library is responsible for directly supporting the university's research, learning and teaching agenda, along with supporting the national and international research community. The Cadbury Research Library contains the Chamberlain collection of papers from Neville Chamberlain, Joseph Chamberlain and Austen Chamberlain, the Avon Papers belonging to Anthony Eden with material on the Suez Crisis, the Cadbury Papers relating to the Cadbury firm from 1900 to 1960, the Mingana Collection of Middle Eastern Manuscripts of Alphonse Mingana, the Noël Coward Collection, the papers of Edward Elgar, Oswald Mosley, and David Lodge, and the records of the English YMCA and of the Church Missionary Society. The Cadbury Research Library has recently taken in the complete archive of UK Save the Children. The Library holds important first editions such as De Humani Corporis Fabrica (1543) by Vesalius, the Complete Works (1616) of Ben Jonson, two copies of The Temple of Flora (1799–1807) by Robert Thornton and comprehensive collections of the works of Joseph Priestley and D. H. Lawrence, as well as many other significant works. In 2015, a Quranic manuscript in the Mingana Collection was identified as one of the oldest to have survived, having been written between 568 and 645. At the beginning of the 2016/17 academic year, a new main library opened on the Edgbaston campus, and the old library has since been demolished as part of the plans to create a 'Green Heart', as per the original plans for the university whereby the clock tower would be visible from the North Gate. The Harding Law Library was closed and renovated to become the university's Translation and Interpreting Suite. Medicine The University of Birmingham's medical school is one of the largest in Europe, with well over 450 medical students being trained in each of the clinical years and over 1,000 teaching, research, technical and administrative staff. 
The school has centres of excellence in cancer, pharmacy, immunology, cardiovascular disease, neuroscience and endocrinology and is renowned nationally and internationally for its research and developments in these fields. The medical school has close links with the NHS and works closely with 15 teaching hospitals and 50 primary care training practices in the West Midlands. The University Hospital Birmingham NHS Foundation Trust is the main teaching hospital in the West Midlands. It has been given three stars for the past four consecutive years. The trust also hosts the Royal Centre for Defence Medicine, based at Selly Oak Hospital, which provides medical support to military personnel, such as those returned from fighting in the Iraq War. Rankings and reputation The 2022 U.S. News & World Report ranks Birmingham 91st in the world. In 2019, it was ranked 137th among the universities around the world by SCImago Institutions Rankings. In 2021 the Times Higher Education placed Birmingham 12th in the UK. In 2013, Birmingham was crowned 'University of the Year 2014' in the Times Higher Education awards. The 2013 QS World University Rankings placed Birmingham University 10th in the UK and 62nd internationally. Birmingham was ranked 12th in the UK in the 2008 Research Assessment Exercise, with 16 percent of the university's research regarded as world-leading and a further 41 percent as internationally excellent, with particular strengths in the fields of music, physics, biosciences, computer science, mechanical engineering, political science, international relations and law. Course satisfaction was 85% in 2011, growing to 88% in 2012. In 2015 the Complete University Guide placed Birmingham 5th in the UK for graduate prospects, behind only Imperial, St. George's, Cambridge and Bath. Data from the Higher Education Funding Council for England (HEFCE) placed the university amongst the twelve elite institutions which among them take more than half of the students with the highest A-level grades. Owing to Birmingham's role as a centre of light engineering, the university traditionally had a special focus on science, engineering and commerce, as well as coal mining. It now teaches a full range of academic subjects and has a five-star rating for teaching and research in several departments. It is widely regarded as making a prominent contribution to cancer studies, hosting the first Cancer Research UK Centre, and making notable contributions to gravitational-wave astronomy, hosting the Institute of Gravitational Wave Astronomy. The School of Computer Science ranked 1st in the 2014 Guardian University Guide, 4th in the 2013 Sunday Times League Table and 6th in the 2014 Sunday Times League Table. The Department of Philosophy ranked 3rd in the 2017 Guardian University League Tables, below the University of Oxford and above the University of Cambridge, with first place going to the University of St Andrews. The combined course of Computer Science and Information Systems, titled Computer Systems Engineering, was ranked 4th in the 2016 Guardian University Guide. The Department of Political Science and International Studies (POLSIS) ranked 4th in the UK and 22nd in the world in the Hix rankings of political science departments. The sociology department was also ranked 4th by The Guardian University Guide. 
The Research Fortnight's University Power Ranking, based on quality and quantity of research activity, put the University of Birmingham 12th in the UK, leading the way across a broad range of disciplines including Primary Care, Cancer Studies, Psychology and Sport and Exercise Sciences. The School of Physics and Astronomy also performed well in the rankings, being ranked 3rd in the 2012 Guardian University Guide and 7th in The Complete University Guide 2012. The School of Chemical Engineering is ranked second in the UK by the 2014 Guardian University Guide. Admissions In terms of average UCAS points of entrants, Birmingham ranked 25th in Britain in 2014. According to the 2017 Times and Sunday Times Good University Guide, approximately 20% of Birmingham's undergraduates come from independent schools. The university gives offers of admission to 79.2% of its applicants, the 8th highest amongst the Russell Group. In the 2016–17 academic year, the university had a domicile breakdown of 76:5:18 of UK:EU:non-EU students respectively with a female to male ratio of 56:44. Birmingham Heroes To highlight leading areas of research, the university has launched the Birmingham Heroes scheme. Academics who lead research that impacts on the lives of people regionally, nationally and globally can be nominated for selection. Heroes include: Alberto Vecchio and Andreas Freise for their work as part of the LIGO Scientific Collaboration towards the first observation of gravitational waves Martin Freer, Toby Peters and Yulong Ding for their work on energy efficient cooling Philip Newsome, Thomas Solomon and Patricia Lalor for tackling the silent killers, liver disease and diabetes James Arthur, Kristján Kristjánsson, Sandra Cooke and Tom Harrison for promoting character in education Lisa Bortolotti, Ema Sullivan-Bissett and Michael Larkin for their work on how to break down the stigma associated with mental illness Kate Thomas, Joe Alderman, Rima Dhillon and Shayan Ahmed for their research in and teaching of life sciences Pam Kearns, Charlie Craddock and Paul Moss for cancer research Anna Phillips, Glyn Humphreys and Janet Lord who research healthy ageing Pierre Purseigle, Peter Gray and Bob Stone for using their historical knowledge to advise government organisations Paul Bowen and Nick Green for research into new materials to improve energy generation Lynne Macaskie, William Bloss and Jamie Lead for their study of pollutants, particularly nanoscale pollutants Paul Jackson, Scott Lucas and Stefan Wolff for their work helping with post-conflict and advice on the application of aid Hongming Xu, Clive Roberts and Roger Reed for work on sustainable transport Moataz Attallah, Kiran Trehan and Tim Daffron for driving economic growth through improving aerospace engineering, developing enterprise and pioneering industrial applications of synthetic biology Birmingham Fellows The Birmingham Fellowship scheme was launched in 2011. The scheme encourages high potential early career researchers to establish themselves as rounded academics and continue pursuing their research interests. This scheme was the first of its kind, and has since been emulated in several other Russell Group universities across the UK. Since 2014, the scheme has been divided into Birmingham Research Fellowships and Birmingham Teaching Fellowships. Birmingham Fellows are appointed to permanent academic posts (with two or three year probation periods), with five years protected time to develop their research. 
Birmingham Fellows are usually recruited at a lecturer or senior lecturer level. In the first period of the fellowship, emphasis is placed on the research aspect, publishing high quality academic outputs, developing a trajectory for their work and gaining external funding. However, development of teaching skills is encouraged. Teaching and supervisory responsibilities, as well as administrative duties, then steadily increase to a normal lecturer's load in the Fellow's respective discipline by the fifth year of the fellowship. Birmingham Fellows are not expected to carry out academic administration during their term as Fellows, but will do once their posts turn into lectureships (‘three-legged contract’). When accepted into the Birmingham Research Fellowship, Fellows receive a start-up package to develop or continue their research projects, an academic mentor and support for both research and teaching. All fellows are said to become part of the Birmingham Fellows Cohort, which provides them a university-wide network and an additional source of support and mentoring. International cooperation In Germany the University of Birmingham cooperates with the Goethe University in Frankfurt/Main. Both cities are linked by a long-lasting partnership agreement. Student life Guild of Students The University of Birmingham Guild of Students is the university's student union. Originally the Guild of Undergraduates, the institution had its first foundations in the Mason Science College in the centre of Birmingham around 1876. The University of Birmingham itself formally received its Royal Charter in 1900 with the Guild of Students being provided for as a Student Representative Council. It is not known for certain why the name 'Guild of Students' was chosen as opposed to 'Union of Students', however, the Guild shares its name with Liverpool Guild of Students, another 'redbrick university'; both organisations subsequently founded the National Union of Students. The Union Building, the Guild's bricks and mortar presence, was designed by the architect Holland W. Hobbiss. The Guild's official purposes are to represent its members and provide a means of socialising, though societies and general amenities. The university provides the Guild with the Union Building effectively rent free as well as a block grant to support student services. The Guild also runs several bars, eateries, social spaces and social events. The Guild supports a variety of student societies and volunteering projects, roughly around 220 at any one time. The Guild complements these societies and volunteering projects with professional staffed services, including its walk-in Advice and Representation Centre (ARC), Student Activities, Jobs/Skills/Volunteering, Student Mentors in halls, and Community Wardens around Bournbrook. The Guild of Students was where the international volunteering charity InterVol was conceived and developed as a student-led volunteering project; the group currently supports charitable organisations in four developing countries. Another two of the Guild's long-standing societies are Student Advice and Nightline (previously Niteline), which both provide peer-to-peer welfare support. The Guild was one of the first universities in the United Kingdom to publish a campus newspaper, Redbrick, supported financially by the Guild of Students and advertising revenue. 
The Guild undertakes its representative function through its officer group, seven of whom are full-time, on sabbatical from their studies, and ten of whom are part-time and hold their positions whilst still studying. Elections are held yearly, conventionally February, for the following academic year. These officers have regular contact with the university's officer-holders and managers. In theory, the Guild's officers are directed and kept to account over their year in office by Guild Council, an 80-seat decision-making body. The Guild also supports the university "student reps" scheme, which aims to provide an effective channel of feedback from students on more of a departmental level. Sport The university provides sports and fitness facilities for students and the community to use with a membership. Such facilities include a gym, a dojo, a climbing wall and outdoor football pitches. As of the 2019 league, the university is ranked seventh in the British Universities and Colleges Sport league table. University of Birmingham Sport provides a range of competitive and participation sports, for both the student and local community. Services include 180 fitness classes a week, 56 different sport clubs, including rowing, basketball, cricket, football, rugby union, netball, field hockey, American football, and triathlon. The wide selection has ensured the university has over 4,000 students participating in sport each year. The university also opened the Raymond Priestley Centre in 1981 on the shores of Coniston Water in the Lake District, offering students, staff and community alike to explore outdoor activities and learning in the area. In the 2018 Commonwealth Games in Gold Coast, Australia, six students and eighteen alumni attended, and Birmingham was selected as the next host city for the Birmingham 2022 Commonwealth Games. The university is set to host hockey and squash competitions on campus for the 2022 games, and provide a village for the athletes. Sir Raymond Priestley, vice-chancellor of the university in 1938, and his Director of Sport A.D. Munrow, helped establish the first undergraduate courses in Physical Education in 1946, developed their sports facilities – starting with the gymnasium in 1939, and made participation in recreational sport compulsory for all new undergraduates from 1940 to 1968. Birmingham became the first UK university to offer a sports degree. Many University of Birmingham students and alumni have competed at Olympics, Paralympics and Commonwealth Games. In 2004, six graduates and one student competed in the 2004 Athens Summer Olympics, and four alumni competed at the 2008 Beijing Olympics, including cyclist Paul Manning who won an Olympic Gold. In 2012, Pamela Relph MBE was part of the rowing mixed coxed four that won Paralympic gold, and she successfully defended her title in Rio: the only current international para-rower to be a double Paralympic Champion. The university hosted the Jamaican track and field team prior to the 2012 London Olympics. The team, including the world's fastest man, Usain Bolt – who became the first man in history to defend his 100 metres and 200 metres titles at the Olympics – won team gold for the along with Nesta Carter, Yohan Blake and Michael Frater. Shelly-Ann Fraser-Pryce won gold in the women's 100 metres. The team returned to the university in 2017 to prepare for London's Indoor Championships, staying in the Chamberlain Hall on the Vale Village, and using the newly established Sport & Fitness facilities and athletics track. 
University of Birmingham Sport has since been host to a number of international teams; the Australian and South Africa teams ahead of the men's Rugby World Cup in 2015, the Jamaican, England and New Zealand netball teams before the Vitality Nations Cup in January 2020, and 19 nations for the individual competition at the World University Squash Championships in 2018. University of Birmingham Sport also offers around 30 scholarships and bursaries to national and international students of exceptional athletic ability. Housing The university provides housing for most first-year students, running a guarantee scheme for all those UK applicants who choose Birmingham as their firm UCAS choice. 90 per cent of university-provided housing is inhabited by first-year students. The university maintained gender-segregated halls until 1999 when Lake and Wyddrington "halls" (treated as two different halls, despite being physically one building) were renamed as Shackleton Hall. Chamberlain Hall (Eden Tower), a seventeen-storey tower block, was originally known as High Hall, for male students, and the connected Ridge Hall (later renamed to the Hampton Wing), for female students. University House was decommissioned as accommodation to house the expanding Business School, while Mason Hall has been demolished and rebuilt, opening in 2008. In the summer of 2006, the university sold three of its most distant halls (Hunter Court, the Beeches and Queens Hospital Close) to private operators, while later in the year and during term, the university was forced urgently to decommission both the old Chamberlain Tower (High Hall) and also Manor House over fire safety inspection failures. The university has rebranded its halls offerings into three villages. Vale Village The Vale Village includes Chamberlain Hall, Shackleton, Maple Bank, Tennis Court, Elgar Court and Aitken residences. A sixth hall of residence, Mason Hall, re-opened in September 2008 following a complete rebuild. Approximately 2,700 students live in the village. Shackleton Hall (originally Lake Hall, for male students, and Wyddrington Hall, for female students) underwent an £11 million refurbishment and was re-opened in Autumn 2004. There are 72 flats housing a total of 350 students. The majority of the units consist of six to eight bedrooms, together with a small number of one, two, three or five bedroom studio/apartments. The redevelopment was designed by Birmingham-based architect Patrick Nicholls while employed at Aedas, now a director of Glancy Nicholls Architects. Maple Bank was refurbished and opened in summer 2005. It consists of 87 five bedroom flats, housing 435 undergraduates. The Elgar Court residence consists of 40 six bedroom flats, housing a total of 236 students. It opened in September 2003. Tennis Court consists of 138 three, four, five and six bedroom flats and houses 697 students. The Aitken wing is a small complex consisting of 23 six and eight bedroom flats. It houses 147 students. Construction of the new Mason Hall commenced in June 2006 following complete demolition of the original 1960s structures. It was designed by Aedas Architects. The entire project is thought to have cost £36.75 million. It has since been completed, with the first year of students moving in September 2008. The new Chamberlain Tower and neighbouring low rise blocks opened in September 2015. Chamberlain is home to more than 700 first year students. 
It replaced the old 1964-built 18-storey (above ground level) High Hall (later renamed Eden Tower), for male students and low rise Ridge Hall (later renamed Hampton Wing) for female students, which closed in 2006. The 50-year-old Eden Tower was removed at the start of 2014. Previously known as High Hall, the tower and its associated low rise blocks were demolished after studies revealed it would be uneconomical to refurbish them and would not provide the quality of accommodation which the University of Birmingham desires for students. The largest student-run event, the Vale Festival or 'ValeFest', is held annually on the Vale. The Festival celebrated its 10th event in 2014, raising £25,000 for charity. The 2019 event was headlined by The Hunna and Saint Raymond. Pritchatts Park Village The Pritchatts Park Village houses over 700 undergraduate and postgraduate students. Halls include 'Ashcroft', 'The Spinney' and 'Oakley Court', as well as 'Pritchatts House' and the 'Pritchatts Road Houses'. The Spinney is a small complex of six houses and twelve smaller flats, housing 104 students in total. Ashcroft consists of four purpose built blocks of flats and houses 198 students. The four-storey Pritchatts House consists of 24 duplex units and houses 159 students. Oakley Court consists of 21 individual purpose-built flats, ranging in size from five to thirteen bedrooms. Also included are 36 duplex units. A total of 213 students are housed in Oakley Court, made up of undergraduates. Oakley Court was completed in 1993 at a cost of £2.9 million. It was designed by Birmingham-based Associated Architects. Pritchatts Road is a group of four private houses that were converted into student residences. There is a maximum of 16 bedrooms per house. Selly Oak Village Selly Oak Village consists of three residences in the Selly Oak and Bournbrook areas: Jarratt Hall, which is owned by the university, Douper Hall, and The Metalworks. As of 2008, the village had 637 bed spaces for students. Jarratt Hall is a large complex designed around a central courtyard and three landscaped areas. It housed 587 undergraduate students as of 2012. Jarratt Hall did not accommodate postgraduate students until September 2013, due to ongoing refurbishment of kitchens and the heating system. Student Housing Co-operative accommodation Birmingham Student Housing Co-operative was opened in 2014 by students of the university to provide affordable self managed housing for its members. The co-operative manages a property on Pershore Road in Selly Oak. Notable people Academics The faculty and staff members connected with the university include Nobel laureates Sir Norman Haworth (Professor of Chemistry, 1925–1948), Sir Peter Medawar (Mason Professor of Zoology, 1947–1951), John Robert Schrieffer (NSF Fellow at Birmingham, 1957), David Thouless, Michael Kosterlitz, and Sir Fraser Stoddart. Physicists include John Henry Poynting, Freeman Dyson, Sir Otto Frisch, Sir Rudolf Peierls, Sir Marcus Oliphant, Sir Leonard Huxley, Harry Boot, Sir John Randall, and Edwin Ernest Salpeter. Chemists include Sir William A. Tilden. Mathematicians include Jonathan Bennett, Henry Daniels, Daniela Kühn, Deryk Osthus, Daniel Pedoe and G. N. Watson. In music, faculty members include the composers Sir Edward Elgar and Sir Granville Bantock. Geologists include Charles Lapworth, Frederick Shotton, and Sir Alwyn Williams. In medicine, faculty members include Sir Melville Arnott and Sir Bertram Windle. 
Author and literary critic David Lodge taught English from 1960 until 1987. Poet and playwright Louis MacNeice was a lecturer in classics 1930–1936. English novelist, critic, and man of letters Anthony Burgess taught in the extramural department (1946–50). Richard Hoggart founded the Centre for Contemporary Cultural Studies. Sir Alan Walters was Professor of Econometrics and Statistics (1951–68) and later became Chief Economic Adviser to Prime Minister Margaret Thatcher. Lord Zuckerman was Professor of Anatomy 1946–1968 and also served as chief scientific adviser to the British government from 1964 to 1971. Lord King of Lothbury was a Professor in the Faculty of Commerce and later became Governor of the Bank of England. Sir William James Ashley was first Dean and the founder of the Birmingham Business School. Sir Nathan Bodington was Professor of Classics. Sir Michael Lyons was Professor of Public Policy from 2001 to 2006. Sir Kenneth Mather was Professor of genetics (1948) and recipient of the 1964 Darwin Medal. Sir Richard Redmayne was Professor of Mining and later became first Chief Inspector of Mines. The art historian Sir Nikolaus Pevsner held a research post at the university. Sir Ellis Waterhouse was Barber Professor of Fine Art (1952–1970). Lord Cadman taught petroleum engineering and is credited with creating the course 'Petroleum Engineering'. The philosopher Sir Michael Dummett held an assistant lectureship at the university. Lord Borrie was a professor of law and dean of the faculty of law. Sir Charles Raymond Beazley was Professor of History. Prison reformer Margery Fry was first warden of University House. Vice-Chancellors and Principals include Sir Oliver Lodge, Lord Hunter of Newington, Sir Charles Grant Robertson, Sir Raymond Priestley, and Sir Michael Sterling. Alumni Four Nobel Prize laureates are Birmingham University alumni: Francis Aston, Maurice Wilkins, Sir John Vane, and Sir Paul Nurse. In addition soil scientist Peter Bullock contributed to the reports of the IPCC, which was awarded the Nobel Peace Prize in 2007. The university's alumni in the sphere of British government and politics include: British Prime Ministers Stanley Baldwin and Neville Chamberlain; Chief Minister of Gibraltar Joe Bossano; British cabinet minister and UN Under-Secretary-General Baroness Amos; Cabinet Ministers Julian Smith and Hilary Armstrong; British ministers of state Ann Widdecombe, Richard Tracey, Derek Fatchett, and Anna Soubry; British High Commissioner to New Zealand and Ambassador to South Africa Sir David Aubrey Scott; Governor of the Turks and Caicos Islands Nigel Dakin; Welsh Assembly Government minister Jane Davidson; and UN weapons inspector David Kelly. Birmingham's alumni in the field of government and politics in other countries include Prime Minister of St. 
Lucia Kenny Anthony; Prime Minister of the Bahamas Perry Christie; Singapore Minister of Finance Hu Tsu Tau Richard; Singapore Senior Minister of State Matthias Yao; Minister of Defence of Kenya Mohamed Yusuf Haji; Tanzanian minister Mark Mwandosya; Tongan minister ʻAna Taufeʻulungaki; Ethiopian cabinet minister Junedin Sado; Deputy Prime Minister of Mauritius Rashid Beebeejaun; Saudi minister Abdulaziz bin Mohieddin Khoja; Foreign Minister of Gambia Bala Garba Jahumpa; Ghanaian minister Juliana Azumah-Mensah; Egyptian Minister William Selim Hanna; Nigerian minister Emmanuel Chuka Osammor; Saint Lucian minister Alvina Reynolds; Lebanese foreign minister Lucien Dahdah; Zambian President Hakainde Hichilema and Zimbabwean ministers David Karimanzira and Didymus Mutasa. Alumni in the world of business include: director of the Bank of England Lord Roll of Ipsden; CEO of J Sainsbury plc Mike Coupe; Chairman of the Shell Transport and Trading Company plc Sir John Jennings; automobile executive Sir George Turnbull; President of the Confederation of British Industry Sir Clive Thompson; CEO and chairman of BP Sir Peter Walters; Chairman of British Aerospace Sir Austin Pearce; mobile communications entrepreneur Mo Ibrahim; fashion designer and retailer George Davis; founder of Osborne Computer Corporation Adam Osborne; and chairman & CEO of Bass plc Sir Ian Prosser. Alumni in the legal arena include Hong Kong Chief Justice of the Court of Final Appeal Geoffrey Ma Tao-li; Hong Kong Judge of the Court of Final Appeal Robert Tang; Justice of Appeal at the Court of Appeal in Tanzania Robert Kisanga; Justice of the Supreme Court of Belize Michelle Arana; Lord Justice of Appeal Sir Philip Otton; and High Court Judges Dame Nicola Davies, Sir Michael Davies, Sir Henry Globe, and Dame Lucy Theis. Alumni in the armed forces include Chief of the General Staff General Sir Mike Jackson; and Director General of the Army Medical Services Alan Hawley. Alumni in the sphere of religion include Metropolitan Archbishop and Primate of the Anglican Church in South East Asia Bolly Lapok; Anglican Bishops Paul Bayes, Alan Smith, Stephen Venner, Michael Langrish, and Eber Priestley; Anglican Suffragan Bishops Brian Castle and Colin Docker; Catholic Archbishop Kevin McDonald; and Catholic bishop Philip Egan. Alumni in the field of healthcare include: chair of the National Institute for Clinical Excellence David Haslam; Dame Hilda Lloyd, the first woman to be elected as president of the Royal College of Obstetricians and Gynaecologists; Chief Scientific Officer in the NHS Sue Hill; Chief Dental Officer for England Barry Cockcroft; and Chief Medical officer for England Sir Liam Donaldson. Alumni in the domain of engineering include: Chairman of the United Kingdom Atomic Energy Authority and of the Central Electricity Generating Board Lord Marshall of Goring; Chairman of British Aerospace Sir Austin Pearce; Chief Engineer of the PWD Shaef in World War II Sir Francis McLean; and Director of Production at the Ministry of Munitions during World War I Sir Henry Fowler. 
Alumni in the creative industries include actors Madeleine Carroll, Tim Curry, Tamsin Greig, Matthew Goode, Nigel Lindsay, Elliot Cowan, Geoffrey Hutchings, Judy Loe, Jane Wymark, Mariah Gale, Hadley Fraser, Elizabeth Henstridge, and Norman Painting; actors and comedians Victoria Wood and Chris Addison; dancer/choreographer and co-creator of 'Riverdance' Jean Butler, social media influencer and YouTuber Hannah Witton, children's author and scholar Fawzia Gilani-Williams, musicians Simon Le Bon of Duran Duran and Christine McVie of Fleetwood Mac, and travel writer Alan Booth. Alumni in academia include: University Vice-Chancellors Frank Horton, Sir Robert Howson Pickard, Sir Louis Matheson, Derek Burke, Sir Alex Jarratt, Sir Philip Baxter, Vincent Watts, P. B. Sharma, Berrick Saul, and Wahid Omar; neurobiologist and Emeritus Professor at the University of Cambridge Sir Gabriel Horn, physicians Sir Alexander Markham, Sir Gilbert Barling, Brian MacMahon, Aaron Valero, and Sir Arthur Thomson; neurologist Sir Michael Owen; physicists John Stewart Bell, Sir Alan Cottrell, Lord Flowers, Harry Boot, Elliott H. Lieb (recipient of the 2003 Henri Poincaré Prize), Stanley Mandelstam, Edwin Ernest Salpeter (recipient of the 1997 Crafoord Prize in Astronomy), Sir Ernest William Titterton, and Raymond Wilson (recipient of the 2010 Kavli Prize in Astrophysics); statistician Peter McCullagh; chemist Sir Robert Howson Pickard; biologists Sir Kenneth Murray and Lady Noreen Murray; zoologists Desmond Morris and Karl Shuker; behavioural neuroscientist Barry Everitt; palaeontologist Harry B. Whittington; computer scientist Mike Cowlishaw; Women's writing academic Lorna Sage; philosopher John Lewis; economist and historian Homa Katouzian; theologian and biochemist Arthur Peacocke; labour economist David Blanchflower; Professor of Social Policy at the London School of Economics Sir John Hills; geographer Geoffrey J.D. Hewings; Professor of Geology and ninth President of Cornell University Frank H. T. Rhodes; Government Chief Scientific Adviser Sir Alan Cottrell; and former astronaut Rodolfo Neri Vela. Alumni in the world of sport are many. They include Lisa Clayton, the first woman to sail the globe single-handed; 400 metres runner Allison Curbishley, who won silver at the 1998 Commonwealth Games; team pursuit cyclist Paul Manning, who won bronze, silver and gold at the Olympics of 2004, 2008 and 2012; sports scholar Izzy Christiansen, who played football for Birmingham City, Everton and Manchester City before her call up to the senior England squad; Warwickshire and England cricketer Jim Troughton; and Adam Pengilly who competed as a skeleton racer at the 2006 and 2010 Winter Olympics and was elected to the International Olympic Committee Athletes' Commission in 2010. Triathlete Chrissie Wellington and Rachel Joyce won the ITU Long Distance World Championship on 2008 and 2011, and Chrissie holds the four fastest times in the World Ironman competition. She received an OBE in 2009, and the current world-class gym at the Sport & Fitness club on campus is named in her honour. Whilst still studying at the university, student Lily Owsley scored the gold medal-winning goal at the 2016 Rio Olympics with the help of teammate and fellow UoB graduate Sophie Bray. Middle-distance athlete Hannah England won the World Championship 1500m silver in 2011 and after retiring from athletics officially in 2019, worked alongside fellow athlete and husband Luke Gunn in the Sport department at the university. 
In recent years, Birmingham has seen sporting scholars such as Jonny Davies, 2020 British indoor champion over 3000m; Sarah McDonald, a former British 1500m champion; and Mari Smith, the current British indoor silver medallist over 800m, pass through its doors. Fran Williams, a senior England netball player, won bronze with the England Roses at the Vitality Netball World Cup in Liverpool in 2019 as the youngest player on the squad at 22 years old, while Laura Keates, an England international rugby player, was part of the 2014 World Cup-winning squad. Barbara Slater, daughter of Wolverhampton Wanderers legend and UoB's 1972 Director of Sport Bill Slater, became Director of BBC Sport in 2009 and was the first woman to hold the title. She led the broadcast of the London 2012 Olympics - the biggest television event in British broadcasting history. Former Manchester United Chief Executive David Gill learned the ropes of financing at Birmingham, studying Industrial, Economic and Business Studies in 1978; sports commentator Simon Brotherton developed his career whilst studying at UoB; and Sir Patrick Head, co-founder of the Williams team which dominated Formula One in the 1990s, studied Mechanical Engineering. See also List of modern universities in Europe (1801–1945) List of universities in the United Kingdom References Notes Bibliography External links Guild of Students (The Guild functions as the Students' Union) University of Birmingham Foundation University of Birmingham Educational institutions established in 1900 University of Birmingham Russell Group 1900 establishments in England Universities established in the 20th century Universities UK
3095080
https://en.wikipedia.org/wiki/Data%20retention
Data retention
Data retention defines the policies of persistent data and records management for meeting legal and business data archival requirements. Although the terms are sometimes used interchangeably, data retention is not to be confused with the Data Protection Act 1998. The different data retention policies weigh legal and privacy concerns against economics and need-to-know concerns to determine the retention time, archival rules, data formats, and the permissible means of storage, access, and encryption. In the field of telecommunications, data retention generally refers to the storage of call detail records (CDRs) of telephony and internet traffic and transaction data (IPDRs) by governments and commercial organisations. In the case of government data retention, the data that is stored is usually of telephone calls made and received, emails sent and received, and websites visited. Location data is also collected. The primary objective in government data retention is traffic analysis and mass surveillance. By analysing the retained data, governments can identify the locations of individuals, an individual's associates and the members of a group such as political opponents. These activities may or may not be lawful, depending on the constitutions and laws of each country. In many jurisdictions access to these databases may be made by a government with little or no judicial oversight. In the case of commercial data retention, the data retained will usually be on transactions and web sites visited. Data retention also covers data collected by other means (e.g., by automatic number-plate recognition systems) and held by government and commercial organisations. Data Retention Policy A data retention policy is a recognized and proven protocol within an organization for retaining information for operational use while ensuring adherence to the laws and regulations concerning it. The objectives of a data retention policy are to keep important information for future use or reference, to organize information so it can be searched and accessed at a later date, and to dispose of information that is no longer needed. The data retention policies within an organization are a set of guidelines that describe which data will be archived, how long it will be kept, what happens to the data at the end of the retention period (archive or destroy), and other factors concerning the retention of the data. Part of any effective data retention policy is the permanent deletion of retained data once it is no longer needed. Secure deletion can be achieved by encrypting the data when it is stored and then deleting the encryption key after the specified retention period; this effectively renders the data object, and any copies of it held in online or offline locations, unreadable. Australia In 2015, the Australian government introduced mandatory data retention laws that allow data to be retained for up to two years. The scheme is estimated to cost at least AU$400 million per year to implement, working out to at least $16 per user per year. It requires telecommunication providers and ISPs to retain telephony, Internet and email metadata for two years, accessible without a warrant, and the data could possibly be used to target file sharing. The Attorney-General has broad discretion on which agencies are allowed to access metadata, including private agencies. The Greens were strongly opposed to the introduction of these laws, citing privacy concerns and the increased prospect of 'speculative invoicing' over alleged copyright infringement cases.
The Labor Party initially opposed as well, but later agreed to passing the law after additional safeguards were put in place to afford journalists some protection. European Union On 15 March 2006, the European Union adopted the Data Retention Directive, on "the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC". It requires Member States to ensure that communications providers retain the necessary data as specified in the Directive for a period of between 6 months and 2 years in order to: Trace and identify the source of a communication; Trace and identify the destination of a communication; Identify the date, time, and duration of a communication; Identify the type of communication; Identify the communication device; Identify the location of mobile communication equipment. The data is required to be available to "competent" national authorities in specific cases, "for the purpose of the investigation, detection and prosecution of serious crime, as defined by each Member State in its national law". The Directive covers fixed telephony, mobile telephony, Internet access, email, and VoIP. Member States were required to transpose it into national law within 18 months—no later than September 2007. However, they may if they wish postpone the application of the Directive to Internet access, email, and VoIP for a further 18 months after this date. A majority of Member States exercised this option. All 28 EU States have notified the European Commission about the transposition of the Directive into their national law. Of these, however, Germany and Belgium have only transposed the legislation partially. A report evaluating the Directive was published by the European Commission in April 2011. It concluded that data retention was a valuable tool for ensuring criminal justice and public protection, but that it had achieved only limited harmonisation. There were serious concerns from service providers about the compliance costs and from civil society organisations who claim that mandatory data retention was an unacceptable infringement of the fundamental right to privacy and the protection of personal data. The commission is now reviewing the legislation. In response to the report, on May 31, 2011, the European Data Protection Supervisor expressed some concerns on the European Data Retention Directive, underlining that the Directive "does not meet the requirements imposed by the fundamental rights to privacy and data protection". On 8 April 2014, the Court of Justice of the European Union declared the Directive 2006–24/EC invalid for violating fundamental rights. The council's Legal Services have been reported to have stated in closed session that paragraph 59 of the European Court of Justice's ruling "suggests that general and blanket data retention is no longer possible". A legal opinion funded by the Greens/EFA Group in the European Parliament finds that the blanket retention data of unsuspicious persons generally violates the EU Charter of Fundamental Rights, both in regard to national telecommunications data retention laws and to similar EU data retention schemes (PNR, TFTP, TFTS, LEA access to EES, Eurodac, VIS). United Kingdom Data Retention and Investigatory Powers Act 2014 The Data Retention and Investigatory Powers Act came into force in 2014. 
It is the United Kingdom parliament's answer to the declaration of invalidity made by the Court of Justice of the European Union in relation to Directive 2006/24/EC, and makes provision about the retention of certain communications data. In addition, the purpose of the act is to: amend the grounds for issuing interception warrants, or granting or giving certain authorizations or notices; make provision about the extraterritorial application of that Part and about the meaning of "telecommunications service" for the purposes of that Act; make provision about a review of the operation and regulation of investigatory powers; and for connected purposes. The act is also intended to ensure that communication companies in the UK retain communications data so that it continues to be available when it is needed by law enforcement agencies and others to investigate committed crimes and protect the public. Data protection law requires data that is not of use to be deleted. This means that the intention of this Act could be to use data retention to acquire further policing powers, as the Act makes data retention mandatory. An element of this Act is the provision for the operation of investigatory powers to be reviewed and reported on by 1 May 2015. Controversy The Data Retention and Investigatory Powers Act 2014 was referred to as the "snooper's charter" communications data bill. Theresa May, a strong supporter of the Act, said in a speech that "If we (parliament) do not act, we risk sleepwalking into a society in which crime can no longer be investigated and terrorists can plot their murderous schemes undisrupted." The United Kingdom parliament maintains that its new laws increasing the power of data retention are essential to tackling crime and protecting the public. However, not all agree; some believe that the primary objective of government data retention is mass surveillance. After Europe's highest court said that the depth of data retention breaches citizens' fundamental right to privacy, and the UK created its own Act regardless, the British government has been accused of breaking the law by forcing telecoms and internet providers to retain records of phone calls, texts and internet usage. From this information, governments can identify an individual's associates, location, group memberships, political affiliations and other personal information. In a television interview, the EU Advocate General Pedro Cruz Villalón highlighted the risk that the retained data might be used illegally in ways that are "potentially detrimental to privacy or, more broadly, fraudulent or even malicious". Retention of other data Postal data – retention period unknown Information written on the outside of a postal item (such as a letter or parcel), online tracking of postal items, records of special postal items (such as records of registered, recorded or special delivery postal items), records of parcel consignment, delivery and collection. Banking data – seven years The Economist reported that UK banks are required to retain data on all financial transactions for seven years, though this has not been verified. It is not clear whether data on credit card transactions is also retained for seven years. Vehicle movement data – two years Documents leaked from the Association of Chief Police Officers (ACPO) have revealed that the UK is planning to collect data from a nationwide network of automatic numberplate recognition cameras and store the data for two years in a controversial new centre being built at Hendon.
This data could then be linked to other data held by the government and watchlists from the police and security services. Access to retained data The bodies that are able to access retained data in the United Kingdom are listed in the Regulation of Investigatory Powers Act 2000 (RIPA). These are the following: police forces, as defined in section 81(1) of RIPA; the National Criminal Intelligence Service; the Serious Organised Crime Agency, formerly the National Crime Squad; HM Customs and Excise; the Inland Revenue (the latter two have since been merged into HM Revenue and Customs); the Security Service; the Secret Intelligence Service; and Government Communications Headquarters (GCHQ). However, the Regulation of Investigatory Powers Act 2000 (RIPA) also gives the Home Secretary powers to change the list of bodies with access to retained data through secondary legislation. The list of authorised bodies now includes the Food Standards Agency, local authorities and the National Health Service. Reasons for accessing retained data The justifications for accessing retained data in the UK are set out in the Regulation of Investigatory Powers Act 2000 (RIPA). They include: interests of national security; preventing or detecting crime or preventing disorder; the economic well-being of the United Kingdom; public safety; protecting public health; assessing or collecting any tax, duty, levy or other imposition, contribution or charge payable to a government department; preventing death or injury in an emergency or any damage to a person's physical or mental health, or mitigating any injury or damage to a person's physical or mental health; and any other purpose not listed above which is specified for the purposes of this subsection by an order made by the Secretary of State. Czech Republic Implementation of the directive was part of Act No. 259/2010 Coll. on electronic communications, as later amended. Under Art. 97 (3), telecommunication data are to be stored for between 6 and 12 months. The Czech Constitutional Court has deemed the law unconstitutional and found it to be infringing on the people's right to privacy. As of July 2012, new legislation was on its way. Italy In July 2005 new legal requirements on data retention came into force in Italy. Subscriber information Internet cafés and public telephone shops with at least three terminals must seek a license permit from the Ministry of Home Affairs within 30 days. They must also store traffic data for a period which may be determined later by administrative decree. Wi-Fi hotspots and locations that do not store traffic data have to secure ID information from users before allowing them to log on. For example, users may be required to enter a number from an ID card or driving license. It is not clear how this information is validated. Mobile telephony users must identify themselves before service activation, or before a SIM card may be obtained. Resellers of mobile subscriptions or pre-paid cards must verify the identity of purchasers and retain a photocopy of identity cards. Telephony data Data, including location data, on fixed line and mobile telephony must be retained for 24 months. There is no requirement to store the content of calls. Telephony operators must retain a record of all unsuccessful dial attempts. ISP data Internet service providers must retain all data for at least 12 months. The law does not specify exactly what traffic data must be retained. There is no requirement to store the content of internet communications.
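The retention rules above apply to communications metadata rather than content, held for fixed, category-specific periods. The following minimal Python sketch is purely illustrative: the record fields, category names and periods are hypothetical stand-ins (loosely modelled on the Italian figures above), not a description of any provider's actual system.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical retention periods per data category.
    RETENTION = {
        "telephony": timedelta(days=730),  # roughly 24 months, location data included
        "internet": timedelta(days=365),   # roughly 12 months of traffic metadata
    }

    @dataclass
    class MetadataRecord:
        category: str                # "telephony" or "internet"
        source: str                  # calling number or source IP address
        destination: str             # called number or destination IP address
        started: datetime            # start of the call or session
        duration_s: int              # duration in seconds
        location: str | None = None  # e.g. cell identifier for mobile calls

    class RetentionStore:
        """Keeps communications metadata only; content is never stored."""

        def __init__(self) -> None:
            self.records: list[MetadataRecord] = []

        def ingest(self, record: MetadataRecord) -> None:
            self.records.append(record)

        def purge_expired(self, now: datetime) -> int:
            """Drop records older than their category's retention period."""
            kept = [r for r in self.records
                    if now - r.started <= RETENTION[r.category]]
            dropped = len(self.records) - len(kept)
            self.records = kept
            return dropped

In a real deployment the purge step would usually be paired with the key-deletion ("crypto-shredding") approach described under Data Retention Policy above, so that expired records also become unreadable in backups and offline copies.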
Legality The legislation of July 2005 enables data retention by overriding all the relevant data protection provisions until 31 December 2007. Under the new rules, service providers are obliged to store traffic data and user data for no less than 365 days, even if they no longer need it to process the communication or to send bills. Policy requires that user ID information, location and tracking data be stored and kept on file for easy access by law enforcement and other authorities who request this information (permission must be sought to view sensitive user ID data on file). The traffic data which will now be retained can be used for anti-terrorism purposes and for general penal enforcement of criminal offences large and small. Italy already required the retention of telephony traffic data for 48 months, but without location data. Italy has adopted the EU Directive on Privacy and Electronic Communications 2002, but with an exemption to the requirement to erase traffic data. Denmark Denmark has implemented the EU data retention directive and much more, by logging all internet flow or sessions between operators and between operators and consumers. "2.2.1. Session logging (section 5(1) of the Executive Order) Providers of access to the internet must, in respect of the initiating and terminating package of an internet session, retain data that identifies the sending and receiving internet protocol address (in the following called IP address), the sending and receiving port number and the transmission protocol." "2.2.2. Sampling (section 5(4) of the Executive Order) The obligation to retain data about the initiating and terminating package of an internet session does not apply to providers in case such retention is not technically feasible in their systems. In that case, data must instead be retained for every 500th package that is part of an end user's communication on the internet." "2.2.5. Hot spots (section 5(3) of the Executive Order) In addition to the internet data that must otherwise be retained, the provider must retain data that identifies the precise geographic or physical location of a hot spot and the identity of the communication equipment used. This means that a provider of internet access via a hot spot must retain data on a user's access to the internet and, at the same time, retain data that identifies the geographic location of the hot spot in question." Sweden Sweden implemented the EU's 2006 Data Retention Directive in May 2012, and it was fined €3 million by the Court of Justice of the European Union for its belated transposition (the deadline was 15 September 2007). The directive allowed member states to determine the duration for which data is retained, ranging from six months to two years; the Riksdag, Sweden's legislature, opted for six months. In April 2014, however, the CJEU struck down the Data Retention Directive. PTS, Sweden's telecommunications regulator, told Swedish ISPs and telcos that they would no longer have to retain call records and internet metadata. But after two government investigations found that Sweden's data retention law did not break its obligations to the European Convention on Human Rights, the PTS reversed course. Most of Sweden's major telecommunications companies complied immediately, though Tele2 lodged an unsuccessful appeal. The one holdout ISP, Bahnhof, was given an order to comply by a 24 November deadline or face a five million krona ($680,000) fine.
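The Danish session-logging and sampling rules quoted above lend themselves to a compact illustration. The sketch below is hypothetical and simplified: packets are assumed to arrive as plain Python dictionaries rather than from any real capture library, only addressing metadata is recorded (never payload), and the 1-in-500 fallback follows the figure given in the executive order.

    from collections import defaultdict

    SAMPLE_EVERY = 500  # sampling interval named in the Danish executive order

    class SessionSampler:
        def __init__(self) -> None:
            self.packet_counts = defaultdict(int)  # per-flow packet counters
            self.retained = []                     # retained metadata only

        def observe(self, pkt: dict) -> None:
            # pkt is a hypothetical dict such as
            # {"src": "10.0.0.1", "dst": "192.0.2.7", "sport": 52311, "dport": 443, "proto": "TCP"};
            # the payload is deliberately never inspected or stored.
            flow = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
            self.packet_counts[flow] += 1
            n = self.packet_counts[flow]
            if n == 1 or n % SAMPLE_EVERY == 0:  # initiating packet, then every 500th
                self.retained.append({"flow": flow, "packet_no": n})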
Germany The German Bundestag implemented the directive in the "Gesetz zur Neuregelung der Telekommunikationsüberwachung und anderer verdeckter Ermittlungsmaßnahmen sowie zur Umsetzung der Richtlinie 2006/24/EG". The law took effect on 1 January 2008. Any communications data had to be retained for six months. On 2 March 2010, the Federal Constitutional Court of Germany ruled the law unconstitutional as a violation of the guarantee of the secrecy of correspondence. On 16 October 2015, a second law providing for a shorter data retention period of up to 10 weeks, and excluding email communication, was passed by parliament. However, this act was ruled incompatible with German and European law by an injunction of the Higher Administrative Court of North Rhine-Westphalia. As a result, on 28 June 2017, three days before the planned start of data retention, the Federal Network Agency suspended the introduction of data retention until a final decision in the principal proceedings. Romania The EU directive was transposed into Romanian law as well, initially as Law 298/2008. However, the Constitutional Court of Romania struck down the law in 2009 as violating constitutional rights. The court held that the transposing act violated the constitutional rights of privacy, of confidentiality in communications, and of free speech. The European Commission subsequently sued Romania in 2011 for non-implementation, threatening Romania with a fine of 30,000 euros per day. The Romanian parliament passed a new law in 2012, which was signed by President Traian Băsescu in June. Law 82/2012 has been nicknamed "Big Brother" (using the untranslated English expression) by various Romanian non-governmental organizations opposing it. On 8 July 2014, this law too was declared unconstitutional by the Constitutional Court of Romania. Slovakia Slovakia has implemented the directive in Act No. 610/2003 Coll. on electronic communications, as later amended. Telecommunication data are stored for six months in the case of data related to Internet, Internet email and Internet telephony (art. 59a (6) a), and for 12 months in the case of other types of communication (art. 59a (6) b). In April 2014, the Slovak Constitutional Court preliminarily suspended the effectiveness of the Slovak implementation of the Data Retention Directive and accepted the case for further review. In April 2015 the Constitutional Court decided that some parts of the Slovak laws implementing the Data Retention Directive were not in compliance with the Slovak constitution and the Convention for the Protection of Human Rights and Fundamental Freedoms. According to the now-invalid provisions of the Electronic Communications Act, providers of electronic communications were obliged to store traffic data, localization data and data about the communicating parties for a period of 6 months (in the case of Internet, email or VoIP communication) or for a period of 12 months (in the case of other communication). Russia A 2016 anti-terrorism federal law, 374-FZ, known as the Yarovaya Law, requires all telecommunication providers to store phone call, text and email metadata, as well as the actual voice recordings, for up to 6 months. Messaging services like WhatsApp are required to provide cryptographic backdoors to law enforcement. The law has been widely criticized both in Russia and abroad as an infringement of human rights and a waste of resources. Norway The EU's Data Retention Directive was implemented into Norwegian law in 2011, but it was not to come into effect before 1 January 2015.
Serbia On 29 June 2010, the Serbian parliament adopted the Law on Electronic Communications, according to which the operator must keep the data on electronic communications for 12 months. This provision was criticized as unconstitutional by opposition parties and by Ombudsman Saša Janković. Switzerland As of 7 July 2016, the Swiss Federal Law about the Surveillance of the Post and Telecommunications entered into force, having been passed by the Swiss government on 18 March 2016. Mobile phones Swiss mobile phone operators have to retain the following data for six months according to the BÜPF: phone numbers of incoming and outgoing calls; SIM (Subscriber Identity Module), IMSI (International Mobile Subscriber Identity) and IMEI (International Mobile Equipment Identity) numbers; "the location and the electrical boresight of the antenna of the mobile phone with which the monitored person is connected to the communications system at the time of the communication"; and the date, time and duration of the connection. Email All Internet service providers must retain the following data for six months: the type of the connection (telephone, xDSL, cable, permanent line etc.) and, if known, login data; address information of the origin (MAC address, telephone number); the name, address and occupation of the user and the duration of the connection from beginning to end; the time of the transmission or reception of an email; header information according to the SMTP protocol; and the IP addresses of the sending and receiving email application. "Email application" refers to SMTP, POP3, IMAP4, webmail and remail servers. Exemptions Switzerland only applies data retention to the largest Internet service providers, those with over 100 million CHF in annual Swiss-sourced revenue. This notably exempts derived communications providers such as ProtonMail, a popular encrypted email service based in Switzerland. United States The National Security Agency (NSA) commonly records Internet metadata for the whole planet for up to a year in its MARINA database, where it is used for pattern-of-life analysis. U.S. persons are not exempt because metadata are not considered data under US law (section 702 of the FISA Amendments Act). Its equivalent for phone records is MAINWAY. The NSA records SMS and similar text messages worldwide through DISHFIRE. Leveraging commercial data retention Various United States agencies leverage the (voluntary) data retention practised by many U.S. commercial organizations through programs such as PRISM and MUSCULAR. Amazon is known to retain extensive data on customer transactions. Google is also known to retain data on searches and other transactions. If a company is based in the United States, the Federal Bureau of Investigation (FBI) can obtain access to such information by means of a National Security Letter (NSL). The Electronic Frontier Foundation states that "NSLs are secret subpoenas issued directly by the FBI without any judicial oversight. These secret subpoenas allow the FBI to demand that online service providers or ecommerce companies produce records of their customers' transactions. The FBI can issue NSLs for information about people who haven't committed any crimes. NSLs are practically immune to judicial review. They are accompanied by gag orders that allow no exception for talking to lawyers and provide no effective opportunity for the recipients to challenge them in court.
This secret subpoena authority, which was expanded by the controversial USA PATRIOT Act, could be applied to nearly any online service provider for practically any type of record, without a court ever knowing". The Washington Post has published a well-researched article on the FBI's use of National Security Letters. Failed mandatory ISP retention legislation attempts The United States does not have any Internet Service Provider (ISP) mandatory data retention laws similar to the European Data Retention Directive, which was retroactively invalidated in 2014 by the Court of Justice of the European Union. Some attempts to create mandatory retention legislation have failed: In 1999 two models of mandatory data retention were suggested for the United States. In the first model, the ISP must record what IP address was assigned to a customer at a specific time. In the second model, "which is closer to what Europe adopted", telephone numbers dialed, contents of Web pages visited, and recipients of e-mail messages must be retained by the ISP for an unspecified amount of time. The Internet Stopping Adults Facilitating the Exploitation of Today's Youth Act (SAFETY Act) of 2009, also known as H.R. 1076 and S.436, would have required providers of "electronic communication or remote computing services" to "retain for a period of at least two years all records or other information pertaining to the identity of a user of a temporarily assigned network address the service assigns to that user". This bill never became law. Arguments against data retention While it is often argued that data retention is necessary to combat terrorism and other crimes, there are others who oppose it. Data retention may assist the police and security services to identify potential terrorists and their accomplices before or after an attack has taken place. For example, the authorities in Spain and the United Kingdom stated that retained telephony data made a significant contribution to police enquiries into the 2004 Madrid train bombings and the 2005 London bombings. The opponents of data retention make the following arguments: The Madrid train bombings can also be seen as proof that the current data retention level is sufficient and hence the EU directive is not a necessity. Schemes for data retention do not make provisions for adequate regulation of the data retention process and for independent judicial oversight. Data retention is an invasion of privacy and a disproportionate response to the threat of terrorism. It is easy for terrorists to avoid having their communications recorded. The Home Office Voluntary Code of Practice of Data Retention admits that there are some internet protocols which cannot be effectively monitored. It would be possible for terrorists to avoid monitoring by using anonymous P2P technologies, internet cafés, anonymous proxies or several other methods. Some police officers in the EU are sceptical about the value of data retention. For example, Heinz Kiefer, president of Eurocop, the European Confederation of Police, issued a press statement saying "it remains easy for criminals to avoid detection through fairly simple means, for example mobile phone cards can be purchased from foreign providers and frequently switched. The result would be that a vast effort is made with little more effect on criminals and terrorists than to slightly irritate them. Activities like these are unlikely to boost citizens' confidence in the EU's ability to deliver solutions to their demand for protection against serious crime and terrorism".
The hardware and software required to store all the retained data would be extremely costly. The costs of retaining data would not only fall on Internet Service Providers and telephone companies, but also on all companies and other organisations which would need to retain records of traffic passing through their switchboards and servers. Data retention gives excessive power to the state to monitor the lives of individual citizens. Data retention may be abused by the police to monitor the activities of any group which may come into conflict with the state, including ones which are engaged in legitimate protests. The UK police have used anti-terrorism powers against groups opposed to the war in Iraq and protesters at an arms fair. The definition of terrorism in the UK Terrorism Act 2000 includes not only action, but the threat of action, involving serious violence against a person, or serious damage to property, for the purposes of advancing a "political, religious or ideological cause". There is concern that the definition is vaguely worded and could be applied to supporters of animal liberation, anti-war demonstrators, and many others. Even if data retention may be justified, the retention periods proposed in some cases are excessive. It has been argued that a period of five days for web activity logs and ninety days for all other data would be adequate for police purposes. Data retention by search engines provides an unfair advantage to dominant search engines. Protection against data retention The current directive proposal (see above) would force ISPs to record the internet communications of their users. The basic assumption is that this information can be used to identify with whom someone, whether innocent citizen or terrorist, communicated throughout a specific timespan. Believing that such a mandate would be useful ignores the fact that a very committed community of cryptography professionals has been preparing for such legislation for decades. Below are some strategies available today to anyone to protect themselves, avoid such traces, and render such expensive and legally dubious logging operations useless. Anonymizing proxy services: Web There are anonymizing proxies that provide slightly more private web access. Proxies must use HTTPS encryption in order to provide any level of protection at all. Unfortunately, proxies require the user to place a large amount of trust in the proxy operator (since they see everything the user does over HTTP), and may be subject to traffic analysis. P2P communications Some P2P services like file transfer or voice over IP use other computers to allow communication between computers behind firewalls. This means that trying to follow a call between two citizens might, mistakenly, identify a third citizen unaware of the communication. Privacy enhancing tools For security-conscious citizens with some basic technical knowledge, tools like I2P – The Anonymous Network, Tor, Mixmaster and the cryptography options integrated into many modern mail clients can be employed. I2P is an international peer-to-peer anonymizing network, which aims not only at evading data retention, but also at making spying by other parties impossible. The structure is similar to the one Tor (see next paragraph) uses, but there are substantial differences. It protects better against traffic analysis and offers strong anonymity and, for net-internal traffic, end-to-end encryption. Due to its unidirectional tunnels it is less prone to timing attacks than Tor.
In I2P, several services are available: anonymous browsing, anonymous e-mails, anonymous instant messenger, anonymous file-sharing, and anonymous hosting of websites, among others. Tor is a project of the U.S. non-profit Tor Project to develop and improve an onion routing network to shield its users from traffic analysis. Mixmaster is a remailer service that allows anonymous email sending. JAP is a project very similar to Tor. It is designed to route web requests through several proxies to hide the end user's Internet address. Tor support has been included into JAP. Initiative against extensive data retention The Arbeitskreis Vorratsdatenspeicherung (German Working Group on Data Retention) is an association of civil rights campaigners, data protection activists and Internet users. The Arbeitskreis coordinates the campaign against the introduction of data retention in Germany. An analysis of federal Crime Agency (BKA) statistics published on 27 January 2010 by civil liberties NGO AK Vorrat revealed that data retention did not make a prosecution of serious crime any more effective. As the EU Commission is currently considering changes to the controversial EU data retention directive, a coalition of more than 100 civil liberties, data protection and human rights associations, jurists, trade unions and others are urging the commission to propose the repeal of the EU requirements regarding data retention in favour of a system of expedited preservation and targeted collection of traffic data. Plans for extending data retention to social networks In November 2012, answers to a parliamentary inquiry in the German Bundestag revealed plans of some EU countries including France to extend data retention to chats and social media. Furthermore, the German Federal Office for the Protection of the Constitution (Germany's domestic intelligence agency) has confirmed that it has been working with the ETSI LI Technical Committee since 2003. See also Data security Data Retention Directive Data retention hardware Data Protection Act 1998 Computer data storage Customer proprietary network information Data privacy Electronic discovery Lawful interception Mass surveillance NSA call database Privacy Secrecy of correspondence Traffic analysis I2P - The Anonymous Network References External links Data Retention on the Open Rights Group wiki The Politics of the EU Court Data Retention Opinion: End to Mass Surveillance? Boehm, F. and Cole, M.: Data Retention after the Judgement of the Court of Justice of the European Union (2014). (PDF-file) Centre for European Policy Studies (CEP): Policy Brief on Data Retention (2011). (PDF-File) Cybertelecom :: Records Keeping / Data Retention Digital Rights Ireland: Digital Rights Ireland's challenge against the EU Data Retention Directive and Irish retention legislation on the grounds of European and Irish constitutional law. Electronic Privacy Information Center: EPIC data retention page (to 2007) European Digital Rights: EDRI news tracking page on data retention (current) Feiler, L.: The Data Retention Directive (2008). Seminar paper. (PDF-File) Frost & Sullivan Whitepaper: "Meeting the challenges of Data Retention: Now and in the future" Ganj, C.: "The Lives of Other Judges: Effects of the Romanian Data Retention Judgment" (December 4, 2009). (PDF-File) Goemans, C. and Dumortier, J.: "Mandatory retention of traffic data in the EU: possible impact on privacy and on-line anonymity. Digital Anonymity and the Law, series IT & Law/2, T.M.C. Asser Press, 2003, p 161–183. 
(PDF-File) Milford, P.: "The Data Retention Directive: too fast, too furious a response? (2008). LLM Dissertation – Southampton Business School. (PDF-File) Mitrou, L.: "Communications Data Retention: A Pandora's Box for Rights and Liberties?" From Digital Privacy: Theory, Technologies, and Practices edited by Alessandro Acquisti, Stefanos Gritzalis, Costos Lambrinoudakis and Sabrina di Vimercati. Auerbach Publications, 2008. (PDF-File) Statewatch: The surveillance of telecommunications in the EU. UK Data Retention Requirements with full references to legislation, codes of practice, etc. UK Home Office: Consultation papers on data retention and on access to communications data. Working Group on Data Retention: List of documents relating to communications data retention in the EU (current) Data laws Telephony Privacy of telecommunications Intelligence analysis Mass surveillance
5778098
https://en.wikipedia.org/wiki/Tree%20model
Tree model
In historical linguistics, the tree model (also Stammbaum, genetic, or cladistic model) is a model of the evolution of languages analogous to the concept of a family tree, particularly a phylogenetic tree in the biological evolution of species. As with species, each language is assumed to have evolved from a single parent or "mother" language, with languages that share a common ancestor belonging to the same language family. Popularized by the German linguist August Schleicher in 1853, the tree model has always been a common method of describing genetic relationships between languages since the first attempts to do so. It is central to the field of comparative linguistics, which involves using evidence from known languages and observed rules of language feature evolution to identify and describe the hypothetical proto-languages ancestral to each language family, such as Proto-Indo-European and the Indo-European languages. However, this is largely a theoretical, qualitative pursuit, and linguists have always emphasized the inherent limitations of the tree model due to the large role played by horizontal transmission in language evolution, ranging from loanwords to creole languages that have multiple mother languages. The wave model was developed in 1872 by Schleicher's student Johannes Schmidt as an alternative to the tree model that incorporates horizontal transmission. The tree model also has the same limitations as biological taxonomy with respect to the species problem of quantizing a continuous phenomenon that includes exceptions like ring species in biology and dialect continua in language. The concept of a linkage was developed in response and refers to a group of languages that evolved from a dialect continuum rather than from linguistically isolated child languages of a single language. History Old Testament and St. Augustine Augustine of Hippo supposed that each of the descendants of Noah founded a nation and that each nation was given its own language: Assyrian for Assur, Hebrew for Heber, and so on. In all he identified 72 nations, tribal founders and languages. The confusion and dispersion occurred in the time of Peleg, son of Heber, son of Shem, son of Noah. Augustine made a hypothesis not unlike those of later historical linguists, that the family of Heber "preserved that language not unreasonably believed to have been the common language of the race ... thenceforth named Hebrew." Most of the 72 languages, however, date to many generations after Heber. St. Augustine solves this first problem by supposing that Heber, who lived 430 years, was still alive when God assigned the 72. Ursprache, the language of paradise St. Augustine's hypothesis stood without major question for over a thousand years. Then, in a series of tracts, published in 1684, expressing skepticism concerning various beliefs, especially Biblical, Sir Thomas Browne wrote: "Though the earth were widely peopled before the flood ... yet whether, after a large dispersion, and the space of sixteen hundred years, men maintained so uniform a language in all parts, ... may very well be doubted." By then, discovery of the New World and exploration of the Far East had brought knowledge of numbers of new languages far beyond the 72 calculated by St. Augustine. Citing the Native American languages, Browne suggests the "confusion of tongues at first fell only upon those present in Sinaar at the work of Babel ...." For those "about the foot of the hills, whereabout the ark rested ... 
their primitive language might in time branch out into several parts of Europe and Asia." This is an inkling of a tree. In Browne's view, simplification from a larger aboriginal language than Hebrew could account for the differences in language. He suggests ancient Chinese, from which the others descended by "confusion, admixtion and corruption". Later he invokes "commixture and alteration." Browne reports a number of reconstructive activities by the scholars of the times: "The learned Casaubon conceiveth that a dialogue might be composed in Saxon, only of such words as are derivable from the Greek ... Verstegan made no doubt that he could contrive a letter that might be understood by the English, Dutch, and East Frislander ... And if, as the learned Buxhornius contendeth, the Scythian language as the mother tongue runs throughout the nations of Europe, and even as far as Persia, the community on many words, between so many nations, hath more reasonable traduction and were rather derivable from the common tongue diffused through them all, than from any particular nation, which hath also borrowed and holdeth but at second hand." The confusion at the Tower of Babel was thus removed as an obstacle by setting it aside. Attempts to find similarities in all languages were resulting in the gradual uncovering of an ancient master language from which all the other languages derive. Browne undoubtedly did his writing and thinking well before 1684. In that same revolutionary century in Britain, James Howell published Epistolae Ho-Elianae, quasi-fictional letters to various important persons in the realm containing valid historical information. In Letter LVIII the metaphor of a tree of languages appears fully developed, short of being a professional linguist's view: "I will now hoist sail for the Netherlands, whose language is the same dialect with the English, and was so from the beginning, being both of them derived from the high Dutch [Howell is wrong here]: The Danish also is but a branch of the same tree ... Now the High Dutch or Teutonick Tongue, is one of the prime and most spacious Maternal Languages of Europe ... it was the language of the Goths and Vandals, and continueth yet of the greatest part of Poland and Hungary, who have a Dialect of hers for their vulgar tongue ... Some of her writers would make this world believe that she was the language spoken in paradise." The search for "the language of paradise" was on among all the linguists of Europe. Those who wrote in Latin called it the lingua prima, the lingua primaeva or the lingua primigenia. In English it was the Adamic language; in German, the Ursprache or the hebräische Ursprache if one believed it was Hebrew. This mysterious language had the aura of purity and incorruption about it, and those qualities were the standards used to select candidates. This concept of Ursprache came into use well before the neo-grammarians adopted it for their proto-languages. The gap between the widely divergent families of languages remained unclosed. Indo-European model On February 2, 1786, Sir William Jones delivered his Third Anniversary Discourse to the Asiatic Society as its president on the topic of the Hindus. In it he applied the logic of the tree model to three languages, Greek, Latin and Sanskrit, but for the first time in history on purely linguistic grounds, noting "a stronger affinity, both in the roots of the verbs and in the forms of grammar, than could possibly have been produced by accident; ...." 
He went on to postulate that they sprang from "some common source, which, perhaps, no longer exists." To them he added Gothic, Celtic and Persian as "to the same family." Jones did not name his "common source" nor develop the idea further, but it was taken up by the linguists of the times. In the (London) Quarterly Review of late 1813-1814, Thomas Young published a review of Johann Christoph Adelung's Mithridates, oder allgemeine Sprachenkunde ("Mithridates, or a General History of Languages"), Volume I of which had come out in 1806, and Volumes II and III, , continued by Johann Severin Vater. Adelung's work described some 500 "languages and dialects" and hypothesized a universal descent from the language of paradise, located in Kashmir central to the total range of the 500. Young begins by pointing out Adelung's indebtedness to Conrad Gesner's Mithridates, de Differentiis Linguarum of 1555 and other subsequent catalogues of languages and Young undertakes to present Adelung's classification. The monosyllabic type is most ancient and primitive, spoken in Asia, to the east of Eden, in the direction of Adam's exit from Eden. Then follows Jones' group, still without a name, but attributed to Jones: "Another ancient and extensive class of languages united by a greater number of resemblances than can well be altogether accidental." For this class he offers a name, "Indoeuropean," the first known linguistic use of the word, but not its first known use. The British East India Company was using "Indo-European commerce" to mean the trade of commodities between India and Europe. All the evidence Young cites for the ancestral group are the most similar words: mother, father, etc. Adelung's additional classes were the Tataric, the African and the American, which depend on geography and a presumed descent from Eden. Young does not share Adelung's enthusiasm for the language of paradise, and brands it as mainly speculative. Young's designation, successful in English, was only one of several candidates proposed between 1810 and 1867: indo-germanique (Conrad Malte-Brun, 1810), japetisk (Rasmus Christian Rask, 1815), Indo-Germanisch (Julius Klaproth, 1823), indisch-teutsch (F. Schmitthenner, 1826), sanskritisch (Wilhelm von Humboldt, 1827), indokeltisch (A. F. Pott, 1840), arioeuropeo (Graziadio Isaia Ascoli, 1854), Aryan (Max Müller, 1861) and aryaque (H. Chavée, 1867). These men were all polyglots and prodigies in languages. Klaproth, author of the successful German-language candidate, Indo-Germanisch, who criticised Jones for his uncritical method, knew Chinese, Japanese, Tibetan and a number of other languages with their scripts. The concept of a Biblical Ursprache appealed to their imagination. As hope of finding it gradually died they fell back on the growing concept of common Indo-European spoken by nomadic tribes on the plains of Eurasia, and although they made a good case that this language can be deduced by the methods of comparative linguistics, in fact that is not how they obtained it. It was the one case in which their efforts to find the Ursprache succeeded. Neogrammarian model The model is due in its most strict formulation to the Neogrammarians. The model relies on earlier conceptions of William Jones, Franz Bopp and August Schleicher by adding the exceptionlessness of the sound laws and the regularity of the process. The linguist perhaps most responsible for establishing the link to Darwinism was August Schleicher. 
That he was comparing his Stammbaum, or family tree of languages, to Darwin's presentation of evolution shortly after that presentation, is proved by the open letter he wrote in 1863 to Ernst Haeckel, published posthumously, however. In 1869, Haeckel had suggested he read Origin of Species. After reading it Schleicher wrote Die Darwinische Theorie und die Sprachwissenschaft, "Darwinism tested by the Science of Language." In a scenario reminiscent of that between Darwin and Wallace over the discovery of evolution (both discovered it independently), Schleicher endorsed Darwin's presentation, but criticised it for not inserting any species. He then presented a Stammbaum of languages, which, however, was not the first he had published. The evolution of languages was not the source of Darwin's theory of evolution. He had based that on variation of species, such as he had observed in finches in the Galapagos Islands, who had appeared to be modifications of a common ancestor. Selection of domestic species to produce a new variety also played a role in his conclusions. The first edition of Origin of Species in 1859 discusses the language tree as though de novo under the topic of classification. Darwin criticises the synchronic method devised by Linnaeus, suggesting that it be replaced by a "natural arrangement" based on evolution. He says: "It may be worth while to illustrate this view of classification, by taking the case of languages. If we possessed a perfect pedigree of mankind, a genealogical arrangement of the races of man would afford the best classification of the various languages now spoken throughout the world; and if all extinct languages, and all intermediate and slowly changing dialects, had to be included, such an arrangement would, I think, be the only possible one. Yet it might be that some very ancient language had altered little, and had given rise to few new languages, whilst others (owing to the spreading and subsequent isolation and states of civilisation of the several races, descended from a common race) had altered much, and had given rise to many new languages and dialects. The various degrees of difference in the languages from the same stock, would have to be expressed by groups subordinate to groups; but the proper or even only possible arrangement would still be genealogical; and this would be strictly natural, as it would connect together all languages, extinct and modern, by the closest affinities, and would give the filiation and origin of each tongue." Schleicher had never heard of Darwin before Haeckel brought him to Schleicher's attention. He had published his own work on the Stammbaum in an article of 1853, six years before the first edition of Origin of Species in 1859. The concept of descent of languages was by no means new. Thomas Jefferson, a devout linguist himself, had proposed that the continual necessity for neologisms implies that languages must "progress" or "advance." These ideas foreshadow evolution of either biological species or languages, but after the contact of Schleicher with Darwin's ideas, and perhaps Darwin's contact with the historical linguists, Evolution and language change were inextricably linked, and would become the basis for classification. Now, as then, the main problems would be to prove specific lines of descent, and to identify the branch points. Phylogenetic tree The old metaphor was given an entirely new meaning under the old name by Joseph Harold Greenberg in a series of essays beginning about 1950. 
Since the adoption of the family tree metaphor by the linguists, the concept of evolution had been proposed by Charles Darwin and was generally accepted in biology. Taxonomy, the classification of living things, had already been invented by Carl Linnaeus. It used a binomial nomenclature to assign a species name and a genus name to every known living organism. These were arranged in a biological hierarchy under several phyla, or most general groups, branching ultimately to the various species. The basis for this biological classification was the observed shared physical features of the species. Darwin, however, reviving another ancient metaphor, the tree of life, hypothesized that the groups of the Linnaean classification (today's taxa), descended in a tree structure over time from simplest to most complex. The Linnaean hierarchical tree was synchronic; Darwin envisioned a diachronic process of common descent. Where Linnaeus had conceived ranks, which were consistent with the great chain of being adopted by the rationalists, Darwin conceived lineages. Over the decades after Darwin it became clear that the ranks of Linnaeus' hierarchy did not correspond exactly to the lineages. It became the prime goal of taxonomy to discover the lineages and alter the classification to reflect them, which it did under the overall guidance of the Nomenclature Codes, rule books kept by international organizations to authorize and publish proposals to reclassify species and other taxa. The new approach was called phylogeny, the "generation of phyla," which devised a new tree metaphor, the phylogenetic tree. One unit in the tree and all its offspring units were a clade and the discovery of clades was cladistics. Greenberg began writing during a time when phylogenetic systematics lacked the tools available to it later: the computer (computational systematics) and DNA sequencing (molecular systematics). To discover a cladistic relationship researchers relied on as large a number of morphological similarities among species as could be defined and tabulated. Statistically the greater the number of similarities the more likely species were to be in the same clade. This approach appealed to Greenberg, who was interested in discovering linguistic universals. Altering the tree model to make the family tree a phylogenetic tree he said: "Any language consists of thousands of forms with both sound and meaning ... any sound whatever can express any meaning whatever. Therefore, if two languages agree in a considerable number of such items ... we necessarily draw a conclusion of common historical origin. Such genetic classifications are not arbitrary ... the analogy here to biological classification is extremely close ... just as in biology we classify species in the same genus or high unit because the resemblances are such as to suggest a hypothesis of common descent, so with genetic hypotheses in language." In this analogy, a language family is like a clade, the languages are like species, the proto-language is like an ancestor taxon, the language tree is like a phylogenetic tree and languages and dialects are like species and varieties. Greenberg formulated large tables of characteristics of hitherto neglected languages of Africa, the Americas, Indonesia and northern Eurasia and typed them according to their similarities. He called this approach "typological classification", arrived at by descriptive linguistics rather than by comparative linguistics. 
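Greenberg's tabulations of shared features can be mimicked in a few lines of code. The toy sketch below is purely illustrative: the language names and the 0/1 character codings are invented, and counting shared states is only the crudest stand-in for the statistical reasoning described above.

    from itertools import combinations

    # Hypothetical 0/1 codings: 1 = character present, 0 = absent.
    characters = {
        "LangA": [1, 1, 0, 1, 0, 1],
        "LangB": [1, 1, 0, 1, 1, 1],
        "LangC": [0, 1, 1, 0, 1, 0],
    }

    def shared_states(a, b):
        """Number of characters on which two languages agree."""
        return sum(1 for x, y in zip(a, b) if x == y)

    # The pair agreeing on the most characters is the strongest candidate
    # for a hypothesis of common descent.
    best_pair = max(combinations(characters, 2),
                    key=lambda p: shared_states(characters[p[0]], characters[p[1]]))
    print(best_pair)  # ('LangA', 'LangB') for the toy data above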
Dates and glottochronology The comparative method has been used by historical linguists to piece together tree models utilizing discrete lexical, morphological, and phonological data. A relative chronology can be established, but this system yields no absolute date estimates. Glottochronology enables absolute dates to be estimated: shared cognates (words with a common historical origin) are used to calculate divergence times. However, the method was later discredited because the data on which it relies proved unreliable. As a result, historical linguists have trouble with exact age estimation when pinpointing the age of the Indo-European language family: according to Dixon in The Rise and Fall of Languages (Cambridge University Press), it could range from 4,000 BP to 40,000 BP, or anywhere in between those dates. Possible solutions to the problems of glottochronology are emerging from computational phylogenetic methods. Techniques such as the use of explicit models of evolution improve the accuracy of estimated branch lengths and tree topology. Computational phylogenetic methods thus enable researchers to analyze linguistic data with tools drawn from evolutionary biology. This further assists in testing theories against each other, such as the Kurgan theory and the Anatolian theory, both of which claim to locate the origin of the Indo-European languages. Computational phylogenetics in historical linguistics The comparative method compares features of various languages to assess how similar one language is to another. The results of such an assessment are data-oriented; that is, the results depend on the number of features and the number of languages compared. Until the arrival of the computer on the historical linguistics landscape, the numbers in both cases were necessarily small. The effect was of trying to depict a photograph using a small number of large pixels, or picture units. The limitations of the Tree Model were all too painfully apparent, resulting in complaints from the major historical linguists. In the late 20th century, linguists began using software intended for biological classification to classify languages. Programs and methods became increasingly sophisticated. In the early 21st century, the Computational Phylogenetics in Historical Linguistics (CPHL) project, a consortium of historical linguists, received funding from the National Science Foundation to study phylogenies. The Indo-European family is a major topic of study. As of January 2012, they had collected and coded a "screened" database of "22 phonological characters, 13 morphological characters, and 259 lexical characters," and an unscreened database of more. Wordlists of 24 Indo-European languages are included. Larger numbers of features and languages increase the precision, provided they meet certain criteria. Using specialized computer software, they test various phylogenetic hypotheses for their ability to account for the characters by genetic descent. Limitations of the model One endemic limitation of the tree model is the very founding presumption on which it is based: it requires a classification based on languages or, more generally, on language varieties. Since a variety represents an abstraction from the totality of linguistic features, there is the possibility for information loss during the translation of data (from a map of isoglosses) into a tree. For example, there is the issue of dialect continua. 
They provide varieties that are not unequivocally one language or another but contain features characteristic of more than one. The issue of how they are to be classified is similar to the issue presented by ring species to the concept of species classification in biology. The limitations of the tree model, in particular its inability to handle the non-discrete distribution of shared innovations in dialect continua, have been addressed through the development of non-cladistic (non-tree-based) methodologies. They include the Wave model; and more recently, the concept of linkage. An additional limitation of the tree model involves mixed and hybrid languages, as well as language mixing in general since the tree model allows only for divergences. For example, according to Zuckermann (2009:63), "Israeli", his term for Modern Hebrew, which he regards as a Semito-European hybrid, "demonstrates that the reality of linguistic genesis is far more complex than a simple family tree system allows. 'Revived' languages are unlikely to have a single parent." Perfect phylogenies The purpose of phylogenetic software is to generate cladograms, a special kind of tree in which the links only bifurcate; that is, at any node in the same direction only two branches are offered. The input data is a set of characters that can be assigned states in different languages, such as present (1) or absent (0). A language therefore can be described by a unique coordinate set consisting of the state values for all of the characters considered. These coordinates can be like each other or less so. Languages that share the most states are most like each other. The software massages all the states of all the characters of all the languages by one of several mathematical methods to accomplish a pairwise comparison of each language with all the rest. It then constructs a cladogram based on degrees of similarity; for example, hypothetical languages, a and b, which are closest only to each other, are assumed to have a common ancestor, a-b. The next closest language, c, is assumed to have a common ancestor with a-b, and so on. The result is a projected series of historical paths leading from the overall common ancestor (the root) to the languages (the leaves). Each path is unique. There are no links between paths. Every leaf and node have one and only one ancestor. All the states are accounted for by descent from other states. A cladogram that conforms to these requirements is a perfect phylogeny. At first there seemed to be little consistency of results in trials varying the factors presumed to be relevant. A new cladogram resulted from any change, which suggested that the method was not capturing the underlying evolution of languages but only reflecting the extemporaneous judgements of the researchers. In order to find the factors that did bear on phylogeny the researchers needed to have some measure of the accuracy of their results; i.e., the results needed to be calibrated against known phylogenies. They ran the experiment using different assumptions looking for the ones that would produce the closest matches to the most secure Indo-European phylogenies. Those assumptions could be used on problem areas of the Indo-European phylogeny with greater confidence. To obtain a reasonably valid phylogeny, the researchers found they needed to enter as input all three types of characters: phonological, lexical and morphological, which were all required to present a picture that was sufficiently detailed for calculation of phylogeny. 
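As a rough illustration of the procedure described above — coding languages as vectors of character states, comparing them pairwise, and grouping the most similar languages under a hypothetical common ancestor — the following sketch uses invented 0/1 character vectors for four hypothetical languages (a, b, c, d). It is a toy illustration under those assumptions, not a description of the CPHL software.

```python
from itertools import combinations

# Invented 0/1 character-state vectors for four hypothetical languages
# (1 = character state present, 0 = absent).
characters = {
    "a": [1, 1, 0, 1, 0],
    "b": [1, 1, 0, 0, 0],
    "c": [0, 1, 1, 0, 1],
    "d": [0, 0, 1, 0, 1],
}

def shared_states(x, y):
    """Count positions where two languages agree."""
    return sum(1 for s, t in zip(x, y) if s == t)

# Greedy agglomeration: repeatedly join the most similar pair and treat the
# joined pair as a hypothetical common ancestor, keeping only the states the
# two members agree on.
groups = dict(characters)
while len(groups) > 1:
    p, q = max(combinations(groups, 2),
               key=lambda pair: shared_states(groups[pair[0]], groups[pair[1]]))
    ancestor = f"({p}-{q})"
    groups[ancestor] = [s if s == t else None
                        for s, t in zip(groups[p], groups[q])]
    del groups[p], groups[q]
    print("join:", ancestor)   # prints the joining order, e.g. (a-b), then (c-d)
```

Real phylogenetic software uses compatibility or likelihood criteria rather than this greedy join, but the input is of the same kind: a matrix of coded character states.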
Only qualitative characters produced meaningful results. Repeated states were too ambiguous to be correctly interpreted by the software; therefore characters that were subject to back formation and parallel development, which reverted a character to a prior state or adopted a state that evolved in another character, respectively, were screened from the input dataset. Perfect phylogenetic networks Despite their care to code the best qualitative characters in sufficient numbers, the researchers could obtain no perfect phylogenies for some groups, such as Germanic and Albanian within Indo-European. They reasoned that a significant number of characters, which could not be explained by genetic descent from the group's calculated ancestor, were borrowed. Presumably, if the wave model, which explained borrowing, were a complete explanation of the group's characters, no phylogeny at all could be found for it. If both models were partially effective, then a tree would exist, but it would need to be supplemented by non-genetic explanations. The researchers therefore modified the software and method to include the possibility of borrowing. The researchers introduced into the experiment the concept of the interface, or allowed boundary over which character states would flow. A one-way interface, or edge, existed between a parent and a child. If only one-way edges were sufficient to explain the presence of all the states in a language, then there was no need to look beyond the perfect phylogeny. If not, then one or more contact edges, or bidirectional interfaces, could be added to the phylogeny. A language therefore might have more than one source of states: the parent or a contact language. A tree so modified was no longer a tree as such: there could be more than one path from root to leaf. The researchers called this arrangement a network. The states of a character still evolved along a unique path from root to leaf, but its origin could be either the root under consideration or a contact language. If all the states of the experiment could be accounted for by the network, it was termed a perfect phylogenetic network. Compatibility and feasibility The generation of networks required two phases. In the first phase, the researchers devised a number of phylogenies, called candidate trees, to be tested for compatibility. A character is compatible when its origin is explained by the phylogeny generated. In a perfect phylogeny, all the characters are compatible and the compatibility of the tree is 100%. By the principle of parsimony, or Occam's razor, no networks are warranted. Candidate trees were obtained by first running the phylogeny-generation software using the Indo-European dataset (the strings of character states) as input, then modifying the resultant tree into other hypotheses to be tested. None of the original candidate trees were perfect phylogenies, although some of the subtrees within them were. The next phase was to generate networks from the trees of highest compatibility scores by adding interfaces one at a time, selecting the interface of highest compatibility, until sufficiency was obtained; that is, the compatibility of the network was highest. As it turned out, the number of compatible networks generated might vary from none to over a dozen. However, not all the possible interfaces were historically feasible. Interfaces between some languages were geographically and chronologically not very likely. 
Inspecting the results, the researchers excluded the non-feasible interfaces until a list of only feasible networks remained, which could be arranged in order of compatibility score. Most feasible network for Indo-European The researchers began with five candidate trees for Indo-European, lettered A-E, one generated from the phylogenetic software, two modifications of it and two suggested by Craig Melchert, a historical linguist and Indo-Europeanist. The trees differed mainly in the placement of the most ambiguous group, the Germanic languages, and Albanian, which did not have enough distinctive characters to place it exactly. Tree A contained 14 incompatible characters; B, 19; C, 17; D, 21; E,18. Trees A and C had the best compatibility scores. The incompatibilities were all lexical, and A's were a subset of C's. Subsequent generation of networks found that all incompatibilities could be resolved with a minimum of three contact edges except for Tree E. As it did not have a high compatibility, it was excluded. Tree A had 16 possible networks, which a feasibility inspection reduced to three. Tree C had one network, but as it required an interface to Baltic and not Slavic, it was not feasible. Tree A, the most compatible and feasible tree, hypothesizes seven groups separating from Proto-Indo-European between about 4000 BC and 2250 BC, as follows. The first to separate was Anatolian, about 4000 BC. Tocharian followed at about 3500 BC. Shortly thereafter, about 3250, Proto-Italo-Celtic (western Indo-European) separated, becoming Proto-Italic and Proto-Celtic at about 2500 BC. At about 3000, Proto-Albano-Germanic separated, becoming Albanian and Proto-Germanic at about 2000. At about 3000 Proto-Greco-Armenian (southern Indo-European) divided, becoming Proto-Greek and Proto-Armenian at about 1800. Balto-Slavic appeared about 2500, dividing into Proto-Baltic and Proto-Slavic at about 1000. Finally, Proto-Indo-European became Proto-Indo-Iranian (eastern Indo-European) at about 2250. Trees B and E offer the alternative of Proto-Germano-Balto-Slavic (northern Indo-European), making Albanian an independent branch. The only date for which authors vouch is the last, based on the continuity of the Yamna culture, the Andronovo Culture and known Indo-Aryan speaking cultures. All others are described as "dead reckoning." Given the phylogeny of best compatibility, A, three contact edges are required to complete the compatibility. This is group of edges with the fewest borrowing events: First, an edge between Proto-Italic and Proto-Germanic, which must have begun after 2000, according to the dating scheme given. A second contact edge was between Proto-Italic and Proto-Greco-Armenian, which must have begun after 2500. The third contact edge is between Proto-Germanic and Proto-Baltic, which must have begun after 1000. Tree A with the edges described above is described by the authors as "our best PPN." In all PPNs, it is clear that although the initial daughter languages became distinct in relative isolation, the later evolution of the groups can be explained only by evolution in proximity to other languages with which an exchange takes place by the wave model. See also Comparative method Evolutionary linguistics Genetic relationship (linguistics) Indo-European studies Language family Linkage (linguistics) Wave model (linguistics) Father Tongue hypothesis Notes Bibliography . . External links Historical linguistics
9730832
https://en.wikipedia.org/wiki/Hardware%20certification
Hardware certification
Hardware certification is the process through which computer hardware is tested to ensure it is compatible with specific software packages and operates as intended in critical situations. With the ever-dropping prices of hardware devices, the market for networking devices and systems is undergoing a change that can be loosely termed "generalization". Large established enterprises such as Cisco, Novell and Sun Microsystems no longer manufacture all the hardware required by the market; instead they license or certify smaller hardware makers, many operating out of countries such as Taiwan or China. Certification process Vendor certification To obtain certification, the hardware or software has to conform to a set of protocols and quality standards put in place by the original creator of the technology. Usually the certification process is carried out by a "certification partner". Certification partners are selected by the original creators of the technology and are given the authority to perform the testing and certification. After a product is found to be compatible, it is labelled as "xxx certified", where xxx is the name of the original creator of the technology. Vendors use this label on their products to advertise compatibility with the technology in question. The certification process ensures that products made by different manufacturers are standardized and compatible with each other on the indicated hardware or software platform. Third-party certification Third-party certification is undertaken by an independent body. To obtain third-party certification, the hardware or software has to conform to a set of quality standards determined by the third party. Various certifications Cisco certification: Cisco manufactures networking hardware devices and systems. Hardware and software companies use the certification issued by Cisco's partners to ensure that their products are compatible with Cisco standards and protocols. Sun Certification: This is for independent software vendors (ISVs) developing applications that are compatible with Java technology. Java was created by Sun Microsystems and is widely used to write enterprise and consumer application solutions. Novell Certification: Novell began certifying partner hardware with NetWare in the 1980s. The name of the certification is "YES Certified". The YES Certified program involves certification of partner hardware with SUSE Linux Enterprise software. Since 2005 the program has shifted from Novell NetWare to SUSE Linux and has become the SUSE YES Certified Program. Certified hardware is given a YES Certification Bulletin, which can be viewed online. More information about the YES Certification program can be found on SUSE's website. Microsoft's certification for Vista: This certification is called WHQL (Windows Hardware Quality Labs). The dominant position of Windows in the computing world has meant that an enormous number of applications are written and devices manufactured for it. WHQL certification signifies that the hardware or software works with the Vista operating system. Real Certification: This certification is given by RealNetworks to ensure compatibility with its media player RealPlayer. Mobile computing devices such as cell phones and PDAs are equipped with RealPlayer to enable them to play streaming content. Linux certification programs: Hardware vendors use these certifications to ensure that their products are compatible with Linux-based systems. 
SpectraLink certification: The SpectraLink VIEW Certification Program is designed to ensure interoperability for enterprise Wi-Fi-based infrastructure products that support NetLink Wireless Telephones. Altiris Developer Program Certification: Altiris is a vendor of IT service-oriented management solutions. EMC Centera Proven: This certification demonstrates that a solution takes full advantage of the EMC Centera platform. Skype Certification: Skype-certified products satisfy both the Technical Requirements and the Design Guidelines of Skype. Linux-Tested Certification: Hardware vendors can use the Linux-Tested program from AppLabs to certify that their products are compatible with the Linux OS and have been tested for complete functionality. References Product certification
51554650
https://en.wikipedia.org/wiki/Arthur%20video%20games
Arthur video games
The Arthur video games franchise was a series of learning and interactive story video games based on the American-Canadian children's TV show Arthur. The games were released in the 1990s and 2000s for PlayStation and Windows and Mac OS computers. Creative Wonders games Arthur was a 1990s video game series developed by Creative Wonders and published by The Learning Company. The games were created as part of the LearningBuddies line. Titles Arthur's Kindergarten has the player learn kindergarten skills while attempting to fix Arthur's treehouse, which has been damaged in a storm. The game covers basic reading, arithmetic memory skills, and social skills. Arthur's Preschool Arthur's 1st Grade has the player participate in Bionic Bunny's "Good Deeds Contest" by doing good deeds around the neighbourhood. The game covers reading and math skills. Arthur's 2nd Grade has the player participate in "Take Your Kids to Work Day" by completing tasks and chores. The game covers reading, math, grammar, and geography. A new edition was released in 2002. Arthur's Reading is a two-disc CD-based game that contains more than 50 activities featuring Arthur characters. Disc 1 covers letter recognition, phonics, and word families, as well as containing an art room for players to print out. Disc 2 covers reading comprehension, grammar, and spelling. Arthur's Math Games contains five math-related activities. Arthur's Reading Games (1997) contains four reading games and the interactive story Arthur's Reading Race, written by Marc Brown. Arthur's Thinking Games (1999) contains six activities related to building critical thinking and logic skills. It was released in 2001 by The Learning Company. Production A Bangor Daily News article hinted that a new series of Arthur video games would be released in fall 1999. In February 1999, The Learning Company announced that it had "signed an exclusive, multi-year contract with Marc Brown to develop and publish interactive software worldwide". The aim was to utilise Arthur's equity by "broadening his visibility in the interactive software category", specifically within the core curriculum areas. The Learning Company announced Arthur's Reading, the first game in the series, in a news release on July 13, 1999. The subject was chosen because "reading is a natural subject for this lovable character whose nationally televised adventures have become so popular with young children", according to The Learning Company. The series was "developed with the help of educators". Most games have an auto-levelling feature to cater to each player's own skill. A kid-friendly website was also available for players to seek further activities that supplemented the games' content. Spyware concerns The Congressional Record, V. 146, Pt. 15 wrote that a spyware expert found that educational software such as Reader Rabbit and Arthur's Thinking Games may contain spyware. U.S. News & World Report noted that a cause could be the free Arthur screensaver that players of Arthur's Thinking Games have the option to download. The New York Times reported that the Broadcast program, which ran in the background as an application called DSS Agent, used to be included on the installation discs of many software titles made by The Learning Company, including the Arthur video games; while Arthur's Reading Race "was billed as a product updater and communications tool"; online privacy groups put it in the category of spyware for this reason. 
Commercial performance According to PC Data, Arthur's Thinking Games was the ninth top-selling software of September 1999, and the top-selling home education software for that month. Critical reception PCMag gave Arthur's Preschool, Kindergarten, First Grade, and Second Grade a joint rating of five out of five, writing that the "charming" games covered the same content as the Reader Rabbit series, though also saying that players could easily own both. SuperKids deemed Arthur's Kindergarten an underwhelming entry in the kindergarten edutainment space, due to having "tedious and overly repetitious" activities. Math and Science for Young Children and Experiences in Math for Young Children suggested that the game could be used within schools. Discovery Education said that Arthur's Kindergarten was "packed with 'smart' features and excellent educational content". Discovery Education also said that Arthur's Preschool was filled with "smart features and a good range of educational content". MacWorld said the game was easy for young players to pick up. Teaching Reading in Today's Elementary Schools said that Arthur's 1st Grade was not limited to its target market, but that it could also be adapted for children in higher grades who had special needs. MacWorld said that while Arthur's 1st Grade could be initially overwhelming, it was ultimately rewarding. Discovery Education deemed Arthur's 2nd Grade "edutainment at its best". The Eugene Register-Guard gave Arthur's Reading four out of four, deeming all of the activities "well designed, educational, and fun". The Bangor Daily News said that the game would challenge and engage players of all ages. SuperKids said that the game would leave veteran video gamers "unimpressed" and "disappointed". MacWorld deemed Arthur's Reading Games an "amusing, interactive product". In 1999, Forbes wrote a piece questioning if wrapping up educational content under the guise of video games featuring children's characters such as Arthur and Dr. Seuss was enough to "entice parents with the promise of easy learning for their kids". Living Books games There were several interactive storybooks in the Living Books series based on Arthur, such as Arthur's Birthday and Arthur's Teacher Trouble. The games were developed by Living Books and published by Brøderbund Software and Random House. Titles Arthur's Teacher Trouble (1993) Arthur's Birthday (1994) Arthur's Reading Race (1997) Arthur's Computer Adventure (1998) D.W., the Picky Eater (1998) Critical reception Aktueller Software Markt praised two entries in the series and concluded the review by begging for a German version of the games. World Village thought Arthur's Reading Race was "very well written", while All Game gave it 4.5 stars out of 5. Just Adventure gave Arthur's Computer Adventure a top rating of A. All Game gave it 4/5 stars, while SuperKids wrote that it wasn't the strongest entry in the Living Books product line. The Daily Gazette warned that Arthur's Computer Adventure wouldn't hold kids' attention for long. Other games Arthur! Ready to Race (2000): A racing game developed by Runecraft and published by The Learning Company. Released between 1999 and 2000 for PlayStation, this game has Arthur search for parts to build a cardboard box racer. It consists primarily of minigames in which the player partakes to gain parts, although the player is confined to exploring a small area in Elwood City. The graphics are in 3D, with three pre-rendered CGI cutscenes. 
The voice acting in the game is not done by the voice actors from the television show. Arthur's Absolutely Fun Day: Mattel Interactive/The Learning Company/Ed Magnin and Associates (GBC): Released in 2000, this game has the player control Arthur through part of Elwood City and partake in minigames so that he can visit the amusement park. Arthur's Camping Adventure (2000): A point-and-click adventure game. Superkids deemed it "interesting and fun". The game was also known as Arthur's Wilderness Rescue. Arthur's Pet Chase (2003): A side-scrolling platform game developed by ImaginEngine Corp and published by The Learning Company. Arthur's Sand Castle Contest (2003): An arcade game developed by ImaginEngine Corp and published by Riverdeep Interactive Learning Limited. External links Main page At The Complete Sourcebook on Children's Software At Children's Software and New Media Revue At Media Review Digest At Software and CD-ROM Reviews on File At CSR References Houghton Mifflin Harcourt franchises Creative Wonders games The Learning Company games Children's educational video games
292943
https://en.wikipedia.org/wiki/Comparison%20of%20free%20software%20for%20audio
Comparison of free software for audio
This list of free software for audio lists notable free and open source software for use by sound engineers, audio producers, and those involved in sound recording and reproduction. Players Audio analysis Converters DJ software Distributions and other platforms Various projects have formed to integrate the existing free software audio packages. Modular systems Notation Programming languages Many computer music programming languages are implemented in free software. See also the comparison of audio synthesis environments. Radio broadcasting See also streaming below. Recording and editing The following packages are digital audio editors. Softsynths Streaming These programs are for use with streaming audio. Technologies Trackers These music sequencer programs allow users to arrange notes (pitch-shifted sound samples) on a timeline: see tracker (music software). Other See also ABC notation List of Linux audio software References Audio software
38597235
https://en.wikipedia.org/wiki/Ontario%20power%20plant%20scandal
Ontario power plant scandal
The Ontario power plant scandal (also called the gas plants scandal) relates to the decisions by the Liberal government to cancel the construction of two natural gas power plants: one in Mississauga and another in Oakville. Members of the Progressive Conservative Party of Ontario (PC) as well as the Ontario New Democratic Party (NDP) also voted to cancel the power plant. The Mississauga cancellation was made as a late campaign promise in the 2011 general election. From immediately following the election until March 18, 2013, the Liberal government stated that the cost of the cancellations was $230 million -- $190 million for the Mississauga plant and $40 million for the Oakville plant. A final report by the Auditor General of Ontario that was released on October 8, 2013 found the total cost of the cancellations was $950 Million ($275 Million for the Mississauga plant and $675 Million for the Oakville plant). This cost included estimates of future costs to the rate payers. The scandal contributed to the resignation of Premier Dalton McGuinty and Energy Minister Chris Bentley. Tendering and bidders In April 2005, the McGuinty Government closed the coal-fired Lakeview Generating Station in the Greater Toronto Area (GTA). This closure created a need to establish new power plants to support the electricity needs of the GTA. In 2007, the Ontario Power Authority's (OPA) Integrated Power System Plan report recommended new natural gas-fired electric power generation plants (gas plants) be constructed. In August 2008, the Minister of Energy and Infrastructure directed the OPA to competitively procure a combined-cycle gas generation facility in the southwest Greater Toronto Areas with a capacity of up to 850 megawatts (MWs). These plants were to be in operation no later than December 31, 2013. Under this procurement the contractors would continue to own and operate the asset, not just build it as had been typical in the past. In the May 2009 information to bidders, the OPA stated that the constructors' proposals would need to consider municipal requirements for their proposed sites that were in effect on January 16, 2009. The constructors were responsible for obtaining local permits, not the government. In September 2009, the OPA announced it had accepted a bid by TransCanada Energy (TCE) to build a 900-megawatt natural gas-fired power generation facility in southeast Oakville. Local opposition In December 11, 2009, the fast-growing Citizens for Clean Air coalition in Oakville stepped up opposition to the project with campaign slogan: 'It just doesn't make sense.' By June 2010, TCE had missed the contract's milestone dates for obtaining pre-construction approvals and permits from the Town of Oakville. On October 1, 2010, local opponents rallied at the Ontario legislature and brought in American environmentalist Erin Brockovich to help generate publicity for their fight with the government. Liberal MPP for Oakville Kevin Flynn battled his own government's plan for the gas plant. On October 7, 2010, Liberal energy minister Brad Duguid announced the cancellation of the Oakville gas plant. Ceding to increasing opposition, Duguid proposed to feed the GTA's power demand by improving transmission lines. In the 2011 Provincial election, the Mississauga candidates of all three parties openly expressed opposition to the Mississauga power plant in pre-election debates. 
NDP candidate Anju Sikka wrote in an open letter to Dalton McGuinty "An NDP government would never allow construction to begin before a thorough and independent Environmental Assessment has been completed", and PC candidate Geoff Janoscik stated in a press release "A Tim Hudak Government will cancel this plant". Oakville cancellation On October 9, 2009 the OPA and TCE signed a contract for the Oakville plant. Amid local protest and opposition from the Town of Oakville, in June 2010 TCE missed the milestone date under the contract for obtaining all pre-construction approvals and permits for the Oakville plant from the Town of Oakville. On October 7, 2010 the Government announced the cancellation of the Oakville plant. The Ministry communique stated that TCE was entitled to reasonable damages and the anticipated financial value of the original contract. On the same day the Minister sent a letter to the OPA advising them of his directive. This commitment to make TCE "whole" became central to the massive cost escalation of the project. Minister Kathleen Wynne, later Premier, signed the Cabinet directive. Mississauga cancellation On September 28, 2011, a week prior to a general election, the Ontario Liberal Party's campaign announced that if elected, the government would cancel the Mississauga gas plant. Former Premier Kathleen Wynne was co-chair of the Liberal campaign. Editorials questioned the role politics should play in controlling electricity policy. The election On October 6, 2011 the Liberals won the most seats in the new legislature but not a majority. The lack of a majority would allow opposition parties to use legislative committees to probe the gas plants scandal. On October 7, 2011 the Minister of Energy announced the Mississauga project had been cancelled by the Cabinet. Oakville negotiation The OPA directed to negotiate a settlement with TCE. However, the Premier's direction to guarantee the full financial value of the contract would greatly increase the cost of cancellation. The contract's force majeure clause allowed for cancellation of the contract without costs if the project lagged by more than 24 months due to a failure to obtain municipal permits. TCE had already missed key dates and the Town of Oakville was threatening to fight the project to the Supreme Court of Canada if necessary. These delays might have allowed the government to exit the contract without penalty. Alternatively, another contract clause limited the government's liability to costs paid in the event that the government unilaterally terminated the contract. However, enforcement of the contract was not pursued. Negotiations took place on the Premier's Office's terms—a commitment to make TCE "whole." An initial proposal was developed by TCE to build a replacement power plant in the Kitchener-Waterloo or Cambridge area. A memorandum of understanding on the project was signed. However the MOU expired in June 2011 with no plan adopted. On August 5, 2011 TCE and the government met to arbitrate the settlement. 
According to the Auditor-General's report "as with the Premier’s Office’s commitment to TCE the year before, the [arbitration] framework waived the clause in the Oakville plant contract that gave the OPA a defensible claim of not owing TCE lost profits (that is, the clause stating that only if the Government took discriminatory action through legislation or similar means would the OPA be liable for damages such as loss of profits, with the OPA’s cancellation of the plant not meeting the definition of discriminatory). In March 2012 OPA offered a $462 million settlement to TCE, which was rejected. In April 2012, under Cabinet direction, the OPA offered a settlement of $712 million, which was also rejected. Energy Minister Chris Bentley announced an agreement was reached on August 24, 2012 but still maintained the cost of the cancellation would be $40 million. On Oct 15, 2012, Premier Dalton McGuinty announced he would resign after the selection of a new leader by the Liberal Party. On the same day he prorogued the legislature, shutting down the investigative committees. Later in October 2012, Premier McGuinty rejected a media report citing research by energy consultant Tom Adams estimating the cancellation costs of the Mississauga and Oakville plants at $1.3 billion. Premier McGuinty continued to claim the cost of cancelling the Oakville plant would be $40 million and Mississauga plant cancellation would cost $190 million -- for a total of $230 million. The Ontario Liberal party elected MPP Kathleen Wynne as its leader on January 26, 2013. On February 7, 2013, Premier McGuinty requested the Auditor-General review of the costs associated with the cancellation of the Oakville gas plant. Wynne became Premier on Feb 11, 2013. On March 18, 2013 the Minister of Energy for the first time stated that the $40 million estimate of the cost for cancelling the Oakville plant "could be wrong." Final accounting of costs On Oct 8, 2013 the Auditor-General reported the cost to cancel the Oakville plant at $675 million. The Auditor-General noted that had the premier's office not become involved, the OPA may have been in a position to simply wait and then exercise an option to break the contract without penalty: "We believe that the settlement with TCE will not only keep TCE whole, but may make it better than whole," Lysyk said. The report estimated the Premier's Office directive to make TCE "whole" increased the payout to TCE by $225 million over what was due under the terms of the contract. A substantial portion of the net $675 million cost of cancelling the Oakville plant and replacing it with the Napanee plant relates to the decision to locate the replacement plant farther from the location of power consumption in the GTA and farther from natural gas supplies. The Auditor General calculated increased gas supply costs and additional line transmission losses totaling of $609 million (included in the $675 million total), which are wholly or partially off-set by a savings of $275 million through a lower negotiated price for the Napanee plant. Opposition contempt charge As the questions arose concerning the accuracy of the Liberals' claim that the two gas plant cancellations only cost $230 million, opposition members of the Legislative committee asked new Energy minister Chris Bentley to hand over all documents related to the gas plant cancellations. 
On May 16, 2012, the Estimates Committee of the Ontario Legislature adopted a motion directing the former Minister of Energy, the Ministry of Energy and the OPA to produce "all correspondence, in any form, electronic or otherwise, that occurred between September 1, 2010 and December 31, 2011 related to the cancellation of the Oakville power plant as well all correspondence, in any form, electronic or otherwise, that occurred between August 1, 2011 and December 31, 2011 related to the cancellation of the Mississauga power plant." On May 30, 2012, the former Minister of Energy declined to disclose the records requested by the Estimates Committee, citing "the confidential, privileged and highly commercially sensitive nature of the issues." On July 13, 2012, 500 pages of emails, letters and PowerPoint presentations were released to the Estimates Committee. Members of the opposition were not satisfied with the 500 pages of documents that were produced. On August 27, 2012, a member of the Estimates Committee sought a ruling from the Speaker on whether privilege had been breached by the failure of the former Minister to provide the documents ordered. On September 12, 2012, Energy Minister Chris Bentley stated he would comply with the Speaker's order to provide gas plants documents but asked for six more weeks so as not to jeopardize negotiations with TCE. However, an agreement had already been reached with TCE on August 24, 2012. On September 13, 2012, the Speaker found in favour of the opposition members, ruling that there was a prima facie case for contempt by the former Minister, and ordered the former Minister to comply with the Estimates Committee motion. On September 21, 2012, Don Guy, the manager of the Liberals' 2011 campaign, emailed Premier's Office staffer Laura Miller and the brother of the Premier saying the Speaker needed to "change his mind" on his ruling. Miller emailed back to Guy stating that Premier's Office staffer Dave Gene "is putting the member from Brant [ie: Levac] on notice we need better here." On September 24, 2012, the Liberals released 36,000 documents. On September 25, 2012, the Progressive Conservatives introduced a motion of contempt in the legislature. On October 12, 2012, an additional 20,000 previously undisclosed documents were released by the Liberals. However, not a single document originated from any of the political staff in the office of the Minister of Energy. Prorogation and Premier McGuinty's resignation On October 1, 2012, Government House Leader Office (GHLO) staffer David Phillips emailed the Premier's Chief of Staff (Livingston) and Deputy Chief of Staff (Miller) with his "very rough views/pitch on prorogation." The email noted the various scandals enveloping the Liberals and being pursued by various legislative committees: "If we prorogue... these will not take place for the next five months." On October 15, 2012, Dalton McGuinty prorogued the legislature and announced his resignation pending a leadership convention. On October 20, 2012, Liberal government staffers used a media source to plant the false story that McGuinty might be quitting to pursue the leadership of the federal Liberal party. Don Guy added that a polling firm included the false McGuinty federal candidacy in a poll to be reported in the media. 
New contempt charges On Thursday, March 7, 2013, the Standing Committee on Justice Policy began their review of the contempt charge against Chris Bentley as well as "observations and recommendations concerning the tendering, planning, commissioning, cancellation, and relocation of the Mississauga and Oakville gas plants". Conservative committee members called the Honourable Peter Milliken PC, former federal Speaker of the House of Commons, as a witness. Milliken stated "This [contempt charge] didn't meet the standards for a contempt motion. I found the (Tories') request at the time reckless,". Cover-up discovered Although 56,500 documents had been tabled from the Ministry of Energy and the OPA to comply with the May 16, 2012 motion of the Estimates Committee of the Legislature, none of the documents came from the political staff in the Minister's Office. The Justice Committee asked the former Chief of Staff Craig MacLennan to give testimony. At a meeting of Justice Policy Committee of the Legislature on April 9, 2013 NDP MPP Peter Tabuns asked the former Chief of Staff to the Minister of Energy why the political staffer had provided no documents. MacLennan replied: "I didn’t have any responsive documents. I regret that I didn’t have any responsive documents. My colleague coordinated the search in the office. All I can speak to is what my work habit is, which is to keep a clean inbox. I always have worked that way." On April 12, 2013 NDP MPP Peter Tabuns lodged a complaint with the Ontario Privacy Commissioner asking her to investigate "what appears to be a breach of protocol and a violation of the Archives and Recordkeeping Act and the Freedom of Information and Protection of Privacy Act." The Privacy Commissioner's report was tabled June 5, 2013. In the report, the Commissioner stated "While I cannot state with certainty that emails had been deleted improperly by the former Premier’s staff during the transition to the new Premier in an effort to avoid transparency and accountability, it strains credulity that no one knew that the practice of deleting all emails was not in compliance with applicable records management and retention policies.". The report found "the practice of indiscriminate deletion of all emails sent and received by the former Chief of Staff was in violation of the Archives and Recordkeeping Act, 2006 (ARA) and the records retention schedule developed by Archives of Ontario for ministers’ offices. In my view, this practice also undermined the purposes of the Freedom of Information and Protection of Privacy Act F(FIPPA), and the transparency and accountability principles that form the foundation of both Acts. It truly strains credulity to think that absolutely no records responsive to the Estimates Committee motion and the Speaker's ruling were retained." The Privacy Commission met with information technology staff at the Ministry of Government Services (MGS) to inquire about the possibility of retrieving the deleted emails from MGS's central server. According to Cavourkian's report "MGS IT staff described the difficulty and complexity of reconstructing data from a search in the email RAID server into a useable file. 
Specifically, MGS IT staff stated that while searching for a deleted email from yesterday would require a great deal of effort, searching for data from two or three months ago would be "fruitless, as the data no longer exists." MGS IT staff further stated that reconstructing the data from a search in the RAID server into a usable file would be "tantamount to reconstructing a single shredded document from a bin of shredded documents." The Commission also found there was "a culture of avoiding the creation of written documentation on the gas plants issue." While probing the conduct of the staff of the Minister of Energy, the Commissioner was told that Premier McGuinty's Chief of Staff, David Livingston, had asked the Cabinet Secretary about ways to permanently delete emails from computers in the Premier's Office. This fact was included in her June 5, 2013 report and led to the launch of an OPP investigation into Livingston and the Premier's Office two days later. On August 20, 2013, Privacy Commissioner Ann Cavoukian said the Wynne government had provided "inaccurate and incomplete information in my initial investigation" about the ability to retrieve deleted emails. "As a direct consequence of the incomplete response, the public has been misled…about the ability of staff to retrieve potentially relevant information." On March 27, 2014, a court document became public in which the OPP stated they had probable cause to lay criminal charges against David Livingston, McGuinty's last chief of staff. The document alleged that Livingston hired an IT contractor to the Liberal Party, Peter Faist (who was also the common-law partner of Premier's Office staffer Laura Miller), to wipe clean the hard drives of computers in the Premier's Office that contained information about the gas plant scandal. Police said the hard drives would "afford evidence" of breach of trust. Mr. Faist and Ms. Miller had since moved to British Columbia, where Ms. Miller became the Executive Director of the BC Liberal Party. They were scheduled to give evidence to the Justice Policy Committee of the Legislature when an election was called, suspending the activity of the committee. 
David Livingston was found guilty of the remaining counts, mischief in relation to data, and attempted misuse of a computer system to commit mischief. The ruling stated that "Mr. Livingston's plan to eliminate sensitive and confidential work-related data, in my view, amounted to a 'scorched earth' strategy, where information that could be potentially useful to adversaries, both within and outside of the Liberal Party, would be destroyed." Laura Miller was found not guilty on both counts. The Court found that there was evidence suggesting that Ms. Miller was a party to the offences, having been deeply involved in the government's communication strategy with respect to the power plant controversy, and having assisted David Livingston in selecting hard drives to be wiped. However, the Court held that there was reasonable doubt as to her guilt. The Canadian Press reported that Miller's acquittal "drew an audible gasp" in the packed courtroom. A sentencing hearing is scheduled for February 26, 2018. In response to the verdicts, Premier Wynne's office issued a statement: "We've been clear from the start that this is not how anyone in government should operate, and it is not how a premier's office should operate... Upon coming into office, we introduced a number of significant measures to strengthen the document retention protocol and ensure that all staff are aware of their responsibilities." The (then) Opposition leader MPP Patrick Brown stated that "[t]he guilty verdict is an indictment of the 15 years of Liberal political corruption that has long been rooted in the premier's office." David Livingston was sentenced on April 11, 2018 to four months in jail, one year of probation and 100 hours of community service. During sentencing, Justice Timothy Lipson stated that Livingston “abused his position of power to promote the interests of the governing party at the expense of the democratic process.” Livingston served a reduced sentence of 35 days in jail between July 29 and September 2, 2018. References Ontario political scandals Ontario electricity policy Corruption in Ontario Political scandals Trials of political people
27016205
https://en.wikipedia.org/wiki/International%20Conference%20on%20Architectural%20Support%20for%20Programming%20Languages%20and%20Operating%20Systems
International Conference on Architectural Support for Programming Languages and Operating Systems
The International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) is an annual interdisciplinary computer science conference organized by the Association for Computing Machinery (ACM). Reflecting its focus, the conference is sponsored 50% by the ACM's Special Interest Group on Computer Architecture (SIGARCH) and 25% each by the Special Interest Group on Programming Languages (SIGPLAN) and the Special Interest Group on Operating Systems (SIGOPS). It is a high-impact conference in computer architecture and operating systems, though less prominent in programming languages and software engineering. See also List of computer science conferences References Computer science conferences
71649
https://en.wikipedia.org/wiki/Challenge-Handshake%20Authentication%20Protocol
Challenge-Handshake Authentication Protocol
In computing, the Challenge-Handshake Authentication Protocol (CHAP) authenticates a user or network host to an authenticating entity. That entity may be, for example, an Internet service provider. CHAP provides protection against replay attacks by the peer through the use of an incrementally changing identifier and a variable challenge value. CHAP requires that both the client and server know the plaintext of the secret, although it is never sent over the network. Thus, CHAP provides better security than the Password Authentication Protocol (PAP), which is vulnerable on both of these counts. The MS-CHAP variant does not require either peer to know the plaintext and does not transmit it, but it has been broken. Working cycle CHAP is an authentication scheme used by Point-to-Point Protocol (PPP) servers to validate the identity of remote clients. CHAP periodically verifies the identity of the client by using a three-way handshake. This happens at the time of establishing the initial link (LCP), and may happen again at any time afterwards. The verification is based on a shared secret (such as the client's password). (1) After the completion of the link-establishment phase, the authenticator sends a "challenge" message to the peer. (2) The peer responds with a value calculated by applying a one-way hash function to the challenge and the secret combined. (3) The authenticator checks the response against its own calculation of the expected hash value. If the values match, the authenticator acknowledges the authentication; otherwise it should terminate the connection. (4) At random intervals the authenticator sends a new challenge to the peer and repeats steps 1 through 3. CHAP packets The ID chosen for the random challenge is also used in the corresponding response, success, and failure packets. A new challenge with a new ID must be different from the last challenge with another ID. If the success or failure packet is lost, the same response can be sent again, and it triggers the same success or failure indication. With MD5 as the hash, the response value is MD5(ID||secret||challenge), that is, the MD5 hash of the concatenation of the ID, the secret, and the challenge. See also List of authentication protocols Password Authentication Protocol Challenge–response authentication Cryptographic hash function References External links PPP Challenge Handshake Authentication Protocol (CHAP) Remote Authentication Dial In User Service (RADIUS): uses PAP or CHAP Extensible Authentication Protocol (EAP): discusses CHAP Internet protocols Password authentication Authentication protocols
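The MD5 response computation described in the CHAP article above can be sketched in a few lines. The identifier, secret, and challenge values below are purely illustrative; in a real PPP session they are carried inside CHAP packets rather than chosen by application code.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response for the MD5 algorithm: MD5(ID || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Illustrative values only.
identifier = 0x17
secret = b"shared-secret"      # known in plaintext to both peer and authenticator
challenge = os.urandom(16)     # random challenge chosen by the authenticator

response = chap_response(identifier, secret, challenge)

# The authenticator recomputes the same hash and compares it with the response.
assert response == chap_response(identifier, secret, challenge)
print(response.hex())
```

Because the challenge changes on every exchange, a captured response cannot simply be replayed, which is the replay protection the protocol is designed to provide.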
25220
https://en.wikipedia.org/wiki/Quantum%20computing
Quantum computing
Quantum computing is a type of computation that harnesses the collective properties of quantum states, such as superposition, interference, and entanglement, to perform calculations. The devices that perform quantum computations are known as quantum computers. Though current quantum computers are too small to outperform usual (classical) computers for practical applications, they are believed to be capable of solving certain computational problems, such as integer factorization (which underlies RSA encryption), substantially faster than classical computers. The study of quantum computing is a subfield of quantum information science. Quantum computing began in 1980 when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things a classical computer could not feasibly do. In 1994, Peter Shor developed a quantum algorithm for factoring integers with the potential to decrypt RSA-encrypted communications. In 1998 Isaac Chuang, Neil Gershenfeld and Mark Kubinec created the first two-qubit quantum computer that could perform computations. Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream." In recent years, investment in quantum computing research has increased in the public and private sectors. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), claimed to have performed a quantum computation that was infeasible on any classical computer, but whether this claim was or is still valid is a topic of active research. There are several types of quantum computers (also known as quantum computing systems), including the quantum circuit model, quantum Turing machine, adiabatic quantum computer, one-way quantum computer, and various quantum cellular automata. The most widely used model is the quantum circuit, based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum state, or in a superposition of the 1 and 0 states. When it is measured, however, it is always 0 or 1; the probability of either outcome depends on the qubit's quantum state immediately prior to measurement. Efforts towards building a physical quantum computer focus on technologies such as transmons, ion traps and topological quantum computers, which aim to create high-quality qubits. These qubits may be designed differently, depending on the full quantum computer's computing model, whether quantum logic gates, quantum annealing, or adiabatic quantum computation. There are currently a number of significant obstacles to constructing useful quantum computers. It is particularly difficult to maintain qubits' quantum states, as they suffer from quantum decoherence and state fidelity. Quantum computers therefore require error correction. Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. 
This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time—a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory. Quantum circuit Definition The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. This model is a complex linear-algebraic generalization of boolean circuits. A memory consisting of n bits of information has 2^n possible states. A vector representing all memory states thus has 2^n entries (one for each state). This vector is viewed as a probability vector and represents the fact that the memory is to be found in a particular state. In the classical view, one entry would have a value of 1 (i.e. a 100% probability of being in this state) and all other entries would be zero. In quantum mechanics, probability vectors can be generalized to density operators. The quantum state vector formalism is usually introduced first because it is conceptually simpler, and because it can be used instead of the density matrix formalism for pure states, where the whole quantum system is known. We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation so that |0⟩ := (1, 0)ᵀ and |1⟩ := (0, 1)ᵀ. A quantum memory may then be found in any quantum superposition |ψ⟩ of the two classical states |0⟩ and |1⟩: |ψ⟩ = α|0⟩ + β|1⟩, with |α|² + |β|² = 1. The coefficients α and β are complex numbers. One qubit of information is said to be encoded into the quantum memory. The state |ψ⟩ is not itself a probability vector but can be connected with a probability vector via a measurement operation. If the quantum memory is measured to determine whether the state is |0⟩ or |1⟩ (this is known as a computational basis measurement), the zero state would be observed with probability |α|² and the one state with probability |β|². The numbers α and β are called probability amplitudes. The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix X := [[0, 1], [1, 0]]. Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩. The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩, and |11⟩. The CNOT gate can then be represented using the following matrix: CNOT := [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]. As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit. 
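As a numerical illustration of the gate action just described, the following sketch represents the basis states and the X and CNOT matrices directly with NumPy (outside any quantum-computing framework) and applies them by matrix multiplication.

```python
import numpy as np

# Computational basis states for one qubit: |0> = (1, 0)^T and |1> = (0, 1)^T.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# NOT (Pauli-X) gate and CNOT gate as matrices, in the basis ordering used above.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Applying a gate is matrix-vector multiplication: X|0> = |1>.
print(X @ ket0)           # -> [0, 1], i.e. |1>

# Two-qubit basis states are Kronecker products; CNOT flips the second qubit
# only when the first qubit is |1>: CNOT|10> = |11>.
ket10 = np.kron(ket1, ket0)
print(CNOT @ ket10)       # -> [0, 0, 0, 1], i.e. |11>

# A superposition a|0> + b|1>, measured in the computational basis, yields
# |0> with probability |a|^2 and |1> with probability |b|^2.
psi = (ket0 + ket1) / np.sqrt(2)
print(np.abs(psi) ** 2)   # -> [0.5, 0.5]
```

This dense-matrix representation grows as 2^n in the number of qubits, which is precisely why classical simulation of large quantum memories is infeasible.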
In summary, a quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements. Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Quantum algorithms Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms. Quantum algorithms that offer more than a polynomial speedup over the best known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems. Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely. Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems. Many examples of provable quantum speedups for query problems are related to Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, which uses Grover's algorithm, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees, which is a variant of the search problem. Potential applications Cryptography A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. 
Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security. Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm as AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking. Search problems The most well-known example of a problem admitting a polynomial quantum speedup is unstructured search, finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Problems that can be efficiently addressed with Grover's algorithm have the following properties: There is no searchable structure in the collection of possible answers, The number of possible answers to check is the same as the number of inputs to the algorithm, and There exists a boolean function that evaluates each input and determines whether it is the correct answer For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms.
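The quadratic character of these speedups reduces to simple arithmetic. The sketch below only restates the scaling discussed above; the concrete sizes chosen (a 2^40-item search space, 128- and 256-bit keys) are illustrative assumptions.

```python
import math

# Unstructured search over N items: ~N classical queries vs ~sqrt(N) with Grover.
N = 2 ** 40
print(N, math.isqrt(N))        # 1099511627776 classical vs 1048576 quantum queries

# Brute-forcing an n-bit symmetric key: ~2**n trials classically vs roughly
# 2**(n/2) invocations with Grover, so effective key lengths are halved.
for n in (128, 256):
    print(f"AES-{n}: ~2^{n} classical trials, ~2^{n // 2} quantum trials")
```

The last line of output illustrates the statement above that AES-256 under a Grover attack offers roughly the security AES-128 offers against classical brute force.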
A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies. Simulation of quantum systems Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles under unusual conditions such as the reactions inside a collider. Quantum simulations might be used to predict future paths of particles and protons under superposition in the double-slit experiment. About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry, while naturally occurring organisms also produce ammonia. Quantum simulations might be used to understand this process and increase production. Quantum annealing and adiabatic optimization Quantum annealing or adiabatic quantum computation relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Machine learning Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks. For example, the quantum algorithm for linear systems of equations, or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. Computational biology In the field of computational biology, quantum computing has played a significant role in solving many biological problems. One well-known example is computational genomics, where computing has drastically reduced the time needed to sequence a human genome. Given that computational biology relies heavily on generic data modeling and storage, further applications of quantum computing to the field are expected to arise as well. Computer-aided drug design and generative chemistry Deep generative chemistry models are emerging as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms.
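Returning to the annealing approach described above, the adiabatic recipe (prepare the ground state of an easy Hamiltonian, then deform it slowly toward the problem Hamiltonian) can be sketched numerically. Both Hamiltonians below are toy matrices invented for illustration, not drawn from any physical model, and the dense-matrix diagonalization used here is only feasible for tiny systems.

```python
import numpy as np

# Toy adiabatic schedule: H(s) = (1 - s) * H0 + s * H1, with s running from 0 to 1.
# H0 has an easy-to-prepare ground state; H1 stands in for the problem of interest.
rng = np.random.default_rng(0)
dim = 8
H0 = -np.ones((dim, dim)) / dim           # ground state is the uniform superposition
M = rng.normal(size=(dim, dim))
H1 = (M + M.T) / 2                        # a random symmetric "problem" Hamiltonian

for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H0 + s * H1
    evals, evecs = np.linalg.eigh(H)
    gap = evals[1] - evals[0]             # the spectral gap controls how slow "slow enough" is
    print(f"s={s:.2f}  ground energy={evals[0]: .3f}  gap={gap:.3f}")
```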
Hybrid architectures combining quantum computers with deep classical networks, such as Quantum Variational Autoencoders, can already be trained on commercially available annealers and used to generate novel drug-like molecular structures. Developing physical quantum computers Challenges There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: Physically scalable to increase the number of qubits Qubits that can be initialized to arbitrary values Quantum gates that are faster than decoherence time Universal gate set Qubits that can be read easily Sourcing parts for quantum computers is also very difficult. Many quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co. The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers which enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. Quantum decoherence One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems in particular, the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds. As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time. As described in the Quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault-tolerant computation is 10−3, assuming the noise is depolarizing. Meeting this scalability condition is possible for a wide range of systems. 
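A rough way to see why gate speed must beat decoherence is to compare the two time scales directly. The gate time and coherence time below are illustrative assumptions, not measurements of any particular hardware; the only quoted figure is the often-cited error-rate threshold of about one in a thousand from the paragraph above.

```python
# Rough error budget: error per gate ~ (gate time) / (decoherence time), and the
# number of gates that fit into one coherence window is the inverse of that ratio.
gate_time_s = 50e-9          # assumed 50 ns gate (illustrative)
t2_s = 100e-6                # assumed 100 us coherence time (illustrative)
threshold = 1e-3             # often-cited fault-tolerance figure for depolarizing noise

error_per_gate = gate_time_s / t2_s
gates_per_window = t2_s / gate_time_s
print(f"error per gate ~ {error_per_gate:.1e}")
print(f"gates before decoherence ~ {gates_per_window:.0f}")
print("below threshold" if error_per_gate < threshold else "above threshold")
```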
However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction. With error correction, the figure would rise to about 10⁷ bits. Computation time is about L² or about 10⁷ steps and, at 1 MHz, about 10 seconds. A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates. Quantum supremacy Quantum supremacy is a term coined by John Preskill referring to the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark. In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to or the closing of the gap between Sycamore and classical supercomputers. In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer Jiuzhang to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds. On November 16, 2021, at its quantum computing summit, IBM presented a 127-qubit microprocessor named IBM Eagle. Skepticism Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales. Bill Unruh doubted the practicality of quantum computers in a paper published back in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows: "So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10³⁰⁰... Could we ever learn to control the more than 10³⁰⁰ continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never."
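The resource estimates quoted at the start of this subsection reduce to a few lines of arithmetic; the sketch below simply restates those order-of-magnitude figures rather than performing any independent analysis.

```python
# Order-of-magnitude restatement of the figures quoted above for a 1000-bit number.
qubits_without_ec = 1e4        # ~10^4 qubits without error correction
qubits_with_ec = 1e7           # ~10^7 qubits with error correction
steps = 1e7                    # ~10^7 steps
gate_rate_hz = 1e6             # 1 MHz

print(f"error-correction overhead ~ {qubits_with_ec / qubits_without_ec:.0e}x")
print(f"runtime at 1 MHz ~ {steps / gate_rate_hz:.0f} seconds")
```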
Candidates for physical realizations For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits): Superconducting quantum computing (qubit implemented by the state of small superconducting circuits [Josephson junctions]) Trapped ion quantum computer (qubit implemented by the internal state of trapped ions) Neutral atoms in optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice) Quantum dot computer, spin-based (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons) Quantum dot computer, spatial-based (qubit given by electron position in double quantum dot) Quantum computing using engineered quantum wells, which could in principle enable the construction of quantum computers that operate at room temperature Coupled quantum wire (qubit implemented by a pair of quantum wires coupled by a quantum point contact) Nuclear magnetic resonance quantum computer (NMRQC) implemented with the nuclear magnetic resonance of molecules in solution, where qubits are provided by nuclear spins within the dissolved molecule and probed with radio waves Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon) Vibrational quantum computer (qubits realized by vibrational superpositions in cold molecules) Electrons-on-helium quantum computers (qubit is the electron spin) Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities) Molecular magnet (qubit given by spin states) Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes) Nonlinear optical quantum computer (qubits realized by processing states of different modes of light through both linear and nonlinear elements) Linear optical quantum computer (qubits realized by processing states of different modes of light through linear elements e.g. mirrors, beam splitters and phase shifters) Diamond-based quantum computer (qubit realized by the electronic or nuclear spin of nitrogen-vacancy centers in diamond) Bose-Einstein condensate-based quantum computer Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers) Metallic-like carbon nanospheres-based quantum computers The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy. Models of computation for quantum computing There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed. For practical implementations, the four relevant models of computation are: Quantum gate array – Computation decomposed into a sequence of few-qubit quantum gates. One-way quantum computer – Computation decomposed into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation. Adiabatic quantum computer, based on quantum annealing – Computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution. 
Topological quantum computer – Computation decomposed into the braiding of anyons in a 2D lattice. The quantum Turing machine is theoretically important but the physical implementation of this model is not feasible. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical. Relation to computability and complexity theory Computability theory Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers. Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem and the existence of quantum computers does not disprove the Church–Turing thesis. Quantum complexity theory While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers. The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and it is widely suspected that the inclusion is strict (BPP ⊊ BQP), which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity. The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P.
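As noted above, any quantum computation can in principle be simulated classically; the obstacle is cost, since the classical description of n qubits is a vector of 2^n complex amplitudes. A minimal sketch of that bookkeeping follows (the 16 bytes per amplitude is an assumption corresponding to a double-precision complex number).

```python
BYTES_PER_AMPLITUDE = 16          # one double-precision complex amplitude

def state_vector_bytes(n_qubits: int) -> int:
    # Memory needed just to hold the full 2**n-entry state vector.
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50):
    print(f"{n} qubits -> {state_vector_bytes(n):.1e} bytes for the state vector")
```

The exponential growth (kilobytes at 10 qubits, tens of gigabytes at 30, petabytes at 50) is what makes classical simulation possible in principle but infeasible in practice beyond small systems.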
On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP). The relationship of BQP to the basic classical complexity classes can be summarized as follows: P ⊆ BPP ⊆ BQP ⊆ PP ⊆ PSPACE. It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P), which is a subclass of PSPACE. It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian Mechanics could implement a search of an N-item database in at most O(∛N) steps, a slight speedup over Grover's algorithm, which runs in O(√N) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time. Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time. See also Chemical computer D-Wave Systems DNA computing Electronic quantum holography Intelligence Advanced Research Projects Activity Kane quantum computer List of emerging technologies List of quantum processors Magic state distillation Natural computing Photonic computing Post-quantum cryptography Quantum algorithm Quantum annealing Quantum bus Quantum cognition Quantum circuit Quantum complexity theory Quantum cryptography Quantum logic gate Quantum machine learning Quantum supremacy Quantum threshold theorem Quantum volume Rigetti Computing Supercomputer Superposition Theoretical computer science Timeline of quantum computing Topological quantum computer Valleytronics References Further reading Textbooks Academic papers External links Stanford Encyclopedia of Philosophy: "Quantum Computing" by Amit Hagar and Michael E. Cuffaro. Quantum computing for the very curious by Andy Matuschak and Michael Nielsen Quantum Computing Made Easy on Satalia blog Lectures Quantum computing for the determined – 22 video lectures by Michael Nielsen Video Lectures by David Deutsch Lectures at the Institut Henri Poincaré (slides and videos) Online lecture on An Introduction to Quantum Computing, Edward Gerjuoy (2008) Lomonaco, Sam. Four Lectures on Quantum Computing given at Oxford University in July 2006 Models of computation Quantum cryptography Information theory Computational complexity theory Classes of computers Theoretical computer science Open problems Computer-related introductions in 1980 Emerging technologies
65728763
https://en.wikipedia.org/wiki/Cuckoo%27s%20egg%20%28metaphor%29
Cuckoo's egg (metaphor)
A cuckoo's egg is a metaphor for brood parasitism, where a parasitic bird deposits its egg into a host's nest, which then incubates and feeds the chick that hatches, even at the expense of its own offspring. That original biological meaning has been extended to other uses, including one which references spycraft and another piece of malware. History The concept has been in use in the study of brood parasitism in birds since the 19th century. It first evolved a metaphoric meaning of "misplaced trust", wherein the chick hatched of a cuckoo's egg incubated and raised by unknowing victim parents will first begin to starve and outgrow them as it or they kill off the birds' legitimate offspring. The first well known application to spycraft was in the 1989 book The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage by Clifford Stoll, in which Stoll deployed a honeypot to catch a cyber hacker that had accessed the secure computer system of the classified U.S. government Lawrence Berkeley National Laboratory. Stoll chronicles the so-called 'Cuckoo's Egg Investigation’, "a term coined by American press to describe (at the time) the farthest reaching computer-mediated espionage penetration by foreign agents”, which was also known as Operation Equalizer initiated and executed by the KGB through a small cadre of German hackers. In his book Stoll describes the hacker employing a Trojan horse strategy to penetrate the secure Livermore Laboratory computer system: I watched the cuckoo lay its egg: once again, he manipulated the files in my computer to make himself super-user. His same old trick: use the Gnu-Emacs move-mail to substitute his tainted program for the system's atrun file. Five minutes later, shazam! He was system manager. See also Brown-headed cowbird - another brood parasite that lays "cuckoo's eggs" References Metaphor
18637552
https://en.wikipedia.org/wiki/Health%20administration%20informatics
Health administration informatics
The emerging field of Health administration informatics is concerned with the evaluation, acquisition, implementation and day-to-day operation of information technology systems in support of all administration and clinical functions within the health care industry. The closely related field of biomedical informatics is primarily focused on the use of information systems for acquisition and application of patients' medical data, whereas nursing informatics deals with the delivery, administration and evaluation of patient care and disease prevention. What remains unclear, however, is how this emerging discipline should relate to the myriad of previously existing sub specializations within the broad umbrella of health informatics - including clinical informatics (which itself includes sub areas such as oncology informatics), bioinformatics and healthcare management informatics - particularly in light of the proposed "fundamental theorem" of biomedical informatics posed by Friedman in early 2009. The field of health administration informatics is emerging as attention continues to focus on the costly mistakes made by some health care organizations whilst implementing electronic medical records. Relevance within the health care industry In a recent survey of health care CIOs and Information System (IS) directors, increasing patient safety and reducing medical errors was reported as among the top business issues. Two other key findings were that: two-thirds of respondents indicated that the number of FTEs in their IT department will increase in the next 12 months; and three-quarters of respondents indicated that their IT budgets would be increasing. The most likely staffing needs reported by the health care executives are network and architecture support (HIMMS, 2005). “The government and private insurers are beginning to pay hospitals more for higher quality care–and the only way to measure quality, and then improve it, is with more information technology. Hospital spending on such gear is expected to climb to $30.5 billion next year, from $25.8 billion in 2004, according to researcher Dorenfest Group” (Mullaney and Weintraub, 2005). This fundamental change in health care (pay for performance) means that hospitals and other health care providers will need to develop, adapt and maintain all of the technology necessary to measure and improve on quality. Physicians have traditionally lagged behind in their use of technology (i.e., electronic patient records). Only 7% of physicians work for hospitals, and so the task of “wooing them is an extremely delicate task” (Mullaney and Weintraub, 2005). Careers The market demand for a specialized advanced degree that integrates Health Care Administration and Informatics is growing as the concept has gained support from the academic and professional communities. Recent articles in Health Management Technology cite the importance of integrating information technology with health care administration to meet the unique needs of the health care industry. The health care industry has been estimated to be around 10 years behind other industries in the application of technology and at least 10 to 15 years behind in leadership capability from the technology and perhaps the business perspective (Seliger, 2005; Thibault, 2005). This means there is quantifiable demand in the work force for health care administrators who are also prepared to lead in the field of health care administration informatics. 
In addition, the increasing costs and difficulties involved in evaluating the projected benefits from IT investments are requiring health care administrators to learn more about IT and how it affects business processes. The health care Chief Information Officer (CIO) must be able to build enterprise wide systems that will help reduce the administrative cost and streamline the automation of administrative processes and patient record keeping. Increasingly, the CIO is relied upon for specialized analytical and collaborative skills that will enable him/her to build systems that health care clinicians will use. A recent well-publicized debacle (shelving of a $34 million computer system after three months) at a top U. S. hospital underlines the need for leaders who understand the health care industry information technology requirements (Connolly, 2005). Several professional organizations have also addressed the need for academic preparation that integrates the two specializations addressed by UMUC’s MSHCAI degree. In the collaborative response to the Office of the National Coordinator for Health Information Technology (ONCHIT) request for information regarding future IT needs, thirteen major health and technology organizations endorsed a “Common Framework” to support health information exchange in the United States, while protecting patient privacy. The response cited the need for continuing education of health information management professionals as a significant barrier to implementation of a National Health Information Network (NHIN) (The Collaborative Response, 2005). See also Consumer health informatics Medical informatics Nursing informatics References Connolly, C. (2005, March 21) Cedars-Sinai doctors cling to Pen and paper. The Washington Post. Health Informatics World Wide (2005, March). Health informatics index site. Retrieved March 30, 2005 from . Healthcare Information and Management Systems Society (HIMSS) (2005, February). 16th annual HIMSS leadership survey sponsored by Superior Consultant Company. Retrieved 3/30/2005 from . Mullaney, T. J., & Weintraub, A. (2005 March 28). The digital hospital. Business Week 3926, 76. Seliger, R. (2005). Healthcare IT tipping point. Health Management Technology 26(3), 48-49. The Collaborative Response to the Office of the National Coordinator for Health Information Technology Request for Information (2005, January). Retrieved March 30, 2005 from . Thibault, B. (2005). Making beautiful music together. Behavioral Health 26(3), 28-29.
53675509
https://en.wikipedia.org/wiki/NIAflow
NIAflow
NIAflow is simulation software for mineral processing plants. Based on a flowsheet interface, it calculates the material flow through a variety of processing machinery. Overview NIAflow is used to design new mineral processing plants as well as optimize existing plants. Applying machine-specific parameters, the software computes the material flow through entire plants and provides product forecasts. Based on these results, process layout and machinery setup can be evaluated. NIAflow is a product of Haver & Boecker. It is available in three versions: Basic, Aggregate and Mining. Depending on the version, the number of machinery objects vary. History In 1996, NIAflow development began under the name “NIAexpert” as software for sizing and selection of vibrating screens. Its further development began in 2007 as “NIAproject[1]” which included a mathematical model to simulate a close-to-reality screening process on vibrating screens. It also offered multi-machine calculations and product forecasts. In 2016, “NIAflow” was introduced with the ability to simulate the material flow through various processing equipment for an entire mineral processing plant; it was no longer restricted to screening machines only as its predecessors. Features The software uses machine-specific algorithms to simulate the flow of material through each of the objects. Approximately 80 different mineral processing machines can be arranged in a flow diagram. All object parameters can be displayed on the flowsheet using an unlimited number of labels. Object parameters are stored in three detailing levels, where the ‘Essential’ level contains data required for calculation. The ‘Extended” level and the ‘Detailed’ level is intended to keep data required for the creation of a tender document or documentation Using the operation mode feature, typical conditions of a plant can be stored. The label layer function groups labels in user selectable layers. Using this function, different sets of information on the flowsheet can be created. In addition to the flowsheet, a project summary can be printed that contains information regarding the project, its objects and materials. 
Object groups Storing: Stockpile, Silo, Mining Truck, Front Loader, Road Truck, Silo Truck, Water Tank, Excavator Ship, Pond, Water Tap Conveying: Belt Conveyor, Vibrating feeder, Reciprocating Feeder, Apron Feeder, 2-Way Splitter, 3-Way Splitter, Bucket Elevator, Screw Conveyor, Rotary Valve, Chute Screening: 1 Screen Deck, 2 Screen Deck, 3 Screen Deck, 4 Screen Deck, Variable Screen, Stationary Grid, Grizzly Feeder, Roller Screen, Sieve Bend Crushing: Jaw Crusher, Cone Crusher, Roll Crusher, HSI, VSI Grinding: Sag Mill, Ball Mill, Rod Mill Sort: Spiral Sorter, Jig Sorter, Upstream Sorter, Optical Sorter, Air Separator, Belt Magnet, Eddy Current, Floatation Cell, Magnetic Separator Washing: Hydro-Clean, Friction Clean, Drum Washer, Bucket Wheel, Sand Screw, Log Washer, Hydro Cyclone, Sump Pump, Pump Slurry: Chamber Filter Press, Belt Filter Press, Thickener, Flocculence Unit, Blade Clarifier, Centrifuge, Disk Filter Dedust: Bag House, Aero Cyclone, Air Blower, Vent, Silo Top Filter, Funnel Packing: Pelletizing Disk, Mixer Controls: Hand Valve, Motor Valve, Pneumatic Valve, Hydraulic Valve, Float Valve, Check Valve, Conveyor Scale, Pressure Gauge, Level Control, Bulk Level Control, Flow Meter, Switch Cabinet Various: Free Text, Plant, Drum Dryer, Fluidized Bed Dryer, Drum Cooler Technical Description The calculation in NIAflow follows the flow of the material through the plant. When the plant layout contains closed circuits, NIAflow will repeat the calculation until a stationary condition is reached. During calculation, user-selectable limits are being watched, e.g. maximum tonnage throughput. NIAflow raises an error when those limits are exceeded. Material Handling and Object Calculation: Most objects in NIAflow are connected by lines where each line represents a material transported from one object to another. Any number of incoming lines can be attached to an in-point of an object. During calculation, all incoming materials are blended into a new material. Calculations are dependent on the type of object. At the conclusion of object calculation, the resulting material product(s) are connected to the out-point(s) of the object. Curve Interpolation: Particle Size Distributions (PSD) in NIAflow are generated using either linear or 3D+ (cubic spline) interpolation methods and can be viewed on Linear, Log, Log-Log and RRSB grids.[2] Interpolation methods and grids are stored together with the object properties. Blending: During blending, all material parameters are being re-calculated. Depending on the type of parameter the result can be the sum of the material properties (e.g. tph) or the weighted average (e.g. temperature). The Particle Size Distribution (PSD) of the blended material is computed by applying all sieves of the materials to the new one and calculating percentages based on the current grid and curve interpolation method. Classifying: Classifying objects are various screens as well as objects that can be set up for either sorting or classifying, e.g. upstream sorter/classifier. Classifying is performed by means of cut-curves (Tromp Curve).[3] Tromp curves describe the probability for a certain material fraction to arrive in the coarse product. For screens, NIAflow generates cut curves automatically based on the machine and media setup. For other machines, user input is required to define the cut curve. Sorting: Similar to classifying sorting is performed based on cut curves that have to be entered by the user. Sorting properties are stored for each individual material fraction. 
NIAflow supports sorting by density, color, shape, metal content, etc. Crushing, Milling: Crusher and mill products are calculated assuming a linear behavior of the product PSD in either a double logarithmic or RRSB grid. Each type of crushing or grinding machine creates its own specific inclination of the curve. The inclination combined with the maximum particle size leaving the machine is used for product forecast. Operation Modes: A plant can be set up in various operational modes, depending on how the objects that control the feed rate are set up. These objects are: Stockpile, Silo, Pond, Water Tank, 2 Way Splitter and 3 Way Splitter. The settings of these objects can be varied and various operational modes can be stored with the project. Versions Operating system Niaflow is based on Windows with framework 4.5.2. See also Mineral Processing Screening Log-Log Scale Cubic Spline Interpolation External links NIAflow web page Haver Niagara-NIAflow Software Download, Registrierungsform für Studenten Sources Haver & Boecker Optimizes Mining Operations with NIAflow Software, Mining.com, 8 September 2016, retrieved 26 April 2017 Haver & Boecker's NIAflow plant simulation software optimizes mining operations , Heavy Equipment Guide, 9 September 2016, retrieved 26 April 2017 Plant simulation software for minerals processing, Australian Mining 3 November, retrieved 26 April 2017 Plant simulation software to optimize operations, Pit & Quarry, 8 September 2016, retrieved 1 June 2017 Optimising crushing and screening efficiency with Haver & Boecker, World Highways, November 2016, retrieved 1 June 2017 New plant simulation software from Haver & Boecker, Trade Earthmovers, 8 November 2016, retrieved 1 June 2017 References Simulation software
3155420
https://en.wikipedia.org/wiki/Grill%20%28cryptology%29
Grill (cryptology)
The grill method (Polish: metoda rusztu), in cryptology, was a method used chiefly early on, before the advent of the cyclometer, by the mathematician-cryptologists of the Polish Cipher Bureau (Biuro Szyfrów) in decrypting German Enigma machine ciphers. The Enigma rotor cipher machine changes plaintext characters into cipher text using a different permutation for each character, and so implements a polyalphabetic substitution cipher. Background The German navy started using Enigma machines in 1926; it was called Funkschlüssel C ("Radio cipher C"). By 15 July 1928, the German Army (Reichswehr) had introduced their own version of the Enigma—the Enigma G; a revised Enigma I (with plugboard) appeared in June 1930. The Enigma I used by the German military in the 1930s was a 3-rotor machine. Initially, there were only three rotors labeled I, II, and III, but they could be arranged in any order when placed in the machine. Rejewski identified the rotor permutations by L, M, and N; the encipherment produced by the rotors altered as each character was encrypted. The rightmost permutation (N) changed with each character. In addition, there was a plugboard that did some additional scrambling. The number of possible different rotor wirings is: 26! = 403,291,461,126,605,635,584,000,000. The number of possible different reflector wirings is: 25 × 23 × 21 × ... × 1 = 7,905,853,580,625. A perhaps more intuitive way of arriving at this figure is to consider that 1 letter can be wired to any of 25. That leaves 24 letters to connect. The next chosen letter can connect to any of 23. And so on. The number of possible different plugboard wirings (for six cables) is: 26! / (14! × 6! × 2⁶) = 100,391,791,500. To encrypt or decrypt, the operator made the following machine key settings: the rotor order (Walzenlage) the ring settings (Ringstellung) the plugboard connections (Steckerverbindung) an initial rotor position (Grundstellung) In the early 1930s, the Germans distributed a secret monthly list of all the daily machine settings. The Germans knew that it would be foolish to encrypt the day's traffic using the same key, so each message had its own "message key". This message key was the sender-chosen initial rotor positions (e.g., YEK). The message key had to be conveyed to the recipient operator, so the Germans decided to encrypt it using the day's pre-specified daily ground setting (Grundstellung). The recipient would use the daily machine settings for all messages. He would set the Enigma's initial rotor position to the ground setting and decrypt the message key. The recipient would then set the initial rotor position to the message key and decrypt the body of the message. The Enigma was used with radio communications, so letters were occasionally corrupted during transmission or reception. If the recipient did not have the correct message key, then the recipient could not decipher the message. The Germans decided to send the three-letter message key twice to guard against transmission errors. Instead of encrypting the message key "YEK" once and sending the encrypted key twice, the Germans doubled the message key to "YEKYEK" ("doubled key"), encrypted the doubled key with the ground setting, and sent the encrypted doubled key. The recipient could then recognize a garbled message key and still decrypt the message. For example, if the recipient received and decrypted the doubled key as "YEKYEN", then the recipient could try both message keys "YEK" and "YEN"; one would produce the desired message and the other would produce gibberish.
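The wiring counts quoted above follow from elementary counting and can be checked in a few lines; the sketch below is only a verification of those figures.

```python
from math import factorial, prod

# Rotor wirings: any permutation of the 26 contacts.
rotor_wirings = factorial(26)                       # 403,291,461,126,605,635,584,000,000

# Reflector wirings: pair up the 26 letters, 25 * 23 * 21 * ... * 1.
reflector_wirings = prod(range(1, 26, 2))           # 7,905,853,580,625

# Plugboard wirings with six cables: choose 12 letters and pair them,
# 26! / (14! * 6! * 2**6).
plugboard_wirings = factorial(26) // (factorial(14) * factorial(6) * 2**6)
print(rotor_wirings, reflector_wirings, plugboard_wirings)   # ..., 7905853580625, 100391791500
```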
The encrypted doubled key was a huge cryptographic mistake because it allowed cryptanalysts to know two encipherments of the same letter, three places apart, for each of the three letters. The Polish codebreakers exploited this mistake in many ways. Marian Rejewski used the doubled key and some known daily keys obtained by a spy, to determine the wiring of the three rotors and the reflector. In addition, code clerks often did not choose secure random keys, but instead chose weak keys such as "AAA", "ABC", and "SSS". The Poles later used the doubled weak keys to find the unknown daily keys. The grill method was an early exploitation of the doubled key to recover part of the daily settings. The cyclometer and the bomba kryptologiczna were later exploitations of the doubled key. Example message Frode Weierud provides the procedure, secret settings, and results that were used in a 1930 German technical manual. Daily settings (shared secret): Wheel Order : II I III Ringstellung : 24 13 22 (XMV) Reflector : A Plugboard : A-M, F-I, N-V, P-S, T-U, W-Z Grundstellung: 06 15 12 (FOL) Operator chosen message key : ABL Enciphered starting with FOL: PKPJXI Message to send and resulting 5-letter groups of clear text: Feindliche Infanteriekolonne beobachtet. Anfang Südausgang Bärwalde. Ende 3 km ostwärts Neustadt. FEIND LIQEI NFANT ERIEK OLONN EBEOB AQTET XANFA NGSUE DAUSG ANGBA ERWAL DEXEN DEDRE IKMOS TWAER TSNEU STADT Resulting message: 1035 – 90 – 341 – PKPJX IGCDS EAHUG WTQGR KVLFG XUCAL XVYMI GMMNM FDXTG NVHVR MMEVO UYFZS LRHDR RXFJW CFHUH MUNZE FRDIS IKBGP MYVXU Z The first line of the message is not encrypted. The "1035" is the time, "90" is number of characters encrypted under the message key, and "341" is a system indicator that tells the recipient how the message was encrypted (i.e., using Enigma with a certain daily key). The first six letters in the body ("PKPJXI") are the doubled key ("ABLABL") encrypted using the daily key settings and starting the encryption at the ground setting/Grundstellung "FOL". The recipient would decipher the first six letters to recover the message key ("ABL"); he would then set the machine's rotors to "ABL" and decipher the remaining 90 characters. Notice that the Enigma does not have numerals, punctuation, or umlauts. Numbers were spelled out. Most spaces were ignored; an "X" was used for a period. Umlauts used their alternative spelling with a trailing "e". Some abbreviations were used: a "Q" was used for "CH". When Rejewski started his attack in 1932, he found it obvious that the first six letters were the enciphered doubled key. Key encryption The daily key settings and ground setting will permute the message key characters in different ways. That can be shown by encrypting six of the same letter for all 26 letters: AAAAAA -> PUUJJN BBBBBB -> TKYWXV CCCCCC -> KZMVVY DDDDDD -> XMSRQK EEEEEE -> RYZOLZ FFFFFF -> ZXNSTU GGGGGG -> QRQUNT HHHHHH -> SSWYYS IIIIII -> WNOZPL JJJJJJ -> MQVAAX KKKKKK -> CBTTSD LLLLLL -> OWPQEI MMMMMM -> JDCXUO NNNNNN -> YIFPGA OOOOOO -> LPIEZM PPPPPP -> AOLNIW QQQQQQ -> GJGLDR RRRRRR -> EGXDWQ SSSSSS -> HHDFKH TTTTTT -> BVKKFG UUUUUU -> VAAGMF VVVVVV -> UTJCCB WWWWWW -> ILHBRP XXXXXX -> DFRMBJ YYYYYY -> NEBHHC ZZZZZZ -> FCEIOE From this information, the permutations for each of the six message keys can be found. Label each permutation A B C D E F. These permutations are secret: the enemy should not know them. Notice the permutations are disjoint transpositions. 
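The six permutations A through F can be read off the table above mechanically. The sketch below transcribes that table verbatim and confirms that each permutation is a product of disjoint transpositions, i.e. self-inverse, which is the property noted at the end of the paragraph above.

```python
# Indicator table from the text: each letter typed six times at the ground setting.
table = """AAAAAA PUUJJN  BBBBBB TKYWXV  CCCCCC KZMVVY  DDDDDD XMSRQK  EEEEEE RYZOLZ
FFFFFF ZXNSTU  GGGGGG QRQUNT  HHHHHH SSWYYS  IIIIII WNOZPL  JJJJJJ MQVAAX
KKKKKK CBTTSD  LLLLLL OWPQEI  MMMMMM JDCXUO  NNNNNN YIFPGA  OOOOOO LPIEZM
PPPPPP AOLNIW  QQQQQQ GJGLDR  RRRRRR EGXDWQ  SSSSSS HHDFKH  TTTTTT BVKKFG
UUUUUU VAAGMF  VVVVVV UTJCCB  WWWWWW ILHBRP  XXXXXX DFRMBJ  YYYYYY NEBHHC
ZZZZZZ FCEIOE"""

perms = [dict() for _ in range(6)]              # permutations A, B, C, D, E, F
tokens = table.split()
for plain, cipher in zip(tokens[0::2], tokens[1::2]):
    for i in range(6):
        perms[i][plain[0].lower()] = cipher[i].lower()

# Each permutation is an involution (disjoint transpositions): applying it twice
# returns every letter to itself, which is what lets the machine decrypt its own output.
for name, p in zip("ABCDEF", perms):
    assert all(p[p[x]] == x for x in p), name
print(perms[0]["a"], perms[0]["p"])             # A exchanges a <-> p
```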
For the A permutation, it not only changes "A" into "P" but it also changes "P" into "A". That allows the machine to both encrypt and decrypt messages. Augustin-Louis Cauchy introduced two-line notation in 1815 and cycle notation in 1844. Rejewski's characteristic Rejewski made an incredible discovery. Without knowing the plugboard settings, the rotor positions, the ring settings, or the ground setting, he could solve for all the daily message keys. All he needed were enough messages and some code clerks using non-random message keys. The message key is three characters long, so the doubled key is six characters long. Rejewski labeled the permutations for the successive message-key characters A B C D E F. He did not know what those permutations were, but he did know that A and D permutations encrypted the same message key letter, that B and E encrypted the same letter, and that C and F encrypted the same letter. If are the (unknown) plaintext letters of the message key and are the corresponding (known) ciphertext letters, then The equations can be post multiplied by D, E, and F respectively to simplify the right hand sides: The plaintext values are unknown, so those terms are just dropped to leave: The above equations describe a path through the permutations. If is passed through the inverse of , then it produces . If that character passes through , then the result is . Rejewski also knew that the Enigma permutations were self inverses: Enigma encryption and decryption were identical. That means that where is the identity permutation. Consequently, . Thus: The above equations show the relationship between the doubled key characters. Although Rejewski did not know the individual permutations A B C D E F, a single message told him how specific characters were permuted by the composed permutations AD, BE, and CF. From many messages, Rejewski could determine the composed permutations completely. In practice, about 60 messages were needed to determine the permutations. Rejewski recorded the three permutations with a cyclic notation he called the characteristic. gives an example: In this notation, the first cycle of permutation would map d to v, v to p, p to f, ..., y to o, and o would wrap around to d. Marks and Weierud give an example from Alan Turing that shows these cycles can be completed when some information is incomplete. Furthermore, Enigma permutations were simple transpositions, which meant that each permutation A B C D E F only transposed pairs of characters. Those character pairs had to come from different cycles of the same length. Moreover, any one pairing between two cycles determined all the other pairs in those cycles. Consequently, permutations A and D both had to transpose a and s because (a) and (s) are the only cycles of length one and there is only one way to pair them. There are two ways to match (bc) and (rw) because b must pair with either r or w. Similarly, there are ten ways to match the remaining ten-character cycles. In other words, Rejewski now knew that there were only twenty possibilities for the permutations A and D. Similarly, there were 27 candidates for B and E, and 13 candidates for C and F. Weak keys At this point, the Poles would exploit weaknesses in the code clerks' selection of message keys to determine which candidates were the correct ones. If the Poles could correctly guess the key for a particular message, then that guess would anchor two cycles in each of the three characteristics. 
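The composed permutations AD, BE, and CF can be tabulated directly from intercepted indicators, without knowing any message key: each indicator says only that its first letter maps to its fourth, its second to its fifth, and its third to its sixth. The sketch below uses the rows of the six-fold encryption table above as a stand-in for a day's intercepted traffic (in practice about 60 genuine indicators were needed) and prints each characteristic in cycle notation.

```python
# Indicators taken from the table above (the encryptions of AAA, BBB, ..., ZZZ).
indicators = """PUUJJN TKYWXV KZMVVY XMSRQK RYZOLZ ZXNSTU QRQUNT SSWYYS WNOZPL MQVAAX
CBTTSD OWPQEI JDCXUO YIFPGA LPIEZM AOLNIW GJGLDR EGXDWQ HHDFKH BVKKFG
VAAGMF UTJCCB ILHBRP DFRMBJ NEBHHC FCEIOE""".lower().split()

AD = {ind[0]: ind[3] for ind in indicators}     # 1st indicator letter -> 4th
BE = {ind[1]: ind[4] for ind in indicators}     # 2nd -> 5th
CF = {ind[2]: ind[5] for ind in indicators}     # 3rd -> 6th

def cycles(perm):
    """Cycle notation of a permutation given as a dict."""
    seen, out = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        out.append("(" + "".join(cyc) + ")")
    return "".join(out)

# Cycle lengths occur in pairs; the singleton cycles are what anchor the later analysis.
for name, p in (("AD", AD), ("BE", BE), ("CF", CF)):
    print(name, cycles(p))
```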
The Poles intercepted many messages; they would need about 60 messages in the same daily key to determine the characteristic, but they may have many more. Early on, Rejewski had identified the six characters that made up the message key. If the code clerks were choosing random message keys, then one would not expect to see much correlation in the encrypted six characters. However, some code clerks were lazy. What if, out of a hundred messages, there were five messages from five different stations (meaning five different code clerks) that all used the same message key "PUUJJN"? That they all came up with the same key suggests they used a very simple or very common key. The Poles kept track of different stations and how those stations would choose message keys. Early on, clerks often used simple keys such as "AAA" or "BBB". The end result was that without knowing the Enigma's plugboard settings, the rotor positions, or the ring settings, Rejewski determined each of the permutations A B C D E F, and hence all of the day's message keys. Initially, Rejewski used the knowledge of permutations A B C D E F (and a manual obtained by a French spy) to determine the rotor wirings. After learning the rotor wirings, the Poles used the permutations to determine the rotor order, plugboard connections, and ring settings through further steps of the grill method. Continuing the 1930 example Using the daily key in the 1930 technical manual above, then (with enough messages) Rejewski could find the following characteristics: Although there are theoretically 7 trillion possibilities for each of the A B C D E F permutations, the characteristics above have narrowed the A and D permutations to just 13 possibilities, B and E to just 30 possibilities, and C and F to just 20 possibilities. The characteristic for CF has two singleton cycles, (e) and (z). Those singleton cycles must pair in the individual permutations, so the characteristic for CF implies that the "E" and "Z" exchange in both the C and F permutations. The pairing of "E" and "Z" can be checked in the original (secret) permutations given above. Rejewski would now know that indicators with the pattern "..E..E" were from a message key of "..Z"; similarly an indicator of "..Z..Z" were from a message key of "..E". In the day's traffic, he might find indicators such as "PKZJXZ" or "RYZOLZ"; might one of these indicators be the common (lazy) message key "EEE"? The characteristic limits the number of possible permutations to a small number, and that allows some simple checks. "PKZJXZ" cannot be "EEE" because it requires "K" and "E" to interchange in B, but both "K" and "E" are part of the same cycle in BE: (kxtcoigweh). Interchanging letters must come from distinct cycles of the same length. The repeating key could also be confirmed because it could uncover other repeating keys. The indicator "RYZOLZ" is a good candidate for the message key "EEE", and it would immediately determine both permutations A and D. For example, in AD, the assumed message key "EEE" requires that "E" and "R" interchange in A and that "E" and "O" interchange in D. If "E" interchanges with "R" in A (notice one character came from the first cycle in AD and the other character came from the second cycle), then the letter following "E" (i.e. "D") will interchange with the letter preceding "R" (i.e. "X") . That can be continued to get all the characters for both permutations. 
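The propagation rule just described (one anchored pairing forces every other pairing in the same two cycles) is mechanical enough to automate. The sketch below applies it to a small invented characteristic with two 3-cycles; the data is a toy example, not the 1930 characteristic.

```python
def pairs_from_anchor(AD, x0, y0):
    """Given the composed permutation AD and a guessed interchange (x0, y0) in A,
    propagate around the two cycles: if A swaps x and y, it must also swap AD(x)
    with the letter that AD maps onto y."""
    AD_inv = {v: k for k, v in AD.items()}
    pairs, x, y = {}, x0, y0
    while x not in pairs:
        pairs[x], pairs[y] = y, x
        x, y = AD[x], AD_inv[y]
    return pairs

# Toy characteristic with two 3-cycles, (abc)(xyz); anchoring a <-> x forces the rest.
AD_toy = {"a": "b", "b": "c", "c": "a", "x": "y", "y": "z", "z": "x"}
print(pairs_from_anchor(AD_toy, "a", "x"))
# {'a': 'x', 'x': 'a', 'b': 'z', 'z': 'b', 'c': 'y', 'y': 'c'}
```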
This characteristic notation is equivalent to the expressions given for the 1930 permutations A and D given above by sorting the cycles so that the earliest letter is first. The guessed message key of "EEE" producing indicator "RYZOLZ" would also determine the pairing of the 10-long cycles in permutation BE. That determines most of B and E, and there would only be three possible variations left that pair (ujd) and (mqa). There are still 20 possible variations for C and F. At this point, the Poles could decrypt all of the first and fourth letters of the daily keys; they could also decrypt 20 out 26 of the second and fifth letters. The Poles' belief in these permutations could be checked by looking at other keys and seeing if they were typical keys used by code clerks. With that information, they could go looking for and find other likely weak message keys that would determine the rest of the A B C D E F permutations. For example, if the Poles had an indicator "TKYWXV", they could decrypt it as "BB.BB."; checking the cycles for CF would reveal that the indicator is consistent with message key "BBB". Rejewski's model Rejewski modeled the machine as permutation made from permutations of plugboard (), the wiring from the keyboard/lamps to the rotors (), the three rotors (), and the reflector (). The permutation for each position of the doubled key was different, but they were related by a permutation that represented a single step of a rotor ( is known). Rejewski assumed that the left and middle rotors did not move while encrypting the doubled key. The six letters of the doubled key consequently see the permutations A B C D E F: Rejewski simplified these equations by creating as a composite reflector made from the real reflector and two leftmost rotors: Substitution produces: The result is six equations in four unknowns (S H N Q). Rejewski had a commercial Enigma machine, and he initially thought that would be the same. In other words, Rejewski guessed that Later, Rejewski realized that guess was wrong. Rejewski then guessed (correctly) that was just the identity permutation: That still left three unknowns. Rejewski comments: So I had a set of six equations in three unknowns, S, N, and Q. While I puzzled over how to solve that set of equations, on December 9, 1932, completely unexpectedly and at the most opportune moment, a photocopy of two tables of daily keys for September and October 1932 was delivered to me. Having the daily keys meant that was now known. The known permutations were moved to the left side in the equations by premultiplying and post multiplying. The leftmost and rightmost permutations on the right-hand side (which were also known) were moved to the left; the results were given the variable names U V W X Y Z: Rejewski then multiplied each equation with the next: Next, Rejewski eliminated the common subexpression by substituting its value obtained from the previous product. The result is a set of four equations in just one unknown: . 
Back to 1930 example For the 1930 example above, ABCDEFGHIJKLMNOPQRSTUVWXYZ A ptkxrzqswmcojylagehbvuidnf B ukzmyxrsnqbwdipojghvatlfec C uymsznqwovtpcfilgxdkajhrbe D jwvrosuyzatqxpenldfkgcbmhi E jxvqltnypaseugzidwkfmcrbho F nvykzutslxdioamwrqhgfbpjce are transformed to the permutations: ABCDEFGHIJKLMNOPQRSTUVWXYZ U gkvlysarqxbdptumihfnoczjew V gnfmycaxtrzsdbvwujliqophek W uekfbdszrtcyqxvwmigjaopnlh X jelfbdrvsaxctqyungimphzkow Y ltgmwycsvqxadzrujohbpiekfn Z mskpiyuteqcravzdjlbhgnxwfo and then multiplied to produce the five successive products: ABCDEFGHIJKLMNOPQRSTUVWXYZ UV = azoselgjuhnmwiqdtxcbvfkryp = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = sxdqlkunjihgfeopatyrmvwzbc = (o)(p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) WX = pbxdefiwgmlonkhztsrajyuqcv = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) XY = qwaytmoihlkgbjfpzcvdusnxre = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) YZ = rhuaxfkbnjwmpolgqztsdeicyv = (f)(j)(q)(y)(bh)(st)(arzvexcud)(gkwinolmp) Now the goal is to find the single structure preserving map that transforms UV to VW, VW to WX, WX to XY, and XY to YZ. Found by subscription of cycle notation. When maps to , the map must mate cycles of the same length. That means that (a) in must map to one of (o)(p)(v)(w) in . In other words, a must map to one of opvw. These can be tried in turn. UV = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = (o) (p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) VW = (o)(p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) WX = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) WX = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) XY = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) XY = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) YZ = (f)(j)(q)(y)(bh)(st)(arzvexcud)(gkwinolmp) But a must map the same to o in each pairing, so other character mappings are also determined: UV = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = (o) (p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) VW = (o)(p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) WX = (ohwujmnkl) (b)(d)(e)(f)(gi)(rs)(apzvycxqt) WX = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) XY = (ofmbwnjlg) (k)(p)(u)(x)(hi)(sv)(aqzetdyrc) XY = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) YZ = (olmpgkwin) (f)(j)(q)(y)(bh)(st)(arzvexcud) Consequently, the character maps for sybxzcdq, pzvycxqt, and qzetdyrc are discovered and consistent. 
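The products listed above can be checked by composing the permutation strings directly, applying the first permutation and then the second. The helper below is a sketch; its output reproduces the UV line and its cycle decomposition, apart from the order in which the cycles happen to be listed.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def compose(p, q):
    """Apply permutation p first, then q; each is a 26-letter string where
    position i holds the image of the i-th alphabet letter."""
    return "".join(q[ALPHABET.index(p[i])] for i in range(26))

def cycle_notation(p):
    seen, out = set(), []
    for ch in ALPHABET:
        if ch in seen:
            continue
        cyc, x = [], ch
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = p[ALPHABET.index(x)]
        out.append("(" + "".join(cyc) + ")")
    return "".join(out)

# U and V as listed above; their product reproduces the UV line.
U = "gkvlysarqxbdptumihfnoczjew"
V = "gnfmycaxtrzsdbvwujliqophek"
UV = compose(U, V)
print(UV)                    # azoselgjuhnmwiqdtxcbvfkryp
print(cycle_notation(UV))    # same cycles as the UV line above, ordering aside
```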
Those mappings can be exploited: UV = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = (o)(p) (w) (ij)(umfkhnelg)(xzcdqasyb) (v)(rt) VW = (o)(p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) WX = (f)(b) (ig)(ohwujmnkl)(pzvycxqta) (d)(e)(rs) WX = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) XY = (u)(k)(p) (ih)(ofmbwnjlg) (x)(sv)(aqzetdyrc) XY = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) YZ = (f) (j) (hb)(olmpgkwin)(udarzvexc) (q)(y)(st) Which determines the rest of the map and consistently subscribes: UV = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = (o)(p)(v)(w)(tr)(ij)(umfkhnelg)(xzcdqasyb) VW = (o)(p)(v)(w)(ij)(rt)(asybxzcdq)(elgumfkhn) WX = (e)(f)(b)(d)(sr)(ig)(ohwujmnkl)(pzvycxqta) WX = (b)(d)(e)(f)(gi)(rs)(apzvycxqt)(hwujmnklo) XY = (u)(k)(p)(x)(vs)(ih)(ofmbwnjlg)(tdyrcaqze) XY = (k)(p)(u)(x)(hi)(sv)(aqzetdyrc)(bwnjlgofm) YZ = (q)(f)(y)(j)(ts)(hb)(olmpgkwin)(udarzvexc) The resulting map with successive subscriptions: resulting map: ABCDEFGHIJKLMNOPQRSTUVWXYZ ounkpxvtsrqzcaeflihgybdjwm = (aoepfxjrishtgvbuywdkqlzmcn) UV = (a)(e)(g)(y)(hj)(rx)(bzpdscoqt)(flmwkniuv) VW = (o)(p)(v)(w)(tr)(ij)(umfkhnelg)(xzcdqasyb) WX = (e)(f)(b)(d)(gi)(sr)(ycxqtapzv)(jmnklohwu) XY = (p)(x)(u)(k)(vs)(hi)(wnjlgofmb)(rcaqzetdy) YZ = (f)(j)(y)(q)(bh)(ts)(darzvexcu)(inolmpgkw) The map gives us , but that is also congugate (structure preserving). Consequently, the 26 possible values for are found by subscribing in 26 possible ways. The model above ignored the right rotor's ring setting (22) and ground setting (12), both of which were known because Rejewski had the daily keys. The ring setting has the effect of counterrotating the drum by 21; the ground setting advances it by 11. Consequently, the rotor rotation is -10, which is also 16. ABCDEFGHIJKLMNOPQRSTUVWXYZ Straight ounkpxvtsrqzcaeflihgybdjwm Shifted gpsquvbyxwortzmcekdafnljih = (agbpcsdqeufvnzhyixjwlrkomt) subscribe P in different ways: (abcdefghijklmnopqrstuvwxyz) (bcdefghijklmnopqrstuvwxyza) * actual rotor wiring (cdefghijklmnopqrstuvwxyzab) ... (zabcdefghijklmnopqrstuvwxy) rotor * ABCDEFGHIJKLMNOPQRSTUVWXYZ bdfhjlcprtxvznyeiwgakmusqo Grill The physical grill was used to determine both the rightmost rotor, its initial position, and the plugboard settings. Bottom sheet Rejewsky observed that is close to the identity permutation (in the early 1930s, only 12 of 26 letters were affected by the plugboard). He moved everything but to the left side of the equations by premultiplying or postmultiplying. The resulting system of equations is: At his point, is unknown, but it is the same for each equation. Rejewski does not know , but he knows it is one of the rotors (I, II, and III), and he knows the wiring for each of those rotors. There were only three rotors and 26 possible initial rotations. Consequently, there are only 84 possible values for . Rejewski can look at each possible value to see if the permutation is consistent. If there were no steckers ( were the identity), then each equation would produce the same . Consequently, he made one bottom sheet for each possible rotor (three sheets). Each bottom sheet consisted of 31 lines (26 + 5 to make six lines contiguous). Each line contained the stepped permutation of a known rotor. For example, a suitable bottom sheet for rotor III is, In the early 1930s, the rotor order was the same for a month or more, so the Poles usually knew which rotor was in the rightmost position and only needed to use one bottom sheet. After 1 November 1936, the rotor order changed every day. 
The Poles could use the clock method to determine the rightmost rotor, so the grill would only need to examine that rotor's bottom sheet. Top sheet For the top sheet, Rejewski wrote the six permutations through . A: abcdefghijklmnopqrstuvwxyz srwivhnfdolkygjtxbapzecqmu (..slit......................) ... F: abcdefghijklmnopqrstuvwxyz wxofkduihzevqscymtnrglabpj (..slit......................) There were six slits so the permutations on the bottom sheet would show through at the proper place. The top sheet would then be slid through all possible positions of rotor , and the cryptanalyst would look for consistency with some unknown but constant permutation . If there is not a consistent , then the next position is tried. Here's what the grill would show for the above permutations at its consistent alignment: A: abcdefghijklmnopqrstuvwxyz ptkxrzqswmcojylagehbvuidnf 17 fpjtvdbzxkmoqsulyacgeiwhnr (visible through slit) B: abcdefghijklmnopqrstuvwxyz ukzmyxrsnqbwdipojghvatlfec 18 oisucaywjlnprtkxzbfdhvgmqe (visible through slit) C: abcdefghijklmnopqrstuvwxyz uymsznqwovtpcfilgxdkajhrbe 19 hrtbzxvikmoqsjwyaecguflpdn (visible through slit) D: abcdefghijklmnopqrstuvwxyz jwvrosuyzatqxpenldfkgcbmhi 20 qsaywuhjlnprivxzdbftekocmg (visible through slit) E: abcdefghijklmnopqrstuvwxyz jxvqltnypaseugzidwkfmcrbho 21 rzxvtgikmoqhuwycaesdjnblfp (visible through slit) F: abcdefghijklmnopqrstuvwxyz nvykzutslxdioamwrqhgfbpjce 22 ywusfhjlnpgtvxbzdrcimakeoq (visible through slit) In permutation , the cryptanalyst knows that (c k) interchange. He can see how rotor III would scramble those letters by looking at the first line (the alphabet in order) and the line visible through the slit. The rotor maps c into j and it maps k into m. If we ignore steckers for the moment, that means permutation would interchange (j m). For to be consistent, it must be the same for all six permutations. Look at the grill near permutation to check if its also interchanges (j m). Through the slit, find the letter j and look in the same column two lines above it to find h. That tells us the rotor, when it has advanced three positions, now maps h into j. Similarly, the advanced rotor will map y into m. Looking at permutation , it interchanges (h y), so the two tests are consistent. Similarly, in permutation , the (d x) interchange and imply that (t h) interchange in . Looking at permutation , (e l) interchange and also imply that (t h) interchange in . All such tests would be consistent if there were no steckers, but the steckers confuse the issue by hiding such matches. If any of the letters involved in the test is steckered, then it will not look like a match. The effect of the rotor permutation can be removed to leave the implied by the permutations. The result (along with the actual value of ) is: -: ABCDEFGHIJKLMNOPQRSTUVWXYZ Q(A): vyzrilptemqfjsugkdnhoaxwbc Q(B): myqvswpontxzaihgcuejrdfkbl Q(C): vcbrpmoulxwifzgeydtshakjqn Q(D): kyirhulecmagjqstndopfzxwbv Q(E): vemgbkdtwufzcxrysoqhjainpl Q(F): wvlrpqsmjizchtuefdgnobayxk Q : vyqrpkstnmfzjiuecdghoaxwbl (this actual Q is unknown to the cryptanalyst) Most of the letters in an implied permutation are incorrect. An exchange in an implied permutation is correct if two letters are not steckered. About one half the letters are steckered, so the expectation is only one fourth of the letters in an implied permutation are correct. Several columns show correlations; column A has three v characters, and (a v) interchange in the actual ; column D has four r characters, and (d r) interchange in . 
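The step of "removing the effect of the rotor permutation" can be written down directly: if the line visible through a slit gives how the stepped rotor scrambles the plain alphabet, the implied Q is simply the permutation conjugated by that line. The following Python sketch (an illustration of the calculation, not a reconstruction of the paper grill) uses the A row and its slit line (labelled 17) as printed above and should reproduce the Q(A) row.

ALPHA = 'abcdefghijklmnopqrstuvwxyz'

# Permutation A from the daily key, and the rotor line visible through slit A
A      = 'ptkxrzqswmcojylagehbvuidnf'
slit_A = 'fpjtvdbzxkmoqsulyacgeiwhnr'

def implied_q(perm, rotor_line):
    """Conjugate perm by the rotor mapping: Q(x) = rotor(perm(rotor^-1(x)))."""
    out = []
    for x in ALPHA:
        pre = ALPHA[rotor_line.index(x)]           # rotor^-1(x)
        mid = perm[ALPHA.index(pre)]               # perm applied
        out.append(rotor_line[ALPHA.index(mid)])   # rotor applied again
    return ''.join(out)

print(implied_q(A, slit_A))
# expected: vyzrilptemqfjsugkdnhoaxwbc  (the Q(A) row above)
# e.g. A interchanges (c k); the rotor maps c->j and k->m, so the implied Q swaps (j m).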
describes the possibility of writing down the six implied s for all 26 possible rotor positions. Rejewski states, "If permutation actually were the identity, then ... for a particular [initial position] we would obtain the same value for all expressions and in this way we would find the setting of drum . Permutation does exist, however, so for no [initial position] will the expression be equal to each other, but among them will be a certain similarity for a particular [initial position], since permutation does not change all the letters." Rejewski states that writing down all the possible "would be too laborious", so he developed the grill (grid) method. "Next, the grid is moved along the paper on which the drum connections are written until it hits upon a position where some similarities show up among the several expression . ... In this way the setting of drum and the changes resulting from permutation are found simultaneously. This process requires considerable concentration since the similarities I mentioned do not always manifest themselves distinctly and can be very easily overlooked." The reference does not describe what techniques were used. Rejewski did state that the grill method required unsteckered pairs of letters. Permutation has the exchanges (ap)(bt)(ck).... If we assume the exchange (ap) is unsteckered, that implies exchanges (fl). The other five permutations can be quickly checked for an unsteckered pair that is consistent with interchanging (fl) — essentially checking column F for other rows with l without computing the entire table. None are found, so (ap) would have at least one stecker so the assumption it is unsteckered is abandoned. The next pair can be guessed as unsteckered. The exchange (bt) implies exchanges (pg); that is consistent with (lw) in , but that guess fails to pan out because t and w are steckered. A: b↔t B: l↔w C: k←t D: x→m E: m→u F: j←x ↓ ↓ ↓ ↓ * ↑ ↑ * ↑ * * ↑ b t l w x t k z z f j k ↓ ↓ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ Q: p↔g p↔g p↔g p↔g p↔g p↔g guessing (b)(t) unsteckered in S leads to the guess (l)(w) unsteckered in S C finds stecker (k x) D finds stecker (z m) E finds stecker (f u) F finds (j) Following those guesses ultimately leads to a contradiction: A: f↔z B: m→d C: p←l D: f→s E: p!x F: ↓ ↓ ↑ * * ↑ ↑ * ↑ ↑ u m z y r l u a r k ↓ ↓ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ Q: e↔q e↔q e↔q e↔q e↔q e↔q exploit (f z) in A leads to (e q) exchange in Q B finds (d y) steckered C finds (p r) steckered D finds (a s) steckered E finds (p x) steckered - but p is already steckered to r! failure The third exchange (ck) implies exchanges (jm); this time permutation with an unsteckered (hy) would be consistent with exchanging (jm). A: c↔k B: C: D: h↔y E: F: ↓ ↓ ↑ ↑ c k i x n j h y u i g u ↓ ↓ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ Q: j↔m j↔m j↔m j↔m j↔m j↔m guessing (c)(y) unsteckered in S leads to the guess (h)(y) unsteckered in S At this point, the guess is that the letters chky are unsteckered. From that guess, all the steckers can be solved for this particular problem. The known (assumed) exchanges in are used to find exchanges in , and those exchanges are used to extend what is known about . Using those unsteckered letters as seeds finds (hy) interchange in and implies (kf) is in ; similarly (cy) interchange in and implies (uo) is in . Examining (uo) in the other permutations finds (tu) is a stecker. 
A: B: C: D: E: h↔y F: ↓ ↓ j a o s i v v s h y w e ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: k↔f k↔f k↔f k↔f k↔f k↔f exploit (hy) in E A: B: C: t←k D: E: F: c↔y * ↑ ↓ ↓ o l d a u k f w m j c y ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: u↔o u↔o u↔o u↔o u↔o u↔o exploit (cy) in F shows (tu) are in S That adds letters tu to the seeds. Those letters were also unknown above, so further information can be gleaned by revisiting: also has (g)(if)(x). A: c↔k B: f→x C: D: h↔y E: t→f F: g←t ↓ ↓ ↑ * ↑ ↑ ↑ * * ↑ c k i x n j h y u i g u ↓ ↓ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ Q: j↔m j↔m j↔m j↔m j↔m j↔m knowing (tu) in S leads to (g)(if) in S then (if) in S can be used to find (x) in S Revisit (kf)(uo) in gives more information: A: B: o←p C: f→n D: n→p E: h↔y F: z→e * ↑ ↑ * ↑ * ↓ ↓ ↑ * j a o s i v v s h y w e ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: k↔f k↔f k↔f k↔f k↔f k↔f exploit (if) in S leads to (nv) in S (nv) in S leads to stecker (ps) (ps) in S leads to (o) (wz) in S leads to (e) A: o→l B: C: t←k D: i→z E: F: c↔y ↑ * * ↑ ↑ * ↓ ↓ o l d a u k f w m j c y ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: u↔o u↔o u↔o u↔o u↔o u↔o exploit (if) in S leads to stecker (wz) in S (o) in S leads to (l) in S Another revisit fully exploits (jm): A: c↔k B: f x C: v→j D: h↔y E: t→f F: g←t ↓ ↓ ↑ * ↑ * ↑ ↑ ↑ * * ↑ c k i x n j h y u i g u ↓ ↓ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ Q: j↔m j↔m j↔m j↔m j↔m j↔m knowing (nv) in S leads to (j) in S That addition fills out even more: A: j→m B: o←p C: f→n D: n→p E: h↔y F: z→e ↑ * * ↑ ↑ * ↑ * ↓ ↓ ↑ * j a o s i v v s h y w e ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: k↔f k↔f k↔f k↔f k↔f k↔f exploit (j) in S leads to (am) in S A: o→l B: d←m C: t←k D: i→z E: a↔j F: c↔y ↑ * * ↑ * ↑ ↑ * ↑ ↑ ↓ ↓ o l d a u k f w m j c y ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↑ ↓ ↓ ↑ ↑ Q: u↔o u↔o u↔o u↔o u↔o u↔o exploit (j)(am) in S leads to (d) in S Q = ( (fk)(jm)(ou)... ) missing 10 pairings S = ( (am)(c)(d)(fi)(g)(h)(j)(k)(l)(nv)(o)(ps)(tu)(wz)(x)(y)... ) 22 characters so far: missing beqr have found all 6 steckers, so (b)(e)(q)(r) All of is now known after examining 3 exchanges in . The rest of can be found easily. When a match is found, then the cryptanalyst would learn both the initial rotation of and the plugboard (Stecker) permutation . Recovering absolute rotor positions for the message key At this point, the rotor positions for the permutation is not known. That is, the initial positions (and possibly the order) of rotors and are not known. The Poles applied brute force by trying all possible initial positions () of the two rotors. With three rotors, knowing which rotor was at position meant there were only two possible ways to load the other two rotors. Later, the Poles developed a catalog of all the permutations. The catalog was not large: there were six possible combinations of two left rotors with initial settings, so the catalog had 4,056 entries. After using the grill, the Poles would look up in the catalog to learn the order and initial positions of the other two rotors. Initially, the Germans changed the rotor order infrequently, so the Poles would often know the rotor order before they began working. The rotor order changed every quarter until 1 February 1936. Then it changed every month until 1 November 1936, when it was changed daily. Recovering the ring setting The cryptanalyst now knew the plugboard, the rotor order, and the absolute setting of the rotors for the doubled key, but he did not know the ring setting. He also knew what the message key setting should be, but that setting was useless without knowing the ring setting. 
The ring setting could be anything, and that meant the Poles did not know how to position the rotors for the message body. All the work up to this point had focussed on exploiting the doubled key. To determine the ring setting, the attention now shifted to the actual message. Here, the Germans had made another mistake. Each message usually started with the text "ANX", which was German an meaning "to:" with the "X" meaning space. The Poles applied brute force here, too. They would go through the possible settings one by one until they found settings that produced "ANX". Once found, the cryptanalyst would use the absolute setting of the rotors to determine the ring setting. The entire daily key was thus recovered. Later, the Poles refined the brute force search technique. By examining some messages, they could determine the position of the rightmost rotor; consequently, only 676 rotor positions would have to be tried. Rejewski no longer remembers how this trick worked. Decline The grill method is described by Marian Rejewski as being "manual and tedious" and, like the later cryptologic bomb, as being "based... on the fact that the plug connections [in the Enigma's commutator, or "plugboard"] did not change all the letters." Unlike the bomb, however, "the grill method required unchanged pairs of letters [rather than] only unchanged letters." Initially, the plugboard only swapped six pairs of letters. That left more than half of the alphabet unaffected by the plugboard permutation S. The number of steckers changed on 1 August 1936; after that, anywhere from five to eight pairs of letters were swapped. The extra swapped characters reduced the effectiveness of the grid method, so the Poles started looking for other methods. The result was the cyclometer and corresponding card catalog; that method was immune to steckers. The grill method found application as late as December 1938 in working out the wiring in two Enigma rotors newly introduced by the Germans. (This was made possible by the fact that a Sicherheitsdienst net, while it had introduced the new drums IV and V, continued using the old system for enciphering the individual message keys.) On 15 September 1938, most German nets stopped encrypting the doubled key with a common setting (the ground setting). The Poles had been able to take advantage of all messages in a net using the same machine settings to encrypt the doubled key. Now most nets stopped doing that; instead, the operator would choose his own ground setting and send it in the clear to the recipient. This change frustrated the grill method and the cyclometer card catalog. One net, the Sicherheitsdienst (SD) net, continued to use a common ground setting, and that net was used to reverse engineer the newly introduced rotors (IV and V). The SD net traffic was doubly encoded, so the ANX method would not work. The grill method would sometimes fail after the Germans increased the number of plugboard connections to ten on 1 January 1939. When the SD net switched to the new message-key protocol on 1 July 1939, the grill method (and the cyclometer method) were no longer useful. Here's an example of the new message procedure for a message on 21 September 1938.
2109 -1750 - 3 TLE - FRX FRX - 1TL -172= HCALN UQKRQ AXPWT WUQTZ KFXZO MJFOY RHYZW VBXYS IWMMV WBLEB DMWUW BTVHM RFLKS DCCEX IYPAH RMPZI OVBBR VLNHZ UPOSY EIPWJ TUGYO SLAOX RHKVC HQOSV DTRBP DJEUK SBBXH TYGVH GFICA CVGUV OQFAQ WBKXZ JSQJF ZPEVJ RO - The "3 TLE" (German Teile, parts) says it is a 3-part message; the "1TL" (German Teil, part) says this is the first part; the "172" says there are 172 characters in the message (including the message key). For this message, the ground setting "FRX" is transmitted twice in the clear; the ground setting would/should be different for every message on net. Consequently, the Poles could not find the needed sixty message keys encrypted under the same ground setting. Without the same-key message volume, they could not determine the characteristic, so they could not determine the permutations A B C D E F or use the grill. For this message, the daily settings (rotor order, plugboard, and ring settings) were used with "FRX" to decrypt the first six characters ("HCALN U") to obtain the doubled message key ("AGIAGI"). To decrypt these messages, the Poles used other techniques to exploit the doubled message key. See also Permutation matrix Notes References . of of of External links Polish Contributions to Computing, http://chc60.fgcu.edu/EN/HistoryDetail.aspx?c=1 Also https://www.iacr.org/archive/eurocrypt2003/26560106/26560106.doc European Axis Signal Intelligence in World War II as Revealed by "TICOM" Investigations and by other Prisoner of War Interrogations and Captured Material, Principally German: Volume 2 — Notes on German High Level Cryptography and Cryptanalysis; see page 76: Swiss changed rotor wirings every 3 months, but Germans figured out the wirings because some messages were sent twice during the tri-monthly changeover. Germans were told new Croat rotor wirings by the company that manufactured the rotors. Bauer p 419 History of cryptography Science and technology in Poland Cipher Bureau (Poland)
41611565
https://en.wikipedia.org/wiki/EWS-UX
EWS-UX
EWS-UX is a Unix operating system used by NEC Corporation for its EWS-4800 line of engineering workstations. EWS-UX is based largely on versions of Unix System V supplemented with BSD software. It was widely used from the late 1980s to around 2000. Overview EWS-UX and the EWS-4800 line of workstations were widely used for CAD / CAM work. Early versions of EWS-UX run on Motorola 68000 series CISC processors, while later versions run on MIPS RISC processors. NEC attempted to introduce binary compatibility between Unix versions used by DEC, Sony (NEWS-OS), and Sumitomo's Unix (SEIUX). However, DEC dropped out of the agreement to pursue the DEC Alpha architecture. EWS-UX and UP-UX (NEC's Unix server OS) became integrated and merged into UX/4800. Versions EWS-UX / V: Based on Unix SVR2. It runs on the EWS4800 series equipped with the MC68020, MC68030, and MC68040 processors. EWS-UX / V (Rel4.0): Based on Unix SVR4. It runs on the EWS4800 series equipped with R3000 (VR3600) and R4000 processors. EWS-UX / V (Rel4.2): Based on Unix SVR4.2. It supports processors such as the R4400, VR4200, and R4600. EWS-UX / V (Rel4.2MP): Based on Unix SVR4.2MP. It supports multi-processor systems using the R4400MC. It is mostly similar to UP-UX, NEC's Unix operating system for servers. See also SUPER-UX External links MIPS operating systems NEC software UNIX System V
17054905
https://en.wikipedia.org/wiki/Orchestra%20Control%20Engine
Orchestra Control Engine
Orchestra Control Engine is a suite of software components (based on Linux/RTAI) used for the planning, development and deployment of real-time control applications for industrial machines and robots. Orchestra Control Engine has been developed by Sintesi SpA in partnership with the Italian National Research Council and in collaboration with international industrial companies in the field of robotics and production systems. Sintesi SpA is a company that develops mechatronic components and solutions. It has specialized in measurement, control and design technologies for robotics and production systems. Main features Orchestra Control Engine is flexible because it can be customized, and this is done visually. The solutions created are open (based on an open-source framework) and extensible. Modular components of the software allow a user to develop, debug and test control applications. For example, previously developed algorithms can be divided into functional units and reused indefinitely. All the units work together. The software can be distributed among various remote hardware devices which may be hundreds of meters apart. It is also scalable, in that it selects the hardware which provides the best cost and performance for a particular operation. The system's parameters can be quickly reconfigured both online and at run time. Suite components Linux/RTAI provides Orchestra Control Engine's hard real-time behaviour. Its open-source nature allows changes to fit the users' requirements. Components of Orchestra Control Engine that do not require hard real-time behaviour can be used on non-Linux platforms such as Microsoft Windows or Macintosh. Orchestra Core A hard real-time multithreaded engine that operates in multicore/multiprocessor architectures. Within the scheme, modules can be filled in with more or less complex algorithms which control the process. The run-time engine loads the modules. The user can adapt the modules to the topology. For a complex topology, multiple modules can be used or parallel loops can be implemented. Orchestra Run Time Manager The run-time manager controls how the program is executed; decides priorities within the operation; and manages multi-threaded and multiprocessor operation. It is made up of templates that define thread types according to the mode of execution, and of a part that manages the POUs (Program Organization Units). Orchestra Logic Programming The logic programming of Orchestra Control Engine supports the five programming languages defined by the IEC 61131 standard. It also supports the C/C++ language. Orchestra Path Programming The path programming of Orchestra Control Engine assists in the writing of motion and machining programs. Piece manufacturing programs (part programs) can be edited according to the international ISO-DIN 60025 standard and the American EIA RS274 D standard. It also handles the interpretation of part programs, which in turn provides the input to a motion control loop. Orchestra Designer The designer is a Java IDE. It assists development of motion control applications for different environments. This involves completing new modules, using code templates, allowing the adding and shaping of new blocks, and testing the modules both independently and in a control scheme. It also automatically provides XML configuration files for each module and for the control loop.
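The part programs mentioned under Orchestra Path Programming are ordinary RS-274-style G-code text. As a rough, generic illustration of what an interpreter front-end has to do before handing commands to a motion loop or to the PLC logic (this is not Sintesi's actual parser, and the sample line is hypothetical), the following Python splits a block into letter-number words and checks whether it contains a motion command:

import re

# One hypothetical line of a part program (RS-274-style G-code)
LINE = "N10 G01 X120.5 Y-43.0 F300"

WORD = re.compile(r"([A-Za-z])\s*([+-]?\d+\.?\d*)")

def parse_line(line):
    """Split a G-code block into (letter, value) words."""
    return [(letter.upper(), float(value)) for letter, value in WORD.findall(line)]

MOTION_G_CODES = {0.0, 1.0, 2.0, 3.0}   # rapid, linear, CW arc, CCW arc

def is_motion(words):
    """True if the block contains a motion G-word (G0/G1/G2/G3)."""
    return any(letter == 'G' and value in MOTION_G_CODES for letter, value in words)

words = parse_line(LINE)
print(words)            # [('N', 10.0), ('G', 1.0), ('X', 120.5), ('Y', -43.0), ('F', 300.0)]
print(is_motion(words)) # True -> would be handed to the motion planner rather than the PLC logic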
Orchestra Builder The builder is a software tool that allows Simulink models to be automatically converted into Orchestra Core compatible modules. It does this by making a definition for every parameter of the Simulink model. It can generate a function which initializes the loading of a newly developed control system, and it can generate the step function which holds the code for the logic of each module. Orchestra HMI Orchestra HMI is a Java application (and therefore cross-platform) that discovers and interacts with the different parts of a control system. Orchestra HMI has a graphic interface (including a touch screen) which can run on any common PC. It can be customised to suit the user and provides user authentication. Orchestra HMI allows the user to configure the numerical control, plan the production island, and command processes such as starting a motion program. The user can monitor and edit processes. Orchestra HMI provides the visualization of signals coming from an Orchestra Core or an Orchestra Run Time Manager by means of graphic controls (indicators, 2D plots, LCD displays) and the 3D visualization of machines and anthropomorphous manipulators. Orchestra Library The library contains sets of modules, information from sensors, and interfaces with external entities such as machines, robots, sensors and DAQ boards. Solutions Orchestra Control Engine is a suite of programs. Using the various components in combination allows for flexibility. Orchestra Motion Control Framework The motion control framework allows users to develop motion control applications by integrating the best modules for their purpose. The modules may be ones already available or ones the user develops using the Orchestra Designer and Builder facilities. The modules can be run so that the process has multiple threads. Parallelisms are identified and thus algorithms are refined. The modules can be "debugged" as they are completed if specific verifications are programmed. Alternatively, the modules can be completed in "release" mode if no special verifications are required. The modules can be completed with any number of inputs, parameters, states and vectorial outputs in double-precision floating point, as well as states of any other type. These characteristics are codified through XML files. Orchestra MultiPLC Orchestra MultiPLC (multi programmable logic controller) is composed of Orchestra Run Time Manager, Orchestra Logic Programming and Orchestra HMI. It allows the execution of a motion control application as one or more programs or functional blocks which may be reused. The controller's open schema accepts and translates XML files. The functional blocks can be prioritised within a series or programmed to operate periodically. New tasks may be added to the application. Orchestra Full for Numerical Control Orchestra Full for Numerical Control consists of Orchestra Motion Control Framework, Orchestra MultiPLC, and some other specific components: OrchestraGCode interprets the G-code program received from the HMI: if the G-code instruction is one of motion, it is sent to the MotionSupervisor; if not, OrchestraGCode passes the instruction to the appropriate software. MotionSupervisor acts as an interface between the Motion Control Loop, the OrchestraGCode, the ControllerSupervisor and the Logical Control Loop. Using information from the ControllerSupervisor, it selects either automatic or jog mode. In jog mode, MotionSupervisor provides the axes to move, the direction and the feed rates.
In automatic and in semiautomatic mode, instructions on movement come from the G-code interpreter. The MotionSupervisor also collects error messages coming from the Motion Control Loop and sends them to the ControllerSupervisor. ControllerSupervisor centralizes all the information related to Orchestra Control Engine. It receives information from the HMI, the teach pendant and other software components. This information is routed to the other components, although direct channels of communication between specific components are also provided for specific kinds of information interchange. ControllerSupervisor sends error messages to Orchestra HMI. Local errors are handled in the software components in which they take place. Errors beyond the local level are handled by the ControllerSupervisor, which initiates a safety procedure and/or shows the error to the user. Orchestra for Open Robot Controllers Orchestra for Open Robot Controllers allows the feasibility of innovative industrial robot algorithms to be tested. It can integrate advanced sensors and functions. Its interface with a personal computer is via Orchestra Core. Its function is generally the realization of movement rather than the control logic and the generation of trajectories. Release history See also RTAI Numerical control Programmable Logic Controller G-code External links Orchestra Control Engine Official Website Sintesi SpA Website Orchestra Control Engine at Icra '07 Video Orchestra Control Engine at Feimafe '07 Video Italian National Research Council RTAI Official Website Real-time computing Control engineering Automation
1077225
https://en.wikipedia.org/wiki/Internet%20Protocol%20television
Internet Protocol television
Internet Protocol television (IPTV) is the delivery of television content over Internet Protocol (IP) networks. This is in contrast to delivery through traditional terrestrial, satellite, and cable television formats. Unlike downloaded media, IPTV offers the ability to stream the source media continuously. As a result, a client media player can begin playing the content (such as a TV channel) almost immediately. This is known as streaming media. Although IPTV uses the Internet protocol it is not limited to television streamed from the Internet (Internet television). IPTV is widely deployed in subscriber-based telecommunications networks with high-speed access channels into end-user premises via set-top boxes or other customer-premises equipment. IPTV is also used for media delivery around corporate and private networks. IPTV in the telecommunications arena is notable for its ongoing standardisation process (e.g., European Telecommunications Standards Institute). IPTV services may be classified into live television and live media, with or without related interactivity; time shifting of media, e.g., catch-up TV (replays a TV show that was broadcast hours or days ago), start-over TV (replays the current TV show from its beginning); and video on demand (VOD) which involves browsing and viewing items of a media catalogue. Definition Historically, many different definitions of IPTV have appeared, including elementary streams over IP networks, MPEG transport streams over IP networks and a number of proprietary systems. One official definition approved by the International Telecommunication Union focus group on IPTV (ITU-T FG IPTV) is: IPTV is defined as multimedia services such as television/video/audio/text/graphics/data delivered over IP based networks managed to provide the required level of quality of service and experience, security, interactivity and reliability. Another definition of IPTV, relating to the telecommunications industry, is the one given by Alliance for Telecommunications Industry Solutions (ATIS) IPTV Exploratory Group in 2005: IPTV is defined as the secure and reliable delivery to subscribers of entertainment video and related services. These services may include, for example, Live TV, Video On Demand (VOD) and Interactive TV (iTV). These services are delivered across an access agnostic, packet switched network that employs the IP protocol to transport the audio, video and control signals. In contrast to video over the public Internet, with IPTV deployments, network security and performance are tightly managed to ensure a superior entertainment experience, resulting in a compelling business environment for content providers, advertisers and customers alike. History Up until the early 1990s, it was not thought possible that a television programme could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a video-on-demand (VOD) television service of acceptable quality, as the required bandwidth of a digital television signal was around 200Mbps, which was 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. VOD services were only made possible as a result of two major technological developments: discrete cosine transform (DCT) video compression and asymmetric digital subscriber line (ADSL) data transmission. 
DCT is a lossy compression technique that was first proposed by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1991 onwards. Motion-compensated DCT video compression significantly reduced the amount of bandwidth required for a television signal, while at the same time ADSL increased the bandwidth of data that could be sent over a copper telephone wire. ADSL increased the bandwidth of a telephone line from around 100kbps to 2Mbps, while DCT compression reduced the required bandwidth of a digital television signal from around 200Mbps down to about 2Mbps. The combination of DCT and ADSL technologies made it possible to practically implement VOD services at around 2Mbps bandwidth in the 1990s. The term IPTV first appeared in 1995 with the founding of Precept Software by Judith Estrin and Bill Carrico. Precept developed an Internet video product named IP/TV. IP/TV was an Mbone compatible Windows and Unix-based application that transmitted single and multi-source audio and video traffic, ranging from low to DVD quality, using both unicast and IP multicast Real-time Transport Protocol (RTP) and Real time control protocol (RTCP). The software was written primarily by Steve Casner, Karl Auerbach, and Cha Chee Kuan. Precept was acquired by Cisco Systems in 1998. Cisco retains the IP/TV trademark. Telecommunications company US West (later Qwest) launched an IPTV service called TeleChoice in Phoenix, Arizona in 1998 using VDSL technology, becoming the first company in the United States to provide digital television over telephone lines. The service was shut down in 2008. Internet radio company AudioNet started the first continuous live webcasts with content from WFAA-TV in January 1998 and KCTU-LP on 10 January 1998. Kingston Communications, a regional telecommunications operator in the UK, launched Kingston Interactive Television (KIT), an IPTV over digital subscriber line (DSL) service in September 1999. The operator added additional VoD service in October 2001 with Yes TV, a VoD content provider. Kingston was one of the first companies in the world to introduce IPTV and IP VoD over ADSL as a commercial service. The service became the reference for various changes to UK Government regulations and policy on IPTV. In 2006, the KIT service was discontinued, subscribers having declined from a peak of 10,000 to 4,000. In 1999, NBTel (now known as Bell Aliant) was the first to commercially deploy Internet protocol television over DSL in Canada using the Alcatel 7350 DSLAM and middleware created by iMagic TV (owned by NBTel's parent company Bruncor). The service was marketed under the brand VibeVision in New Brunswick, and later expanded into Nova Scotia in early 2000 after the formation of Aliant. iMagic TV was later sold to Alcatel. In 2002, Sasktel was the second in Canada to commercially deploy IPTV over DSL, using the Lucent Stinger DSL platform. In 2005, SureWest Communications was the first North American company to offer high-definition television (HDTV) channels over an IPTV service. In 2005, Bredbandsbolaget launched its IPTV service as the first service provider in Sweden. As of January 2009, they are not the biggest provider any longer; TeliaSonera, who launched their service later, now has more customers. By 2010, iiNet and Telstra launched IPTV services in conjunction to internet plans. 
In 2008, Pakistan Telecommunication Company Limited (PTCL) launched IPTV under the brand name of PTCL Smart TV in Pakistan. This service is available in 150 major cities of the country offering 140 live channels. In 2010, CenturyLink – after acquiring Embarq (2009) and Qwest (2010) – entered five U.S. markets with an IPTV service called Prism. This was after successful test marketing in Florida. In Brazil, since at least 2012, Vivo has been offering the service Vivo TV Fibra in 200+ cities where it has FTTH coverage (4Q 2020 data) . Since at least 2018, Oi has also been offering IPTV under its FTTH service "Oi Fibra". Also, several regional FTTH providers also offer IPTV along with FTTH internet services. In 2016, Korean Central Television (KCTV) introduced the set-top box called Manbang, reportedly providing video-on-demand services in North Korea via quasi-internet protocol television (IPTV). Manbang allows viewers to watch five different TV channels in real-time, and find political information regarding the Supreme Leader and Juche ideology, and read articles from state-run news organizations. Markets Residential The global IPTV market was expected to grow from 28 million subscribers at US$12 billion revenue in 2009 to 83 million and US$38 billion in 2013. Europe and Asia are the leading territories in terms of the overall number of subscribers. But in terms of service revenues, Europe and North America generate a larger share of global revenue, due to very low average revenue per user (ARPU) in China and India, the fastest growing (and ultimately, the biggest markets) is Asia. Services also launched in Bosnia and Herzegovina, Bulgaria, Pakistan, Canada, Croatia, Lithuania, Moldova, Montenegro, Morocco, North Macedonia, Poland, Mongolia, Romania, Serbia, Slovenia, the Netherlands, Georgia, Greece, Denmark, Finland, Estonia, Czech Republic, Slovakia, Hungary, Norway, Sweden, Iceland, Latvia, Turkey, Colombia, Chile and Uzbekistan. The United Kingdom launched IPTV early and after a slow initial growth, in February 2009 BT announced that it had reached 398,000 subscribers to its BT Vision service. Claro has launched their own IPTV service called "Claro TV". This service is available in several countries in which they operate, such as Dominican Republic, El Salvador, Guatemala, Honduras, Nicaragua. IPTV is just beginning to grow in Central and Eastern Europe and Latin America, and now it is growing in South Asian countries such as Sri Lanka, Nepal Pakistan and India. but significant plans exist in countries such as Russia. Kazakhstan introduced its own IPTV services by the national provider Kazakhtelecom JSC and content integrator Alacast under the "iD TV" brand in two major cities Astana and Almaty in 2009 and is about to go nationwide starting 2010. Australian ISP iiNet launched Australia's first IPTV with fetchtv. In India, IPTV was launched by MTNL, BSNL and Jio in New Delhi, Mumbai and Punjab. APSFL is another IPTV provider in the state of Andhra Pradesh. In Nepal, IPTV was first launched by NEW IT VENTURE CORPORATION called Net TV Nepal, the service can be accessed through its app, web app and Set top boxes provided by local ISPs, another IPTV was started by Nepal Telecom called WOW Time in 2016 which can be accessed through its app. In Sri Lanka, IPTV was launched by Sri Lanka Telecom (operated by SLT VisionCom) in 2008, under the brand name of PEO TV. This service is available in whole country. Dialog TV has been available through the service since 2018. 
In Pakistan, IPTV was launched by PTCL in 2008, under the brand name of PTCL Smart TV. This service is available in 150 major cities of the country. In the Philippines, PLDT offers Cignal IPTV services as an add-on in certain ADSL and fiber optic plans. In Malaysia, various companies have attempted to launch IPTV services since 2005. Failed PayTV provider MiTV attempted to use an IPTV-over-UHF service but the service failed to take off. HyppTV was supposed to use an IPTV-based system, but not true IPTV as it does not provide a set-top box and requires users to view channels using a computer. True IPTV providers available in the country at the moment are Fine TV and DETV. In Q2 2010, Telekom Malaysia launched IPTV services through their fibre to the home product Unifi in select areas. In April 2010, Astro began testing IPTV services on TIME dotCom Berhad's high-speed fibre to the home optical fibre network. In December 2010, Astro began trials with customers in high-rise condominium buildings around the Mont Kiara area. In April 2011, Astro commercially launched its IPTV services under the tag line "The One and Only Line You'll Ever Need", a triple play offering in conjunction with TIME dotCom Berhad that provides all the Astro programming via IPTV, together with voice telephone services and broadband Internet access all through the same fibre optic connection into the customer's home. In 2020, Astro launched "Plug-and-Play", which uses Unicast technology for streaming TV. In Turkey, TTNET launched IPTV services under the name IPtivibu in 2010. It was available in pilot areas in the cities of Istanbul, İzmir and Ankara. As of 2011, IPTV service is launched as a large-scale commercial service and widely available across the country under the trademark "Tivibu EV". Superonline plans to provide IPTV under the different name "WebTV" in 2011. Türk Telekom started building the fibre optic substructure for IPTV in late 2007. Commercial and corporate IPTV has been widely used since around 2002 to distribute television and audio-visual (AV) media around businesses and commercial sites, whether as live TV channels or Video on Demand (VOD). Examples of types of commercial users include airports, schools, offices, hotels, and sports stadiums, to name just a few. Architecture Elements IPTV head-end: where live TV channels and AV sources are encoded, encrypted and delivered in the form of IP multicast streams. Video on Demand (VOD) platform: where on-demand video assets are stored and served as IP unicast streams when a user makes a request. The VOD platform may sometimes be located with, and considered part of, the IPTV headend. Interactive portal: allows the user to navigate within the different IPTV services, such as the VOD catalogue. Delivery network: the packet-switched network that carries IP packets (unicast and multicast). Endpoints: User equipment that can request, decode and deliver IPTV streams for display to the user. This can include computers and mobile devices as well as set-top boxes. Home TV gateway: the piece of equipment at a residential IPTV user's home that terminates the access link from the delivery network. User set-top box: the piece of endpoint equipment that decodes and decrypts TV and VOD streams for display on the TV screen. Architecture of a video server network Depending on the network architecture of the service provider, there are two main types of video server architecture that can be considered for IPTV deployment: centralised and distributed. 
The centralised architecture model is a relatively simple and easy to manage solution. Because all media content is stored in centralised servers, it does not require a comprehensive content distribution system. Centralised architecture is generally good for a network that provides relatively small VOD service deployment, has adequate core and edge bandwidth or has an efficient content delivery network (CDN). A distributed architecture has bandwidth usage advantages and inherent system management features that are essential for managing a larger server network. Distributed architecture requires intelligent and sophisticated content distribution technologies to augment effective delivery of multimedia contents over the service provider's network. Residential IPTV home networks In many cases, the residential gateway that provides connectivity with the Internet access network is not located close to the IPTV set-top box. This scenario becomes very common as service providers start to offer service packages with multiple set-top boxes per subscriber. Networking technologies that take advantage of existing home wiring (such as power lines, phone lines or coaxial cables) or of wireless hardware have become common solutions for this problem, although fragmentation in the wired home networking market has limited somewhat the growth in this market. In December 2008, ITU-T adopted Recommendation G.hn (also known as G.9960), which is a next-generation home networking standard that specifies a common PHY/MAC that can operate over any home wiring (power lines, phone lines or coaxial cables). Groups such as the Multimedia over Coax Alliance, HomePlug Powerline Alliance, Home Phoneline Networking Alliance, and Quasar Alliance (Plastic Optical Fibre) each advocate their own technologies. Telecomms IMS architecture There is a growing standardisation effort on the use of the 3GPP IP Multimedia Subsystem (IMS) as an architecture for supporting IPTV services in telecommunications carrier networks. Both ITU-T and ETSI are working on so-called "IMS-based IPTV" standards (see e.g. ETSI TS 182 027). Carriers will be able to offer both voice and IPTV services over the same core infrastructure and the implementation of services combining conventional TV services with telephony features (e.g. caller ID on the TV screen) will become straightforward. Protocols IPTV supports both live TV as well as stored video-on-demand. Playback requires a device connected to either a fixed or wireless IP network in the form of a standalone personal computer, smartphone, touch screen tablet, game console, connected TV or set-top box. Content is compressed by Video and audio codecs and then encapsulated in MPEG transport stream or Real-time Transport Protocol or other packets. IP multicasting allows for live data to be sent to multiple receivers using a single multicast group address. In standards-based IPTV systems, the primary underlying protocols used are: Service-provider-based streaming: IGMP for subscribing to a live multicast stream (TV channel) and for changing from one live multicast stream to another (TV channel change). IP multicast operates within LANs (including VLANs) and across WANs also. IP multicast is usually routed in the network core by Protocol Independent Multicast (PIM), setting up correct distribution of multicast streams (TV channels) from their source all the way to the customers who wants to view them, duplicating received packets as needed. On-demand content uses a negotiated unicast connection. 
Real-time Transport Protocol (RTP) over User Datagram Protocol (UDP) or the lower overhead H.222 transport stream over Transmission Control Protocol (TCP) are generally the preferred methods of encapsulation. Web-based unicast only live and VoD streaming: Adobe Flash Player prefers RTMP over TCP with setup and control via either AMF or XML or JSON transactions. Apple iOS uses HLS adaptive bitrate streaming over HTTP with setup and control via an embedded M3U playlist file. Microsoft Silverlight uses smooth streaming (adaptive bitrate streaming) over HTTP. Web-based multicast live and unicast VoD streaming: The Internet Engineering Task Force (IETF) recommends RTP over UDP or TCP transports with setup and control using RTSP over TCP. Connected TVs, game consoles, set-top boxes and network personal video recorders: Local network content uses UPnP AV for unicast via HTTP over TCP or for multicast live RTP over UDP. Web-based content is provided through either inline Web plug-ins or a television broadcast-based application that uses a middleware language such as MHEG-5 that triggers an event such as loading an inline Web browser using an Adobe Flash Player plug-in. Local IPTV, as used by businesses for audio visual AV distribution on their company networks is typically based on a mixture of: Conventional TV reception equipment and IPTV encoders TV gateways that receive live Digital Video Broadcasting (DVB) MPEG transport streams (channels) from terrestrial aerials, satellite dishes, or cable feeds and convert them into IP streams Via satellite Although IPTV and conventional satellite TV distribution have been seen as complementary technologies, they are likely to be increasingly used together in hybrid IPTV networks. IPTV is largely neutral to the transmission medium, and IP traffic is already routinely carried by satellite for Internet backbone trunking and corporate VSAT networks. The copper twisted pair cabling that forms the last mile of the telephone and broadband network in many countries is not able to provide a sizeable proportion of the population with an IPTV service that matches even existing terrestrial or satellite digital TV distribution. For a competitive multi-channel TV service, a connection speed of 20 Mbit/s is likely to be required, but unavailable to most potential customers. The increasing popularity of high-definition television increases connection speed requirements or limits IPTV service quality and connection eligibility even further. However, satellites are capable of delivering in excess of 100 Gbit/s via multi-spot beam technologies, making satellite a clear emerging technology for implementing IPTV networks. Satellite distribution can be included in an IPTV network architecture in several ways. The simplest to implement is an IPTV-direct to home (DTH) architecture, in which hybrid DVB-broadband set-top boxes in subscriber homes integrate satellite and IP reception to give additional bandwidth with return channel capabilities. In such a system, many live TV channels may be multicast via satellite and supplemented with stored video-on-demand transmission via the broadband connection. Arqiva’s Satellite Media Solutions Division suggests "IPTV works best in a hybrid format. For example, you would use broadband to receive some content and satellite to receive other, such as live channels". Hybrid IPTV Hybrid IPTV refers to the combination of traditional broadcast TV services and video delivered over either managed IP networks or the public Internet. 
It is an increasing trend in both the consumer and pay TV markets. The growth of Hybrid IPTV is driven by two major factors. Since the emergence of online video aggregation sites like YouTube and Vimeo in the mid-2000s, traditional pay TV operators have come under increasing pressure to provide their subscribers with a means of viewing Internet-based video on their televisions. At the same time, specialist IP-based operators have looked for ways to add analogue and digital terrestrial services to their offerings, without adding either cost or complexity to their transmission operations. Bandwidth is a valuable asset for operators, so many have looked for alternative ways to deliver these new services without investing in additional network infrastructure. A hybrid set-top box allows content from a range of sources, including terrestrial broadcast, satellite, and cable, to be brought together with video delivered over the Internet via an Ethernet connection on the device. This enables television viewers to access a greater variety of content on their TV sets, without the need for a separate box for each service. Hybrid IPTV set-top boxes may also enable users to access a range of advanced interactive services, such as VOD and catch-up TV, as well as Internet applications, including video telephony, surveillance, gaming, shopping and e-government, accessed via a television set. From a pay-TV operator's perspective, a hybrid IPTV set-top box gives them greater long-term flexibility to deploy new services and applications as and when consumers require, most often without the need to upgrade equipment or for a technician to visit and reconfigure or swap out the device. This reduces the cost of launching new services, increases speed to market and limits disruption for consumers. The Hybrid Broadcast Broadband TV (HbbTV) consortium of industry companies promoted the establishment of an open European standard for hybrid set-top boxes for the reception of broadcast and broadband digital TV and multimedia applications with a single user interface. These trends led to the development of Hybrid Broadcast Broadband TV set-top boxes that included both a broadcast tuner and an Internet connection – usually via an Ethernet port. The first commercially available hybrid IPTV set-top box was developed by Advanced Digital Broadcast, a developer of digital television hardware and software, in 2005. The platform was developed for the Spanish pay TV operator Telefonica, and used as part of its Movistar TV service, launched to subscribers at the end of 2005. An alternative approach is the IPTV version of the Headend in the Sky cable TV solution. Here, multiple TV channels are distributed via satellite to the ISP or IPTV provider's point of presence (POP) for IP-encapsulated distribution to individual subscribers as required by each subscriber. This can provide a huge selection of channels to subscribers without overburdening the incoming Internet connection to the POP, and enables an IPTV service to be offered to small or remote operators outside the reach of a terrestrial high-speed WAN connection. An example is a network that combines fibre and satellite distribution of 95 channels to Latin America and the Caribbean via an SES New Skies satellite, operated by IPTV Americas. Advantages The Internet protocol-based platform offers significant advantages, including the ability to integrate television with other IP-based services like high-speed Internet access and VoIP.
A switched IP network also allows for the delivery of significantly more content and functionality. In a typical TV or satellite network, using broadcast video technology, all the content constantly flows downstream to each customer, and the customer switches the content at the set-top box. The customer can select from as many choices as the telecomms, cable or satellite company can stuff into the pipe flowing into the home. A switched IP network works differently. Content remains in the network, and only the content the customer selects is sent into the customer's home. That frees up bandwidth, and the customer's choice is less restricted by the size of the pipe into the home. Interactivity An IP-based platform also allows significant opportunities to make the TV viewing experience more interactive and personalised. The provider may, for example, include an interactive programme guide that allows viewers to search for content by title or actor's name, or a picture-in-picture functionality that allows them to channel surf without leaving the programme they're watching. Viewers may be able to look up a player's stats while watching a sports game or control the camera angle. They also may be able to access photos or music from their PC on their television, use a wireless phone to schedule a recording of their favourite show, or even adjust parental controls so their child can watch a documentary for a school report, while they're away from home. A feedback channel from the viewer to the provider is required for this interactivity. Terrestrial, satellite, and some cable networks for television do not feature a feedback channel and thus don't allow interactivity. However, interactivity with those networks can be possible by combining TV networks with data networks such as the Internet or a mobile communication network. Video-on-demand IPTV technology is bringing video on demand (VoD) to television, which permits a customer to browse an online programme or film catalogue, to watch trailers and to then select a selected recording. The playout of the selected item starts nearly instantaneously on the customer's TV or PC. Technically, when the customer selects the movie, a point-to-point unicast connection is set up between the customer's decoder (set-top box or PC) and the delivering streaming server. The signalling for the trick play functionality (pause, slow-motion, wind/rewind etc.) is assured by RTSP (Real Time Streaming Protocol). The most common codecs used for VoD are MPEG-2, MPEG-4 and VC-1. In an attempt to avoid content piracy, the VoD content is usually encrypted. Whilst encryption of satellite and cable TV broadcasts is an old practice, with IPTV technology it can effectively be thought of as a form of Digital rights management. A film that is chosen, for example, may be playable for 24 hours following payment, after which time it becomes unavailable. IPTV-based converged services Another advantage is the opportunity for integration and convergence. This opportunity is amplified when using IMS-based solutions. Converged services implies interaction of existing services in a seamless manner to create new value added services. One example is on-screen Caller ID, getting Caller ID on a TV, and the ability to handle it (send it to voice mail, etc.). IP-based services help to enable efforts to provide consumers anytime-anywhere access to content over their televisions, PCs, and mobile device, and to integrate services and content to tie them together. 
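The video-on-demand paragraph above notes that trick-play functions such as pause and fast-forward are signalled with RTSP. As a rough sketch of what that signalling looks like on the wire (illustrative only: the server name, session identifier and timings are hypothetical, and a real client would first complete a DESCRIBE/SETUP exchange), an RFC 2326-style PLAY request with a Scale header asks the server to stream at twice normal speed, and PAUSE suspends delivery:

# Hypothetical stream URL and session ID, for illustration only.
url     = "rtsp://vod.example.com/movies/title123"
session = "12345678"
cseq    = 4

play_fast_forward = (
    f"PLAY {url} RTSP/1.0\r\n"
    f"CSeq: {cseq}\r\n"
    f"Session: {session}\r\n"
    "Range: npt=300.0-\r\n"   # resume from 5 minutes into the programme
    "Scale: 2.0\r\n"          # 2x fast-forward; 1.0 is normal speed, negative values rewind
    "\r\n"
)

pause = (
    f"PAUSE {url} RTSP/1.0\r\n"
    f"CSeq: {cseq + 1}\r\n"
    f"Session: {session}\r\n"
    "\r\n"
)

print(play_fast_forward)
print(pause)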
Within businesses and institutions, IPTV eliminates the need to run a parallel infrastructure to deliver live and stored video services. Limitations IPTV is sensitive to packet loss and delays if the streamed data is unreliable. IPTV has strict minimum speed requirements in order to facilitate the right number of frames per second to deliver moving pictures. This means that the limited connection speed and bandwidth available for a large IPTV customer base can reduce the service quality delivered. Although a few countries have very high-speed broadband-enabled populations, such as South Korea with 6 million homes benefiting from a minimum connection speed of 100 Mbit/s, in other countries (such as the UK) legacy networks struggle to provide 3–5 Mbit/s and so simultaneous provision to the home of TV channels, VOIP and Internet access may not be viable. The last-mile delivery for IPTV usually has a bandwidth restriction that only allows a small number of simultaneous TV channel streams – typically from one to three – to be delivered. Streaming IPTV across wireless links within the home has proved troublesome; not due to bandwidth limitations as many assume, but due to issues with multipath and reflections of the RF signal carrying the IP data packets. An IPTV stream is sensitive to packets arriving at the right time and in the right order. Improvements in wireless technology are now starting to provide equipment to solve the problem. Due to the limitations of wireless, most IPTV service providers today use wired home networking technologies instead of wireless technologies like IEEE 802.11. Service providers such as AT&T (which makes extensive use of wireline home networking as part of its AT&T U-verse IPTV service) have expressed support for the work done in this direction by ITU-T, which has adopted Recommendation G.hn (also known as G.9960), which is a next-generation home networking standard that specifies a common PHY/MAC that can operate over any home wiring (power lines, phone lines or coaxial cables). Latency The latency inherent in the use of satellite Internet is often held up as reason why satellites cannot be successfully used for IPTV. In practice, however, latency is not an important factor for IPTV, since it is a service that does not require real-time transmission, as is the case with telephony or videoconferencing services. It is the latency of response to requests to change channel, display an EPG, etc. that most affects customers’ perceived quality of service, and these problems affect satellite IPTV no more than terrestrial IPTV. Command latency problems, faced by terrestrial IPTV networks with insufficient bandwidth as their customer base grows, may be solved by the high capacity of satellite distribution. Satellite distribution does suffer from latency – the time for the signal to travel up from the hub to the satellite and back down to the user is around 0.25 seconds, and cannot be reduced. However, the effects of this delay are mitigated in real-life systems using data compression, TCP-acceleration, and HTTP pre-fetching. Satellite latency can be detrimental to especially time-sensitive applications such as on-line gaming (although it only seriously affects the likes of first-person shooters while many MMOGs can operate well over satellite Internet), but IPTV is typically a simplex operation (one-way transmission) and latency is not a critical factor for video transmission. 
Existing video transmission systems of both analogue and digital formats already introduce known quantifiable delays. Existing DVB TV channels that simulcast by both terrestrial and satellite transmissions experience the same 0.25-second delay difference between the two services with no detrimental effect, and it goes unnoticed by viewers. Bandwidth requirements Digital video is a sequence of digital images, which are made up of pixels or picture elements. Each pixel has two values, luminance and chrominance. Luminance represents the intensity of the pixel; chrominance represents its colour. Three bytes would be used to represent the colour of a high-quality image using a true-colour technique. The images in the sequence that makes up a digital video are called frames. Movies use 24 frames per second; however, the frame rate varies according to territories' electrical systems, so there are different frame rates in use: North America uses approximately 30 frames per second, whereas the European television frame rate is 25 frames per second. Each digital video frame has a width and height; for SDTV the dimension is 720×480 pixels, while full HDTV requires 1920×1080 pixels. Moreover, whilst for SDTV two bytes (16 bits) are enough for the colour depth, HDTV requires three bytes (24 bits). Thereby, with a rate of 30 frames/second, the uncompressed data rate for SDTV becomes 30×720×480×16, in other words, 165,888,000 bits per second. For HDTV, at the same frame rate, the uncompressed data rate becomes 30×1920×1080×24, or 1,492,992,000 bits per second. This simple calculation shows that a service provider cannot deliver video to subscribers at such rates unless a lossy compression method is used. There is no absolute answer for the bandwidth requirement of an IPTV service, because the requirement grows with the number of devices inside the household. Currently, compressed HDTV content can be delivered at a data rate between 8 and 10 Mbit/s, but if the home of the consumer is equipped with several HDTV outputs, this rate is multiplied accordingly. High-speed data transfer further increases the bandwidth needed: at least 2 Mbit/s is needed to use web-based applications on a computer, and an additional 64 kbit/s is required to use a landline telephone on the property. Even with minimal usage, receiving an IPTV triple-play service therefore requires around 13 Mbit/s into a household. Privacy implications Due to limitations in bandwidth, an IPTV channel is delivered to the user one at a time, as opposed to the traditional multiplexed delivery. Changing a channel requires requesting the head-end server to provide a different broadcast stream, much like VOD (for VOD the stream is delivered using unicast; for the normal TV signal, multicast is used). This could enable the service provider to accurately track each and every programme watched and the duration of watching for each viewer; broadcasters and advertisers could then understand their audience and programming better with accurate data and targeted advertising. In conjunction with regulatory differences between IPTV and cable TV, this tracking could pose a threat to privacy according to critics. 
For IP multicast scenarios, since a particular multicast group (TV channel) needs to be requested before it can be viewed, the same privacy concerns apply. Vendors Global sales of IPTV systems exceeded US$2 billion in 2007, although only a small number of companies supply most current IPTV system solutions. Some, such as Movistar TV, were formed by telecoms operators themselves to minimize external costs, a tactic also used by PCCW of Hong Kong. Some major telecoms vendors are also active in this space, notably Accenture (Accenture Video Solution), Alcatel-Lucent (sometimes working with Movistar TV), Ericsson (notably since acquiring Tandberg Television), Huawei, NEC, PTCL Smart TV, Sri Lanka Telecom, Thomson, and ZTE, as are some IT houses, led by Microsoft. Miami-based AlphaOTT, Tokyo-based The New Media Group, Malaysia-based Select-TV, Oslo-based SnapTV, and California-based UTStarcom, Inc. also offer end-to-end networking infrastructure for IPTV-based services, and Hong Kong-based BNS Ltd. provides turnkey open platform IPTV technology solutions. Hospitality IPTV Ltd, having established many closed network IPTV systems, expanded in 2013 to OTT delivery platforms for markets in New Zealand, Australia, and the Asia Pacific region. Google Fiber offers an IPTV service in various US cities, which includes up to 1 Gigabit-speed internet and over 290 channels depending on the package, via the fiber-optic network being built out in Kansas City, Kansas and Kansas City, Missouri. Many of these IPTV solution vendors participated in the biennial Multiservice Switching Forum Interoperability 2008 (GMI) event, which was coordinated by the MultiService Forum (MSF) at five sites worldwide from 20 to 31 October 2008. Test equipment vendors including Netrounds, Codenomicon, Empirix, Ixia, Mu Dynamics, and Spirent joined solution vendors such as the companies listed above in one of the largest IPTV proving grounds ever deployed. Service bundling For residential users, IPTV is often provided in conjunction with video on demand and may be bundled with Internet services such as Internet access and Voice over Internet Protocol (VoIP) telecommunications services. Commercial bundling of IPTV, VoIP and Internet access is sometimes referred to in marketing as triple play service. When these three are offered with cellular service, the combined service may be referred to as quadruple play. Regulation Historically, broadcast television has been regulated differently from telecommunications. As IPTV allows TV and VoD to be transmitted over IP networks, new regulatory issues arise. Professor Eli M. Noam highlights in his report "TV or Not TV: Three Screens, One Regulation?" some of the key challenges with sector-specific regulation that is becoming obsolete due to convergence in this field. See also Comparison between OTT and IPTV Comparison of streaming media systems Comparison of video services Content delivery network Internet television List of music streaming services List of streaming media systems P2PTV Protection of Broadcasts and Broadcasting Organizations Treaty SAT>IP Software as a service Streaming media TV gateway Web television Webcast References Further reading Digital television Film and video technology Internet broadcasting Internet radio Streaming television Video on demand Television technology Television terminology Computer-related introductions in 1995 Telecommunications-related introductions in 1995 1990s neologisms
3913459
https://en.wikipedia.org/wiki/Blackworm
Blackworm
Blackworm is an Internet worm discovered on January 20, 2006 that infects several versions of Microsoft Windows. It is also known as Grew.a, Grew.b, Blackmal.e, Nyxem.e, Nyxem.d, Mywife.d, Tearec.a, CME-24, and Kama Sutra. Blackworm spreads mainly by sending infected email attachments, but also infects other computers by copying itself over network shares. The virus removes antivirus programs from remote computers before attempting to infect them. When first installed, it copies itself to the Windows and system directories. It uses filenames that resemble those of legitimate Windows system files in an attempt to remain hidden. It activates on the third day of each month; the first known activation happened on February 3, 2006. On activation, the virus overwrites data files of many common types, including Word, Excel, and PowerPoint documents; ZIP and RAR archives; and PDFs. It can destroy files on fixed and removable drives and tries, but fails, to affect data on network drives. It also attempts to disable antivirus programs by removing the registry entries that automatically run them and deleting the antivirus programs directly. The virus visits a tracking Web page each time it infects a computer. Over 300,000 unique IPs visited that site, suggesting that at least that many computers suffered infection. It is not known how many of them remained infected long enough to trigger the virus’s payload. References External links CME-24 (BlackWorm) Users’ FAQ Nyxem.E at Symantec - Detailed description of the Nyxem.E virus Nyxem.E at Microsoft - Microsoft description and detailed information on the Nyxem.E virus Nyxem.E at Kaspersky Labs - Nyxem.E detailed description and manual removal instructions How to remove Blackworm tutorial Computer worms Hacking in the 2000s
474372
https://en.wikipedia.org/wiki/Two-dimensional%20gel%20electrophoresis
Two-dimensional gel electrophoresis
Two-dimensional gel electrophoresis, abbreviated as 2-DE or 2-D electrophoresis, is a form of gel electrophoresis commonly used to analyze proteins. Mixtures of proteins are separated by two properties in two dimensions on 2D gels. 2-DE was first independently introduced by O'Farrell and Klose in 1975. Basis for separation 2-D electrophoresis begins with electrophoresis in the first dimension and then separates the molecules perpendicularly from the first to create an electropherogram in the second dimension. In electrophoresis in the first dimension, molecules are separated linearly according to their isoelectric point. In the second dimension, the molecules are then separated at 90 degrees from the first electropherogram according to molecular mass. Since it is unlikely that two molecules will be similar in two distinct properties, molecules are more effectively separated in 2-D electrophoresis than in 1-D electrophoresis. The two dimensions that proteins are separated into using this technique can be isoelectric point, protein complex mass in the native state, or protein mass. Separation of the proteins by isoelectric point is called isoelectric focusing (IEF). Thereby, a pH gradient is applied to a gel and an electric potential is applied across the gel, making one end more positive than the other. At all pH values other than their isoelectric point, proteins will be charged. If they are positively charged, they will be pulled towards the more negative end of the gel and if they are negatively charged they will be pulled to the more positive end of the gel. The proteins applied in the first dimension will move along the gel and will accumulate at their isoelectric point; that is, the point at which the overall charge on the protein is 0 (a neutral charge). For the analysis of the functioning of proteins in a cell, the knowledge of their cooperation is essential. Most often proteins act together in complexes to be fully functional. The analysis of this sub organelle organisation of the cell requires techniques conserving the native state of the protein complexes. In native polyacrylamide gel electrophoresis (native PAGE), proteins remain in their native state and are separated in the electric field following their mass and the mass of their complexes respectively. To obtain a separation by size and not by net charge, as in IEF, an additional charge is transferred to the proteins by the use of Coomassie brilliant blue or lithium dodecyl sulfate. After completion of the first dimension the complexes are destroyed by applying the denaturing SDS-PAGE in the second dimension, where the proteins of which the complexes are composed of are separated by their mass. Before separating the proteins by mass, they are treated with sodium dodecyl sulfate (SDS) along with other reagents (SDS-PAGE in 1-D). This denatures the proteins (that is, it unfolds them into long, straight molecules) and binds a number of SDS molecules roughly proportional to the protein's length. Because a protein's length (when unfolded) is roughly proportional to its mass, this is equivalent to saying that it attaches a number of SDS molecules roughly proportional to the protein's mass. Since the SDS molecules are negatively charged, the result of this is that all of the proteins will have approximately the same mass-to-charge ratio as each other. 
In addition, proteins will not migrate when they have no charge (a result of the isoelectric focusing step) therefore the coating of the protein in SDS (negatively charged) allows migration of the proteins in the second dimension (SDS-PAGE, it is not compatible for use in the first dimension as it is charged and a nonionic or zwitterionic detergent needs to be used). In the second dimension, an electric potential is again applied, but at a 90 degree angle from the first field. The proteins will be attracted to the more positive side of the gel (because SDS is negatively charged) proportionally to their mass-to-charge ratio. As previously explained, this ratio will be nearly the same for all proteins. The proteins' progress will be slowed by frictional forces. The gel therefore acts like a molecular sieve when the current is applied, separating the proteins on the basis of their molecular weight with larger proteins being retained higher in the gel and smaller proteins being able to pass through the sieve and reach lower regions of the gel. Detecting proteins The result of this is a gel with proteins spread out on its surface. These proteins can then be detected by a variety of means, but the most commonly used stains are silver and Coomassie brilliant blue staining. In the former case, a silver colloid is applied to the gel. The silver binds to cysteine groups within the protein. The silver is darkened by exposure to ultra-violet light. The amount of silver can be related to the darkness, and therefore the amount of protein at a given location on the gel. This measurement can only give approximate amounts, but is adequate for most purposes. Silver staining is 100x more sensitive than Coomassie brilliant blue with a 40-fold range of linearity. Molecules other than proteins can be separated by 2D electrophoresis. In supercoiling assays, coiled DNA is separated in the first dimension and denatured by a DNA intercalator (such as ethidium bromide or the less carcinogenic chloroquine) in the second. This is comparable to the combination of native PAGE /SDS-PAGE in protein separation. Common techniques IPG-DALT A common technique is to use an Immobilized pH gradient (IPG) in the first dimension. This technique is referred to as IPG-DALT. The sample is first separated onto IPG gel (which is commercially available) then the gel is cut into slices for each sample which is then equilibrated in SDS-mercaptoethanol and applied to an SDS-PAGE gel for resolution in the second dimension. Typically IPG-DALT is not used for quantification of proteins due to the loss of low molecular weight components during the transfer to the SDS-PAGE gel. IEF SDS-PAGE See Isoelectric focusing 2D gel analysis software In quantitative proteomics, these tools primarily analyze bio-markers by quantifying individual proteins, and showing the separation between one or more protein "spots" on a scanned image of a 2-DE gel. Additionally, these tools match spots between gels of similar samples to show, for example, proteomic differences between early and advanced stages of an illness. Software packages include Delta2D, ImageMaster, Melanie, PDQuest, Progenesis and REDFIN – among others. While this technology is widely utilized, the intelligence has not been perfected. For example, while PDQuest and Progenesis tend to agree on the quantification and analysis of well-defined well-separated protein spots, they deliver different results and analysis tendencies with less-defined less-separated spots. 
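As a rough illustration of the spot detection and quantification that these packages automate, the following Python sketch thresholds a scanned gel image and labels connected spots. The file name is a made-up placeholder, and real analysis software adds background correction, spot modelling, and gel-to-gel matching on top of this minimal approach.

import numpy as np
from scipy import ndimage
from imageio.v3 import imread

gel = imread("gel_scan.png").astype(float)   # hypothetical scan of a stained 2-D gel
if gel.ndim == 3:                            # collapse RGB scans to a single channel
    gel = gel.mean(axis=2)
signal = gel.max() - gel                     # invert: stained (dark) spots become bright peaks
mask = signal > signal.mean() + 2 * signal.std()   # crude global threshold
labels, n_spots = ndimage.label(mask)        # connected components = candidate spots
idx = range(1, n_spots + 1)
volumes = ndimage.sum(signal, labels, idx)   # integrated intensity ("spot volume") per spot
centroids = ndimage.center_of_mass(signal, labels, idx)
for i, (vol, (y, x)) in enumerate(zip(volumes, centroids), start=1):
    print(f"spot {i}: position=({x:.0f}, {y:.0f}), volume={vol:.0f}")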
Challenges for automatic software-based analysis include incompletely separated (overlapping) spots, weak spots and noise (e.g., "ghost spots"), running differences between gels (e.g., a protein migrating to different positions on different gels), unmatched or undetected spots leading to missing values, mismatched spots, errors in quantification (several distinct spots may be erroneously detected as a single spot by the software and/or parts of a spot may be excluded from quantification), and differences in software algorithms and therefore in analysis tendencies. Generated picking lists can be used for the automated in-gel digestion of protein spots, and subsequent identification of the proteins by mass spectrometry. Mass spectrometry analysis can provide precise mass measurements along with the sequencing of peptides ranging from 1,000 to 4,000 atomic mass units. For an overview of current approaches to the software analysis of 2DE gel images, see the references. See also Difference gel electrophoresis QPNC-PAGE PROTOMAP References External links JVirGel Create virtual 2-D Gels from sequence data. Gel IQ A freely downloadable software tool for assessing the quality of 2D gel image analysis data. 2-D Electrophoresis Principles & Methods Handbook Molecular biology Laboratory techniques Electrophoresis es:Electroforesis en gel he:אלקטרופורזה דו-ממדית בג'ל zh:双向电泳
15191106
https://en.wikipedia.org/wiki/3D%20World%20Atlas
3D World Atlas
3D World Atlas is a virtual globe program developed by the Cosmi Corporation. At Version 2.1, it is one of the leading atlas programs, along with other 3D atlas exploring programs such as Google Earth. History 3D World Atlas was created by the Cosmi Corporation in 1999. It was programmed by Ron Paludan and the research was done by Eve Paludan. Information came from the 1999 World Factbook by the Central Intelligence Agency and the US Arms Control and Disarmament Agency. Features As atlas software, 3D World Atlas has many features. These include, but are not limited to, world maps on a 3D globe, thousands of tables and charts, national flags, and a world clock. The software also includes distance measuring and in-depth information on every country, such as independence days and government types. There are many tools available in the program. These all perform basic tasks that help the viewer understand the world. A zoom feature allows you to zoom into countries and lists major cities, a print option allows you to print the current display of a page, and a find button allows you to find any city, country, river, lake, or continent within seconds. Other features include the ability to copy the current view onto the clipboard, where it can be displayed by pasting it into an image program such as Microsoft Paint; thematic maps, charts, and clocks that all allow you to customize the area; an "Earth shadow" feature which creates a shadow on the Earth when in 3D view; and a 2D feature that allows you to see the Earth from a map perspective. The program also allows you to retrieve the coordinates of any point on Earth by hovering your mouse over the location of interest. Awards 3D World Atlas won the "editor's pick" from online software reviewer Software Informer. References Windows-only software Virtual globes 1999 software
16423665
https://en.wikipedia.org/wiki/4722%20Agelaos
4722 Agelaos
4722 Agelaos is a Jupiter trojan from the Trojan camp, approximately in diameter. It was discovered during the third Palomar–Leiden Trojan survey at the Palomar Observatory in California in 1977. The Jovian asteroid has a rotation period of 18.4 hours and is among the 90 largest Jupiter trojans. It was named after Agelaus from Greek mythology. Discovery Agelaos was discovered on 16 October 1977, by the Dutch astronomer couple Ingrid and Cornelis van Houten at Leiden, on photographic plates taken by Dutch–American astronomer Tom Gehrels at the Palomar Observatory in California. The body's observation arc begins with its first observations at Palomar on 7 October 1977, just nine days prior to its official discovery observation. Palomar–Leiden survey The survey designation "T-3" stands for the third Palomar–Leiden Trojan survey, named after the fruitful collaboration of the Palomar and Leiden Observatory in the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope), and shipped the photographic plates to Ingrid and Cornelis van Houten at Leiden Observatory where astrometry was carried out. The trio are credited with several thousand asteroid discoveries. Orbit and classification Agelaos is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.6–5.8 AU once every 11 years and 11 months (4,341 days; semi-major axis of 5.21 AU). Its orbit has an eccentricity of 0.11 and an inclination of 9° with respect to the ecliptic. Physical characteristics Agelaos is an assumed carbonaceous C-type asteroid. It has a V–I color index of 0.91, typical for most Jovian D-type asteroids, the dominant spectral type among the larger Jupiter trojans. Rotation period In December 2002, a first rotational lightcurve of Agelaos was obtained from photometric observations over two consecutive nights by Italian astronomer Stefano Mottola with the 1.2-meter telescope at Calar Alto Observatory in Spain. Lightcurve analysis gave a rotation period of 18.61 hours with a brightness amplitude of 0.23 magnitude (). Observations in the R-band by astronomers at the Palomar Transient Factory in October 2012 gave a period of 18.456 hours with an amplitude of 0.15 magnitude (). The so-far best-rated lightcurve, by Robert Stephens at the Center for Solar System Studies in Landers, California, gave a concurring period of and a brightness variation of 0.19 (). Diameter and albedo According to the surveys carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer and the Japanese Akari satellite, Agelaos measures 50.38 and 59.47 kilometers in diameter and its surface has an albedo of 0.076 and 0.067, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 53.16 kilometers based on an absolute magnitude of 10.1. Naming This minor planet was named from Greek mythology after the shepherd Agelaus, who was ordered by King Priam to expose the Trojan prince Paris as an infant – because the prophecy predicted that he would cause the destruction of Troy – but brought him up as his own son instead. The official naming citation was published by the Minor Planet Center on 28 May 1991 (). 
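The orbital period and the Collaborative Asteroid Lightcurve Link diameter quoted above can be checked with two textbook relations, Kepler's third law and the standard diameter–albedo–magnitude formula. The short Python sketch below is only a back-of-the-envelope verification, not part of any cited survey.

import math

a = 5.21        # semi-major axis in astronomical units, as quoted above
H = 10.1        # absolute magnitude
p_v = 0.057     # assumed geometric albedo for a carbonaceous asteroid

# Kepler's third law with P in years and a in AU: P^2 = a^3
period_years = a ** 1.5
print(f"orbital period ~ {period_years:.2f} yr ~ {period_years * 365.25:.0f} days")   # ~11.9 yr, ~4,340 days

# Standard relation between diameter (km), absolute magnitude and albedo:
# D = 1329 / sqrt(p_v) * 10^(-H / 5)
diameter_km = 1329 / math.sqrt(p_v) * 10 ** (-H / 5)
print(f"diameter ~ {diameter_km:.1f} km")   # ~53 km, matching the CALL estimate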
Notes References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Asteroid 4722 Agelaos at the Small Bodies Data Ferret 004722 Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels 4271 Minor planets named from Greek mythology Named minor planets 19771016
4080311
https://en.wikipedia.org/wiki/List%20of%20PlayStation%20Portable%20system%20software%20compatibilities
List of PlayStation Portable system software compatibilities
Sony regularly released firmware updates for its PlayStation Portable system, and encouraged PSP owners to upgrade the PSP system software. To increase system software upgrades, Sony encoded their games so that some of them require newer versions of the system software. This is a list of PSP games having such requirements. All system software updates are backwards-compatible; that is, all games that work on system software version 1.5 will work on version 2.0, and so on. PSP games system software compatibility listing Unless otherwise noted, system software requirements for multi-region games are referring to the North American release. Version 1.5x Ape Escape: On the Loose Ape Escape Academy Archer Maclean's Mercury Armored Core: Formula Front ATV Offroad Fury: Blazin' Trails Bleach: Heat the Soul Bleach: Heat the Soul 2 Bomberman: Panic Bomber Burnout Legends Bust-A-Move Pocket Championship Manager Coded Arms Colin Mcrae Rally 2005 Con, The Darkstalkers Chronicle: The Chaos Tower Dead To Rights: Reckoning Death, Jr. Dynasty Warriors Everybody's Golf F1 Grand Prix FIFA 06 Fired Up Frantix Frogger Helmet Chaos Ghost in the Shell: Stand Alone Complex Go! Sudoku Gretzky NHL GripShift Hot Shots Golf: Open Tee Kao Challengers Lemmings Lumines Madden NFL 2006 Marvel Nemesis: Rise of the Imperfects MediEvil Resurrection Metal Gear Acid Midway Arcade Treasures Extended Play Namco Museum Battle Collection NBA Street Showdown Need for Speed: Most Wanted Need for Speed: Underground Rivals Pac-Man World 3 Prince of Persia: Revelations Pursuit Force Puzzle Bobble Ridge Racer Smart Bomb Spider-Man 2 SSX On Tour Star Soldier: Vanishing Earth Star Wars: Battlefront II Tiger Woods PGA Tour TOCA Race Driver 2 Tony Hawk's Underground 2 Remix Twisted Metal: Head-On Virtua Tennis: World Tour Wipeout Pure World Series of Poker World Tour Soccer Version 2.0 Crash Tag Team Racing Grand Theft Auto: Liberty City Stories (older, unpatched version) Infected Kingdom of Paradise Need for Speed Most Wanted: 5-1-0 Pinball Hall of Fame: The Gottlieb Collection SOCOM: U.S. 
Navy SEALs Fireteam Bravo Star Wars: Battlefront 2 Tokobot Ultimate Block Party X-Men Legends II: Rise of Apocalypse Version 2.5 EXIT Peter Jackson's King Kong: The Official Game of the Movie The Sims 2 Version 2.6 Astonishia Story Boku no Natsuyasumi Portable Bust-A-Move Deluxe Capcom Classics Collection: Remixed Daxter Every Extend Extra (Japanese version) Field Commander Gradius Collection Grand Theft Auto: Liberty City Stories (newer, patched version) James Bond 007: From Russia with Love Key of Heaven Me & My Katamari Mega Man Powered Up Metal Gear Ac!d 2 Metal Gear Solid: Digital Graphic Novel Midnight Club 3: Dub Edition Monster Hunter Freedom MTX Mototrax Portable Island: Te no Hira no Resort Puzzle Challenge: Crosswords and More Street Fighter Alpha 3 Super Monkey Ball Adventure Syphon Filter: Dark Mirror Tom Clancy's Splinter Cell: Essentials Untold Legends: The Warrior's Code Valkyrie Profile: Lenneth Tekken 5: Dark Resurrection (Japanese And American Version) Version 2.71 50 Cent: Bulletproof - G Unit Edition Activision Hits Remixed (EU 2.81) Avatar: The Last Airbender Every Extend Extra FIFA 07 Gangs of London Gunpey LocoRoco Lumines II Medal of Honor: Heroes Mercury Meltdown NASCAR Tama-Run Version 2.80 Killzone: Liberation Marvel Ultimate Alliance Mind Quiz Version 2.81 ATV Offroad Fury Pro Ace Combat X: Skies of Deception Brothers in Arms: D-Day Grand Theft Auto: Vice City Stories Metal Gear Solid: Portable Ops Mortal Kombat: Unchained MotoGP Power Stone Collection Ridge Racer 2 Star Trek: Tactical Assault SOCOM: U.S. Navy SEALs Fireteam Bravo 2 Sonic Rivals Thrillville Tony Hawk's Project 8 Yu-Gi-Oh! Duel Monsters GX: Tag Force Version 2.82 300: March to Glory After Burner: Black Falcon Chotto Shot Edit (Japanese PSP Camera software) Sid Meier's Pirates! Puzzle Quest: Challenge of the Warlords Test Drive: Unlimited Tom Clancy's Rainbow Six: Vegas Version 3.03 7 Wonders of the Ancient World Version 3.11 Crush Cube PQ2 Transformers - The Game Ultimate Board Game Collection Version 3.40 Go!Edit Final Fantasy Tactics: The War of the Lions Ratchet & Clank: Size Matters Hot Brain Version 3.50 Alien Syndrome Tomb Raider: Anniversary Xyanide Resurrection Version 3.51 Anata wo Yurusanai Kaitou Apricot Portable Medal of Honor Heroes 2 Metal Gear Solid Portable Ops Plus Star Wars Battlefront: Renegade Squadron The Simpsons Game Version 3.52 Castlevania: The Dracula X Chronicles Smackdown Vs Raw 2008 Warhammer 40,000: Squad Command Syphon Filter: Logan's Shadow Silverfall Version 3.71 Patapon Wipeout Pulse Need for Speed: ProStreet God of War: Chains of Olympus Crisis Core: Final Fantasy VII Assassin's Creed: Bloodlines FIFA 09 Need for Speed: Carbon Hot Wheels Ultimate Racing Version 3.72 Luxor: Pharaoh's Challenge Minna no Golf Portable 2 Version 4.05 PlayStation Network Collection - The Power Pack WWE SmackDown vs. Raw 2009 Version 5.02 Gripshift Final Fantasy VII International (PS1 version via PSN Japan) Final Fantasy Dissidia – needs patch Buzz!:Master Quiz Version 5.03 Disney Up Petz My Puppy Family Dynasty Warriors: Strikeforce; originally released in Japan as Shin Sangokumusou Multi Raid (真・三國無双 MULTI RAID, Shin Sangokumusō Maruchi Reido) Version 5.50 IL-2 Sturmovik: Birds of Prey - (EUR) needs patch Madden NFL 10 Soul Calibur: Broken Destiny - (EUR) needs patch fifa 10 Version 5.55 Tales Of Vs. G.I. 
Joe: The Rise of Cobra Armored Core 3 Portable Soul Calibur: Broken Destiny Final Fantasy: Dissidia Disgaea 2: Dark Hero Days Cloudy with a Chance of Meatballs Colin McRae: Dirt 2 MotorStorm: Arctic Edge Marvel Ultimate Alliance 2 IL-2 Sturmovik: Birds of Prey Metal Gear Solid: Peace Walker (5.51) Beaterator Naruto Shippuden: Legends: Akatsuki Rising - (EUR) Shin Megami Tensei Persona Gran Turismo FIFA 10 James Cameron's Avatar: The Game WWE Smackdown vs Raw 2010 Version 6.00 (10 Sept 2009) Jak and Daxter: The Lost Frontier LittleBigPlanet Pro Evolution Soccer 2010 Gran Turismo Assassin's Creed Bloodlines Version 6.10 (1 Oct 2009) Tekken 6 NBA LIVE 10 Manhunt 2 Street Fighter Alpha 3 Max Harvest Moon: Boy & Girl Kenka Bancho: Badass Rumble LocoRoco Midnight Carnival NBA 2K10 Mega Man Maverick Hunter X Version 6.20 (18 Nov 2009) Silent Hill: Origins God Eater Naruto Ultimate Ninja Heroes 3 Harvest Moon: Hero of Leaf Valley Prince of Persia: The Forgotten Sands Hexyz Force Dante's Inferno ModNation Racers Pinball Heroes Bundle 2 Metal Gear Solid: Peace Walker Metal Slug XX Disgaea Infinite TNA Impact!: Cross the Line Midway Arcade Treasures Extended Play Hatsune Miku: Project DIVA 2nd Hot Shots Tennis: Get a Grip Bejeweled 2 PSP Shin Megami Tensei: Persona 3 Portable Despicable Me Groovin’ Blocks PSP Lego Harry Potter: Years 1-4 Gravity Crash Portable Piyotama PSP Kingdom Hearts: Birth By Sleep Version 6.31 (29 July 2010) Warriors Of The Lost Empire Madden NFL 11 YS Seven Zuma PSP Valkyria Chronicles 2 Ace Combat Joint Assault Hannspree Ten Kate Honda SBK Superbike World Championship Phantasy Star Portable 2 UFC Undisputed 2010 Cabela’s North American Adventures Gladiator Begins CLADUN: This Is An RPG! 101-in-1 Megamix Rapala Pro Bass Fishing Ben 10 Ultimate Alien: Cosmic Destruction NBA 2K11 FIFA 11 DJ Max Portable 3 Blazing Souls Accelate Bakugan: Defenders of the Core Z.H.P. Unlosing Ranger VS Darkdeath Evilman WWE SmackDown vs. Raw 2011 God of War: Ghost of Sparta Ys: The Oath in Felghana No Heroes Allowed! Knights in the Nightmare Split Second Pro Evolution Soccer 2011 (Winning Eleven 2011) Tom Clancy's Ghost Recon Predator Peggle PSP Worms: Battle Islands Bomberman Version 6.35 (24 Nov 2010) Tron: Evolution PSP Football Manager Handheld 2011 Monster Jam: Path of Destruction Military History Commander: Europe at War Michael Jackson The Experience The Lord of the Rings: Aragorn's Quest Legends of War: Patton’s Campaign Auditorium Hot Shots Shorties (Blue, Red, Green, Yellow) Version 6.37 (20 Jan 2011) Lord of Arcana YS: I & II Chronicles Patapon 3 The 3rd Birthday Version 6.39 (24 May 2011) Hatsune Miku: Project DIVA Extend See also PlayStation Portable system software List of PlayStation Portable games List of PlayStation Portable Gamesharing games Portable Technology-related lists
19850468
https://en.wikipedia.org/wiki/Computational%20thinking
Computational thinking
In education, computational thinking (CT) is a set of problem-solving methods that involve expressing problems and their solutions in ways that a computer could also execute. It involves automation of processes, but also using computing to explore, analyze, and understand processes (natural and artificial). History The history of computational thinking as a concept dates back at least to the 1950s but most ideas are much older. Computational thinking involves ideas like abstraction, data representation, and logically organizing data, which are also prevalent in other kinds of thinking, such as scientific thinking, engineering thinking, systems thinking, design thinking, model-based thinking, and the like. Neither the idea nor the term are recent: Preceded by terms like algorithmizing, procedural thinking, algorithmic thinking, and computational literacy by computing pioneers like Alan Perlis and Donald Knuth, the term computational thinking was first used by Seymour Papert in 1980 and again in 1996. Computational thinking can be used to algorithmically solve complicated problems of scale, and is often used to realize large improvements in efficiency. The phrase computational thinking was brought to the forefront of the computer science education community in 2006 as a result of a Communications of the ACM essay on the subject by Jeannette Wing. The essay suggested that thinking computationally was a fundamental skill for everyone, not just computer scientists, and argued for the importance of integrating computational ideas into other subjects in school. The essay also said that by learning computational thinking, children will be better in many everyday tasks—as examples, the essay gave packing one's backpack, finding one's lost mittens, and knowing when to stop renting and buying instead. The continuum of computational thinking questions in education ranges from K–9 computing for children to professional and continuing education, where the challenge is how to communicate deep principles, maxims, and ways of thinking between experts. For the first ten years computational thinking was a US-centered movement, and still today that early focus is seen in the field's research. The field's most cited articles and most cited people were active in the early US CT wave, and the field's most active researcher networks are US-based. Dominated by US and European researchers, it is unclear to what extent can the field's predominantly Western body of research literature cater to the needs of students in other cultural groups. Characteristics The characteristics that define computational thinking are decomposition, pattern recognition / data representation, generalization/abstraction, and algorithms. By decomposing a problem, identifying the variables involved using data representation, and creating algorithms, a generic solution results. The generic solution is a generalization or abstraction that can be used to solve a multitude of variations of the initial problem. Another characterization of computational thinking is the "three As" iterative process based on three stages: Abstraction: Problem formulation; Automation: Solution expression; Analysis: Solution execution and evaluation. Connection to the "four Cs" The four Cs of 21st century learning are communication, critical thinking, collaboration, and creativity. The fifth C could be computational thinking which entails the capability to resolve problems algorithmically and logically. It includes tools that produce models and visualize data. 
Grover describes how computational thinking is applicable across subjects beyond science, technology, engineering, and mathematics (STEM), including the social sciences and language arts. Since its inception, the 4 Cs have gradually gained acceptance as vital elements of many school syllabi. This development has triggered a shift in platforms and directions such as inquiry-based, project-based, and deeper learning across all K–12 levels. Many countries have introduced computational thinking to all students. The United Kingdom has had CT in its national curriculum since 2012. Singapore calls CT a "national capability". Other nations like Australia, China, Korea, and New Zealand have embarked on massive efforts to introduce computational thinking in schools. In the United States, President Barack Obama created the Computer Science for All program to empower this generation of students in America with the computer science proficiency required to flourish in a digital economy. Computational thinking means thinking or solving problems like computer scientists. CT refers to the thought processes required in understanding problems and formulating solutions. CT involves logic, assessment, patterns, automation, and generalization. Career readiness can be integrated into learning and teaching environments in multiple ways. In K–12 education Like Seymour Papert, Alan Perlis, and Marvin Minsky before her, Jeannette Wing envisioned computational thinking becoming an essential part of every child's education. However, integrating computational thinking into the K–12 curriculum and computer science education has faced several challenges, including agreement on the definition of computational thinking, how to assess children's development in it, and how to distinguish it from other similar kinds of "thinking", such as systems thinking, design thinking, and engineering thinking. Currently, computational thinking is broadly defined as a set of cognitive skills and problem-solving processes that include (but are not limited to) the following characteristics (though there are arguments that few, if any, of them belong to computing specifically, rather than being principles in many fields of science and engineering): Using abstractions and pattern recognition to represent the problem in new and different ways Logically organizing and analyzing data Breaking the problem down into smaller parts Approaching the problem using programmatic thinking techniques such as iteration, symbolic representation, and logical operations Reformulating the problem into a series of ordered steps (algorithmic thinking) Identifying, analyzing, and implementing possible solutions with the goal of achieving the most efficient and effective combination of steps and resources Generalizing this problem-solving process to a wide variety of problems Current integration of computational thinking into the K–12 curriculum comes in two forms: directly in computer science classes, or through the use and measurement of computational thinking techniques in other subjects. Teachers in Science, Technology, Engineering, and Mathematics (STEM) focused classrooms that include computational thinking allow students to practice problem-solving skills such as trial and error. Valerie Barr and Chris Stephenson describe computational thinking patterns across disciplines in a 2011 ACM Inroads article. However, Conrad Wolfram has argued that computational thinking should be taught as a distinct subject. 
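As a small illustration of the decomposition and algorithmic-thinking characteristics listed above, the Python sketch below turns the everyday backpack-packing example mentioned earlier into ordered, testable steps. The item weights and the capacity are invented values, and the greedy rule is only one of many possible strategies.

# Decomposition: split "pack the bag" into weighing items, ordering them, and adding while space remains.
items = {"laptop": 1.5, "book": 0.8, "lunch": 0.6, "camera": 0.4}   # hypothetical weights in kg
capacity_kg = 2.5

def pack(items, capacity):
    packed, total = [], 0.0
    # Generalization: the same loop works for any item list and any capacity.
    for name, weight in sorted(items.items(), key=lambda kv: kv[1]):   # lightest first
        if total + weight <= capacity:      # the algorithmic step written as an explicit rule
            packed.append(name)
            total += weight
    return packed, round(total, 2)

print(pack(items, capacity_kg))   # e.g. (['camera', 'lunch', 'book'], 1.8)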
There are online institutions that provide a curriculum, and other related resources, to build and strengthen pre-college students with computational thinking, analysis and problem-solving. Center for Computational Thinking Carnegie Mellon University in Pittsburgh has a Center for Computational Thinking. The Center's major activity is conducting PROBEs or PROBlem-oriented Explorations. These PROBEs are experiments that apply novel computing concepts to problems to show the value of computational thinking. A PROBE experiment is generally a collaboration between a computer scientist and an expert in the field to be studied. The experiment typically runs for a year. In general, a PROBE will seek to find a solution for a broadly applicable problem and avoid narrowly focused issues. Some examples of PROBE experiments are optimal kidney transplant logistics and how to create drugs that do not breed drug-resistant viruses. Criticism The concept of computational thinking has been criticized as too vague, as it's rarely made clear how it is different from other forms of thought. The inclination among computer scientist to force computational solutions upon other fields has been called "computational chauvinism". Some computer scientists worry about the promotion of computational thinking as a substitute for a broader computer science education, as computational thinking represents just one small part of the field. Others worry that the emphasis on computational thinking encourages computer scientists to think too narrowly about the problems they can solve, thus avoiding the social, ethical and environmental implications of the technology they create. In addition, as nearly all CT research is done in the US and Europe, it is not certain how well those educational ideas work in other cultural contexts. A 2019 paper argues that the term "computational thinking" (CT) should be used mainly as a shorthand to convey the educational value of computer science, hence the need of teaching it in school. The strategic goal is to have computer science recognized in school as an autonomous scientific subject more than trying to identify "body of knowledge" or "assessment methods" for CT. Particularly important is to stress the fact that the scientific novelty associated with CT is the shift from the "problem solving" of mathematics to the "having problem solved" of computer science. Without the "effective agent", who automatically executes the instructions received to solve the problem, there would be no computer science, but just mathematics. Another criticism in the same paper is that focusing on "problem solving" is too narrow, since "solving a problem is just an instance of a situation where one wants to reach a specified goal". The paper therefore generalizes the original definitions by Cuny, Snyder, and Wing and Aho as follows: "Computational thinking is the thought processes involved in modeling a situation and specifying the ways an information-processing agent can effectively operate within it to reach an externally specified (set of) goal(s)." Many definitions of CT describe it only at skill level because the momentum behind its growth comes from its promise to boost STEM education. And, the latest movement in STEM education is based on suggestions (by learning theories) that we teach students experts' habits of mind. 
So, whether it is computational thinking, scientific thinking, or engineering thinking, the motivation is the same and the challenge is also the same: teaching experts' habits of mind to novices is inherently problematic because of the prerequisite content knowledge and practice skills needed to engage them in the same thinking processes as the experts. Only when we link the experts' habits of mind to fundamental cognitive processes can we then narrow their skill-sets down to more basic competencies that can be taught to novices. There have been only a few studies that actually address the cognitive essence of CT. Among those, Yasar (Communications of ACM, Vol. 61, No. 7, July 2018) describes CT as thinking that is generated/facilitated by a computational device, be it biological or electronic. Accordingly, everyone employs CT, not just computer scientists, and it can be improved via education and experience. Yasar founded the first undergraduate degree program in computational science in 1998; an NSF-supported program that fueled the advancement in computational thinking education long before the seminal paper by Wing in 2006. In 2003, he testified before the US Congress about the virtue of a computational approach to STEM education. In his work, he describes not only the cognitive essence of CT, but he also links it to both scientific thinking and engineering thinking. See also Computer-based math References Further reading Problem solving skills Computational fields of study Theories of deduction Cognition Computational science
1810666
https://en.wikipedia.org/wiki/Eric%20%28software%29
Eric (software)
eric is a free integrated development environment (IDE) used for computer programming. Since it is a full featured IDE, it provides by default all necessary tools needed for the writing of code and for the professional management of a software project. eric is written in the programming language Python and its primary use is for developing software written in Python. It is usable for development of any combination of Python 3 or Python 2, Qt 5 or Qt 4 and PyQt 5 or PyQt 4 projects, on Linux, macOS and Microsoft Windows platforms. License, price and distribution eric is licensed under the GNU General Public License version 3 or later and is thereby Free Software. This means in general terms that the source code of eric can be studied, changed and improved by anyone, that eric can be run for any purpose by anyone and that eric - and any changes or improvements that may have been made to it - can be redistributed by anyone to anyone as long as the license is not changed (copyleft). eric can be downloaded at SourceForge and installed manually with a python installer script. Most major Linux distributions include eric in their software repositories, so when using such Linux distributions eric can be obtained and installed automatically by using the package manager of the particular distribution. Additionally, the author offers access to the source code via a public Mercurial repository. Characteristics eric is written in Python and uses the PyQt Python bindings for the Qt GUI toolkit. By design, eric acts as a front end for several programs, for example the QScintilla editor widget. Features The key features of eric 6 are: Source code editing: Unlimited number of editors Configurable window layout Configurable syntax highlighting Sourcecode autocompletion Sourcecode calltips Sourcecode folding Brace matching Error highlighting Advanced search functionality including project wide search and replace Integrated class browser Integrated profiling and code coverage support GUI designing: Integration of Qt Designer, a Graphical user interface builder for the creation of Qt-based Graphical user interfaces Debugging, checking, testing and documenting: Integrated graphical python debugger which supports both interactive probing while suspended and auto breaking on exceptions as well as debugging multi-threaded and multiprocessing applications Integrated automatic code checkers (syntax, errors and style, PEP-8) for static program analysis as well as support of Pylint via plug-in Integrated source code documentation system Integrated unit testing support by having the option to run python code with command-line parameters Integrated interface to the enchant spell checking library Application diagrams Version control: Integrated version control support for Mercurial and Subversion repositories (as core plug-ins) and git (as optional plug-in) Project management and collaboration: Advanced project management facilities Integrated task management with a self-updating To-do list Integrated cooperation functions (chat, shared editor) Other: Integrated web browser Integrated support for Django (as optional plug-in) Running external applications from within the IDE Interactive Python shell including syntax hilighting and autocompletion Integrated CORBA support based on omniORB Integrated rope refactoring tool (as optional plug-in) Integrated interface to cx_freeze (as optional plug-in) Many integrated wizards for regex and Qt dialogs (as core plug-ins) Tools for previewing Qt forms and translations Support for 
Python 2 and 3 Prior to the release of eric version 5.5.0, eric version 4 and eric version 5 coexisted and were maintained simultaneously, while eric 4 was the variant for writing software in Python version 2 and eric version 5 was the variant for writing software in Python version 3. With the release of eric version 5.5.0 both variants had been merged into one, so that all versions as of eric version 5.5.0 support writing software in Python 2 as well as in Python 3, making the separate development lanes of eric version 4 and 5 obsolete. Those two separate development lanes are no longer maintained, and the last versions prior to merging them both to 5.5.0 were versions 4.5.25 and 5.4.7. Gallery Releases Versioning scheme Until 2016, eric used a software versioning scheme with a three-sequence identifier, e.g. 5.0.1. The first sequence represents the major version number which is increased when there are significant jumps in functionality, the second sequence represents the minor number, which is incremented when only some features or significant fixes have been added, and the third sequence is the revision number, which is incremented when minor bugs are fixed or minor features have been added. From late 2016, the version numbers show the year and month of release, e.g. 16.11 for November 2016. Release strategy eric follows the development philosophy of Release early, release often, following loosely a time-based release schedule. Currently a revision version is released around the first weekend of every month, a minor version is released annually, in most cases approximately between December and February. Version history The following table shows the version history of eric, starting from version 4.0.0. Only major (e.g. 6.0.0) and minor (e.g. 6.1.0) releases are listed; revision releases (e.g. 6.0.1) are omitted. Name Several allusions are made to the British comedy group Monty Python, which the Python programming language is named after. Eric alludes to Eric Idle, a member of the group, and IDLE, the standard python IDE shipped with most distributions. See also Comparison of integrated development environments for Python References External links Code navigation tools Cross-platform free software Debuggers Free HTML editors Free integrated development environments Free integrated development environments for Python Free software programmed in Python Linux integrated development environments Linux programming tools MacOS programming tools Programming tools for Windows Python (programming language) software Software that uses Qt Software that uses Scintilla Software using the GPL license
31963426
https://en.wikipedia.org/wiki/X%20Rebirth
X Rebirth
X Rebirth is a single-player space trading and combat game developed by Egosoft, published by Deep Silver (Europe) and Tri Synergy (America). It is the sixth installment in the X universe adventure video game series, following X3: Albion Prelude (2012). The game runs on Linux, macOS and Microsoft Windows. Egosoft Director Bernd Lehahn has stated that X Rebirth will not be available on consoles. Gameplay X Rebirth incorporates open-ended (or "sandbox") gameplay. As with previous installments in the series, the game takes place in a universe that is active even when the player is not present, involving simulated trade, combat, piracy, and other features. The player as an individual may take part in these or other actions to gain notoriety or wealth, going so far as to be able to construct their own space installations and command fleets of starships, establishing what amounts to their own personal empire and dynamically and drastically altering the game world in the process. Prior to launch, the developers announced that X Rebirth would feature a new interface design, intended to reduce the initial complexity for new players. However, they asserted that the game mechanics would remain comparable in complexity to those found in previous X series titles. After release, many players nonetheless criticised a number of the design decisions made by Egosoft, and some customers requested refunds. On 12 March 2015 the first version (build 0) was released for Linux. Synopsis Its story shows a more focused effort on specific characters than Egosoft's previous 'X' titles. Reception The game has been received negatively by critics, holding a score of 33/100 on Metacritic, indicating "generally unfavorable reviews". As of 23 March 2014, X Rebirth had received 25 patches since release, although the game is widely regarded as a disappointment following previous successes in the series. In April 2014, X Rebirth 2.0 became available, delivering a number of patch fixes and gameplay improvements. Egosoft released the first downloadable content (DLC) for X Rebirth, entitled The Teladi Outpost, on 11 December, together with a host of fixes and improvements to the base game. This update, delivered exclusively for the 64-bit platform, brought the release to version 3.0. The Teladi Outpost DLC features a new system containing two sectors. On the days surrounding and during the weeks following the update, Egosoft delivered a number of video tutorials to YouTube covering many common facets of gameplay. See also List of PC games References External links Argonopedia, the X series wiki 2013 video games Science fiction video games Space trading and combat simulators Video games developed in Germany Video game sequels Windows games MacOS games Linux games X (video game series) Deep Silver games Video games with Steam Workshop support Video games set in the 30th century de:X (Spieleserie)#X Rebirth
148417
https://en.wikipedia.org/wiki/Stored-program%20computer
Stored-program computer
A stored-program computer is a computer that stores program instructions in electronically or optically accessible memory. This contrasts with systems that stored the program instructions with plugboards or similar mechanisms. The definition is often extended with the requirement that the treatment of programs and data in memory be interchangeable or uniform. Description In principle, stored-program computers have been designed with various architectural characteristics. A computer with a von Neumann architecture stores program data and instruction data in the same memory, while a computer with a Harvard architecture has separate memories for storing program and data. However, the term stored-program computer is sometimes used as a synonym for the von Neumann architecture. Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'". Hennessy and Patterson wrote that the early Harvard machines were regarded as "reactionary by the advocates of stored-program computers". History The concept of the stored-program computer can be traced back to the 1936 theoretical concept of a universal Turing machine. Von Neumann was aware of this paper, and he impressed it on his collaborators. Many early computers, such as the Atanasoff–Berry computer, were not reprogrammable. They executed a single hardwired program. As there were no program instructions, no program storage was necessary. Other computers, though programmable, stored their programs on punched tape, which was physically fed into the system as needed. In 1936, Konrad Zuse anticipated in two patent applications that machine instructions could be stored in the same storage used for data. The University of Manchester's Baby is generally recognized as the world's first electronic computer that ran a stored program—an event that occurred on 21 June 1948. However, the Baby was not regarded as a full-fledged computer, but rather as a proof-of-concept predecessor to the Manchester Mark 1 computer, which was first put to research work in April 1949. On 6 May 1949 the EDSAC in Cambridge ran its first program, making it another electronic digital stored-program computer. It is sometimes claimed that the IBM SSEC, operational in January 1948, was the first stored-program computer; this claim is controversial, not least because of the hierarchical memory system of the SSEC, and because some aspects of its operations, like access to relays or tape drives, were determined by plugging. The first stored-program computer to be built in continental Europe was the MESM, completed in the Soviet Union in 1950. The first stored-program computers Several computers could be considered the first stored-program computer, depending on the criteria. IBM SSEC became operational in January 1948 but was electromechanical. In April 1948, modifications were completed to ENIAC so that it could function as a stored-program computer, with the program held in its function tables (set by dials), which could store 3,600 decimal digits for instructions. ARC2, a relay machine developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London, officially came online on 12 May 1948. It featured the first rotating drum storage device. Manchester Baby, a developmental, fully electronic computer that successfully ran a stored program on 21 June 1948. It was subsequently developed into the Manchester Mark 1, which ran its first program in early April 1949. 
Electronic Delay Storage Automatic Calculator, EDSAC, which ran its first programs on 6 May 1949, and became a full-scale operational computer. EDVAC, conceived in June 1945 in First Draft of a Report on the EDVAC, but not delivered until August 1949. BINAC, delivered to a customer on 22 August 1949. It worked at the factory but there is disagreement about whether or not it worked satisfactorily after being delivered. If it had been finished at the projected time, it would have been the first stored-program computer in the world. It was the first stored-program computer in the U.S. The Manchester University Transistor Computer is generally regarded as the first transistor-based stored-program computer, having become operational in November 1953. Telecommunication The concept of using a stored-program computer for switching of telecommunication circuits is called stored program control (SPC). It was instrumental in the development of the first electronic switching systems by American Telephone and Telegraph (AT&T) in the Bell System, a development that started in earnest by c. 1954 with initial concept designs by Erna Schneider Hoover at Bell Labs. The first such system was installed on a trial basis in Morris, Illinois in 1960. The storage medium for the program instructions was the flying-spot store, a photographic plate read by an optical scanner with an access time of about one microsecond. For temporary data, the system used a barrier-grid electrostatic storage tube. See also Stored program control References Classes of computers Department of Computer Science, University of Manchester Discovery and invention controversies
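The contrast drawn in the Description section above, between a single memory shared by instructions and data and a Harvard-style machine with separate stores, can be made concrete with a minimal sketch that is not part of the article: a hypothetical toy accumulator machine in Python whose opcodes, memory layout, function name, and example program are all invented purely for illustration.

```python
# Illustrative sketch only (not from the article): a toy stored-program machine.
# Instructions and data live in one memory array, in the spirit of the
# von Neumann organization described above. The opcode encoding
# (opcode * 100 + operand) is an arbitrary choice for this demo.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        word = memory[pc]               # fetch: an instruction is just a number in memory
        opcode, operand = divmod(word, 100)
        pc += 1
        if opcode == 1:                 # LOAD addr  -> acc = memory[addr]
            acc = memory[operand]
        elif opcode == 2:               # ADD addr   -> acc += memory[addr]
            acc += memory[operand]
        elif opcode == 3:               # STORE addr -> memory[addr] = acc
            memory[operand] = acc
        elif opcode == 4:               # JUMP addr  -> pc = addr
            pc = operand
        elif opcode == 5:               # HALT
            return memory
        else:
            raise ValueError(f"unknown opcode {opcode} at address {pc - 1}")

# Addresses 0-3 hold the program; addresses 5-7 hold the data.
memory = [
    105,   # 0: LOAD  5   (acc = 2)
    206,   # 1: ADD   6   (acc = 2 + 3)
    307,   # 2: STORE 7   (memory[7] = 5)
    500,   # 3: HALT
    0,     # 4: (unused)
    2,     # 5: data
    3,     # 6: data
    0,     # 7: result
]

print(run(memory)[7])   # prints 5
```

Because the instructions at addresses 0–3 are ordinary numbers in the same array as the data at addresses 5–7, a program on such a machine could in principle modify its own instructions, which is exactly what a flat von Neumann memory permits and a separate-store Harvard design does not.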
18965074
https://en.wikipedia.org/wiki/Batangas%20State%20University
Batangas State University
The Batangas State University (BatSU or BatStateU; Filipino: Pambansang Pamantasan ng Batangas) is a Level IV state university in the province of Batangas, Philippines. Established as a manual training school in 1903, Batangas State University is the oldest higher education institution in the country's Calabarzon Region. It was converted into a state college in 1968 through RA 5270, and was renamed Pablo Borbon Memorial Institute of Technology. It was finally elevated into a state university in 2001 by virtue of RA 9045. It has 11 campuses and more than 35,000 students enrolled in over 110 undergraduate and graduate degree programs. Batangas State University was named one of the country's model higher education institutions by the Commission on Higher Education or CHED in 2016. The university's Electronics Engineering program is designated by CHED as a national Center of Excellence, and its Electrical Engineering, Mechanical Engineering, Development Communication, and Teacher Education programs are national Centers of Development. It has ISO 9001:2015 certification from TÜV Rheinland Philippines, Inc., and is host to the first China-Philippines Silk Road Institute in the country. BatStateU is the first state university in the Philippines with engineering and information technology programs accredited by the US-based Accreditation Board for Engineering and Technology or ABET – Engineering Accreditation Commission and Computing Accreditation Commission. With 15 development centers, it is recognized by the Regional Development Council of Region IV-A as the Regional Center for Technology Business Incubation and Development, and as the Regional Center for Science, Technology, Engineering, and Environment Research. In 2020, the university received a three-star rating from Quacquarelli Symonds Stars University rating. Through Proclamation No. 947, President Rodrigo Roa Duterte designated the BatStateU Knowledge, Innovation, and Science Technology or KIST Park as a Special Economic Zone. It is the first KIST Park registered by the Philippine Economic Zone Authority or PEZA. This was officially launched on July 20, 2020, in virtual ceremonies attended by key government and industry officials. History Early years Batangas State University was originally established as a Manual Training School in 1903 through the supervision of its first American principal, Mr. Scheer. The institution aimed to train youth for beneficial jobs specifically in woodworking. Two years later, it was renamed Batangas Trade School with Mr. Schartz, Zacarias Canent, Isaias, and Nad Pascual Magcamit as its principals, successively. The school was destroyed by fire in 1928 and classes were held temporarily at the old government building near the present Basilica of Immaculate Conception church. The construction of the school building at the site of Batangas State University's Main Campus I began in 1932. After the Liberation, Batangas Trade School resumed activities on 10 September 1945 with Vicente J. Mendoza as its principal. Under the Philippine Rehabilitation Act of 1946, the school was renovated and the first batch of female students were admitted when courses in food trade, garment, and cosmetology were introduced as a response to the growing need of female workforce. Pablo Borbon era Sometime before 1952, the school was renamed Pablo Borbon Memorial Trade School as a tribute to Pablo Borbon who served as the 6th governor of Batangas from 1910 to 1916. Through Republic Act No. 
741, the school gained a national trade status on 18 June 1952. Again, it was renamed Pablo Borbon Regional School of Arts and Trades on 22 June 1957 as mandated by Republic Act No. 1957. Two months later, Arsenio Galauran became the school superintendent while the institution started to offer technical courses. The school started offering mechanical and electrical engineering in 1961. Galauran was succeeded by Vicente Mendoza in November 1962. Mendoza was then followed by Rosauro de Leon on 8 June 1963. It was during de Leon's administration that the school began to offer terminal classes in auto mechanics, cosmetology, electronics, dressmaking, machine shop practice, and radio mechanics. On 19 June 1965, Republic Act No. 4582 directed the school to offer degree courses in industrial education and industrial arts. As authorized by Republic Act No. 5270, Pablo Borbon Regional School of Arts and Trades was elevated into a state college and renamed Pablo Borbon Memorial Institute of Technology or PBMIT on 15 June 1968. At the time of its conversion, it was the 23rd state college in the country. Rosauro de Leon was appointed to become PBMIT's first president. In 1972, the newly established state college started to offer courses in electrical and mechanical engineering courses. Sometime before 1973, a secondary school department that came to be known as the Laboratory School was inaugurated. By 1973, Marcos Ato was its principal when the Laboratory School adopted the Revised Secondary Education Program or RSEP. The following year, the Graduate School was formally opened with Master of Arts in Industrial Education major in Administration and Supervision as its pioneer course. This was followed in 1978 when Master of Management specialized in Business and Public Managements was offered in partnership with former U.P. College of Public Administration. Earlier in 1977, PBMIT launched the Extension Trade Training Program that aimed to train out-of-school youth in electricity, food trades, mechanics, practical automotive, and woodcraft in a span of 200 hours. Isabelo R. Evangelio succeeded de Leon as college president in 1983. A year after Evangelio's ascendancy to the office, PBMIT acquired a three-hectare land in Batangas City. Eventually, this would become the site of Batangas State University's Main Campus II. Evangelio was succeeded by Mariano O. Albayalde in 1986. In the same year, PBMIT broadened its undergraduate programs in home economics, mathematics, and science. In association with Technological University of the Philippines or TUP, a doctoral degree in Industrial Education Management was offered in 1987. A science class with emphasis in mathematics and science of the Special Science Curriculum was piloted in the Laboratory School from 1987 to 1990 through the supervision of its principal, Mercedes del Rosario. Albayalde's presidency was followed by Ernesto M. De Chavez in 1989. Courses in English language, elementary and secondary education, and computer science were made available the subsequent year. Simultaneously, PBMIT spearheaded the Dual Training System or DTS that was intended for aspiring technicians. DTS was conducted on a trimester basis; classes were held four days a week in industry and two days in school. By 1991, two more courses in development communication and biology were offered. Starting from 1993, the Laboratory School adopted the Technology Based-Curriculum to conform with PBMIT's Science Education Program. 
Together with Philippine Science High School and Quezon City Science High School, the three were the first secondary schools in the Philippines to adopt the aforementioned curriculum. In 1994, an extension campus was opened in Balayan with welding fabrication and automotive, electrical, and electronics technology as its premier courses. From 1995 to 2000, numerous courses in various disciplines were introduced. Some of these were architecture, business administration, chemical engineering, sanitary engineering, fine arts, information technology, psychology, and public administration. The former College of Liberal Arts, Science, and Computer Studies; School of Accountancy, Business and Economics, Center for Gender, and Poverty Studies; and School of Food Science were established. A separate department for primary students was created that offered Kindergarten I and II in preschool and Grade I in elementary. Conversion into a State University On 22 March 2001, Pablo Borbon Memorial Institute of Technology was converted into Batangas State University by virtue of Republic Act No. 9045. Ernesto M. De Chavez became the university's first president. The conversion also led to the unification of the Grade School Department and the Laboratory School from which the Integrated School came into existence with Maxima Ramos as its first director. On 17 July 2006, Nora L. Magnaye assumed as the university's second president and the first woman to hold the position. During her presidency, Batangas State University started to establish ties with different universities and colleges in China, Malaysia, South Korea, Thailand, and Vietnam. On 17 July 2014, Tirso A. Ronquillo was appointed as the third university president. Since 2015, massive infrastructure development was concretized in the university's campuses. It was during Dr. Ronquillo’s term when the university became a Level IV university, received ISO 9001:2015 certification, and was awarded Three Stars by the QS Stars rating. It was also during this time when its engineering and information technology programs were accredited by the US-based Accrediting Board for Engineering and Technology. The university established the first KIST park in the country, started offering new emerging programs, developed research and development centers, and expanded international partnerships during his term. In December 2019, the university launched its ten-year strategic plan highlighted by its new vision, mission, and strategic direction until 2029. Present Administration The BatStateU Board of Regents The Batangas State University Board of Regents is the highest governing body of the university, as stipulated in Sec. 5 of RA 9045. The Board regularly convenes at least once every quarter. Currently, it is composed of the following: Dr. Lilian De Las Llagas, CHED Commissioner – Chairperson-designate Dr. Tirso Ronquillo, University President – Vice Chairperson Sen. Joel Villanueva, Chair of the Senate Committee on Higher and Technical Education Rep. Mark Go, Chair of the House Committee on Higher and Technical Education Dir. Luis Banua, Regional Director of the National Economic and Development Authority, Regional Office IV-A Dir. Alexander Madrigal, Regional Director of the Department of Science and Technology, Regional Office IV-A Dr. Jesse Nelson Llama, President of the Confederation of BatStateU Faculty Associations Mr. Arvin Lloyd Atienza, President of the Confederation of BatStateU  Student Associations Engr. 
Armando Plata, President of the Confederation of BatStateU Alumni Associations Mr. Faustino Caedo, Private/Prominent Citizen selected by the Board of Regents Prof. Enrico Dalangin – University Secretary The Executive Committee The university's Executive Committee (ExeCom) serves as the institution's management committee that spearheads strategic planning, internal policy formulation, decision making, and policy implementation based on Board-approved policies and guidelines. It is chaired by the University President. Its members are the Vice Presidents and the Executive Directors of campuses. The Administrative and Academic Councils The university has an Administrative Council, as stipulated in Section 10 of RA 9045. It consists of the president of the university as the chairman, the vice presidents, deans, directors, and other officials of equal rank as members. The Administrative Council reviews and recommends to the Board policies governing the administration, management and development planning of the university for appropriate action. The Academic Council, as provided in Section 11 of RA 9045, has the president of the university as chairman and all members of the instructional staff with the rank of not lower than assistant professor as members. This council has the power to review and recommend the curricular offerings and rules of discipline of the university, subject to appropriate action by the Board. It shall fix the requirements for admission of students, as well as for graduation and the conferment of degrees, subject to review and/or approval by the Board. Campuses Since 2003, Batangas State University has two main, two satellite, and six extension campuses in Batangas. To maintain camaraderie among its campuses, the university administers several annual activities like quiz bees and intramurals. The university's main campuses are located in Batangas City; Pablo Borbon Main I is at Rizal Avenue, Poblacion, while Pablo Borbon Main II is within Golden Country Homes Subdivision in Brgy. Alangilan. Both are named in honor of former governor Pablo Borbon. Being the oldest of all the campuses, Main I is the site of the former Batangas Trade School which was established in 1932. Since then, Main I has been the flagship campus and the seat of the administration of the university. The site of the second oldest campus, Main II, was acquired in 1984. On 25 February 2000, the Apolinario R. Apacible School of Fisheries or ARASOF in Brgy. Bucana, Nasugbu was incorporated into the former Pablo Borbon Memorial Institute of Technology as its first satellite campus. With the implementation of Republic Act No. 9045, two more satellite campuses were incorporated into the then newly formed Batangas State University; these were Jose P. Laurel Polytechnic College or JPLPC in Poblacion, Malvar and a branch of the Polytechnic University of the Philippines or PUP in Poblacion, Santo Tomas. However, on 22 May 2007, Congress enacted Republic Act No. 9472 that excluded PUP Santo Tomas from Batangas State University. Earlier in 1994, the university's third oldest and first extension campus was inaugurated in Brgy. Caloocan, Balayan. In 2000, a memorandum of agreement was signed for the purpose of establishing more extension campuses in Lipa City, Rosario, Lobo, San Juan, Calaca, Padre Garcia, San Pascual, and Taysan. The said campus in Brgy. Marawoy, Lipa City was named Don Claro M. Recto campus as a tribute to the well-known Filipino politician while the one in Brgy. Namunga, Rosario was named Jose B.
Zuño campus in honor of Rosario's first postwar mayor. The extension campuses in Lobo and San Juan were constructed in Brgy. Masaguitsit and Brgy. Talahiban, respectively. A ceremony marking the start of construction of another extension campus in Mabini was held on 8 June 2017, and the campus was launched on 6 August 2018. University Symbols Red Spartans The Red Spartans is the official mascot of the university. It was launched on 23 September 2014 during the 111th Foundation Anniversary of the institution. Designed by John Jeffrey Alcantara, a Fine Arts major of the university, it was officially registered at the Intellectual Property Office of the Philippines on 15 April 2016 with Certificate of Registration No. 4/2014/00013631. Tower of Wisdom In an evening fellowship on 19 November 2016, the university marker named Tower of Wisdom was inaugurated in its Main Campus during the 113th founding anniversary of Batangas State University. The design was conceptualized by Dr. Tirso Ronquillo, and an application to register it as an Industrial Design was filed with the Intellectual Property Office on 15 November 2016. The structure rests on a 16-meter-long and 11.3-meter-wide platform, to symbolize the 16th year of the current century and the 113th founding anniversary of the institution. It is 19.03 meters high, symbolic of the year the institution was founded. Through Resolution No. 729, s.2017, the Board of Regents declared the Tower of Wisdom as the official landmark of the university. Academics Program Offerings The university offers academic programs in engineering, architecture, fine arts, interior design, law, computer science, information technology, industrial technology, teacher education, nursing, dietetics, accountancy, management accounting, business administration, entrepreneurship, public administration, customs administration, tourism management, hospitality management, development communication, criminology, biology, chemistry, mathematics, agriculture, forestry, and fisheries and aquatic sciences. Recently, the university started offering programs in disaster risk management, the first in the Calabarzon region. It also has an Integrated School in its Main Campus I and a Laboratory School in its Nasugbu campus, both offering basic education, junior high school, and senior high school (STEM strand) under a science and technology-based curriculum. Huawei Technologies Philippines, Inc. also partnered with the university in offering the first Huawei ICT Academy in the CALABARZON region, and the seventh academy in the country. Offered in multiple sites for increased student access to quality education, the university's academic programs are anchored on pragmatic, relevant, and socially responsive curricula. These are government-recognized and have been issued certificates of program compliance by the Commission on Higher Education. In addition, its programs are regularly accredited by the Accrediting Agency of Chartered Colleges and Universities in the Philippines or AACCUP, Inc. Several of its programs have already reached Level IV, the highest level of accreditation by AACCUP. These are the Mechanical Engineering, Elementary Education, Secondary Education, and Development Communication programs. Other programs have passed Level IV-Phase 1, and will undergo the Phase 2 evaluation before being awarded Level IV accreditation. In 2015, AACCUP recognized Batangas State University as among the top two state universities with the most accredited programs in the country.
The College of Engineering, Architecture and Fine Arts offers the university's flagship degree programs, and is a pioneer in the full implementation of outcomes-based teaching and learning and the integration of Technopreneurship in its curricula. Aside from AACCUP, its programs are also regularly accredited by the Philippine Technological Council or PTC, the umbrella organization of engineering professionals in the Philippines, and the US-based Accreditation Board for Engineering and Technology. New Academic Programs As part of the university's ten-year strategic plan, Batangas State University offers new graduate and undergraduate programs with curricula that reflect 21st century competencies along emerging industries. These include aerospace engineering, geological engineering, geodetic engineering, biomedical engineering, automotive engineering, transportation engineering, metallurgical engineering, naval architecture and marine engineering, public health for disaster response, and ceramics engineering for the undergraduate programs. The new graduate programs are on urban planning and design, construction management, materials science engineering, transportation engineering, engineering management, engineering education, supply chain management, port management, advanced manufacturing, data science and analytics, artificial intelligence, energy engineering, and earthquake engineering. Academic Calendar In compliance with Republic Act No. 7797, the Board of Regents approved the movement of the university's academic calendar from June–March to August–May starting in 2016. This was also done to align the university's academic calendar with that of the ASEAN and the international community. Libraries and Laboratories The university library hosts a wide collection of references and subscriptions to journals, magazines, and newspapers. It has an e-Library system with library automation software that supports the Destiny Library Manager for a more detailed transaction in online circulation, inventory reporting, computerized logbooks, and utilization reports. The university also has access to IEEE XPlore and Science Direct. The library's website has been redesigned to include new features such as the enhancement of the Online Public Access Catalog or OPAC, the inclusion of e-journals, e-books, and other library resources section, library space reservations feature, and enhanced borrowing and reservation feature. Recently, the university has realigned its budget for subscription to digital content, ensuring that the University Library remains a relevant environment for faculty and students to acquire resources essential for their teaching, learning, and research. Soon, the university's Science, Technology, Engineering, Agriculture and Mathematics (STEAM) Library will be inaugurated in its main campus. Each college has program-specific laboratories to support hands-on, simulated learning and application of theories. One of the most recent laboratories established is the fabrication laboratory called the Labspace for Innovation Knowledge-Honing and Application (LIKHA FabLab), which is part of the university's Manufacturing Research Center. Located in the Science, Technology, Engineering, and Environment Research or STEER Hub, this laboratory has a high quality research infrastructure for developing models and making prototypes for mass production. 
It was established in 2018 through a grant from the Department of Trade and Industry of P12 million worth of state-of-the-art equipment and facilities to accommodate university students and micro, small and medium enterprises (MSMEs). It is equipped for digital design, 3D printing, laser engraving/cutting, CNC wood routing, vacuum forming, large-format printing, and CNC metal milling. In response to the need for personal protective equipment or PPE due to the COVID-19 pandemic in early 2020, the LIKHA FabLab fabricated the Red Spartan Face Shields. Along with masks, face shields are a basic requirement for health workers as an additional barrier to reduce the risk of viral transmission via airborne droplets. Through the Optimized Vacuum Forming Method for fabricating the face shield, the LIKHA FabLab reduced the fabrication time to six minutes per face shield, compared to the one hour and 46 minutes required when using the 3D printing process alone. As of June 2020, the LIKHA FabLab has already produced over 2,000 face shields distributed among health care professionals and frontline personnel in Batangas. Learning Management System The university has a virtual learning environment that was initially utilized by graduate school professors but was eventually used by undergraduate students as well. It partnered with Google for the Google Education program, enabling the faculty and students to have free, unlimited access to G Suite and all its products and features. The university recently established its Center for Transformative Learning (CenTRaL), which serves as the university's arm in harnessing innovative technologies in delivering alternative modes of teaching and learning. It has three major components: capacity building and training, ICT technical services, and content development and evaluation. Admissions, Scholarships and Housing Prospective students need to pass the BatStateU Admission Test prior to admission. Applicants are guided through the university's online facility on how to apply and qualify to take the entrance examination. There are four filing centers and six testing centers in the university. The Admission Test is administered once a year, with all information about application and test dates regularly announced through the university's website. Upon passing the test, registration of students can be done online or through physical office transactions. In terms of scholarships, Filipino students who pass the college entrance examination can enjoy the free tuition and miscellaneous fees provided for in RA 10931 or the Universal Access to Quality Higher Education Act of 2018. Foreign nationals, on the other hand, are welcome and will be assisted by the Office for External Affairs. The university gives financial assistance and support to students through its Scholarship and Financial Assistance Office. In addition, some industries and institutional partners provide financial assistance to qualified students. The university has a hostel in BatStateU Main I and in BatStateU ARASOF-Nasugbu, and a student dormitory in BatStateU Main II. In addition, the Office for Student Housing and Residential Services accredits boarding houses and dormitories outside of the university to ensure the safety and convenience of its students. Awards and Recognitions International Recognition and Accreditation In March 2020, Quacquarelli Symonds or QS Stars University rating gave Batangas State University a three-star rating.
It received five stars for Teaching; four stars for Employability; one star for Internationalization; two stars for Academic Development; three stars for Facilities; four stars for Inclusiveness; two stars for Specialist Criteria: Innovation; and four stars for Specialist Criteria: Electronics Engineering. Batangas State University is also the first state university in the Philippines with engineering and information technology programs accredited by the Accreditation Board for Engineering and Technology or ABET. The ABET – Engineering Accreditation Commission awarded accreditation to eight engineering programs of the university: Chemical Engineering, Civil Engineering, Computer Engineering, Electrical Engineering, Electronics Engineering, Industrial Engineering, Mechanical Engineering, and Sanitary Engineering. In addition, the ABET – Computing Accreditation Commission has accredited the university's Information Technology program. TÜV Rheinland Philippines, Inc. awarded the university the ISO 9001:2008 certification in December 2017, and the ISO 9001:2015 certification after passing the external surveillance audit in September 2018. The ISO certification covers the design, development, and implementation of higher education services. National Awards and Citations In 2016, the Commission on Higher Education selected Batangas State University as a Model Higher Education Institution. This made BatStateU a host university for the Philippine Higher Education Career System - Executive Development Program or EDP, which is part of the University Dynamics Laboratory of CHED in partnership with the Development Academy of the Philippines. The university hosted ten candidates of the EDP from 26 November to 1 December 2016. CHED also recognizes the university's Electronics Engineering program as a national Center of Excellence, while its Electrical Engineering, Mechanical Engineering, Development Communication, and Teacher Education programs are designated as national Centers of Development. Over 190 of its graduates also topped national licensure and certification examinations administered by the country's Professional Regulation Commission or PRC in various fields as of December 2019. Specifically, Batangas State University has been consistently hailed as a top performing school in the Mechanical Engineering licensure examination, making it one of the top mechanical engineering schools in the country. Two of Batangas State University's research projects received the National Gawad KALASAG (KAlamidad at Sakuna LAbanan, SAriling Galing ang Kaligtasan) award from the Office of Civil Defense – National Disaster Risk Reduction and Management Council or NDRRMC. The amphibious vehicle known as the Tactical Operative Amphibious Drive or TOAD, which can be used for rescue operations during heavy floods, received the special award in November 2016. On the other hand, the Solar-Powered Isotropic Generator of Acoustic Wave or SIGAW, which is a tsunami early warning device, received Special Recognition during the Gawad Kalasag awards night in December 2018. Gawad Kalasag is an annual awarding ceremony for significant initiatives in the promotion and advancement of DRRM in the country. During the 2019 Search for Sustainable and Eco-friendly Schools, Batangas State University was named the Regional Champion for CALABARZON. The university also received the Nestle Water Leadership Award and the One Meralco Energy Leadership Award.
In ceremonies held on 22 November 2019, the university was adjudged as the National Winner – College Level for the Energy Leadership Award. The biennial Search is organized by the Department of Environment and Natural Resources through the Environment Management Bureau, in collaboration with the Department of Education (DepED), and the Commission on Higher Education, with support from Nestle Philippines, Inc., One Meralco Foundation, Inc., and Smart Communications, Inc. Internationalization The university is a lifetime member of the Association of Universities of Asia and the Pacific, and a member of the University Mobility in Asia and the Pacific. It is also a member of the ASEAN Federation of Engineering Organizations, composed of engineering institutions and organizations of ASEAN countries. The university has also established partnerships with foreign universities, organizations, and industries as part of its internationalization program. These include USAID Global Research and Innovation Fellowship Network or GRIFN, the United Nations World Food Programme, the Center for Appropriate Technology (Gruppe Angepasste Technologie-GrAT) in Austria, Texas A&M University in the United States, Kochi University in Japan, Singapore Polytechnic in Singapore, Global Korean Nursing Foundation in South Korea, Universiti Teknologi Malaysia in Malaysia, International University in Cambodia, Universiti of Transport and Technology in Vietnam, and Shanghai University of Electric Power in China, among others. It is also host to the first China-Philippines Silk Road Institute in the country, which was launched on 10 December 2019 in its main campus. Recently, it partnered with the National University in California, USA, and the Malaysia Institute of Supply Chain Innovation, which is a member of the Massachusetts Institute of Technology (MIT) Global SCALE (Supply Chain and Logistics Excellence) Network. As part of its personnel development program, the university sends its faculty to foreign universities for advanced studies, immersion, study visits, and benchmarking. A number of faculty members have been sent to the University of California in Berkeley, the Erasmus University in Rotterdam in the Netherlands, Louisiana State University in Baton Rouge, Louisiana, U.S., the National Sun Yat-Sen University in Taiwan, and Pusan National University in Busan City, South Korea. It also has a student exchange program in partnership with the Rajamangala University of Technology Thanyaburi in Thailand, and regularly sends its teacher education students to the Thai Nguyen University of Agriculture and Forestry in Vietnam for their internship and practice teaching. In 2015, the Board of Regents approved the Balik-Scientist / Balik-Professional Program, wherein Filipino scientists who are based or have served in foreign countries are invited to the university to share their expertise and provide technical assistance in the conduct of research and development activities. Since then, the university has hosted several scientists, including Dr. Josefino Comiso from the University of California, Los Angeles, who is a senior research scientist at the Cryospheric Sciences Laboratory of the NASA Goddard Space Flight Center and specializes on polar oceanography, climate change and satellite remote sensing. Other Balik-Scientists engaged by the university are Dr. Rodrigo Jamisola Jr. 
of Botswana International University of Science and Technology in Botswana, Southern Africa in the field of robotics and autonomous underwater vehicles; Dr. Pher Errol Quinay of the University of Tokyo, Japan in the field of computational earthquake engineering; Dr. Ginno Lizano Andres of Ritsumeikan University, Japan for capacitive deionization technology; and Dr. Abigail Cid of Kyoto University, Japan, on stable isotope of oxygen in phosphate and its analysis. The biennial International Research Conference on Innovations in Engineering, Science, and Technology (IRCIEST) is hosted by the university, the most recent of which was held on 4–6 December 2019. It was the fourth IRCIEST and centered on topics related to the Fourth Industrial Revolution, hence it was called IRCIEST 4.0. Keynote speakers in previous conferences include Engr. Diosdado “Dado” Banatao of Tallwood Venture Capital, Dr. David Hall of the USAID-STRIDE Program, Mr. Sagiv Massad of the Technology and Cyber Security Business Profile in Israel, and Hon. Fortunato T. Dela Peña, Secretary of the Philippines’ Department of Science and Technology. In 2018, the university hosted the first ASEAN Conference and Exposition on Disaster Risk Management and Climate Change Adaptation, in partnership with Universiti Teknologi Malaysia and the National University of Kaohsiung, Taiwan. USec. Ricardo B. Jalad of the Department of National Defense served as the keynote speaker in the conference. The university’s College of Engineering, Architecture and Fine Arts has 29 ASEAN Engineers in its faculty roster, awarded by the ASEAN Federation of Engineering Organisations (AFEO), which facilitates the mobility of engineers within ASEAN countries. In addition, the college holds memberships in international professional organizations, and it is also a member of national associations such as the Aerospace Industries Association of the Philippines; Semiconductor and Electronics Industries in the Philippines, Inc.; Institute of Electronics Engineers of the Philippines; Institute of Integrated Electrical Engineers of the Philippines; Philippine Institute of Civil Engineers; Philippine Institute of Chemical Engineers; Philippine Society of Sanitary Engineers; Philippine Society of Mechanical Engineers; Philippine Institute of Industrial Engineers; and the United Architects of the Philippines. On February 15, 2021, the university partnered with the Embassy of India and formalized five cooperation agreements with industries based in India. Shambhu S. Kumaran, Ambassador of India to the Philippines, witnessed the signing ceremonies and offered the BatStateU faculty fully-funded doctorate scholarships in the QS-ranked Indian Institute of Technology, as well as specialized training courses in engineering and technology through the India-ASEAN framework. Research and Innovation Batangas State University has one of the largest research and training infrastructures in the region, the seven-storey Calabarzon Integrated Research and Training Center (CIRTC) located in its Main Campus I. In 2017, it was endorsed by the Regional Development Council of Region IV-A as Calabarzon’s Center for analytical testing and research services. It also has a Food Innovation Center or FIC in its Main Campus II, which serves as a hub for innovation in product/process development, and provides marketing strategies, food analysis, food safety, and quality training.
The FIC is not limited to the BatStateU community but also serves micro, small, and medium enterprises in Batangas Province, where it envisions transforming the livelihood of communities. The Science, Technology, Engineering and Environment Research Hub or STEER Hub is located in its Main Campus II in Brgy. Alangilan, Batangas City. It houses the Center for Technopreneurship and Innovation, the Innovation and Technology Support Office, the LIKHA FabLab (Manufacturing Research Center), the Electronic Systems Research Center, the Environment Research Center, the ICT Research Center, and the Material Science and Testing Research Center. The STEER Hub is endorsed by the Regional Development Council of Region IV-A as Calabarzon’s Center for Science, Technology, Engineering and Environmental Research. In February 2018, the university established its newest research center, the Verde Island Passage Center for Oceanographic Research and Aquatic Life Sciences or VIP CORALS. Located in its campus in Lobo, Philippines, the center provides research, teaching, and extension services relevant to the marine resources and marine environment of the Verde Island Passage. The center also trains LGUs in the Verde Island Passage on marine protection, livelihood programs, and policy formulation, boosting the local tourism industry. On 20 September 2019, it co-hosted a symposium entitled Saknungan sa VIP 2019 with the De La Salle University-Manila- Br. Alfred Shields Ocean Research Center and the California Academy of Sciences. It has also hosted Dr. Kent Carpenter, a professor at Old Dominion University in Norfolk, Virginia, for a lecture-forum on 20 November 2019 on the environmental damage to coral reefs in the South China Sea. It was Dr. Carpenter's research which revealed that the Verde Island Passage is the center of the center of marine biodiversity in the world. The university also has an Analytical Laboratory and Testing Services Center to cater to biotechnology and natural products, as well as a Social Innovation Research Center that focuses on research-based extension and community services. It also has a research center focused on Disaster Risk Management, called the Innovation and Advanced Computing Technologies for Disaster Risk Management or iACT4DRM. This is part of a bigger center, called the Adaptive Capacity-Building and Technology Innovation for Occupational Hazards and National Disaster or ACTION Center. KIST Park On 22 May 2020, President Rodrigo Roa Duterte designated the BatStateU Knowledge, Innovation, and Science Technology (KIST) Park as a Special Economic Zone (Information Technology Park) by virtue of Proclamation No. 947. It is the first KIST Park registered by the Philippine Economic Zone Authority or PEZA. The BatStateU KIST Park is envisioned as the country's primary seedbed and enclave for technology that nurtures the development and growth of new, small, high-tech firms. It facilitates the transfer of university know-how to locator companies, encourages the development of faculty- or student-based spin-offs, and stimulates the development of innovative products and processes. Located at the BatStateU Pablo Borbon Main II in Brgy. Alangilan, Batangas City, the KIST Park promises to enable business activity in the area by activating incubation and stimulating the creation of new innovative enterprises.
Its proximity to the National Capital Region as well as accessibility to other parts of the country makes it a top location for technology transfer and commercialization in the Philippines and a central point of contact for innovative technology-based start-ups looking to transform knowledge into marketable products and services. Its national launching ceremony took place virtually on July 20, 2020. It was attended by Atty. Mcjill Fernandez, Deputy Executive Secretary for General Administration of the Office of the President, Sec. Ramon Lopez of DTI, Sec. Fortunato dela Pena of DOST, PEZA Director General BGen. Charito Plaza, and other key government officials and industry representatives. Extension and Community Service The Adopt-a-Barangay program is the university's flagship program on extension and community services. Started in 2014 through a comprehensive needs assessment, the program benefited ten communities within the university's service areas, selected using the Community-Based Monitoring System of the Province of Batangas. Much of the university's community-oriented, research-based extension services have been focused on these barangays. More than 300 community-oriented extension services are conducted annually by the university, with more than 20,000 beneficiaries in each year. Anchored on a ten-point agenda, the Extension Service Program consists of Environment and Natural Resources; Conservation, Protection and Rehabilitation; Technology Transfer, Utilization and Commercialization; Technical Assistance and Advisory; Technical-Vocational Education and Training; Social Development; Parents’ Empowerment through Social Development (PESODEV); Disaster Preparedness and Response/ Climate Change Adaptation; Gender and Development; Community Outreach; and Smart Analytics and Engineering. The university also regularly transfers its developed technologies to communities that need them. Significant technologies transferred include the Solar Isotropic Generator of Acoustic Wave, which is a tsunami Early Warning System installed in selected barangays in Batangas and Quezon provinces. The project was co-funded by the United Nations World Food Programme or UNWFP and DOST IV-A. Another project is the Solar Fish Dryer, which is a robust drying chamber powered by solar energy installed in a community in Taal Lake in Batangas and in Quezon Province. BRYCE, a biodiesel reactor designed to convert used cooking oil into biodiesel, was transferred to a Gawad Kalinga community in Batangas City. Singapore Polytechnic or SP partnered with the university in the conduct of the Learning Express (LeX) Program. This involves Social Innovation Projects where students from Singapore Polytechnic engage in livelihood and volunteer programs that enable them to address the needs of the local communities while developing their academic skills using the Design Thinking Methodology. Several faculty members of the university are sent to Singapore for orientation and continuous training on project implementation and monitoring. Media outlets BatStateU has two student-run media outlets, namely The Lathe Group of Publications and its own college radio DWPB-FM 107.3 FM, first established in the 2000s. References External links Official Website of Batangas State University Online Services of Batangas State University Universities and colleges in Batangas State universities and colleges in the Philippines Education in Batangas City 1903 establishments in the Philippines Educational institutions established in 1903
14904980
https://en.wikipedia.org/wiki/English%20Freakbeat%2C%20Volume%202
English Freakbeat, Volume 2
English Freakbeat, Volume 2 is a compilation album in the English Freakbeat series, featuring recordings that were released decades earlier, in the mid-1960s. Release data The album was released as an LP in 1989 by AIP Records (as #AIP-10047) and as a CD in 1996 (as #AIP-CD-1047). Vinyl-Only tracks and CD bonus tracks The English Freakbeat LPs and CDs have most tracks in common, although not always in the same order, and some of the LP tracks were not included on the CDs. Also, the CD bonus tracks are not always at the end of the album. Thus, for clarity, we have shown tracks for both editions of the album, with vinyl-only tracks and CD bonus tracks indicated. Notes on the tracks The following information is taken mostly from the CD liner notes. Glenn Athens & the Trojans – the leader is also known as Glen Athens – were from Surrey and won the local Beat Trophy in 1964; they were named by Mirabel Magazine as the top "semi-pro" band for two straight years. This track is taken from a 1965 EP on Spot Records. The song by The Sessions was written by Miki Dallon (who was featured as a solo artist on English Freakbeat, Volume 1) and was evidently released only in America. The original incarnation of Mickey Finn as Mickey Finn & the Blue Men in 1964-65 (see English Freakbeat, Volume 4) included a young Jimmy Page. Both sides of their final single are included, from late 1967. The Kubas/the Koobas had early connections with the Beatles; they had not only toured with the legendary band but were also managed by Brian Epstein. Also, they made an appearance in the film Ferry Cross the Mersey. These two previously un-reissued tracks are from the flip side of their first single as the Kubas and from a later release as the Koobas. "Messin' with the Man" by the Beat Merchants is the "B" side of their first single; the "A" side is on English Freakbeat, Volume 1. Their second single, "So Fine" was paired with a song by Freddie and the Dreamers in its American issue. Reportedly, the Wolf Pack is actually the Animals recording under a pseudonym for a soundtrack album called The Dangerous Christmas of Red Riding Hood. Released on ABC-Paramount Records, the song may only have been issued in the U.S. Members of The Syndicats include Steve Howe – a future member of Tomorrow and then Yes – who was the lead guitarist for this track. Their classic "Crawdaddy Simone" can be found on English Freakbeat, Volume 4. At one time, the Soul Agents reputedly included John Anthony, who later managed Genesis and other bands. They also served as the backing band for Rod Stewart for several years. Also, three members of this band later joined the Loot, who are featured on English Freakbeat, Volume 1. These two tracks are from their rare third single. The history of the Irish band the Wheels – rivals of Van Morrison's early band Them – is recounted in the original liner notes of the landmark Nuggets: Original Artyfacts from the First Psychedelic Era, 1965-1968, but only in regard to their connections with the Shadows of Knight, who covered several of their songs. "Don't You Know" is the "B" side of their version of Gloria – the band claimed that Morrison wrote it for them! – while "Road Block" is from the flip side of the second single. Another song by the Wheels can be found on English Freakbeat, Volume 4. The estimable Joe Meek produced both singles by the Blueberries; "Little Baby" is their first release. 
Jimmy Page is rumored to have played on this session, but this is denied by the band; their own guitarist Mike Stubbs later joined the Syndicats. The curious "7 Pounds of Potatoes" – which "come between me and my love", claim the lyrics – is a surprising number from the Dakotas. Although best known for backing Billy J. Kramer (see English Freakbeat, Volume 5) on several more pop-oriented recordings in the British Invasion era, the Dakotas made several hit records in their own right, notably "Cruel Sea" from 1963, which was renamed "Cruel Surf" for its U.S. release and was later covered by the Ventures. The Limeys released six typical Mersey-oriented singles over the 1964 to 1966 period; "Cara-Lin", originally recorded by the Strangeloves is from their final single. The Lancasters is one of several English artists that were "discovered" by Kim Fowley in the 1963-1965 period. This is yet another track in the English Freakbeat series that was released only in the U.S., in December 1964. Among the bandmembers are Ritchie Blackmore, one of the co-founders of Deep Purple. "Satan's Holiday" is actually the venerable "Hall of the Mountain King". Track listing LP Side 1: Glen Athens & the Trojans: "Let Me Show You" — rel. 1965 The Sessions: "Let Me In" — rel. 1965 Mickey Finn: "Garden of My Mind" — rel. 1967 The Kubas: "I Love Her" The Beat Merchants: "So Fine" — rel. 1965 Wolfpack: "We're Gonna Howl" The Syndicats: "Howlin' for My Baby" (Howlin' Wolf) Side 2: The Soul Agents: "Gospel Train" The Soul Agents: "Let's Make it Pretty Easy" (John Lee Hooker), vinyl-only track The Muleskinners: "Back Door Man" — rel. 1965 The Wheels: "Don't You Know" The Blueberries: "Please Don't Let Me Know" — rel. 1966 The Beat Merchants: "Messin' with the Man" — rel. 1964 The Dakotas: "7 Pounds of Potatoes" — rel. 1967 CD Glenn Athens & the Trojans: "Let Me Show You" — rel. 1965 The Sessions: "Let Me In" — rel. 1965 Mickey Finn: "Garden of My Mind" — rel. 1967 Mickey Finn: "Time to Start Loving You" — rel. 1967, CD bonus track The Kubas: "I Love Her" The Koobas: "Face", CD bonus track The Beat Merchants: "So Fine" — rel. 1965 The Beat Merchants: "Messin' with the Man" — rel. 1964 The Wolf Pack: "We're Gonna Howl" The Syndicats: "Howlin' for My Baby" (Howlin' Wolf) The Soul Agents: "Gospel Train" The Soul Agents: "I Just Wanna Make Love to You", CD bonus track The Muleskinners: "Back Door Man" (Howlin' Wolf) — rel. 1965 The Muleskinners: "Missed Your Lovin'", CD bonus track The Wheels: "Don't You Know" — rel. 1965 The Wheels: "Road Block" — rel. 1965 The Blueberries: "Please Don't Let Me Know" — rel. 1966 The Blueberries: "It's Gonna Work out Fine" — rel. 1966, CD bonus track The Blue Rondos: "Little Baby", CD bonus track The Dakotas: "7 Pounds of Potatoes" — rel. 1967 The Limeys: "Cara-Lin" — rel. 1966, CD bonus track The Lancasters: "Earthshaker" — rel. 1964, CD bonus track The Lancasters: "Satan's Holiday" — rel. 1964, CD bonus track 1989 compilation albums Compilation albums by British artists Pop rock compilation albums Psychedelic rock compilation albums
39192504
https://en.wikipedia.org/wiki/University%20of%20Illinois%20Department%20of%20Computer%20Science
University of Illinois Department of Computer Science
The Department of Computer Science (CS) at the University of Illinois at Urbana-Champaign has consistently been ranked as a top computer science program in the world. U.S. News & World Report ranks UIUC's Computer Science as a Top 5 CS Graduate School program in the nation as of 2018, and as a Top 5 CS Undergraduate School program in the nation as of 2021. The University of Illinois at Urbana-Champaign is also ranked as one of the Top 5 Graduate Schools in Computer Engineering. CSrankings.org puts UIUC among the Top 2 Computer Science schools in the world by publications and research output in top conferences over the past 10 years. Since its reorganization in 1964, the Department of Computer Science has produced a myriad of publications and research that have advanced the field of Computer Science. In addition, many faculty and alumni have led modern-day applications and projects such as Mosaic (web browser), LLVM, PayPal, Yelp, YouTube, Malwarebytes, and Oracle. History In 1949, the University of Illinois created the Digital Computer Laboratory following joint funding between the University and the U.S. Army to create the ORDVAC and ILLIAC I computers under the direction of physicist Ralph Meagher. The ORDVAC and ILLIAC computers were among the earliest von Neumann architecture machines to be constructed. Once completed in 1952, the ILLIAC I inspired machines such as the MISTIC, MUSASINO-1, SILLIAC, and CYCLONE, as well as providing the impetus for the university to continue its research in computing through the ILLIAC II project. Yet despite such advances in high-performance computing, faculty at the Digital Computer Laboratory continued to conduct research in other fields of computing as well, such as human-computer interaction through the PLATO project, the first computer music (the ILLIAC Suite), computational numerical methods through the work of Donald B. Gillies, and the work of James E. Robertson, co-inventor of the SRT division algorithm (the 'R' in SRT), to name a few. Given this explosion in research in computing, in 1964, the University of Illinois reorganized the Digital Computer Laboratory into the Department of Computer Science, and by 1967, the department awarded its first PhD and master's degrees in Computer Science. In 1982, UIUC physicist Larry Smarr wrote a blistering critique of America's supercomputing resources, and as a result the National Science Foundation established UIUC's National Center for Supercomputing Applications in 1985. NCSA was one of the first places in industry or academia to develop software for the three major operating systems at the time: Macintosh, PC, and UNIX. NCSA in 1986 released NCSA Telnet and in 1993 it released the Mosaic web browser. In 2004, the Department of Computer Science moved out of the Digital Computer Laboratory building into the Thomas M. Siebel Center for Computer Science following a gift from alumnus Thomas Siebel. Statistics As of the 2017–2018 academic year, there were a total of 2,702 students in the department (1,787 undergraduate, 915 graduate). The average salary reported by 2018-2019 undergraduates was $106,551. Incoming 2018 freshman class average ACT score: 33.5; average math ACT score: 34.0.
There are 85 full-time faculty members, in the fields of: Architecture, Compilers and Parallel Computing Artificial Intelligence Bioinformatics and Computational Biology Computers and Education Database and Information Systems Graphics, Visualization Programming Languages, Formal Systems, and Software Engineering Systems and Networking Scientific Computing Theory and Algorithms Human Computer Interaction Degrees and programs Undergraduate The department offers 14 undergraduate degree programs, all leading to Bachelor of Science degrees, through six different colleges: Computer Science (Engineering) Mathematics and Computer Science (Liberal Arts and Science) Statistics and Computer Science (LAS) Computer Science and Chemistry (LAS) Computer Science and Linguistics (LAS) Computer Science and Anthropology (LAS) Computer Science and Astronomy (LAS) Computer Science and Economics (LAS) Computer Science and Geography and Geographic Information Systems (LAS) Computer Science and Advertising (Media) Computer Science and Philosophy (LAS) Computer Science and Animal Sciences (Agricultural, Consumer, and Environmental Sciences) Computer Science and Crop Sciences (Agricultural, Consumer, and Environmental Sciences) Computer Science and Music (Fine and Applied Arts) The department also sponsors a Minor in Computer Science available to all UIUC students. The department also offers two 5-year bachelors/masters programs through the College of Engineering: Bachelor of Science/Master of Science (B.S./M.S.) in Computer Science and Bachelors of Science/Masters of Computer Science(B.S./M.C.S.). Graduate Doctor of Philosophy (Ph.D.) Master of Science (M.S.) in Computer Science Professional Masters of Computer Science (M.C.S.) Online MCS is offered in partnership with Coursera. MCS in Data Science(MCS-DS) Track is offered in partnership with the School of Information Science, the Department of Statistics, and Coursera Master of Science in Bioinformatics (M.S. Bioinformatics) Notable faculty Sarita Adve, principal investigator for the Universal Parallel Computing Research Center Vikram Adve, helped to create LLVM along with Chris Lattner, Former Interim Head of the Department of Computer Science Gul Agha, director of the Open Systems Laboratory and researcher in concurrent computation Prith Banerjee, former senior Vice President of Research at Hewlett Packard and director of HP Labs Roy H. Campbell, Sohaib and Sara Abbasi Professor of Computer Science Herbert Edelsbrunner, recipient of the National Science Foundation's Alan T. Waterman Award David Forsyth, Professor of Computer Science C. William Gear, mathematician specialized in numerical analysis, computer graphics, and software development Donald B. Gillies, mathematician and computer scientist specialized in game theory and computer architecture Bill Gropp, Thomas M. Siebel Chair Professor, director of the National Center for Supercomputing Applications, and co-creator of Message Passing Interface, IEEE Computer Society President-Elect (2021) Jiawei Han, Abel Bliss Professor specialized in data mining Michael Heath, director of the Center for the Simulation of Advanced Rockets and former interim department head (2007–2009) Thomas Huang, researcher and professor emeritus specialized in Human-Computer Interaction Ralph Johnson, Research Associate Professor and co-author of Design Patterns: Elements of Reusable Object-Oriented Software David Kuck, sole software designer on the ILLIAC IV and developer of the CEDAR project Steven M. 
LaValle, principal scientist at Oculus Rift Chung Laung Liu, Professor of Computer Science Ursula Martin, computer scientist specialized in theoretical computer science and formal methods and a Commander of the Order of the British Empire Bruce McCormick, professor of physics, computer science, and bioengineering Klara Nahrstedt, Ralph and Catherine Fisher Professor of Computer Science and director of the Coordinated Science Laboratory David Plaisted, faculty member in the Department of Computer Science until taking a professorship at UNC-Chapel Hill Daniel Reed, former department head (1996–2001) and former director of the National Center for Supercomputing Applications (2000–2003) Edward Reingold, specialized in algorithms and data structures Dan Roth, Professor of Computer Science Rob A. Rutenbar, Abel Bliss Professor and former department head (2010–2017), noted for advances in computer hardware Marc Snir, Michael Faiman and Saburo Muroga Professor of Computer Science and former department head (2001–2007) Shang-Hua Teng, Professor of Computer Science and Gödel Prize laureate Josep Torrellas, Willett Faculty Scholar in Computer Science and research faculty for the Universal Parallel Computing Research Center Marianne Winslett, professor emerita of computer science Stephen Wolfram, former Professor of Physics, Mathematics, and Computer Science and founder of Wolfram Research Frances Yao, Professor of Computer Science and staff at Xerox Palo Alto Research Center Yuanyuan Zhou, Professor of Computer Science and founder of Emphora, Pattern Insight, and Whova Notable alumni Sohaib Abbasi B.S. 1978, M.S. 1980, former CEO of Informatica Nancy Amato Ph.D. 1995, Unocal Professor in the Department of Computer Science and Engineering at Texas A&M University, steering member of CRA-W, and current head of the Department of Computer Science, University of Illinois, Urbana-Champaign Daniel E. Atkins III Ph.D. 1970, Inaugural Director of the Office of Cyberinfrastructure for the U.S. National Science Foundation. Marc Andreessen B.S. 1993, Mosaic (web browser), Netscape Eric Bina M.S. 1988, Mosaic (web browser), Netscape Ed Boon B.S., Mortal Kombat Rick Cattell B.S. 1974, co-founder of Object Data Management Group, ACM Fellow, winner of the 1978 ACM Doctoral Dissertation Award Steve Chen B.S. 2002, YouTube Steve S. Chen Ph.D. 1975, Cray Computer Edward Davidson Ph.D. 1968, professor emeritus in Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor Steve Dorner B.S. 1983, Eudora (email client) Brendan Eich M.S. 1986, JavaScript, Mozilla Clarence Ellis Ph.D. 1969, First African-American Computer Science Doctorate recipient and pioneer in Computer Supported Cooperative Work (CSCW) and Groupware Ping Fu M.S. 1990, Geomagic Mary Jane Irwin M.S. 1971, Ph.D. 1975, NAE member; computer architecture researcher Jawed Karim B.S. 2004, YouTube Robert L. Mercer M.S. 1970, Ph.D. 1972, co-CEO of Renaissance Technologies and pioneer in Computational Linguistics Marcin Kleczynski B.S. 2012, CEO and founder of Malwarebytes Pete Koomen M.S. 2006, co-founder and CTO of Optimizely Chris Lattner Ph.D. 2005, LLVM Der-Tsai Lee M.S. 1976, Ph.D. 1978, 14th President of National Chung Hsing University Max Levchin B.S. 1997, PayPal, Slide Nimit Maru B.S. 2004, co-founder and CEO of Fullstack Academy Robert McCool, B.S. 1995, author of the original NCSA HTTPd web server and the Common Gateway Interface (CGI) Mary T. McDowell B.S. 1986, former CEO of Polycom, former executive vice president at Nokia Peng T. 
Ong M.S. 1988, co-founder of Match.com Ray Ozzie B.S. 1979, Lotus Notes, Groove Networks, and former CTO and Chief Software Architect at Microsoft. Anna Patterson Ph.D. 1998, Vice President of Engineering, Artificial Intelligence at Google and co-founder of Cuil Linda Petzold B.S. 1974, Ph.D. 1978, Professor of Computer Science and Mechanical Engineering at UC Santa Barbara, NAE member, and J. H. Wilkinson Prize for Numerical Software recipient; computational science and engineering researcher Fontaine Richardson Ph.D. 1968, founder of Applicon Thomas Siebel M.S. 1985, founder, chairman, and CEO of Siebel Systems; founder, chairman, and CEO of C3 Russel Simmons B.S. 1998, co-founder and initial CTO of Yelp, Inc and a member of the PayPal Mafia Anil Singhal M.C.S. 1979, co-founder and CEO of NetScout Systems James E. Smith M.S. 1974, Ph.D. 1976, winner of the 1999 Eckert–Mauchly Award Jeremy Stoppleman B.S. 1999, co-founder and CEO of Yelp, Inc. Parisa Tabriz B.S. 2005, M.S. 2007, computer security expert at Google and Forbes 2012 "Top 30 People Under 30 To Watch in the Technology Industry" Mark Tebbe B.S. 1983, Adjunct Professor of Entrepreneurship at Booth School of Business at the University of Chicago and co-founder of Answers Corporation Andrew Yao Ph.D. 1975, Turing award winner, theoretical computer science researcher In popular culture In 2001: A Space Odyssey, HAL 9000 is said to have been made operational at the HAL Plant in Urbana, Illinois which was meant to represent the Coordinated Science Laboratory where the ILLIAC project was conducted See also Beckman Institute for Advanced Science and Technology Coordinated Science Laboratory ILLIAC Grainger College of Engineering References University of Illinois at Urbana–Champaign Computer science departments in the United States 1964 establishments in Illinois
14679575
https://en.wikipedia.org/wiki/CuneiForm%20%28software%29
CuneiForm (software)
CuneiForm Cognitive OpenOCR is a freely distributed open-source OCR system developed by Russian software company Cognitive Technologies. CuneiForm OCR was developed by Cognitive Technologies as a commercial product in 1993. The system came bundled with the most popular models of scanners, MFPs and software in Russia and the rest of the world: Corel Draw, Hewlett-Packard, Epson, Xerox, Samsung, Brother, Mustek, OKI, Canon, Olivetti, etc. In 2008 Cognitive Technologies opened the program's source code. Features CuneiForm is a system developed for transforming electronic copies of paper documents and image files into an editable form, without changing the structure or the original document fonts, in automatic or semi-automatic mode. The system includes two components for single and batch processing of electronic documents. The system supports a number of languages. In addition, it supports a mixture of Russian and English. Recognition of other mixed languages is only supported in a branch developed by Andrei Borovsky in 2009. Teaching the system to recognize other languages is difficult, since each language is tied to a dat-file whose structure and development method are not disclosed by the developers. History 1993 - Cognitive Technologies signed an OEM contract with Corel, under the terms of which the Cognitive recognition library came embedded into the Corel Draw 3.0 (and later versions) package, popular in the publishing sphere. 1994 – A contract with Hewlett-Packard to equip all scanners imported into Russia with CuneiForm OCR. This was the first HP contract with a Russian software company. 1995 - A contract with the Japanese corporation Epson to supply its scanners with CuneiForm OCR. An OEM contract was signed with the world's largest manufacturer of fax machines, laser printers, scanners and other office equipment, Brother Corporation. According to the agreement, the new roller scanner Brother IC-150 was equipped with Cognitive software for scanning and recognition worldwide. 1996 - An OEM agreement with one of the world's largest manufacturers of monitors, fax machines, laser printers, MFPs and other office equipment, Samsung Information Systems America. According to the agreement, the new multifunction device Samsung OFFICE MASTER OML-8630A was to be equipped worldwide with the Cognitive Cuneiform LE optical character recognition system. An OEM agreement with Xerox, a leading global manufacturer of office equipment, on equipping the Xerox 3006 and Pro-610 multifunctional devices with the CuneiForm recognition system. Release of CuneiForm '96 OCR, with the first adaptive recognition algorithms in the world. Adaptive Recognition - a method based on a combination of two types of printed character recognition algorithms: multifont and omnifont. The system generates an internal font for each input document based on well-printed characters, using a dynamic adjustment (adaptation) to the specific input symbols. Thus, the method combines the universality and technological efficiency of the omnifont approach with the high accuracy of font-specific recognition, which dramatically improves the recognition rate. 1997 – The first usage of neural network-based technologies in CuneiForm. The algorithms using neural networks for character recognition work as follows: the character image that is to be recognized (the pattern) is reduced to a certain standard size (normalized). The luminance values of the normalized pattern are used as input parameters for the neural network. 
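The normalize-then-classify pipeline just described can be sketched in a few lines of code. The sketch below is a hypothetical illustration of the general technique only, not CuneiForm's actual implementation; the alphabet, pattern size, function names and (untrained) weights are all invented placeholders.

```python
import numpy as np

# Hypothetical illustration of the normalize-then-classify approach described above;
# this is not CuneiForm's actual code. Alphabet, sizes and weights are placeholders.
ALPHABET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")   # one network output per recognizable character
PATTERN_SIZE = (16, 16)                          # assumed "standard size" used for normalization

def normalize(glyph: np.ndarray) -> np.ndarray:
    """Reduce a glyph image to the standard size by block-averaging its luminance."""
    rows = np.array_split(np.arange(glyph.shape[0]), PATTERN_SIZE[0])
    cols = np.array_split(np.arange(glyph.shape[1]), PATTERN_SIZE[1])
    out = np.array([[glyph[np.ix_(r, c)].mean() for c in cols] for r in rows])
    return out.flatten() / 255.0                 # luminance values become the network inputs

def recognize(glyph: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> str:
    """Feed the normalized pattern to a (here single-layer) network and take the arg-max."""
    scores = weights @ normalize(glyph) + bias   # one score per character in the alphabet
    return ALPHABET[int(np.argmax(scores))]      # the recognized symbol

# Example with random, untrained weights, purely to show the data flow.
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(ALPHABET), PATTERN_SIZE[0] * PATTERN_SIZE[1]))
bias = np.zeros(len(ALPHABET))
fake_glyph = rng.integers(0, 256, size=(40, 30)).astype(float)
print(recognize(fake_glyph, weights, bias))
```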
The number of output parameters of the neural network is equal to the number of recognized characters. The result of recognition is the symbol corresponding to the maximum value of the network's output vector. A new OEM agreement with Canon, equipping multi-function devices imported into Russia with the CuneiForm system; a new OEM contract with OKI Europe Limited on equipping the OKI FAX 4100 and OKI FAX 5200 MFDs imported into Russia with the CuneiForm system; release of the first CuneiForm MMX Update OCR system for the Intel MMX processor; NeuHause scanners came bundled with the CuneiForm recognition system; release of CuneiForm 98 NEST, Russia's first network scanning system. 1999 A new OEM contract with the Olivetti company on supplying the multi-function devices imported into Russia with the CuneiForm system; a distribution agreement with WSKA (France), a leading European software distributor, on the distribution of CuneiForm Direct OCR in Europe; a new version of the system, Cuneiform 2000, was released, implementing the method of "cognitive analysis TM": an expert system integrated into the recognition core analyses the alternative estimates output by each recognition algorithm and chooses the best option. The "Meridian table segmentation TM" method was developed to improve the accuracy of recreating the original form of tables in the output document; the original document form recreation mechanism, "What you scan is what you get TM", was introduced. The technology was aimed at preserving the scanned document's original form in terms of the placement of its components. This is particularly important for documents with complex topology: multicolumn texts with headings, annotations, graphic illustrations, tables, etc. 2001 - An OEM contract with Canon on equipping its scanners and multifunction devices with Cognitive Technologies CuneiForm OCR software for Eastern Europe. Development prospects On December 12, 2007, a freeware version of CuneiForm OCR was released and the opening of its source code was announced. On April 2, 2008, the source code of the CuneiForm OCR engine was published under the BSD license, followed in the autumn by the source code of the system's interface. The latest open-source version for Windows has not been updated since 14 February 2009. This version is no longer available for download; instead, the version of 11 November 2008 is available on the download page. In 2009, graphical interfaces for the open version of Cuneiform based on the Qt 4 library – Cuneiform-Qt and YAGF – were released. Starting with version 0.9.0, the open version for Linux can be used as a library. See also Puma.NET, a wrapper library for the Cognitive Technologies CuneiForm recognition engine, which makes it easy to incorporate OCR functionality into any .NET Framework 2.0 (or higher) application. References External links Cognitive OpenOCR, version 11, BSD Free software programmed in C Free software programmed in C++ Optical character recognition Formerly proprietary software MacOS graphics-related software MacOS text-related software Windows graphics-related software Windows text-related software
53754191
https://en.wikipedia.org/wiki/Hippodamas%20%28mythology%29
Hippodamas (mythology)
In Greek mythology, Hippodamas ( ; Ancient Greek: Ἱπποδάμας, gen. ) may refer to the following characters: Hippodamas, son of Achelous and Perimede, daughter of Aeolus; brother of Orestes and father of Euryte, wife of Porthaon. Hippodamas, father of Perimele. He pushed his daughter off a cliff when he discovered that she was having a love affair with Achelous. Hippodamas, a Trojan prince and son of King Priam of Troy. He was killed by Ajax the Great. Hippodamas, a Trojan soldier who was killed by Odysseus. Hippodamas, another Trojan, who was killed by Achilles. Notes References Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Hesiod, Catalogue of Women from Homeric Hymns, Epic Cycle, Homerica translated by Evelyn-White, H. G. Loeb Classical Library Volume 57. London: William Heinemann, 1914. Online version at theoi.com Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer. Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library. Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Pseudo-Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Greek text available from the same website. Publius Ovidius Naso, Metamorphoses translated by Brookes More (1859-1942). Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. Publius Ovidius Naso, Metamorphoses. Hugo Magnus. Gotha (Germany). Friedr. Andr. Perthes. 1892. Latin text available at the Perseus Digital Library. Children of Achelous Trojans Children of Priam Princes in Greek mythology Aetolian characters in Greek mythology Thessalian characters in Greek mythology Characters in Greek mythology
29087109
https://en.wikipedia.org/wiki/Universal%20Credit
Universal Credit
Universal Credit is a United Kingdom social security payment. It is replacing and combining six benefits for working-age people who have a low household income: income-based Employment and Support Allowance, income-based Jobseeker's Allowance, and Income Support; Child Tax Credit and Working Tax Credit; and Housing Benefit. Contribution-based Jobseeker's Allowance and contribution-based Employment and Support Allowance have been replaced with "new style" versions, and are not affected by Universal Credit. The new policy was announced in 2010 at the Conservative Party annual conference by the Work and Pensions Secretary, Iain Duncan Smith, who said it would make the social security system fairer to claimants and taxpayers. At the same venue the Welfare Reform Minister, Lord Freud, emphasised the scale of their plan, saying it was a "once in many generations" reform. A government white paper was published in November 2010. A key feature of the proposed new benefit was that unemployment payments would taper off as the recipient moved into work, not suddenly stop, thus avoiding a 'cliff edge' that was said to 'trap' people in unemployment. Universal Credit was legislated for in the Welfare Reform Act 2012. In 2013, the new benefit began to be rolled out gradually to Jobcentres, initially focusing on new claimants with the least complex circumstances: single people who were not claiming for the cost of their accommodation. There were problems with the early strategic leadership of the project and with the IT system on which Universal Credit relies. Implementation costs, initially forecast to be around £2 billion, later grew to over £12 billion. More than three million recipients of the six older "legacy" benefits were expected to have transferred to the new system by 2017, but under current plans the full move will not be completed before 2024. One specific concern is that payments are made monthly, with a waiting period of at least five weeks (originally six) before the first payment, which can particularly affect claimants of Housing Benefit and lead to rent arrears (although claimants can apply for emergency loans paid more promptly). In May 2019, one million people were receiving less than their entitlement, often due to the repayment of loans given during the initial five-week wait period. In September 2019, a total of 2.5 million people were receiving the benefit; 65% of those recipients were out of work. Background The Universal Credit mechanism was itself first outlined as a concept in a 2009 report, Dynamic Benefits, by Iain Duncan Smith's thinktank the Centre for Social Justice. It would go on to be described by Work and Pensions Secretary Iain Duncan Smith at the Conservative Party annual conference in 2010. The initial aim was for it to be implemented fully over four years and two parliaments, and to merge the six main existing benefits (income-based Jobseeker's Allowance, income-related Employment and Support Allowance, Income Support, Working Tax Credit, Child Tax Credit and Housing Benefit) into a single monthly payment, as well as cut the considerable cost of administering six independent benefits, with their associated computer systems. Unlike existing benefits like Income Support, which had a 100% withdrawal rate, Universal Credit was designed to gradually taper away – like tax credits and Housing Benefit – allowing claimants to take part-time work without losing their entitlement altogether. 
In theory, it makes claimants better off taking on work, as they keep at least a proportion of the money they earn. But reductions in funding and changes to withdrawal rates led commentators on either side of the debate to question whether it would actually make work pay. The Daily Telegraph claimed "part-time work may no longer pay" and that "some people would be better off refusing" part-time work, and in The Guardian Polly Toynbee wrote "Universal credit is simple: work more and get paid less". Finally, the "Minimum Income Floor" used when calculating Universal Credit for self-employed claimants may make it much less worthwhile for large parts of the population to work for themselves. Policy The objectives of the policy included creating a more responsive system that would simplify and incentivise a return to work, pay benefits in a monthly cycle more akin to salaries, replace the high marginal deduction rate that accumulates when more than one means-tested benefit is withdrawn simultaneously with a single deduction rate (thereby improving incentives), ensure that taking on even a small or varying amount of work would be financially rewarding, and reduce the proportion of children growing up in homes where no one works. Universal Credit would merge out-of-work benefits and in-work support to improve return-to-work incentives. The clearer financial incentives through Universal Credit would be strengthened by four types of conditionality for claimants depending on their circumstances, ranging from being required to look for full-time work to not being required to find work at all (people in the unconditional group include the severely disabled and carers). Payments are made once a month directly into a bank or building society account, except in Scotland where claimants are given the option to have it paid fortnightly. Any help with rent granted as part of the overall benefit calculation is included in the monthly payment and claimants normally then pay landlords themselves. It is possible in some circumstances to get an Alternative Payment Arrangement (APA), which allows payment of housing benefit direct to the landlord. Universal Credit claimants are also entitled to Personal Budgeting Support (PBS), which aims to help them adapt to some of the changes it brings, such as monthly payment. Major amendments In 2015 the Chancellor, George Osborne, announced a future £3.2 billion a year cut to the overall Universal Credit budget after an attempt to cut Tax Credits that year was thwarted by parliament. The Resolution Foundation has argued that this cut, which will be felt more keenly as millions more people transfer to Universal Credit, risks the new system failing to achieve its original purpose of incentivising work in low-income households. The amendments were: Reductions in the amount of "work allowances" before tapered deductions due to income are applied, from April 2016 Limiting the per-child element to only two children for new claims and births after April 2017 Removing the extra element for the first child for new claims from April 2017 In November 2016, in response to criticism that the previous changes had reduced incentives to work, the government announced a reduction in the Universal Credit post-tax taper rate, which controls the reduction of Universal Credit as employment income grows, from 65% to 63% of post-tax income, which will ultimately cost £600 million per year. 
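As an illustration of how the work allowance and taper interact, the sketch below computes a monthly award from net earnings. The standard allowance and work allowance figures are invented placeholders rather than real Universal Credit rates; only the mechanism itself, where each pound of post-tax earnings above the work allowance reduces the award by the taper rate, reflects the text above.

```python
# Illustrative sketch of the work-allowance-plus-taper mechanism described above.
# The allowance figures are placeholders, not actual Universal Credit rates.
def monthly_award(net_earnings: float,
                  standard_allowance: float = 600.0,   # hypothetical monthly entitlement
                  work_allowance: float = 300.0,       # hypothetical earnings disregard
                  taper_rate: float = 0.63) -> float:  # 63% taper announced in November 2016
    """Reduce the award by taper_rate pounds for every pound earned above the work allowance."""
    excess = max(0.0, net_earnings - work_allowance)
    return max(0.0, standard_allowance - taper_rate * excess)

# Earning an extra £100 above the allowance reduces the award by £63,
# so the claimant keeps £37 of it on top of their wages.
print(monthly_award(300.0))   # 600.0 – no earnings above the allowance
print(monthly_award(400.0))   # 537.0
```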
In the 2018 budget the Chancellor, Philip Hammond, announced an increase in the "work allowances" for households with children, and people with disabilities, with effect from April 2019, partially reversing the reductions announced in 2015. The post-tax work allowances will increase by £1,000 per year, representing an extra £630 of income for about 2.4 million households in employment, ultimately at a cost of about £1.7 billion per year. Extra transitional support for claimants being moved to Universal Credit was also announced. In April 2020 as a one-year temporary response to the COVID-19 pandemic, the Universal Credit standard allowance was increased by £20 per week and housing benefit rent limits relaxed. The uplift was extended until 30 September 2021. The October 2021 budget increased in-work support by increasing the work allowances by £500 a year, and reducing the post-tax deduction taper rate from 63% to 55%. Iain Duncan Smith wrote that he was delighted that the taper rate would now be 55%, the level he wanted over a decade ago when he devised the scheme, but which had not been allowed by the Treasury. Relationship to other proposed welfare policies Universal Credit has some similarities to Lady Williams' idea of a negative income tax, but it should not be confused with the universal basic income policy idea. There is some debate as to whether Universal Credit should be described as "universal", given it is both subject to income cut-offs and requires some claimants to be available for work. Implementation Universal Credit is part of a package of measures in the Welfare Reform Act 2012, which received Royal Assent on 9 March 2012. The Act delegates its detailed workings to regulations, most of which were published as the Universal Credit Regulations 2013. Related regulations appeared in a range of other statutory instruments also. The Department for Work and Pensions (DWP) announced in February 2012 that Universal Credit would be delivered by selected best-performing DWP and Tax Credit processing centres. Initially, the announcement made clear that local authorities (responsible for administering payment of Housing Benefit, a legacy benefit to be incorporated into the scheme) would not have a significant part in delivering Universal Credit. However, the Government subsequently recognised there may be a useful role for local authorities to play when helping people access services within Universal Credit. Philip Langsdale, chief information officer at DWP, who had been leading the programme, died in December 2012, and in previous months there had also been significant personnel changes. Project Director Hillary Reynolds resigned in March 2013 after just four months, leaving the new Chief Executive of Universal Credit to take on her role. Writing in 2013, Emma Norris of the Institute for Government argued the original timetable for implementation of Universal Credit was "hugely overambitious", with delays due to IT problems and senior civil servants responsible for the policy changing six times. A staff survey, reported in The Guardian on 2 August 2013, quoted highly critical comments from Universal Credit implementation staff. On 31 October 2013, in another article said to be based on leaked documents, the paper reported that only 25,000 people – about 0.2% of all benefit recipients – were projected to transfer to the new programme by the time of the next general election in May 2015. In the event, over 100,000 people had made a claim for Universal Credit by May 2015. 
A pilot in four local authority areas was due to precede national launch of the scheme for new claimants (excluding more complex cases such as families with children), in October 2013, with full implementation to be completed by 2017. Due to persistent computer system failures and delays in implementation, only one pilot, in Ashton-under-Lyne, went ahead by the expected date. The other three pilots went ahead later in the summer, and were met by staff protests. The roll-out of Universal Credit in the Northwest of England was limited to new, single, healthy claimants, later extended to couples, then families, in the same area, reflecting the gradual maturing of different aspects of the computer system. Once the Northwest roll-out was largely complete, the government gradually extended Universal Credit to new single healthy claimants in the rest of the British mainland, nearly completing this roll-out . It was expected that this would gradually be extended to couples and families outside the Northwest once the roll-out to UK mainland single claimants was completed. In Northern Ireland, implementation was held up by disputes over policy and funding between feuding parties in the Northern Ireland assembly; the roll-out of Universal Credit in Northern Ireland began in September 2017. As of 2018 one third of claimants have their benefit reduced to pay rent, council tax and utility bill arrears. This pushed people who already have little further into poverty. Abby Jitendra of the Trussell Trust said this can lead to "the tipping point into crisis. (...) Repaying an advance payment, for example, can be an unaffordable expense when taken from a payment that wasn't enough to start with, pushing people further into debt at the time when support is most needed." Gillian Guy of Citizens Advice said, "Deductions from universal credit can make it harder for people to get by. People receiving universal credit are unlikely to have much slack in their budgets, so even small amounts can put a huge strain on their finances. Building on last year's improvements to universal credit, the government now needs to ensure deductions are made at a manageable rate and take a person's ability to cover their expenses into account." Charlotte Hughes who advises benefit recipients, said deductions were impossible to predict and often done with no warning. "The first time somebody knows that money's been taken out of their account is when they go to the bank. It's just a minefield. Living with that stress that you don't know what money you're going to get from week to week, from month to month, that makes you ill – and that's before you can't eat, and before you can't look after your kids properly. It's rampant." Pilots The scheme was originally planned to begin in April 2013, in four local authorities – Tameside (containing Ashton-under-Lyne), Oldham, Wigan and Warrington, with payments being handled by the DWP Bolton Benefit Centre – but was later reduced to a single area (Ashton) with the others due to join in July. In Wales, it is known the UC pilot covered new claimants in Brecon in early 2013. The pilot would initially cover only about 300 claims per month for the simplest cases of single people with no dependent children, and was to extend nationally for new claimants with the same circumstances by October, with a gradual transition to be complete by 2017. (One tester of the new system in April noted that the online forms took around 45 minutes to complete, and there was no save function.) 
In March 2013 it was reported that final Universal Credit calculations would be made manually on spreadsheets during the pilot, with the IT system being limited to booking appointments and storing personal details. It was separately reported that no claimants turned up in person at the town hall on the first day of the scheme. The Financial Times reported that the October national roll-out of Universal Credit would now begin in a single Jobcentre (or possibly a "cluster" of them) in each region and that in December 2012 Hilary Reynolds, who had recently been appointed programme director but had moved shortly thereafter, stated in a letter to local authorities: "For the majority of local authorities the impact of [Universal Credit] during the year 2013–14 will be limited." On 3 December 2013, the DWP issued a report containing statistics which showed that, between April and 30 September, only 2,150 people had been signed up to Universal Credit in the four pilot areas. This report confirmed that Universal Credit had been rolled out to Hammersmith on 28 October, followed by Rugby and Inverness on 25 November, and was to expand to Harrogate, Bath, and Shotton by spring 2014. Implementation costs While the DWP had estimated administration costs for the roll-out of Universal Credit to be £2.2 billion, by August 2014 this estimate had risen to £12.8 billion over its "lifetime" and was later increased again to £15.8 billion. Much of the increased cost was linked with software problems and duplication of systems needed to pay out new and legacy benefits. The initial roll-out proceeded much more slowly than had been originally planned, and led to the early departure of several senior leadership figures. In 2018 the National Audit Office maintained Universal Credit could incur higher administrative costs than the systems it replaces. A study by the Resolution Foundation published in November 2018 also predicted that Universal Credit will cost more than the older system of benefits it is replacing. In 2020 a National Audit Office report identified £1.4 billion of extra costs up to March 2020 because of the recent two-year delay, which included the costs of continuing to run the legacy systems for longer. Per claim administration costs for 2019/2020 were about 10% higher than forecast, though the DWP continued to forecast that eventually administration costs would be 9% lower than the benefits it replaced, however the NAO assessed this was "still not certain". Fraud and error was estimated at 9.4% (£1.7 billion) of payments, higher than the 6.4% forecast. Current status In July 2018 the Secretary of State for Work and Pensions, Damian Green, announced a further 12-month delay to the planned implementation completion date to allow additional contingency time, taking that to 2022. This was the seventh rescheduling since 2013, pushing the implementation completion date to five years later than originally planned. In October 2018, the full rollout of Universal Credit will be delayed again to December 2023. As of February 2016, 364,000 people had made claims for Universal Credit. Government research stated "Universal Credit claimants find work quicker, stay in work longer and earn more than the Jobseekers' Allowance claimants." Delays in payments were getting claimants into rent arrears and other debts, however. Claimants may wait up to thirteen weeks for their first payment. 
Tenants can get into rent arrears more frequently on Universal Credit than on Housing Benefit, and many risk eviction and homelessness as a result. Landlords may refuse potential tenants on the benefit, and marriages have broken up under the strain of coping with these delays and managing on Universal Credit. Frank Field MP and charities state that women have been forced into prostitution because they could not manage during times when Universal Credit payments were delayed. Field stated, "If I told people a few years ago that this was happening they would have thought I was off my rocker. I'm still struggling to comprehend it. Women often come to us in tears, they say the benefits system has got worse and they have very little choice." Field also said, "I wrote to the secretary of state about how the rollout of universal credit in Birkenhead is not going as well as we’re told in the House of Commons, with some women taking to the red light district for the first time. Might she [Esther McVey] come to Birkenhead and meet those women's organisations and the police who are worried about women's security being pushed into this position?" In 2019 the Work and Pensions Committee of the House of Commons heard evidence from women claiming they had been forced into prostitution through delays in paying Universal Credit or because Universal Credit payments were insufficient to meet their basic needs. The committee recommended ending the five-week wait for a first payment and giving vulnerable claimants advances that did not need to be repaid if they would otherwise experience hardship. In April 2018 The Trussell Trust reported that their food banks in areas where universal credit had been rolled out had seen an average 52% rise in demand compared to the previous year. The Trussell Trust fears a big increase in food bank use when the next stage of the Universal Credit rollout begins in April 2019. Emma Revie of Trussell said, "We’re really worried that our network of food banks could see a big increase in people needing help. Leaving 3 million people to wait at least five weeks for a first payment – especially when we have already decided they need support through our old benefits or tax credits system – is just not good enough. Now is the time for our government to take responsibility for moving people currently on the old system over, and to ensure no one faces a gap in payments when that move happens. Universal credit needs to be ready for anyone who might need its help, and it needs to be ready before the next stage begins." Despite Theresa May's promise to support those "just about managing", working homeowners who currently get tax credits lose badly with universal credit. A million homeowners now getting tax credits will have less under the new system, losing on average £43 a week. 600,000 working single parents will lose on average £16 per week and roughly 750,000 households on disability benefits will lose on average £75 per week. Nearly 2 in 5 households receiving benefits will be on average worse off by £52 per week. Up to thirty Conservative MPs are threatening to vote against the government over Universal Credit. Heidi Allen said, "Significant numbers of colleagues on my side of the House are saying this isn't right and are coming together to say the chancellor needs to look at this again." The Resolution Foundation predicts that 400,000 single parents will be better off under universal credit but 600,000 will do worse. The foundation fears lone parents could be trapped in low-paid work with short hours. 
In January 2019, Amber Rudd, the Work and Pensions Secretary, suspended a planned vote on moving the three million established recipients of older benefits onto Universal Credit. Rudd said that the government would seek approval for the previously announced pilot study of 10,000 such people, who would not see their benefits stopped in the summer of 2019 but would have the opportunity to apply for Universal Credit. Rudd also announced that plans to retrospectively extend a benefit cap to families of more than two children born before 2017 would no longer take place. On the same day, the High Court ruled that her department had "wrongly interpreted" regulations covering the calculation of Universal Credit payments in cases where working claimants' paydays fluctuated. In February 2020, the government announced that the rollout of Universal Credit would be delayed again until September 2024 – nine months later than previously estimated. The Department for Work and Pensions explained that the delay was due to 900,000 more claimants than expected remaining on the legacy welfare schemes that Universal Credit is replacing. The delay is estimated to add more than £500 million to its overall cost. Support Support for claimants was initially delivered by local authorities. In October 2018, the Department for Work and Pensions announced that from April 2019, Citizens Advice and Citizens Advice Scotland would be provided with £39 million of funding and £12 million in set-up costs to deliver Universal Support. In April 2019, this was rebranded as the "Help to Claim" service and helpline, delivered by Citizens Advice and Citizens Advice Scotland. Unemployment claimant count Before 2013, the unemployment claimant count was simply the number of people claiming Jobseeker's Allowance. However, Universal Credit implementation has gradually broadened the groups of people who are required to look for work. For example, those who had previously claimed Child Tax Credit or Housing Benefit but not Jobseeker's Allowance, such as a person looking after children full time at home with a working partner, were not required to seek work. Under Universal Credit, the partners of claimants are now generally required to seek work. Another example is that Universal Credit claimants who would previously have received Employment and Support Allowance, or who are awaiting a Work Capability Assessment, are now generally required to look for work. Between November 2015 and November 2019, the number of claimants increased by over 70% (420,000), largely due to such causes. In many areas the claimant count had more than doubled, and in some it had more than quadrupled. Consequently, in 2018 the UK Statistics Authority removed its quality mark from claimant count statistics as it no longer provided a reliable comparator against previous labour market statistics. In 2019 the DWP published an alternative claimant count series, which used modelling to estimate the number of claimants had Universal Credit been fully implemented in 2013, for use in statistical labour market comparison. Changing monthly pay day When claimants are paid salary a few days early by employers, for example because of weekends or bank holidays, this can result in two monthly payments in one assessment period and zero in the next. 
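A minimal sketch of that effect, using invented pay dates, an invented salary and an assumed monthly assessment period, is shown below; only the counting of pay dates that fall within each assessment period reflects the mechanism described here.

```python
from datetime import date

# Hypothetical illustration of the assessment-period effect described above.
# Salary and dates are invented; only the "count pay dates per assessment period" logic matters.
paydays = [date(2018, 11, 28),   # November salary, paid on the usual day
           date(2018, 12, 21),   # December salary, paid early because of the holidays
           date(2019, 1, 28)]    # January salary, back on the usual day

def earnings_in_period(start: date, end: date, salary: float = 1000.0) -> float:
    """Sum every salary payment whose pay date falls inside the assessment period."""
    return salary * sum(1 for d in paydays if start <= d <= end)

print(earnings_in_period(date(2018, 11, 25), date(2018, 12, 24)))  # 2000.0 – two paydays counted
print(earnings_in_period(date(2018, 12, 25), date(2019, 1, 24)))   # 0.0 – no payday counted
```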
This results in both very uneven Universal Credit payments and lower payments over the complete year because of the cancellation of work allowance for entire months, causing hardship, stress and misery for claimants and additional costs for food banks and other support organisations. Four single mothers, with assistance from the Child Poverty Action Group, took this issue for judicial review at the High Court, which in 2019 ruled the DWP had made a "perverse" and incorrect interpretation of the 2013 universal credit regulations. The DWP had argued it would be very expensive to change their computer system to fix this problem. In June 2020 the DWP lost an appeal, where it was ruled that this was "one of the rare instances where the secretary of state for work and pensions’ refusal to put in place a solution to this very specific problem is so irrational that I have concluded that the threshold is met because no reasonable [minister] would have struck the balance in that way". The DWP decided not to appeal further and to modify their systems to comply with the ruling. In July 2020 the DWP lost a further case where the claimant was being paid on a 4-week cycle which interacted badly with the monthly cycle Universal Credit was designed for, causing her to lose up to £463 some months through the incorrect application of the benefit cap. Criticism Universal Credit has been and is subject to many criticisms. Louise Casey fears recipients could become homeless and destitute. According to official figures 24% of new claimants wait over 6 weeks for full payment and many get behind with their rent. Research by Southwark Council suggests that rent arrears continue when tenants have been on Universal Credit a long time. Twelve Tory MPs including Heidi Allen wished the rollout delayed. Local Authorities and recipients of Universal Credit feared claimants will become homeless in large numbers. Gordon Brown maintains, "Surely the greatest burning injustice of all is children having to go to school ill-clad and hungry. It is the poverty of the innocent – of children too young to know they are not to blame. But the Conservative government lit the torch of this burning injustice and they continue to fan the flames with their £3bn of cuts. A return to poll tax-style chaos in a summer of discontent lies ahead." Stephen Bush in The New Statesman maintained that the group currently (October 2017) in receipt of Universal Credit was unrepresentative, consisting mainly of men under 30 who were more likely to find work as they did not have to juggle work obligations with dependent needs. He also argued that men under 30 were also more likely to be living with parents so delays in payments affected them less. Bush believed that when Universal Credit is extended to older claimants and women with dependents, fewer would get back to work easily and there would be more hardship. Johnny Mercer said, "Universal credit has the potential to help people out of poverty by removing the disincentives to move into work in the previous system and allowing them to reach their full potential. A modern compassionate Conservative government simply must get it right though. This government can make the system better by smoothing the path from welfare into work with a fresh investment in universal credit in this budget." Mercer backed calls to increase funding for Universal Credit by stopping a plan to cut income tax. Some claimants on Universal Credit feel they cannot get enough to live without resorting to crime. 
One in five claims for Universal Credit fails because the claimant does not follow the procedure correctly, and there are fears this is because the procedure is hard to understand. Food bank use has increased since Universal Credit started. Delays in providing money force claimants to use food banks; in addition, Universal Credit does not provide enough to cover basic living expenses. Claiming Universal Credit is complex and the system is hard to navigate; many claimants cannot afford internet access and cannot access online help with claiming. A report by the Trussell Trust says, "Rather than acting as a service to ensure people do not face destitution, the evidence suggests that for people on the very lowest incomes ... the poor functioning of universal credit can actually push people into a tide of bills, debts and, ultimately, lead them to a food bank. People are falling through the cracks in a system not made to hold them. What little support available is primarily offered by the third sector, whose work is laudable, but cannot be a substitute for a real, nationwide safety net." The National Audit Office maintains there is no evidence Universal Credit helps people into work, that it is unlikely to provide value for money, and that the system is in many ways unwieldy and inefficient. There are calls for delays and for the system to be fixed before it is rolled out to millions of further claimants. Margaret Greenwood said, "The government is accelerating the rollout in the face of all of the evidence, using human beings as guinea pigs. It must fix the fundamental flaws in universal credit and make sure that vulnerable people are not pushed into poverty because of its policies." Whistleblowers maintain the system is badly designed and broken, and that glitches regularly lead to hardship for claimants. Hardship can involve delays in benefit payment lasting weeks or reduction in payment by hundreds of pounds below what a claimant is entitled to. A whistleblower said, "The IT system on which universal credit is built is so fundamentally broken and poorly designed that it guarantees severe problems with claims." He maintained the system was too complex and error-prone, affecting payments, and that correction was frequently slow. "In practical terms, it is not working the way it was intended and it is having an actively harmful effect on a huge number of claimants." Errors and delays add an average of three weeks to the official 35-day wait for a first payment, forcing claimants into debt and rent arrears and to food banks. Campaigners fear the situation could worsen in 2019 when 3 million claimants are moved to the new system. The Department for Work and Pensions is accused of being defensive and insular. One whistleblower said design problems existed due to a failure to understand what claimants need, particularly when they do not have digital skills or internet access. He said, "We are punishing claimants for not understanding a system that is not built with them in mind." In October 2018, former Prime Minister Sir John Major warned against Universal Credit being introduced "too soon and in the wrong circumstances". Major argued that people who faced losing out in the short term had to be protected, "or you run into the sort of problems the Conservative Party ran into with the poll tax in the late 1980s". A report by Bright Blue, funded by Trust for London, recommended that several changes be made to the Universal Credit system. 
These changes included introducing a Universal Credit app, offering all new Universal Credit claimants a one-off "helping hand" payment to avoid the five-week waiting period, and entering claimants who consistently meet conditions around job seeking into a monthly lottery. Reducing incomes Multiple organisations have predicted that Universal Credit will cause families with children to be financially worse off. When fully operational, the Institute for Fiscal Studies estimates that 2.1 million families will lose while 1.8 million will gain. Single parents and families with three children will lose an average of £200 a month according to the Child Poverty Action Group and the Institute for Public Policy Research. Alison Garnham of the CPAG urged ministers to reverse cuts to work allowances and make Universal Credit "fit for families". Garnham said: "Universal credit was meant to improve incentives for taking a job while helping working families get better off. But cuts have shredded it. And families with kids will see the biggest income drops." Since 2013 Universal Credit has changed nine times, most changes making it less generous. This includes cuts in work allowances, a freeze in credit rates for four years and (from April 2017) the child credit being limited to two per family. In October 2017, the Resolution Foundation estimated that, compared to the existing tax credit system, 2.2 million working families would be better off under the Universal Credit system, with an average increase in income of £41 a week. On the other hand, the Foundation estimated that 3.2 million working families would be worse off, with an average loss of £48 a week. In October 2018, the Work and Pensions Secretary, Esther McVey MP, admitted that "some people could be worse off on this benefit", but argued that the most vulnerable would be protected. In January 2020, a report by the Resolution Foundation found that Universal Credit creates “a complex mix of winners and losers”. It found that in Liverpool, just 32% of families will be better off under Universal Credit, while 52% will be worse off, compared to a national average of 46% losing out and 39% gaining. Deductions Over 60% of claimants experience deductions, many to repay loans they got to tide them over the first five weeks before they officially got payments. Gillian Guy of Citizens Advice said, “Our evidence shows many people on universal credit are struggling to make ends meet, and that deductions are contributing to this.” She urged the government to introduce affordability tests before recovering debts from claimants. Neil Couling, the chief of the universal credit programme, “admitted that the government over the last 18 months has demanded a push to recover old debt and has provided UC with extra funds to do this”. The recovery of debts causes hardship, often due to loans given during the initial five-week wait period but with some larger debts from over-payments made in the previous tax credits system when income increased rapidly. Self-employed claimants Research by the Low Income Tax Reform Group suggests self-employed claimants could be over £2,000 a year worse off than employed claimants on similar incomes. The problem arises with fluctuating income, as Universal Credit assumes a fixed number of hours worked – the so-called Minimum Income Floor – in its calculation. The government has been urged to change this to allow the self-employed to base claims on their average incomes. Some claimants could even be over £4,000 a year worse off. 
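A rough sketch of how the Minimum Income Floor can disadvantage the self-employed in low-earning months follows. The hourly rate and assumed weekly hours below are illustrative placeholders rather than actual DWP figures; the point is only that, where the floor applies, reported earnings below it are replaced by the floor when the award is calculated, so a bad month does not increase the award.

```python
# Illustrative sketch of the Minimum Income Floor (MIF) described above.
# Rates and hours are placeholder assumptions, not actual DWP figures.
HOURLY_FLOOR = 8.0          # assumed minimum-wage-style hourly rate
ASSUMED_WEEKLY_HOURS = 35   # hours the claimant is assumed to work
MIF = HOURLY_FLOOR * ASSUMED_WEEKLY_HOURS * 52 / 12   # notional monthly earnings (~£1,213)

def assessed_earnings(actual_monthly_profit: float, mif_applies: bool = True) -> float:
    """Earnings figure fed into the award calculation: never below the floor if the MIF applies."""
    return max(actual_monthly_profit, MIF) if mif_applies else actual_monthly_profit

# A self-employed claimant with a bad month is treated as if they had earned the floor,
# while an employee reporting the same low figure is assessed on what they actually earned.
print(round(assessed_earnings(400.0), 2))            # 1213.33 – MIF replaces the low month
print(assessed_earnings(400.0, mif_applies=False))   # 400.0  – employee-style assessment
```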
Some people cannot find work except by being self-employed and such people will be discouraged from starting a business. Frank Field said, "Given what we now know about the hundreds of thousands of workers in the gig economy who earn less than the national living wage, it begs the question as to how many grafters and entrepreneurs are going to be further impoverished, or pushed deeper into debt, as a result of this new hole being opened up in the safety net." Citizens Advice maintains that it risks "creating or exacerbating financial insecurity for the rising sector of the workforce in non-traditional work". People working in their own business will be affected as will those with seasonal work like agriculture and the hotel trade and those with varying overtime pay. 4.5 million people do work with varying hours while 4.8 million are self-employed most receiving in work benefits. Online applications Professor John Seddon, an author and occupational psychologist, began a campaign in January 2011 for an alternative way to deliver Universal Credit, arguing it wasn't possible to deliver high-variety services through "cheaper" transaction channels, and would drive costs up. He wrote an open letter to Iain Duncan Smith and Lord Freud as part of a campaign to call halt to current plans and embark instead on a "systems approach". Seddon also launched a petition calling for Duncan Smith to: "rethink the centralised, IT-dominated service design for the delivery of Universal Credit". Professor Phillip Alston (the United Nations Special Rapporteur on extreme poverty and human rights) visited the UK in 2018 to report on the impact that the austerity measures implemented since 2010 had had on the disabled and reported that the universal credit digital system had been designed to be difficult. His report has prompted academics to call for complementary non-digital services. Echoing these concerns, Ronnie Campbell, MP for Blyth Valley, sponsored an Early Day Motion on 13 June 2011 on the delivery of Universal Credit which was signed by thirty MPs: "That this House notes that since only fifteen per cent of people in deprived areas have used a Government website in the last year, the Department for Work and Pensions (DWP) may find that more Universal Credit customers than expected will turn to face-to-face and telephone help from their local authority, DWP helplines, Government-funded welfare organisations, councillors and their Hon. Member as they find that the automated system is not able to deal with their individual questions, particular concerns and unique set of circumstances". Wait for payments and payment frequency The Trades Union Congress has raised concerns about the delay – which is at least six weeks – between making a claim and receiving money. The Work and Pensions Select Committee said waiting six-weeks for the first payment caused "acute financial difficulty". Reducing the delay would make the policy more likely to succeed. Some claimants must wait eight months for their first payment. About 20% waited nearly five months and roughly 8% had to wait for eight months. Committee Chair, Frank Field said, "Such a long wait bears no relation to anyone's working life and the terrible hardship it has been proven to cause actually makes it more difficult for people to find work. " According to a report in The Guardian, thousands of claimants get into debt, get behind with their rent and risk eviction due to flaws in Universal Credit. Landlords and politicians want the system overhauled. 
Due to ongoing problems, an official inquiry has been launched into Universal Credit. In October 2017, Prime Minister Theresa May said the six-week delay would continue, despite the concerns of many MPs including some in her own party. A study for Southwark and Croydon councils found a substantial increase in indebtedness and rent arrears among claimants on Universal Credit compared with claimants on the old system. Referrals to food banks increased, in one case by 97 per cent. End Hunger UK, a coalition of poverty charities and faith organisations, said payment delays, administrative mistakes and failure to support claimants having difficulties with the online-only system forced up food bank use. End Hunger UK urged ministers to carry out a systematic study of how universal credit affects claimants’ financial security. End Hunger UK urged a large cut in the time claimants wait for their first payment, from at least five weeks to only two weeks, and maintains that the long wait is financially crippling for claimants without savings. The Trussell Trust found demand for food aid rose by an average of 52% in universal credit areas in 2017, contrasted with 13% in areas where it had not been introduced. "It is simply wrong that so many families are forced to use food banks and are getting into serious debt because of the ongoing failings in the benefits system," said Paul Butler, Bishop of Durham. Claimants with children who start work must pay for childcare up front and wait for Universal Credit to repay them later, the Work and Pensions Select Committee reported. The WPSC maintains this is a "barrier to work". According to the report, "Universal Credit claimants must pay for childcare up front and claim reimbursement from the department after the childcare has been provided. This can leave households waiting weeks or even months to be paid back. Many of those households will be in precarious financial positions which Universal Credit could exacerbate: if, for example, they have fallen into debt or rent arrears while awaiting Universal Credit payments. Too many will face a stark choice: turn down a job offer, or get themselves into debt in order to pay for childcare." The WPSC maintains ministers should pay universal credit childcare costs directly to childcare providers. This would take the burden away from parents and give providers a more certain income. Frank Field said, "It's not just driving parents into despair and debt and creating problems for childcare providers – it's also actively working to prevent the government achieving its own aim of getting more people into work." The High Court ruled in January 2019 that the DWP had "wrongly interpreted" regulations covering the calculation of Universal Credit payments in cases where working claimants' paydays fluctuated. The DWP's interpretation had meant receiving a salary a few days early because of a weekend or bank holiday "vastly reduced" one's universal credit payment, a situation the judges described as "odd in the extreme" and one that "could be said to lead to nonsensical situations". The DWP had defended this anomaly by saying that changing its automated system would be expensive. The Child Poverty Action Group said the anomaly had caused "untold hardship, stress and misery" and forced one of the working single mothers in the case to rely on food banks, despite requiring fresh fruit and vegetables due to pregnancy. 
In February 2019, the Secretary of State for Work and Pensions, Amber Rudd, conceded that "there were challenges with the initial roll-out of universal credit. The main issue which led to an increase in food bank use could have been the fact that people had difficulty accessing their money early enough". Rudd went on to explain that "We have made changes to accessing universal credit so that people can have advances, so that there is a legacy run-on after two weeks of housing benefit, and we believe that will help with food and security". Direct payments to tenants Direct payment of the housing component of Universal Credit (formerly Housing Benefit) to tenants has been the subject of controversy. Although Housing Benefit has long been paid to most private sector tenants directly, for social housing tenants it has historically been paid directly to their landlords. As a result, implementation of a social housing equivalent to the Local Housing Allowance policy, which has been present in the private sector without comment for over a decade, has widely been perceived as a tax on having extra bedrooms, rather than the tenant making up a shortfall in the rent arising from a standardisation of their level of benefit. The Social Security Advisory Committee has argued that the policy of direct payments requires "close monitoring" so as to make sure Universal Credit does not further discourage landlords from renting to people on benefits. Disincentive to save The Institute for Fiscal Studies has argued that Universal Credit is a disincentive for people to save money, as when liquid savings exceed £6,000 the Universal Credit award is reduced until savings fall below £6,000. Impact on the self-employed The Resolution Foundation has warned that Universal Credit will have a detrimental effect on self-employed people, because the level of Universal Credit awarded does not fully take account of any dramatic changes in their income from month to month. According to the Child Poverty Action Group, Universal Credit may disadvantage the low-paid self-employed and anyone who makes a tax loss (spends more on tax-deductible expenses than they receive in taxable income) in a given tax year. Impact on disabled people Citizens Advice research argued that 450,000 disabled people and their families would be worse off under universal credit. The Work and Pensions Select Committee found Universal Credit's system of sanctions to be "pointlessly cruel", and asked for Universal Credit to be put on hold until it is clear that very vulnerable disabled claimants are protected from serious income loss. Single disabled people in employment were found to be £300 per month worse off when moved on to Universal Credit. Claimants risk being isolated, destitute and in some cases forced to rely on dependent children for care, according to the committee. Impact on passported benefits The Daily Mirror reported an example of a claimant who was moved over to Universal Credit from a "legacy benefit" and whose passported benefits, such as free school meals, were withdrawn in error. Domestic abuse Universal Credit payments go to one person in each household, so, according to a Work and Pensions Committee report, abuse victims and their children are often left dependent and abusers can exert financial control. It is feared this can facilitate abuse and enable bullying. 
Financial abuse is also facilitated; one abuse survivor told the committee that "[the abusive partner] will wake up one morning with £1,500 in his account and piss off with it, leaving us with nothing for weeks." Committee chair Frank Field said, "This is not the 1950s. Men and women work independently, pay taxes as individuals, and should each have an independent income. Not only does UC's single household payment bear no relation to the world of work, it is out of step with modern life and turns back the clock on decades of hard-won equality for women." MPs heard evidence that abuse victims found the whole of their income, including money intended for the children, went into the abuser's bank account. The report noted the single household payment caused difficulties for victims wanting to leave and said, "there is a serious risk of Universal Credit increasing the powers of abusers". The report called for all Job Centres to have a private room where people at risk of abuse could state their concerns confidentially. There should also be a domestic abuse specialist in each Job Centre who could alert other staff to signs of abuse. Scotland wants to split payments by default but needs the DWP to change the IT system. The report urged the department to work with Scotland and test a new split payment system. Katie Ghose of Women's Aid called on the government to implement the report and stated, "It is clear from this report that there are major concerns about the safety of Universal Credit in cases where there is domestic abuse." The campaign group Women's Aid have argued that as Universal Credit benefits are paid as a single payment to the household, this has negative consequences for victims of domestic abuse. The Guardian also argued the change disempowers women, preventing them from being financially independent. Women's Aid and the TUC jointly conducted research showing that 52 per cent of victims living with their abuser claimed financial abuse prevented them from leaving. Under Universal Credit, when a couple separates, one person must inform the DWP and make a fresh claim, which takes at least five weeks to process. Someone without money, and possibly with children, cannot manage such a wait. Katie Ghose said, "We're really concerned that the implications for women for whom financial abuse is an issue have not been fully thought through or appreciated by the government." Jess Phillips stated, "What we are doing is essentially eliminating the tiny bit of financial independence that a woman might have had." And, she added, the DWP keeps no data on whether Universal Credit goes to men or to women, and therefore the magnitude of the problem cannot be measured. Impact on families In the report Pop Goes the Payslip, the advice organisation Citizens Advice highlighted examples of people in work who are worse off under Universal Credit than under the 'legacy' benefits it replaces. Similarly, a report from 2012 by Save the Children highlights how "a single parent with two children, working full-time on or around the minimum wage, could be as much as £2,500 a year worse off". Gingerbread has also highlighted its concerns over Universal Credit's disproportionate impact on single parent families in particular. As of August 2018, 25% of all recipients were lone parents, with 90% of all single parent families eligible for Universal Credit when the benefit is fully rolled out.
Single parents make up the overwhelming majority of claimants affected by the Benefit Cap under Universal Credit: 73% of capped households are headed by single parents, while 75% of those have a child under five, a fact which greatly impacts their ability to find suitable work. Work disincentives A House of Commons Library briefing note raised the concern that changes to Universal Credit that were scheduled to take effect in April 2016 might make people reluctant to take more hours at work: There is concern that families transferring to Universal Credit as part of the managed migration whose entitlement to UC is substantially lower than their existing benefits and tax credits might be reluctant to move into work or increase their hours if this would trigger a loss of transitional protection, thereby undermining the UC incentives structure. The very long application and assessment period also discourages recipients from moving off Universal Credit entirely for more than six months, for fear that redundancy or the loss of temporary employment for legitimate reasons would force them to reapply from scratch and face another very long wait with no income. In 2015, the Chancellor announced a future £3.2 billion a year cut to the Universal Credit budget after an attempt to cut Tax Credits that year was thwarted by Parliament. The Resolution Foundation has argued that this cut risks the new system failing to achieve its original purpose of incentivising work in low-income households. The 2015 cut was partly reversed in the 2018 budget by a promised uplift of £1.7 billion per year to the money available to encourage recipients to increase their hours at work. Internal criticisms A freedom of information request was made by Tony Collins and John Slater in 2012. They sought the publication of documents detailing envisioned problems, problems that arose with implementation, and a high-level review. In March 2016, a third judicial ruling ordered the DWP to release the documents. The government's argument against releasing the documents was that doing so could have a chilling effect on the DWP and other government departments. IT problems Universal Credit has been dogged by IT problems. A DWP whistleblower told Channel 4's Dispatches in 2014 that the computer system was "completely unworkable", "badly designed" and "out of date". A 2015 survey of Universal Credit staff found that 90% considered the IT system inadequate. Telephone problems Claimants on low incomes were forced to pay for long telephone calls. Citizens Advice in England carried out a survey in summer 2017 which found an average waiting time of 39 minutes, with claimants often needing to make repeated calls. Nearly a third of respondents said they made over 10 calls. The government was urged to make telephone calls about Universal Credit free for claimants, which was done in 2019. ID requirements In December 2018, Liverpool Walton MP Dan Carden wrote to the Secretary of State for Work and Pensions, Amber Rudd, to raise concerns after a number of people claiming Universal Credit were told to apply for provisional driving licences as a form of ID, with the costs being taken from their benefits. In response to the letter, a DWP spokesperson said that "having ID is not a requirement for those making a Universal Credit claim", though this appeared to contradict the guidance on the government's website.
Fraud It was originally hoped that Universal Credit would help to save about £1bn in fraud and error. However, in July 2019 it was revealed that an estimated £20 million of public money had been stolen by fraudsters. Criminals were found to be exploiting a loophole in the online system to fraudulently apply for Universal Credit and claim advance loans on behalf of other people who were unaware that they had been signed up for the benefit. One jobcentre reported that £100,000 of fraudulent activity had been recorded each month. It was estimated that 42,000 people may have fallen victim to the scam. Following the revelations, the Work and Pensions minister Justin Tomlinson announced that "where it is clear that they have been a victim of fraud through no fault of their own, no, we would not expect them to pay it back." A spokesperson from the Department for Work and Pensions later explained that victims of the scam would have to repay any money that they had kept. Mental health A study published in The Lancet Public Health in February 2020 linked a sharp increase in mental-health problems among the unemployed with the rollout of Universal Credit and other government welfare changes. It found that the number of unemployed people with psychological distress rose by 6.6% from 2013 to 2018. It also found a third of unemployed recipients of Universal Credit were likely to have become clinically depressed. See also Welfare Reform Act 2012 Welfare Reform and Work Act 2016 Further reading Centre for Social Justice (2009), Dynamic Benefits: Towards welfare that works Published by: Centre for Social Justice Gillies, A., Krishna, H., Paterson, J., Shaw, J., Toal, A. and Willis, M. (2015) Universal Credit: What You Need To Know 3rd edition, 159 pages, Published by: Child Poverty Action Group References External links Online benefits calculator – covers universal credit and all means tested benefits Universal Credit – welfare that works, Department for Work and Pensions (DWP) Universal Credit FAQs Who is affected by Universal Credit Benefits in the Future Universal Credit: What You Need To Know? 2010 establishments in the United Kingdom Taxation in the United Kingdom Tax credits Social security in the United Kingdom
21135358
https://en.wikipedia.org/wiki/2009%20USC%20Trojans%20football%20team
2009 USC Trojans football team
The 2009 USC Trojans football team (variously "Trojans" or "USC") represented the University of Southern California during the 2009 NCAA Division I FBS football season. The team played their home games at the Los Angeles Memorial Coliseum and was coached by Pete Carroll, who was in his ninth and final season at USC. They finished the season 9–4, 5–4 in Pac-10 play and won the Emerald Bowl over Boston College 24–13. Before the season Pre-season outlook Recruiting class The Trojans signed a top-5 recruiting class. Transfers Departures Offseason news Schedule Roster Name Yr. Ht./Wt. Quarterback 7 Matt Barkley Fr. 6-2/230 15 Aaron Corp Jr. 6-3/200 14 Garrett Green Sr. 6-2/210 6 John Manoogian Fr. 6-1/200 16 Mitch Mustain Jr. 6-3/210 Running Back 21 Allen Bradford Jr. 5-11/235 2 C.J. Gable Jr. 6-0/200 45 Adam Goodman Sr. 6-1/240 31 Stanley Havili Jr. 6-0/225 13 Stafon Johnson Sr. 5-11/215 4 Joe McKnight Jr. 6-0/200 6 Curtis McNeal Fr. 5-8/180 34 Ahmed Mokhtar So. 6-0/210 10 D.J. Shoemate So. 6-0/220 26 Marc Tyler So. 6-0/220 36 Simione Vehikite Fr. 6-0/245 Wide Receiver 9 David Ausberry Jr. 6-4/235 83 Steve Blackhart Fr. 6-2/180 49 Robbie Boyer Fr. 6-0/185 19 Brice Butler Fr. 6-3/200 46 Sean Calcagnie Jr. 6-0/190 23 Jordan Cameron Jr. 6-5/220 80 Brandon Carswell So. 6-1/185 41 Preston Cavignac Jr. 6-0/185 1 De'Von Flournoy Fr. 6-0/180 41 J.B. Green Fr. 6-1/195 8 Ronald Johnson Jr. 6-1/185 28 Drew Ness So. 6-0/190 17 Travon Patterson Jr. 5-10/175 47 Scott Stephens Jr. 6-1/180 81 Spencer Vigoren Sr. 6-4/220 18 Damian Williams Jr. 6-1/190 Tight End 88 Blake Ayles So. 6-5/255 40 Rhett Ellison Jr. 6-5/250 82 Bryson Lloyd Fr. 6-3/225 86 Anthony McCoy Sr. 6-5/255 67 Michael Reardon So. 6-5/275 87 Ian Wandler Jr. 6-4/270 73 Steve Gatena 6-5/270 Offensive Line 71 Charles Brown Sr. 6-6/295 53 Jeff Byers Sr. 6-4/285 72 Martin Coleman So. 6-5/315 77 Kevin Graf Fr. 6-6/315 74 Zack Heberer Jr. 6-5/300 78 Khaled Holmes Fr. 6-4/305 76 Nick Howell Sr. 6-5/280 75 Matt Kalil Fr. 6-6/290 68 Butch Lewis Jr. 6-5/300 50 Abe Markowitz Fr. 6-2/280 59 John Martinez Fr. 6-3/275 64 Garrett Nolan Sr. 6-4/280 61 Kristofer O'Dowd Jr. 6-5/300 56 Alex Parsons Sr. 6-4/300 62 Chris Pousson So. 6-4/240 70 Tyron Smith So. 6-6/270 85 Cooper Stephenson Jr. 6-3/215 Defensive Line 94 Armond Armstead So. 6-5/290 16 James Boyd Fr. 6-5/230 91 Jurrell Casey So. 6-1/295 92 Hebron Fangupo Jr. 6-2/330 93 Everson Griffen Jr. 6-3/265 98 DaJohn Harris So. 6-4/285 96 Wes Horton Fr. 6-5/245 97 Malik Jackson So. 6-5/230 42 Devon Kennard Fr. 6-3/255 8 Nick Perry Fr. 6-3/240 90 Derek Simmons Jr. 6-4/285 99 Averell Spicer Sr. 6-2/295 44 Christian Tupou Jr. 6-2/280 Linebacker 43 Will Andrew Fr. 6-2/225 52 Luthur Brown Sr. 6-2/235 37 Jordan Campbell So. 5-11/230 46 Ross Cumming So. 6-1/220 59 Dan Deckas Jr. 5-10/210 54 Chris Galippo So. 6-2/255 57 Nick Garratt Sr. 6-1/235 81 Kevin Greene Fr. 6-3/235 10 Jarvis Jones Fr. 6-3/225 35 Uona Kaveinga So. 6-1/235 17 Michael Morgan Jr. 6-4/220 53 Marquis Simmons Fr. 6-0/215 6 Malcolm Smith Jr. 6-1/225 Defensive Back 30 Brian Baucham Fr. 5-11/190 1 T.J. Bryant So. 6-0/180 45 Omari Crittenden Jr. 6-0/185 38 Robert Erickson Jr. 5-11/190 25 Patrick Hall Fr. 6-1/185 22 Daniel Harper So. 5-11/185 4 Torin Harris Fr. 6-1/175 26 Will Harris Sr. 6-1/200 28 Justin Hart Sr. 6-0/175 47 Michael Helfrich Fr. 6-0/190 23 Shane Horton So. 6-1/210 27 Marshall Jones Jr. 6-0/185 2 Taylor Mays Sr. 6-3/230 19 Drew McAllister So. 6-1/200 7 T. J. McDonald Fr. 6-2/205 49 Ryan McMahon So. 6-0/200 9 Byron Moore Fr. 
6-1/205 36 Josh Pinkard Sr. 6-2/210 34 Spencer Spiegel So. 5-11/175 29 Jawanza Starling Fr. 6-1/190 15 Kevin Thomas Sr. 6-1/185 24 Shareece Wright Jr. 6-0/180 Kickers/Punters 38 Jordan Congdon Sr. 5-9/180 48 Jacob Harfman Jr. 5-10/190 30 Joe Houston Jr. 5-8/170 39 Billy O'Malley Jr. 6-1/195 27 Boomer Roepke So. 5-9/180 Game summaries San Jose State The #4 Trojans opened their season against the lightly regarded San Jose State Spartans. Though the Spartans outscored USC 3-0 in the 1st quarter of play, the Trojans quickly recovered, scoring 56 consecutive points for a 53-point victory. Ohio State A crowd of 106,033, the largest in Ohio Stadium history, was in attendance as the #3 USC Trojans came to Columbus, Ohio to face the #8 Ohio State Buckeyes. Both teams showed great defense, with the game tied 10–10 at the half. After a safety and a field goal, Ohio State led 15–10 with less than five minutes to go. However, Matt Barkley and the Trojans drove down the field to score a touchdown and a two-point conversion to end the game. The final score was USC 18, Ohio State 15, with the Buckeyes losing to the Trojans for the second straight year. Freshman quarterback Matt Barkley injured his right shoulder. 1st Quarter 11:37 USC Johnson 2-yard run for Touchdown (Congdon kick) 7–0 USC 8:06 OSU Herron 2-yard run for Touchdown (Pettrey kick) 7–7 2nd Quarter 14:56 OSU Pettrey 18-yard field goal 10–7 OSU 0:00 USC Congdon 21-yard field goal 10–10 3rd Quarter 9:03 USC High snap out of the end zone for Safety 12–10 OSU 4:43 OSU Pettrey 22-yard field goal 15–10 OSU 4th Quarter 1:05 USC Johnson 2-yard run for Touchdown (Barkley pass to McKnight) 18–15 USC Washington Quarterback Aaron Corp took over for Matt Barkley. The Huskies became the latest Pac-10 team to upset the Trojans; only two Pacific-10 Conference teams have failed to beat USC during the Pete Carroll era: Arizona and Arizona State. Other Pac-10 teams have defeated USC at least once during this period; Oregon State did it twice, in 2006 and again in 2008, as did Stanford in 2001 and 2007 (and would do so again in 2009). 1st Quarter 12:28 USC McKnight 7-yard run for Touchdown (Congdon kick) 7–0 USC 4:36 USC Congdon 42 yd field goal 10–0 USC 0:11 UW Jake Locker 4yd run for Touchdown (Erik Folk kick) 10–7 USC 2nd Quarter 4:09 UW Folk 28 yd field goal 10–10 Tied 3rd Quarter None 4th Quarter 9:53 UW Folk 46 yd field goal 13-10 UW 4:07 USC Congdon 25 yd field goal 13–13 Tied 0:03 UW Folk 22 yd field goal 16–13 UW Washington State California The Trojans dominated the Bears, scoring the most points since their season opener against San Jose State. Notre Dame The Trojans marched into South Bend ranked #6 in the nation after a 30-3 beating of #24 Cal. The Fighting Irish lived up to their name, staying with highly ranked USC through two quarters, only trailing by 6 points, 13-7. In the third quarter, the Trojans started to run away with the game, outscoring Notre Dame 14-7. Going into the fourth quarter, USC had a commanding lead, 27-14. USC scored another touchdown early in the fourth quarter to go ahead, 34-14. It looked like Notre Dame would be beaten badly by their rivals once again. But it wasn't to be for the Fighting Irish. Instead of losing by double digits to the Trojans again, they rallied and found themselves down 34-27 with 1 second left at the USC 1-yard line. Jimmy Clausen fired an incomplete pass and USC extended their winning streak over Notre Dame to 8.
The freshman Matt Barkley attempted 29 passes, completing 19, on his way to 380 yards and 2 touchdowns. Clausen went 24-43 with a mere 260 yards and 2 touchdowns. Anthony McCoy led the Trojans (5-1) in receiving yards with 5 catches for 153 yards. Notre Dame's (4-2) leading receiver was Golden Tate with 8 catches for 117 yards. Oregon State The previous year, the #1 Trojans went to Corvallis and were upset 27–21. It was the second straight trip to Corvallis for USC that resulted in defeat. Jacquizz Rodgers ran for 187 yards on 37 carries and two touchdowns in that win. Oregon State was the second Pac-10 Conference school to have beaten USC twice during the Pete Carroll era, in 2006 and 2008 (Stanford was first, with victories in 2001 and 2007, and would do so again in 2009). The last time Oregon State won against USC in the Coliseum was when Dwight D. Eisenhower was the President of the United States. USC scored first when quarterback Matt Barkley passed to Anthony McCoy for an 8-yard touchdown. The Beavers got on the scoreboard with two field goal kicks from Justin Kahut (both 48 yards). In the second quarter, Matt Barkley completed a pass to Ronald Johnson for a 22-yard touchdown. On second and goal, Barkley rushed for a 1-yard touchdown for the Trojans. Kahut kicked a 33-yard field goal for Oregon State just before the half. In the third quarter, Sean Canfield passed to Jacquizz Rodgers for a 6-yard touchdown for the Beavers on a 3:06 drive that took 8 plays for 61 yards. The Trojans countered with a 7-play drive for 70 yards, with Allen Bradford rushing for 2 yards for a touchdown. Canfield narrowed the gap by completing a 15-yard scoring pass to Damola Adeniji. USC answered with Allen Bradford scoring a 43-yard touchdown. Oregon State became the first team to score more than 10 points against the Trojans in their last ten home games. Oregon Prior to the game, USC had lost three in a row in the state of Oregon, but had won four of the last five against the Ducks. The Trojans lost the game 47–20, which was the worst defeat suffered by USC since 1997. 1st Quarter 8:32 USC Jordan Congdon 28-yard field goal 3–0 USC 7:30 Oregon Morgan Flint 32-yard field goal 3–3 tied 1:37 Oregon Jeremiah Masoli 3-yard run for Touchdown (Morgan Flint kick) 10–3 Oregon 2nd Quarter 10:55 USC Matt Barkley 3-yard pass to Ronald Johnson for Touchdown (Jordan Congdon kick) 10–10 Tied 8:39 Oregon Andre Crenshaw 1-yard run for Touchdown (Morgan Flint kick) 17–10 Oregon 3:17 USC Matt Barkley 4-yard pass to Damian Williams for Touchdown (Jordan Congdon kick) 17–17 Tied 1:49 Oregon Jeremiah Masoli 17-yard pass to Jamere Holland for Touchdown (Morgan Flint kick) 24–17 Oregon 3rd Quarter 11:58 Oregon Morgan Flint 35-yard field goal 27–17 Oregon 8:26 USC Jordan Congdon 39-yard field goal 27–20 Oregon 5:50 Oregon LaMichael James 5-yard run for Touchdown (Morgan Flint kick) 34–20 Oregon 0:00 Oregon Kenjon Barner 3-yard run for Touchdown (Morgan Flint kick) 41–20 Oregon 4th Quarter 8:00 Oregon Morgan Flint 22-yard field goal 44–20 Oregon 2:05 Oregon Morgan Flint 23-yard field goal 47–20 Oregon Arizona State Stanford Stanford's 55–21 victory was the highest number of points any team had scored against a USC Trojans football team in the 121-year history of Trojan football. The 34-point loss was the worst defeat USC had suffered since 1966.
This was Stanford's third victory against USC in their last five games against each other at the Coliseum (Stanford winning in 2001, 2007, and 2009, with USC winning in 2003 and 2005), with USC having defeated every non-Stanford opponent in the Coliseum since 2001, going 47–2 since Stanford's September 29, 2001 victory in the Coliseum. It was the first defeat in a November game for the Trojans under Coach Pete Carroll's nine-season tenure. For the first time since Carroll's first season, USC lost more than two games in one season. For the second time in three weekends, Carroll suffered the worst loss of his USC tenure (the other being the Oregon game). This was the largest margin of victory for Stanford in a Stanford-USC game since the two teams' rivalry began in 1918. Harbaugh became the only coach in college football with a winning record against Carroll, going 2–1 in the three times the two coaches had faced each other. Stanford would eclipse the all-time point spread record it set in the 2007 Stanford vs. Southern California football game, in which USC was a 41-point favorite. UCLA This was the UCLA–USC rivalry game for the Victory Bell, which the Trojans retained by defeating the Bruins 28–7. Both teams wore home jerseys, in a tradition that was restarted the previous year, with the Bruins wearing their 1966 throwback powder blue jerseys. The final two minutes of the game proved to be interesting. With the Trojans leading 21–7 after a touchdown with 1:30 left in the fourth quarter, and having possession of the ball after UCLA turned it over on downs, Carroll instructed his quarterback to take a knee. Rick Neuheisel then called a timeout to stop the clock. On second down, the Trojans immediately connected on a 48-yard pass play for their fourth touchdown of the game. USC beat UCLA for the 10th time in 11 years, but the late touchdown pass stirred passions in the crosstown rivalry and led to a bench-clearing incident. Arizona Arizona's defeat of the Trojans gave the Wildcats their first win over USC during the Pete Carroll era. Arizona was also the first non-Stanford team in the Pac-10 to defeat the Trojans in the Coliseum under Carroll (Stanford had defeated Carroll's teams in the Coliseum in 2001, 2007, and 2009). Arizona State is the only Pac-10 team to never beat the Trojans during Carroll's tenure. Boston College This marked the first time USC played in the Emerald Bowl. On December 26, 2009, at AT&T Park in San Francisco, California, before a crowd of 40,121, the Trojans squared off against the Boston College Eagles from the Atlantic Coast Conference. This also marked the first time that the Trojans had played in a non-BCS bowl game in seven years. Boston College was making its 11th straight bowl appearance. The Eagles became the first team to play in the Emerald Bowl twice, having beaten Colorado State 35–21 in the 2003 San Francisco Bowl (the former name of the Emerald Bowl). This was the third meeting between the two schools and the first in a bowl game. USC had won both previous games in the series, a 23–17 victory in Los Angeles in 1987 and a 34–7 win in Chestnut Hill in 1988. USC freshman quarterback Matt Barkley threw touchdown passes to Stanley Havili on the Trojans' first two possessions and added a touchdown run in the fourth quarter. Barkley finished the game with a total of 350 yards passing. Of those 350 yards, Damian Williams accounted for 189 on 12 catches. Williams was named the game's MVP for his efforts.
Boston College was led by tailback Montel Harris, who rushed for 102 yards and also added a touchdown run. Rankings After the season On January 10, 2010, coach Carroll told his players that he would resign his position with the Trojans and become the new head coach of the Seattle Seahawks. Lane Kiffin, formerly with the Trojans, Oakland Raiders, and Tennessee Volunteers, was hired as the new head coach. References External links USC USC Trojans football seasons Redbox Bowl champion seasons USC Trojans football
37251834
https://en.wikipedia.org/wiki/IBM%20Basic%20Programming%20Support
IBM Basic Programming Support
IBM Basic Programming Support/360 (BPS), originally called Special Support, was a set of standalone programs for System/360 mainframes with a minimum of 8 KiB of memory. BPS was developed by IBM's General Products Division in Endicott, New York. The package included "assemblers, IOCS, compilers, sorts, and utilities but no governing control program." BPS components were introduced in a series of product announcements between 1964 and 1965. BPS came in two versions: a strictly card-based system and a tape-based system which, contrary to the stated goals, kept a small supervisor permanently resident. Programming languages available were Assembler, RPG, and FORTRAN IV (subset). Tape FORTRAN required 16 KiB of memory. There were also two versions of the BPS assembler, with the tape version having enhanced capabilities. BPS also had a "disk" counterpart called BOS, which also required 8 KiB of memory and supported disks such as the IBM 2311. The group responsible for BPS/BOS went on to develop DOS/360 and TOS/360 as a supposed "interim" solution when it became evident that OS/360 would be too large to run on 16 KiB systems. BPS and BOS could be used to run standalone applications on a minimal System/360. One application was the System/360 Work Station for remote job entry to a larger system. See also Punched card input/output References External links IBM System/360 Basic Programming Support and IBM Basic Operating System/360 Programming Systems Summary C24-3420-0 IBM mainframe operating systems Discontinued operating systems
11328773
https://en.wikipedia.org/wiki/77%20Million%20Paintings
77 Million Paintings
77 Million Paintings is a digital art software/DVD combination by British musician Brian Eno, released in 2006. The release consists of two discs, one containing the software that creates the randomized music and images that emulate a single screen of one of Eno's video installation pieces. The other is a DVD containing interviews with the artist. The title is derived from the possible number of combinations of video and music which can be generated by the software, effectively ensuring that the same image/soundscape is never played twice. An accompanying booklet includes a piece by Nick Robertson describing the intention behind the software, and an article by Brian Eno ("My Light Years") describing his experiments with light and music. The software was developed by Jake Dowie for both Windows and Macintosh operating systems. First Edition Far from containing 77 million paintings, the software consists of 296 original works which are overlaid and combined up to four at a time in a simulation of simultaneous projection onto a common screen. The various images are slowly faded in and out asynchronously before being replaced by another random element. Also, the music that accompanies the paintings, if played on a Mac G5 or a Windows PC, is randomly generated in a similar way, so the selection of elements and their duration in the piece are arbitrarily chosen, forming a virtually infinite number of variations. In conjunction with this, Annabeth Robinson (AngryBeth Shortbread) recreated the performance in Second Life by building the performance in a multi-user virtual environment (MUVE). Second Edition A second edition of "77 Million Paintings", featuring improved morphing and a further two layers of sound, was released on 14 January 2008. Project evolution 77 Million Paintings has evolved beyond the domestic environment. It continues to be shown in multiple-monitor configurations in art galleries and is projected onto iconic buildings around the world. In 2009, Eno was invited to project 77 Million Paintings onto the sails of the Sydney Opera House. See also Generative art Procedural generation References External links Official site Gallery by the Long Now Foundation Profile Brian Eno Electronic albums by British artists Digital art
901696
https://en.wikipedia.org/wiki/Myth%3A%20The%20Fallen%20Lords
Myth: The Fallen Lords
Myth: The Fallen Lords is a 1997 real-time tactics video game developed by Bungie for Windows and Mac OS. Released in November 1997 in North America and in February 1998 in Europe, the game was published by Bungie in North America and by Eidos Interactive in Europe. At the time, Bungie was known primarily as developers of Mac games, and The Fallen Lords was the first game they had developed and released simultaneously for both PC and Mac. It is the first game in the Myth series, which also includes a sequel, Myth II: Soulblighter, set sixty years after the events of the first game, also developed by Bungie, and a prequel, Myth III: The Wolf Age, set one thousand years prior to the events depicted in The Fallen Lords, and developed by MumboJumbo. The game tells the story of the battle between the forces of the "Light" and those of the "Dark" for control of an unnamed mythical land. The Dark are led by Balor and a group of lieutenants (the titular Fallen Lords), whilst the Light are led by "The Nine"; powerful sorcerers known as "Avatara", chief amongst whom is Alric. The game begins in the seventeenth year of the war in the West, some fifty years since the rise of Balor, with the forces of Light on the brink of defeat; almost the entire land is under the dominion of the Dark, with only one major city and a few smaller towns remaining under the control of the Light. The plot follows the activities of "The Legion", an elite unit in the army of the Light, as they attempt to turn back the tide and defeat Balor. The Fallen Lords received positive reviews from critics, and is credited as a defining title in the fledgling real-time tactics genre. Reviewers praised its plot, graphics, gameplay, level design, online multiplayer mode, and differentiation from traditional real-time strategy games. The most often criticized aspects were the difficulty of the single-player campaign, which many reviewers felt was far too high, even on the lowest setting, and some awkwardness in controlling units. The game went on to win multiple awards, including "Strategy Game of the Year" from both PC Gamer and Computer Gaming World, and "Game of the Year" from both Computer Games Strategy Plus and Macworld. It was also a commercial success, selling over 350,000 units worldwide across both systems, earning back roughly seven times its budget. At the time, it was Bungie's most successful game, and served to bring them to the attention of PC gamers and, more specifically, Microsoft, who would purchase the company in 2000. The Myth series as a whole, and Soulblighter in particular, supported an active online community for over a decade after the official servers went offline. The first formally organized group of volunteer-programmers was MythDevelopers, who were given access to the game's source code by Bungie. The most recently active Myth development group is Project Magma, an offshoot of MythDevelopers. These groups have worked to provide ongoing technical support for the games, update them to newer operating systems, fix bugs, release unofficial patches, create mods, and maintain online servers for multiplayer gaming. Gameplay Myth: The Fallen Lords is a real-time tactics game, unlike the gameplay in some real-time strategy games, the player does not have to engage in resource micromanagement or economic macromanagement, does not have to construct a base or buildings, and does not have to gradually build up their army by acquiring resources and researching new technologies. 
Instead, each level begins with the player's army already assembled and ready for combat. During the game, the player controls forces of various sizes made up of a number of different units, each possessing their own strengths and weaknesses. In single-player mode, only Light units are playable, but in online multiplayer mode, the player can control both Light and Dark units. Basic gameplay involves the player selecting and commanding units. To select an individual unit, the player clicks on that unit. Once selected, the unit is surrounded by a yellow rectangle, beside which is a health meter, which diminishes as the unit takes damage. Units do not regenerate health, and there is no way to construct new units (although in some single-player missions, reinforcements are automatically received at predetermined points). To select all nearby units of a given type, the player double-clicks on any individual unit of that type. To select multiple units of different types, the player can either "shift click" (hold down the shift key and click on each individual unit) or use "band-selection" (click and hold the mouse button on a piece of ground, then drag the cursor across the screen. This causes a yellow box to appear, which grows and shrinks as it follows the cursor's movement. When the player releases the button, any units within the box are selected). The player can instantly select all units on screen, irrespective of type, by pressing the enter key. The player can also assign manually selected unit groupings to a specific key on the keyboard, and when that key is pressed, it instantly selects the desired group of units. Once one or more units have been selected, the player can click on the ground to make them walk to the selected spot, or click on an enemy to make them attack. Units with projectile weapons, such as archers and dwarves can also be ordered to attack a specific spot on the ground, rather than an enemy. It is also important that the player have their units facing in the right direction. This is accomplished by "gesture clicking" - using the mouse to indicate which way the units will face when they reach their destination. Gesture clicking is especially important when using formations, of which there are nine available. After selecting a group of units, the player must press the corresponding formation button on the keyboard, and then click on the ground where they want the units to form. The player can also order all selected units to scatter and to retreat. When a single unit is selected, information about that unit appears in the "Status Bar" at the top of the HUD; the unit's name, a brief biography, how many kills he has, how many battles he has survived, and (if he is capable of carrying items) his inventory. When multiple units are selected, the names, types, and quantity of units will appear, but there will be no biography or information on their kills or previous battles. If no units are selected, the Status Bar provides details of the current mission. The HUD also features a transparent overhead mini-map, which displays information about the current battlefield; the player's field of vision is indicated by a yellow trapezoid, enemy units appear as red dots, friendly non-playable units as blue dots, and the player's army as green dots. The player can click anywhere on the mini-map to instantly jump to that location. However, the mini-map does not initially display the entire battlefield; the player must explore the area for it to become fully mapped. 
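As a rough illustration of the selection model described above, the band-selection rectangle and hotkey groups can be sketched in a few lines; the data structures and names here are hypothetical and are not taken from Bungie's code:

```python
# Illustrative sketch of the selection model described above, not Bungie's
# actual implementation; unit fields and helper names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    kind: str   # e.g. "archer", "dwarf", "warrior"
    x: float
    y: float

@dataclass
class Selection:
    units: list = field(default_factory=list)
    groups: dict = field(default_factory=dict)  # hotkey -> remembered units

    def band_select(self, army, x0, y0, x1, y1):
        """Select every unit inside the dragged rectangle."""
        left, right = sorted((x0, x1))
        bottom, top = sorted((y0, y1))
        self.units = [u for u in army
                      if left <= u.x <= right and bottom <= u.y <= top]

    def select_all_of_type(self, army, kind):
        """Rough equivalent of double-clicking a unit of a given type."""
        self.units = [u for u in army if u.kind == kind]

    def bind_group(self, key):
        """Assign the current selection to a hotkey."""
        self.groups[key] = list(self.units)

    def recall_group(self, key):
        self.units = list(self.groups.get(key, []))

army = [Unit("archer", "archer", 3, 4), Unit("dwarf", "dwarf", 8, 1)]
sel = Selection()
sel.band_select(army, 0, 0, 5, 5)  # drag a box around the archer
sel.bind_group("1")                # the group can later be recalled with "1"
```

The in-game version naturally also handles gesture clicks, formations and unit facing, which this sketch omits.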
The player has full control over the camera throughout the game, and can move it backwards and forwards, left and right, orbit left and right (keeps the camera focused on a single spot while making a 360 degree circle around that spot), pan left and right (the camera remains in the same spot but the player's point of view moves from side to side), and zoom in and out. All movements can be carried out via the keyboard, although the mouse can also be used to move the camera forwards, backwards, left and right, by moving the cursor to the top, bottom, left or right of the screen, respectively. Selecting and commanding units only forms the basic gameplay of The Fallen Lords. The battles are more complex than simply commanding units to attack the enemy, with strategy and awareness of the conditions of the battlefield, and even the weather, also playing important roles. For example, due to the game's physics engine, objects react with one another, with units, and with the terrain. This can manifest itself simply in a severed head bouncing off one of the player's units and changing direction, but it can also have more serious consequences. For example, a dwarf could throw a molotov cocktail at an enemy on a hillside and miss, with the projectile rolling back down the hill towards the player's own units. Projectiles in general, both those used by the player and the enemy, have no guarantee of hitting anything; they are merely propelled in the direction instructed by the physics engine. Arrows, for example, may miss their intended target due to a small degree of simulated aiming error that becomes more significant at long range, or the target may move out of the way, or behind a tree or building. If archers are firing at enemies who are engaged in melee combat, they may also hit the player's own units instead of the enemy, causing the same amount of damage. This is also true of dwarfs' molotov cocktails. As such, friendly fire is an important aspect of the game. The weather is also something the player must always bear in mind. For example, rain or snow can put out explosive-based attacks. It is also much easier for projectile units to hit enemies below them rather than above them, and as such, positioning of the player's units is an important aspect of the game. Single-player In the single-player campaign, the player starts each mission with a group of soldiers, and must use that group to accomplish a specific goal or set of goals. These goals can involve killing a certain number of enemies, defending a location, reaching a certain point on the map, escorting a unit safely to a certain area, or destroying a specific object or enemy. The focus of the single-player campaign is on a smaller force defeating a much larger enemy force; in every mission, the Light units are outnumbered by enemies, often vastly, and so the player must use the terrain, employ the specific skills of their individual units, and gradually decrease the enemy force, or attempt to avoid it altogether. Units in the single-player campaign acquire experience with each kill. Experience increases attack rate, accuracy, and defence, and any unit that survives a battle will carry over to the next battle with their accumulated experience (assuming the next battle features units of that type). Multiplayer When it was released, The Fallen Lords could be used for multiplayer gaming on Bungie, TEN, or via a LAN on PC or AppleTalk on Mac. 
In multiplayer, the player starts with an army, and can customize it by trading units with other players, using point values that approximate the value of the units being traded. Multiplayer games include "King Of The Hill" (a hill on the map is marked with a flag, with the hill captured when one or more of a team's units move within a certain range of the flag and eliminate any enemy units in the same area; the winner is the team who controls the hill for the longest amount of time), "Steal The Bacon" (somewhere on the battlefield is a ball; the object is to get the ball and keep it away from the opponents, with the winner being the last team to touch the ball), "Balls On Parade" (each team has a ball; the object is to capture as many of the opponents' balls as possible, with winner being the team in possession of the most balls at the end of the game), "Flag Rally" (a number of flags are on the battlefield, with the winner being the first player to touch them all), "Territories" (a number of flags are on the battlefield, with the winner being the team to capture and hold the most flags), "Scavenger Hunt" (a number of balls are on the battlefield, with the winner being the first player to touch them all), "Captures" (a number of balls are on the battlefield, with the winner being the player who is in possession of the most balls at the end of the match), "Body Count" (team deathmatch), and "Last Man On The Hill" (whichever player owns the hill when time runs out is the winner). Story History In the history of Myth, one particularly celebrated legend is that of Connacht, who, one thousand years ago, saved the world from a race of flesh-eating monsters called the Myrkridia, which had hunted humanity to near extinction over the previous millennium. Coming from the eastern land of Gower at the same time a comet appeared in the western skies, Connacht was the first human to fight the Myrkridia and survive. However, not only did he survive, he defeated them, ultimately imprisoning them in a magical prison known as the Tain, built for him by the Dwarven smiths of Muirthemne. With the Myrkridia gone, Connacht became Emperor of the Cath Bruig, presiding over a prosperous era known as the Age of Light. Many years later, he disappeared from the historical records. It is unknown exactly what happened to him, although one theory suggests he went in search of powerful magical artifacts, fearful of the ramifications if such items should fall into the wrong hands. Whatever the truth about his disappearance, Connacht was never seen again. In more recent times, fifty years prior to the beginning of the game, Balor, a mysterious and evil being, attacked the eastern Empire with an undead army, sacking Muirthemne. Aided by lieutenants known as "Fallen Lords", Balor turned the farmlands surrounding Muirthemne into a barren desert called The Barrier, whilst the Dwarven cities of Myrgard and Stoneheim were captured by Ghôls. The Dwarven population became refugees, travelling west, into the land known as The Province. Eventually, every human city to the east of the Cloudspine Mountains fell under Balor's control. Thirty-three years later, he headed west. Within two years, Covenant, capital city of The Province, had fallen. Tyr, the last free city of the south, was destroyed five years later, ten years prior to current events, leaving only the free cities of the west to stand against Balor. Plot The game begins seventeen years after Balor crossed the Cloudspine, with the forces of Light losing the war badly. 
They are led by "The Nine", a group of avatara, chief amongst whom is Alric. The story is told through the journal entries of a soldier in "The Legion", an elite unit in the army. As the game begins, a berserk runs into the camp of The Nine, and gives them an urn. They extract a severed head, which opens its eyes. The game then cuts to The Legion as they head to the city of Madrigal, headquarters of The Nine, which is under siege by Shiver (one of the Fallen), with the army planning to attack her from behind. The plan works, and after four days, the siege is lifted. Of particular significance is that Rabican (one of The Nine) kills Shiver in a "dream duel". Rabican had been advised by the Head, who claims to be an ancient enemy of Balor, that Shiver's one real weakness was her vanity, and his victory represents the first time one of the Fallen has been defeated. After this, a detachment of the Legion is sent to the ruins of Covenant, a major city destroyed in the earlier years of the war, to find the Total Codex. The Total Codex is an ancient book that reputedly has the past, present, and future written within its pages. The Legion successfully retrieves the codex while skirmishing with the Fallen Lord known as the Watcher. During this time, Alric, an Avatara of the Nine, is sent east with an army on the advice of the Head to recover another magical artifact. The Legion then meet with Maeldun (one of The Nine) in the city of Scales, where they learn Rabican's army is heading to block Seven Gates and Bagrada, two of the passes through the Cloudspine Mountains, so as to prevent The Deceiver (one of the Fallen) from crossing west prior to winter. Rabican holds Seven Gates, and The Legion hold Bagrada, but their victory is tempered by the fact that The Watcher (another of the Fallen) remains behind their lines, and Alric and his army are trapped beyond the Cloudspine. News soon reaches The Nine that Alric's army has been destroyed, and he has been captured by The Deceiver. He had been sent to The Barrier by the Head to search for a suit of enchanted armor, although some are beginning to doubt the veracity of the Head's claims, which now include having been an ally of Connacht. A small group from The Legion fly over the mountains in a hot air balloon, and rescue Alric. The Legion is then ordered to Silvermines to look for The Watcher's arm, lost when Balor freed him from captivity beneath the Cloudspine, as The Nine believe the arm can be used to fashion a weapon to use against The Watcher. However, The Deceiver is also in Silvermines searching for the arm, as he and The Watcher were enemies before the rise of Balor. The Legion find the arm, but soon thereafter, a volcano erupts, melting the snow on the Cloudspine, and allowing The Deceiver to move west. At the same time, The Watcher attacks Rabican's army, crushing it. The army of The Deceiver, heading west, and the army of The Watcher, pursuing the remnants of Rabican's army east, begin to fight one another, with Maeldun using the distraction to retake the passes. The following spring, Cu Roi and Murgen (two of The Nine) take four thousand men into occupied eastern territory to try to gain the support of the Forest Giants. They agree to join the Light, but Soulblighter (Balor's chief lieutenant) springs a surprise attack, trapping The Legion within the Tain, an artifact small enough to hold in one's hand, but which contains a pocket universe of limitless capacity.
A group of fifty men led by Murgen find the battle standard of the long-dead Myrkridia, and shortly thereafter, Murgen finds a secret exit. He is able to open it, but the rest of the four thousand men are lost, as is Cu Roi, whilst Murgen is killed as he destroys the Tain. Shocked at their escape, Soulblighter flees, but news soon arrives that Maeldun has lost Bagrada, and The Deceiver has crossed west. Also, when the remainder of The Nine tried to destroy the Head, which they have come to believe has been betraying them, they were prevented from doing so by the army, with two of The Nine killed in the ensuing conflict. Meanwhile, Alric joins The Legion. Rather than returning west, Alric leads The Legion north, moving towards Balor's fortress in Rhi'anon, capital city of the Trow, an ancient race of giants thought extinct until they joined the war against the Light. Believing they can do nothing to save any of the remaining free cities from The Deceiver, Alric hopes to achieve a more important victory; during his captivity in The Barrier, he learned that to ensure the obedience of the Fallen Lords, Balor bound them to his will, and is channeling his power to them. Thus, if he were destroyed, they would lose their power, ending the war. Leaving a garrison of men behind to delay the pursuing Soulblighter, Alric plans to attack The Watcher using arrows tipped with bone from his arm. The plan works; The Watcher is killed, scattering his army and clearing the way ahead, whilst Soulblighter breaks off his pursuit. At the same time as a comet appears in the western skies, Alric orders the majority of the surviving members of the Legion, twenty-two hundred men, to launch a frontal attack on Balor's fortress in a suicide mission designed to cause a distraction, as he takes the remaining one-hundred men through a World Knot (a teleportation device) to a spot behind the fortress. As they near the fortress, Alric tells the stunned soldiers that Balor is in fact Connacht, and with this in mind, he intends to raise the Myrkridian battle standard found in the Tain, hoping to enrage Balor into making a tactical error. The plan works; furious at the sight of the flag, Balor leaves the fortress, and Alric immobilizes him with an Eblis Stone. The Legion kills him, and take his head to a bottomless pit known as "The Great Devoid", as only by throwing his head into the Devoid can he be destroyed. The thirty remaining members of The Legion are ambushed by Soulblighter as they approach the Devoid, but they fight their way through, and fling the head into the pit. Soulblighter turns into a murder of crows and flees, moments before a massive explosion erupts from within the Devoid. With Balor's destruction, the remaining Fallen are rendered powerless, and their armies collapse, bringing to an end the war between the Light and the Dark. Development Origins The Fallen Lords was originally conceived by Jason Jones as Bungie were nearing the end of development of Marathon Infinity in late 1995. They had planned to do another first-person shooter as their next game. However, the initial screenshots of id Software's Quake had just been released, and when Jones saw them, he felt Bungie's new game was shaping up as too similar. As such, he approached his colleagues with the question: "What do you think about having this world with 100 guys fighting 100 other guys in 3D?" 
Bungie had worked on several 3D action games for Mac OS, and Jones' idea was to bring that experience to a real-time strategy game rather than another first-person game. The team agreed with Jones that their new shooter could end up as being very similar to Quake, and, as such, after working on the project for two months, Jones explains that "in one day we switched our project from a shooter game that would have had us chasing our competition's tail to what has basically become Myth". Dubbed "The Giant Bloody War Game", the team's initial inspirations for Myth were films such as Mel Gibson's Braveheart, "with its close-up portrayal of bloody melees between large forces", and literature such as Glen Cook's The Black Company series, "in which gruesome tales of battle contrast with engaging and intriguing characters". Speaking of the influence of Cook, Doug Zartman, Bungie's director of public relations and one of the game's writers, claimed: Similarly, programmer Jason Regier explained they wanted to set the game in "a dark, amoral world where opposing sides are equally brutal and their unity is torn by power struggles within the ranks. We dreamed of gameplay that combined the realism and excitement of action games with the cunning and planning required by strategy games". Zartman further stated: "We wanted to capture the feeling that you get watching large groups of people clashing on the open field. We wanted to recreate the blood-letting and grisly reality of large-scale battles". Although Myth is, by definition, a real-time strategy game, he was also eager to differentiate the game from other RTS games: Once they had decided on the basic game mechanics, their first task was to draw up a list of elements they wished to avoid; specifically, RTS clichés, obvious references to J. R. R. Tolkien's Middle-earth, allusions to the Arthurian legend, or any kind of narrative involving "little boys coming of age and saving the world". On the other hand, elements they did wish to incorporate included "ideas that contributed to the visual realism of the game", such as a 3D landscape, polygonal buildings, reflective water, particle-based weather, battlefields littered with severed limbs, and explosions that damaged the terrain permanently. They were also determined to include a robust online multiplayer mode and allow hundreds of troops to appear on a battlefield at once. Cross-platform Work on the game began in January 1996, with four programmers, two artists, and a product manager. Originally, the game was to have no music whatsoever, but composer Martin O'Donnell, who had recently been hired by Bungie, convinced the developers this was a bad idea, and he, Michael Salvatori and Paul Heitsch were commissioned to compose a soundtrack and work on the sound effects. A major early decision was to develop and release the game simultaneously on both Mac OS and Microsoft Windows, which would be a first for the company. Up to this point in their history, their only venture into PC gaming had been a port of Marathon 2: Durandal. Bungie had not been happy with the port, and were determined that The Fallen Lords be a genuine cross-platform release. This meant designing the game from the ground up to be cross-platform compatible, rather than developing it for one operating system and then porting it to another. As such, 90% of the game's source code was platform independent, with 5% written for PC subroutines and 5% for Mac-specific functionality. 
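One common way to make game data readable on both a big-endian Mac of the period and a little-endian PC is to fix the byte order in the file format itself and convert on load, which is the role of the byte swapping mentioned in the next paragraph's description of the "tag" files. The sketch below shows the general idea only; the record layout is invented for illustration and is not Bungie's actual tag format:

```python
import struct

# Illustrative only: a hypothetical fixed-layout record with an explicit
# byte order baked into the file format, so it decodes the same way on a
# big-endian Mac of the period and a little-endian PC. Not Bungie's format.
TAG_HEADER = struct.Struct(">4sII")  # big-endian: type code, id, data size

def read_tag_header(blob: bytes):
    """Decode a fixed-layout header identically on any host CPU."""
    tag_type, tag_id, size = TAG_HEADER.unpack_from(blob, 0)
    return tag_type.decode("ascii"), tag_id, size

blob = struct.pack(">4sII", b"unit", 7, 128)
print(read_tag_header(blob))  # ('unit', 7, 128)
```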
All of the game's data, from cutscenes to the number of warriors who are left-handed, was stored in platform-independent data files called "tags", which are automatically byte-swapped when necessary and accessed via a cross-platform file manager. As the team was more familiar with developing for Mac OS, the initial coding was done on a Mac using CodeWarrior. When PC builds were required, the team switched to Windows, using Visual C++. Ultimately, the entire game was written in C. To ensure the game looked identical on both PCs and Macs, the team created and implemented their own dialog manager and font manager, which allowed them to use custom graphics for all interface items. The font manager supported antialiased, two-byte fonts, and a variety of text-parsing formats, allowing international localizations to be completed relatively easily. Programming Although The Fallen Lords employs a fully 3D terrain, with 3D polygonal buildings, the characters are 2D sprites. To bring the 3D environment and the 2D characters together and construct each level, the team developed four separate programming tools; "Tag Editor" edited the constants stored in the cross-platform data files; "Extractor" handled the 2D sprites and the sequencing of their animations; "Loathing" acted as the map editor; and "Fear" dealt with the 3D polygonal models such as houses, pillars, and walls. Jason Jones explained: Loathing was specifically built around the Myth engine and allowed the team to modify the 3D landscape, apply lighting, determine terrain type, script the AI, and position structures, scenery, and enemies. The artists used PowerAnimator on an SGI Indigo 2 to create polygonal models and render all the characters. The 3D models were imported into the game using Fear, while the 2D sprites were cleaned up in Adobe Photoshop and imported and animated using Extractor. To create the texture maps for the terrain, the artists used Photoshop to draw the equivalent of an aerial photo, and then applied it to the 3D landscape using Loathing. Initially, the developers had planned on using fractal-generated landscapes, but they felt the randomness of such landscapes would make it difficult to design interesting levels, and so all maps were instead constructed by hand. Implementing pathfinding was a particularly difficult challenge. The terrain in the game is a 3D polygonal mesh constructed of square cells, each of which is tessellated into two triangles. Certain cells have an associated terrain type which indicates their impassability, and may contain any solid object. The team originally planned to use the A* algorithm, but soon realized this would create problems in terms of the realism they desired. As impassable obstacles can lie anywhere on the map, and as the square cells are quite large, the obstacles are not guaranteed to be aligned at their center. Furthermore, even if an obstacle did occupy exactly one cell, the A* algorithm would make a unit walk up to the obstacle, turn, and continue around it. The developers instead wanted their units to move to avoid obstacles ahead of time, as they approached them, such as smoothly weaving through a forest instead of continually heading straight for a tree, only to stop and suddenly walk around it. As such, they wrote their own pathfinding algorithm. As the terrain in the game never changes, paths could be calculated once and remembered. Then, the team factored in arbitrarily placed obstacles and periodically refined their pathfinding using a vector-based scheme. 
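The paragraph above, together with the one that follows, describes a vector-based refinement that steers a unit to one side of an obstacle rather than letting it walk straight into it. A minimal sketch of that kind of left-or-right deviation is below; the geometry and names are hypothetical and this is not Bungie's actual algorithm:

```python
# Minimal sketch of vector-based obstacle avoidance in the spirit described
# here; hypothetical geometry and names, not Bungie's actual algorithm.
import math

def steer_around(pos, goal, obstacles):
    """Return the next heading: straight at the goal, or deviated to the
    left or right of the first obstacle blocking the straight-line path,
    whichever side needs the smaller deviation."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    heading = math.atan2(dy, dx)
    ux, uy = math.cos(heading), math.sin(heading)
    for (ox, oy, radius) in obstacles:
        rel_x, rel_y = ox - pos[0], oy - pos[1]
        along = rel_x * ux + rel_y * uy        # distance along the path
        across = rel_x * -uy + rel_y * ux      # signed lateral offset
        if 0 < along < math.hypot(dx, dy) and abs(across) < radius:
            # Deviate just enough to clear the obstacle, on the nearer side.
            clearance = radius - abs(across)
            turn = math.atan2(clearance, along)
            return heading - turn if across > 0 else heading + turn
    return heading

# One movement step for a unit walking toward a flag past a tree.
pos, goal = (0.0, 0.0), (10.0, 0.0)
tree = [(5.0, 0.5, 1.0)]                       # (x, y, radius)
print(math.degrees(steer_around(pos, goal, tree)))  # small turn to the right
```

Choosing the side on which less of the obstacle overlaps the planned path keeps the correction small, which matches the "shorter option" choice described next.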
If the planned path caused the unit to hit an obstacle, the path was altered, with the AI choosing whether deviating to the left or the right was the shorter option. Their system worked for 90% of cases, but in testing, the developers discovered several scenarios where their pathfinding algorithm didn't work especially well. However, by the time they made this discovery, it was too late to implement the changes that would have been necessary to fully correct it. As such, their assessment of the pathfinding in the final version of the game was that "it works pretty well and provides the effect we sought, but there's definitely room for improvement". Speaking of the game's physics engine, Jason Jones said: By November 1996, Bungie had a demo with rudimentary gameplay in place. In an effort to create media buzz, they took the demo to several gaming magazines. Speaking in 2000, Doug Zartman explained that the physics engine was a major factor in the game even at that early stage: Release Writing in 1998, programmer Jason Regier stated of the game: One of Regier's few disappointments was that during the early stages of promotion, Bungie advertised a scripting language that would allow players to modify elements of the game. As he explains: "We had hoped that user scripts could be written for extensible artificial intelligence, as well as custom formations, net game rules, and map behaviors". The team selected Java as the basis for the scripting language. Early versions of the game allowed some simple scripts to work for presentation purposes, such as instructing a unit to search the battlefield for the heads of the enemy and collect them in a pile. The programmer responsible for the scripting language left Bungie midway through production, and they were left with a number of features to implement and no library of user-friendly interfaces with the game code. Given its incomplete state at such a late stage of development, there was little choice but to drop this functionality. Technology The Fallen Lords originally supported both software rendering and 3dfx's Glide 3D acceleration. Shortly after the game was released, Bungie issued a v1.1 upgrade patch, which reduced the difficulty of the single-player campaign and added support for Rendition's Redline, and 3dfx's Voodoo Rush. Jason Regier wrote of working with 3D graphic accelerators: Total Codex bundle In 1999, Bungie re-released The Fallen Lords for Mac OS and Windows as part of a special edition called Myth: The Total Codex. The bundle included The Fallen Lords v1.3 (Bungie's last upgrade of the game), Myth II: Soulblighter, the Soulblighter expansion pack Myth II: Chimera, and official Strategies and Secrets guides for both of the main games. Community Although the official Bungie Myth servers closed in February 2002, the Myth series continues to have an active online fanbase to this day. After Bungie released the Total Codex bundle in 1999, which contained The Fallen Lords v1.3, Soulblighter v1.3 and the Soulblighter expansion pack, Myth II: Chimera, they ceased working to develop the game's source code, as Microsoft, who purchased the company in 2000, wanted them to concentrate on Halo. As such, they were approached by a group of programmers, artists and coders known as MythDevelopers, who asked for access to the code so as to continue its development. 
With the blessing of Take-Two Interactive, which had acquired the rights to the Myth intellectual property in 2000, Bungie released their entire archive of Myth-related materials to MythDevelopers, including the source code and all artwork featured in the game. MythDevelopers were also granted access to the source code for Myth III: The Wolf Age, which was developed by MumboJumbo in 2001. Bungie also open-sourced their Myth metaserver code in 2002. MythDevelopers used this material to improve and further develop the games. Although their initial focus was on the bug-ridden release version of The Wolf Age, they also worked to update the first two games to newer operating systems on both Mac and PC, fix bugs, and create unofficial patches to enhance both the games themselves and the mapmaking tools which Bungie had released with Soulblighter. MythDevelopers disbanded in December 2003, with Project Magma becoming the main development group for The Fallen Lords and Soulblighter, and FlyingFlip Studios for The Wolf Age.
Servers
Prior to disbanding, MythDevelopers created and operated PlayMyth.net, the most popular online Myth server after the official servers were taken offline. Although built using the Soulblighter server, PlayMyth could also run both The Fallen Lords and The Wolf Age, the latter of which was developed by MumboJumbo using a network gameplay system designed to run on GameSpy rather than Bungie.net. PlayMyth went offline in October 2007 after it was repeatedly hacked, with the most popular servers becoming MariusNet.com and GateofStorms.net.
MariusNet had been online since just prior to Bungie.net's Myth servers going offline, and was officially approved by Bungie. The project was originally intended as a temporary replacement for Myth players in case the original servers were shut down, which had been rumored for some time. The Bungie servers had not supported The Fallen Lords since November 2001, and the community believed the servers would soon close for Soulblighter as well. When the servers for The Fallen Lords closed in November 2001, the only way to play a multiplayer game was via a LAN or AppleTalk, and MariusNet was created as a Bungie.net "emulator" which, like PlayMyth, supported all three Myth games, and thus gave players a way to play The Fallen Lords online. At the time, Bungie had not yet open-sourced the metaserver code, so creating a network for The Fallen Lords was accomplished via reverse engineering. Dave Carlile, the main programmer of the server, explained:
MariusNet closed in 2014 when the server company shut down and the hardware was damaged while being moved to its new location. GateofStorms, which was created by Project Magma and only supports Soulblighter v1.8 (released by Magma in 2013), remains active and continues to host individual games and tournaments.
Reception
The Fallen Lords received "universal acclaim" upon release, and holds an aggregate score of 91 out of 100 on Metacritic, based on nine reviews. The game was seen as a defining title in the emerging real-time strategy genre, helping to solidify the elements of the genre both with gamers and in the gaming press. It also served to bring Bungie to the attention of PC gamers for the first time and, more specifically, to Microsoft, which would purchase the company in 2000 so Bungie's new game, Halo, could be developed as a launch title for Microsoft's debut video game console, the Xbox.
GameSpot's Michael E. Ryan scored the game 8.9 out of 10, arguing that upon the initial release, "standing between Myth and gaming perfection were an absurd level of difficulty, a poorly implemented unit-facing command, and a handful of nitpicky flaws". However, he praised the v1.1 update, which dealt with the difficulty and unit-facing problems, and as such, argued, "Myth can now rightfully claim its place among the best strategy games on the market". He praised the plot, the different styles of gameplay, the level design, and the range of available units. He was also impressed with the graphics, writing "Myth is one of the most impressive games you'll see this year". He concluded: "When you combine the excellent multiplayer support, the great graphics, and the dramatic gameplay improvements offered by the 1.1 patch, you get a truly remarkable real-time strategy game".
PC Zone's Jamie Cunningham scored it 8 out of 10, praising the interface, the range of units, the level design, the multiplayer mode, and the use of the camera during battles. Of the graphics, he wrote: "Every part of scenery, from huge boulders to arrow heads, is tracked in 3D space. Which means that if you blow up an enemy unit, organs fly out and bounce off others - complete with shadows and splat noises". On the other hand, he was critical of the difficulty, writing that "Myth is let down by what can only be described as an overwhelming frustration whenever you play it for any length of time. So much attention has been paid to the technology that some of the fun element has suffered".
Game Revolution's Calvin Hubble rated it a B+, calling it "one of the most impressive looking strategy games to hit the market". He praised the graphics, calling them "simply spectacular", and lauded the online multiplayer mode. Conversely, he was critical of the difficulty level, finding the game too hard on even its easiest setting. He concluded: "Myth is a great game to look at. After beating the first couple of levels, the enjoyment could quickly turn to nausea as try after try fails to pass one single level. The graphics and realism are breathtaking, if only the single player game wasn't so difficult".
Next Generation reviewed the PC version of the game, rating it four stars out of five, and stated that "with a large and complex control scheme, Myth's learning curve starts higher than anything else on the market. Yet, for the kind of players who paint their own miniatures and build sets, it's a challenge of fabled proportions".
Sales and awards
According to Alex Seropian, co-founder of Bungie, The Fallen Lords cost roughly $2 million to develop and market, by far Bungie's most expensive game up to that time, and as such, they needed it to be financially successful, especially as it was their first original PC game. The game did prove a commercial success, selling over 350,000 units worldwide at roughly $40 per unit, earning the company $14 million and becoming Bungie's most successful game thus far. In the United States, the game sold 40,617 copies during 1997. By 2000, the game had over 100,000 people registered with online accounts at Bungie.net. The success of the game also helped Bungie rank #101 on Inc.'s 1998 list of the 500 fastest-growing private corporations in North America. Primarily due to the success of The Fallen Lords, Bungie's profits increased by 2,228% from 1993 to 1997.
The game also won numerous awards, including "Real-Time Strategy Game of the Year" from PC Gamer, "Strategy Game of the Year" from Computer Gaming World, and "Game of the Year" from both Computer Games Strategy Plus and Macworld. Online Game Review named it one of the fifty greatest games ever made. In 2012, The Fallen Lords was listed on Time's All-TIME 100 greatest video games list. In 1998, PC Gamer declared it the 19th-best computer game ever released, and the editors called it "a breath of fresh air" and "a modern classic". In 2003, The Fallen Lords was inducted into GameSpot's list of the greatest games of all time.
References
External links
Project Magma
1997 video games
Bungie games
Eidos Interactive games
Fantasy games
Classic Mac OS games
Multiplayer and single-player video games
Multiplayer online games
Myth (video game series)
Real-time tactics video games
Video games developed in the United States
Video games scored by Martin O'Donnell
Video games scored by Michael Salvatori
Windows games