9178237
https://en.wikipedia.org/wiki/Joe%20McKnight
Joe McKnight
Joseph Nathan McKnight Jr. (April 16, 1988 – December 1, 2016) was an American football running back and return specialist who played in the National Football League (NFL) and Canadian Football League (CFL). He attended the University of Southern California (USC), where he played college football for the USC Trojans. McKnight was selected in the fourth round of the 2010 NFL Draft by the New York Jets. After playing in the NFL for the Jets and Kansas City Chiefs, he played in the CFL for the Edmonton Eskimos and the Saskatchewan Roughriders. On December 1, 2016, McKnight was killed in an apparent road rage incident. High school career McKnight attended John Curtis Christian High School in River Ridge, Louisiana. For his first years of high school, McKnight played defense as a cornerback; his high school career was complicated by the aftermath of Hurricane Katrina, which devastated his part of Louisiana just before his junior season of 2005. Separated from his mother, who had evacuated to Baton Rouge, McKnight temporarily relocated to Shreveport where he eventually enrolled and played two games for Evangel Christian Academy. His family was able to relocate back to River Ridge, but their home had been destroyed so they moved into a one-bedroom apartment. For the rest of the shortened 2005 season, McKnight scored 22 touchdowns (nine rushing, five receiving, four punt returns, three interceptions, one kickoff return) and averaged 18 yards a play in leading his team to the state championship. In 2006, McKnight rushed for 719 yards on 45 carries, scoring 14 touchdowns, had 24 catches for 735 yards and 13 touchdowns, and with special teams play scored a total of 30 touchdowns as a senior; he was instrumental in John Curtis Christian's 14–0 season, often used as a decoy player due to his scoring threat. McKnight was the latest of a line of running back prospects out of John Curtis Christian HS – among them Reggie Dupard, Chris Howard and Jonathan Wells. Most recruiting analysts ranked him as one of the top two of the 2007 high school class (next to quarterback Jimmy Clausen of Oaks Christian High School). Considered a five-star recruit, on January 28, 2007, McKnight was named co-Player of the Year by Parade, sharing the honor with Clausen. Rivals.com ranked McKnight the best running back prospect in the U.S., and the second best recruit overall. USC assigned linebackers coach and former NFL All-Pro Ken Norton Jr. to handle the recruitment of McKnight. Louisiana State University (LSU) coach Les Miles visited McKnight on February 1, 2007, the last possible date before National Signing Day, in order to ensure McKnight's commitment to LSU. However, on February 7 McKnight committed to USC. At USC, McKnight was joined by second-ranked running back prospect Marc Tyler. The backlash against McKnight's decision to attend college out of state manifested itself when the Curtis School's marching band was booed performing at a Mardi Gras parade. McKnight stated that his interest in USC came from its football tradition, notably their Heisman Trophies, and his interest in sports broadcasting, noting the USC Annenberg School for Communication. He is the highest rated football player ever recruited by the University of Southern California. Recruiting controversy McKnight told reporters on National Signing Day that he spoke to former USC running back Reggie Bush before he chose Southern California over favored LSU. 
On February 9, the Los Angeles Times reported that USC officials were investigating whether an NCAA recruiting violation occurred during the Trojans' pursuit of the Louisiana prep star. At issue was whether McKnight listened to a phone call between head coach Pete Carroll and Bush. NCAA rules state that alumni cannot speak to players and attempt to persuade them to join their former school. McKnight and his high school coach, J.T. Curtis, both later claimed that McKnight misspoke during the news conference and that McKnight had never actually met or spoken to Bush. Bush and Carroll denied that a conference call ever took place. Carroll blamed the controversy surrounding McKnight on LSU fans who were unhappy that McKnight chose to leave the state of Louisiana and attend USC. College career Because of his talent and versatility, the media and opposing coaches called McKnight the "next Reggie Bush". McKnight began college at the University of Southern California in June 2007. He was awarded with the inaugural Frank Gifford Endowed Football Scholarship, which is given out annually by the USC athletic department to an incoming freshman running back, quarterback or wide receiver who best emulates Gifford's life, success and spirit. By the pre-season practice before his freshman season at USC, McKnight had already demonstrated some of the speed and moves that made him an impact player in high school. After a slow start during his freshman season, dealing with fumble issues for the first time in his career, he made a pivotal contribution in a victory against Arizona, where he ran a punt return for 45 yards, and later ran for 59 yards to set USC up for their last 10 points in their game. McKnight achieved a break-through at the end of his freshman season during the 2008 Rose Bowl, where he had 206 all-purpose yards: 36 on three punt returns, 45 on six pass receptions and 125 in 10 carries with one touchdown. During his freshman season, McKnight painted "I need $" under his eye blacks to signify his desire to play professional football in the NFL. Before his sophomore season, McKnight was included in Sports Illustrated spring list of top ten Heisman Trophy contenders going into the fall. However, his sophomore season was hampered early on by medical conditions, fumbles and injuries, including a nagging metatarsalphalangeal joint sprain that prevented him from playing in two games. In February 2009, Bobby Burton of Rivals.com summed up his career by writing, "McKnight really hasn't lived up to the hype. He's good, but he just hasn't broken out to show he's the best back in his class". In his junior and final college football season, it was reported that McKnight was granted free use of a 2006 Land Rover SUV registered to Santa Monica businessman Scott Schenter, in violation of NCAA rules. Schenter responded, claiming that the SUV actually belonged to McKnight's girlfriend, Johana Michelle Beltran, although McKnight had reportedly been seen driving it around town. As a result of the pending investigation, the USC compliance department did not clear McKnight to play in the 2009 Emerald Bowl. On January 8, 2010, McKnight declared his intention to forgo his final year of eligibility, hired an agent and entered the 2010 NFL Draft. Professional career 2010 NFL Combine New York Jets 2010 McKnight was drafted by the New York Jets in the fourth round (112th overall) of the 2010 NFL Draft. To select McKnight, the Jets traded up twelve spots in the fourth round with the Carolina Panthers. 
In return, the Jets sent their fourth (124th overall) and sixth (198th overall) round selections to Carolina. McKnight formally signed with the Jets on June 22, 2010. McKnight struggled in the early stages of his professional career, vomiting during the Jets' rookie minicamp in May 2010. McKnight later admitted he had not been in proper condition at the time. He continued to struggle into the preseason, fumbling the football three times. Though he was not in danger of being released from the team, the coaches maintained that McKnight would not appear on the active roster until they felt more confident in his abilities. McKnight made his NFL debut on October 3, 2010, against the Buffalo Bills and was used in a limited capacity on offense before being utilized as an emergency cornerback and on special teams over the course of the season. In his first game as the starting running back, against the Bills in the Jets' final regular-season contest, McKnight ran for 158 yards on 32 carries with no fumbles and caught two passes for 15 yards. 2011 During the Jets' 2011 home opener against the Dallas Cowboys on September 11, 2011, McKnight blocked Matt McBriar's punt on a critical play that resulted in a touchdown by teammate Isaiah Trufant. The Jets went on to defeat the Cowboys 27–24. During the Jets' Sunday night game against the Baltimore Ravens on October 2, 2011, McKnight returned a kickoff 107 yards for his first career touchdown. It was the longest play in Jets history. On January 16, 2012, McKnight was named the kick returner on the All-Pro team selected by Pro Football Weekly and the Pro Football Writers Association. 2012 On September 26, 2012, Jets head coach Rex Ryan announced that McKnight would see an increase in defensive snaps at cornerback. The announcement was prompted by Darrelle Revis suffering a season-ending ACL tear. Shortly after the announcement was made, McKnight was switched back to his former role as a running back. On October 8, 2012, McKnight returned a kickoff 100 yards for a touchdown against the Houston Texans, recording his second career touchdown. The Elias Sports Bureau reported that McKnight had extended the Jets' NFL record for most consecutive seasons with a kickoff return for a touchdown to 11. He was released by the Jets on August 26, 2013. Kansas City Chiefs After spending the entire 2013 season out of football, McKnight signed with the Kansas City Chiefs on January 12, 2014. On July 21, McKnight was placed on the Physically Unable to Perform list. On September 21, 2014, in a game against the Miami Dolphins, he scored twice on receptions from Alex Smith. McKnight suffered a torn Achilles tendon during practice on September 26, 2014, and was ruled out for the rest of the 2014 season. Canadian Football League On February 19, 2016, McKnight signed with the Edmonton Eskimos of the Canadian Football League (CFL), but was released on August 10, 2016. The Saskatchewan Roughriders acquired McKnight on September 20, 2016. On October 15, 2016, making his first career CFL start, McKnight rushed for 150 yards on 17 carries. He also had one reception for 3 yards. McKnight appeared in three games for the Roughriders in 2016, running for 228 yards on 38 attempts and catching 11 passes. McKnight was under contract to the Saskatchewan Roughriders for the 2017 season. Death On December 1, 2016, McKnight was fatally shot by 54-year-old Ronald Gasser at an intersection in Terrytown, Louisiana, in what was described as a road rage shooting.
McKnight exited his vehicle and approached Gasser's car when he was shot by Gasser who was still in his own vehicle. An initial media report based on an alleged eyewitness claimed that after firing the initial shot, Gasser stood over McKnight and said "I told you not to fuck with me" after which, he fired another shot. However, this scenario was disputed by Jefferson Parish Sheriff's Office, which stated that forensic evidence suggested Gasser fired the shots in quick succession from within his car. Gasser remained at the scene and turned in his gun to police. The Jefferson Parish Sheriff's Office took Gasser into custody for questioning, and released him without pressing charges while the investigation continued. Gasser was arrested for manslaughter on December 5, 2016, but was indicted by a grand jury on the more severe charge of second degree murder on February 2, 2017. On January 26, 2018, Gasser was acquitted of murder, but found guilty of manslaughter by a 10–2 verdict, and was sentenced to 30 years in prison two months later on March 15. The conviction (by a 10–2 verdict) and sentence, however, were set aside as a result of the Ramos v. Louisiana ruling by the United States Supreme Court, which declared that non-unanimous criminal convictions violate the Sixth and Fourteenth Amendments. Because the decision was made retroactive, all defendants who had not exhausted appeals had their convictions set aside for new trials, and Gasser was granted a new trial shortly after the Ramos decision (a separate case, Edwards v. Vannoy, will address the issue for those who had already exhausted appeals). However, because of double jeopardy, Gasser can only face manslaughter and not murder charges on retrial. Gasser remains in prison awaiting his retrial. Awards and honors McKnight was named Pac-10 Offensive Player of the Week on September 14, 2009, for his role in a win against Ohio State. The Times-Picayune named McKnight its "Male High School Athlete of the Decade". Longest play in New York Jets history (107-yard kickoff return for a touchdown vs. Baltimore Ravens) on October 2, 2011. References External links USC Trojans bio New York Jets bio 1988 births 2016 deaths African-American players of American football African-American players of Canadian football American football running backs Canadian football running backs Deaths by firearm in Louisiana Edmonton Elks players Kansas City Chiefs players New York Jets players People from Kenner, Louisiana People from River Ridge, Louisiana Players of American football from Louisiana USC Annenberg School for Communication and Journalism alumni USC Trojans football players Male murder victims Murdered African-American people 20th-century African-American people 21st-century African-American sportspeople People murdered in Louisiana
25323835
https://en.wikipedia.org/wiki/Czech%20Technical%20University%20in%20Prague
Czech Technical University in Prague
Czech Technical University in Prague (CTU; Czech: České vysoké učení technické v Praze, ČVUT) is one of the largest universities in the Czech Republic, with 8 faculties, and is one of the oldest institutes of technology in Central Europe. It is also the oldest non-military technical university in Europe. In the academic year 2020/21, Czech Technical University offered 130 degree programs in Czech and 84 in English. It was considered one of the top 10 universities in emerging Europe and Central Asia in the same year. History The university was established as the Institute of Engineering Education in 1707 by Emperor Joseph I, in response to Christian Josef Willenberg's petition to the preceding emperor, Leopold I; at first it provided secondary (high school) rather than tertiary (university) education. In 1806, the Institute of Engineering Education was transformed into the Prague Polytechnical Institute (Prague Polytechnic), and university-level studies began. After the disintegration of the Austro-Hungarian Empire, the name of the school was changed in 1920 to the Czech Technical University in Prague. Origins In 1705, Willenberg asked Emperor Leopold I for permission to teach "the art of engineering". Later, the emperor's only son, who succeeded him on the throne in 1707 as Joseph I, ordered the Czech estates to provide engineering education in Prague. For various reasons, the request was not acted upon for a long time. However, in October 1716, Willenberg repeated the request. Finally, on 9 November 1717, a decree by the Czech estates granted Willenberg the first engineering professorship in Central Europe. On 7 January 1718, he began teaching. Initially, Willenberg taught only 12 students in his own apartment (six barons, four knights, and two burghers), but the number of students gradually grew (in 1779, there were around 200) and they began studying in more suitable premises. Initially, the training focused mainly on the military. Teaching in the first year lasted one hour per day; in the second year, almost two. Willenberg's successor was Johann Ferdinand Schor, a builder of hydraulic structures in the Vltava basin and the author of mathematics textbooks used at the school. He began under Willenberg's leadership by teaching optics, perspective, technical drawing and geography. The third professor was František Antonín Herget, who focused mainly on civil engineering, particularly construction. In September 1776, Maria Theresa allowed Herget to use the Clementinum building; in 1786, the school moved to a new and better building. In 1787, the School of Engineering was established by decree of Emperor Joseph II. Academic profile Rankings The CTU is the best technical university in the Czech Republic. In the 2010 THES-QS world university rankings, CTU placed 121st in technical sciences and 246th in natural sciences. In 2018, Czech Technical University was ranked 220th in Engineering and Technology in the QS World University Rankings. Admissions Students apply to individual faculties, each of which has its own admissions requirements. Acceptance rates range from 52.32% (Faculty of Information Technology) to 81.51% (Faculty of Transportation Sciences). The percentage of international students grew from 2.5% in 2000 to 16.4% in 2017. Graduation rate Due to the pace and difficulty of CTU coursework, a high percentage of students fail to complete the first year of their studies. First-year failure rates range from 23% (Faculty of Civil Engineering) to 47% (Faculty of Information Technology).
Overall, only 48% of enrolled undergraduate students end up graduating. International cooperation Study and work abroad CTU has international agreements with 484 foreign universities. Many of them are ranked in the top hundred of the QS World University Rankings, such as the National University of Singapore, Nanyang Technological University, Purdue University, the Korea Advanced Institute of Science and Technology (KAIST), the Hong Kong University of Science and Technology, the Technical University of Munich, Delft University of Technology and KU Leuven. CTU has many bilateral agreements with universities outside of Europe. The most sought-after universities are in Canada, Australia, Singapore, the United States and Japan. That said, every year many students choose to study in attractive destinations such as Argentina, Brazil, China, Hong Kong, India, Indonesia, South Africa, South Korea, Costa Rica, Mexico, New Zealand, Peru, Russia or Taiwan. CTU also participates in the European programmes Erasmus and Leonardo. International students CTU currently has over 3,500 international students from 117 countries. About 750 of them are exchange students. One of the organizations that takes care of international students is the International Student Club (ISC), which organizes a Buddy Programme and extra-curricular activities. Dual diploma CTU currently has 21 dual-diploma agreements with universities such as the Technical University of Munich, RWTH Aachen and Trinity College Dublin. Constituent parts CTU has 8 faculties. The oldest one (the Faculty of Civil Engineering) was founded in 1707, while the youngest and most selective faculty (the Faculty of Information Technology) was founded in 2009. The university also has 5 university institutes: the Czech Institute of Informatics, Robotics and Cybernetics, the Klokner Institute, the Institute of Physical Education and Sport, the University Centre for Energy Efficient Buildings and the Institute of Experimental and Applied Physics. Other constituent parts include the Computing and Information Centre, the Technology and Innovation Centre, the Research Centre for Industrial Heritage, the Centre for Radiochemistry and Radiation Chemistry, the Division of Construction and Investment and the Central Library. The university also has a publishing house and service facilities. Student clubs within CTU are integrated in the Student Union, which has 27 member clubs and covers a wide range of free-time activities, the biggest club being Silicon Hill. The Student Union also organizes social events for students throughout the year.
Notable alumni František Běhounek, radiologist Christian Doppler, mathematician and physicist Ivan Puluj, physicist and one of the founders of medical radiology Antonín Engel, architect Josef Gerstner, physicist and engineer Václav Havel, statesman, writer and former dissident, who served as the last President of Czechoslovakia Josef Hlávka, architect, main founder of Academy of Science, patron Otakar Husák, CTU graduate, chemist, General, Czechoslovak Legionnaire in Russia and France, fighter from Zborov and Terron, Chairman of President Masaryk's Military Office, Minister of Defence, First Director of the Explosia Semtín factory, prisoner of concentration camps Dachau and Buchenwald, Director of the Synthesia Semtín (1945–1948), political prisoner (Prague Nusle-Pankrác, Mírov 1950–1956) Eva Jiřičná, architect Karel Jonáš, who became Charles Jonas (Wisconsin politician), Czech-American publisher, legislator and Lieutenant Governor of Wisconsin George Klir, computer and systems scientist Karl Kořistka, geographer and technologist František Křižík, inventor, electrical engineer and entrepreneur Ivo Lukačovič, entrepreneur, founder and chairman of Seznam.cz Vladimir Prelog, chemist and Nobel Prize winner Richard Rychtarik, set designer Marie Schneiderová-Zubaníková first female Czech civil engineering graduate (in 1923) Emil Votoček, chemist Emil Weyr, mathematician Josef Zítek, architect and engineer Gallery Notes and references External links CTU official website in English CTU official website in Czech www.StudyAtCTU.com Official website for international students International Student Club Organization for international students IAESTE Organization for international students UCEEB University Centre for Energy Efficient Buildings (UCEEB) Top Industrial Managers for Europe (TIME) network for student mobility. Universities in the Czech Republic Technical universities and colleges Educational institutions in Prague 1707 establishments in the Holy Roman Empire 1707 establishments in the Habsburg Monarchy 18th-century establishments in Bohemia Educational institutions established in 1707 Engineering universities and colleges in the Czech Republic
25114114
https://en.wikipedia.org/wiki/Input%20port
Input port
Input port may refer to: Input device, a generic term for any device that provides input to a system Parallel port, a computer hardware interface Serial port, a computer hardware interface Universal Serial Bus, a computer hardware interface IEEE 1394 interface, a computer hardware interface, known commonly as Firewire PS/2 connector, a common computer interface for mice and keyboards See also Output device Peripheral device Computer hardware Computer keyboard Mouse (computer)
1830194
https://en.wikipedia.org/wiki/PacketCable
PacketCable
PacketCable is a technology specification defined by the industry consortium CableLabs for using Internet Protocol (IP) networks to deliver multimedia services, such as IP telephony, conferencing, and interactive gaming, over a cable television infrastructure. PacketCable technology is built on DOCSIS, with extensions that enable cable operators to deliver data and voice traffic efficiently using a single high-speed, quality-of-service (QoS)-enabled broadband (cable) architecture. The PacketCable effort dates back to 1997, when cable operators identified the need for a real-time multimedia architecture to support the delivery of advanced multimedia services over the DOCSIS architecture. The original PacketCable specifications were based on the physical network characteristics of operators in the U.S. For the European market, Cable Europe Labs maintains a separate but equivalent effort, EuroPacketCable, based on European network implementations. Technical overview PacketCable interconnects three network types: the Hybrid Fibre Coaxial (HFC) access network, the public switched telephone network (PSTN), and managed TCP/IP networks. Protocols DOCSIS (Data Over Cable Service Interface Specification), the standard for data over cable, covering mostly the RF layer; the Real-time Transport Protocol (RTP) and RTP Control Protocol (RTCP), required for media transfer; the PSTN Gateway Call Signaling Protocol Specification (TGCP), an MGCP extension for media gateways; the Network-Based Call Signaling Protocol Specification (NCS), an MGCP extension for analog residential media gateways - the NCS specification, derived from the IETF MGCP RFC 2705, details VoIP signalling, and the IETF version is essentially a subset of the NCS version, as the PacketCable group has defined more messages and features than the IETF; and the Common Open Policy Service (COPS), used for quality of service. PacketCable voice coders The required coders are ITU G.711 (both μ-law and a-law versions) for versions 1.0 and 1.5, and iLBC and BV16 for version 1.5. In addition, the specifications recommend ITU G.728 and ITU G.729 Annex E. PacketCable 1.0 PacketCable 1.0 comprises eleven specifications and six technical reports which define call signaling, quality of service (QoS), codec usage, client provisioning, billing event message collection, public switched telephone network (PSTN) interconnection, and security interfaces for implementing a single-zone PacketCable solution for residential Internet Protocol (IP) voice services. PacketCable 1.5 PacketCable 1.5 contains additional capabilities over PacketCable 1.0 and superseded previous versions (1.1, 1.2, and 1.3). The standard covers 21 specifications and one technical report which together define call signaling, quality of service (QoS), coders, client provisioning, billing event message collection, PSTN interconnection, and security interfaces for implementing a single-zone or multi-zone PacketCable solution for residential Internet Protocol (IP) voice services. PacketCable 2.0 Version 2.0 introduces the Release 7 IP Multimedia Subsystem (IMS) into the core of the architecture. PacketCable uses a simplified IMS in some areas and enhances it in cable-specific areas. PacketCable defines delta specifications relative to the most important 3GPP IMS specifications.
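To make the media path concrete, the sketch below shows the fixed 12-byte RTP header defined in RFC 3550, which carries the voice payloads (for example G.711 frames) in a PacketCable call. This is a generic, illustrative C definition rather than code taken from any PacketCable specification; the struct and field names are the author's own.

/* Illustrative only: the fixed 12-byte RTP header (RFC 3550) that carries the
 * voice payloads (e.g. G.711 frames) in a PacketCable call.  This is a generic
 * RTP sketch, not code from any PacketCable specification. */
#include <stdint.h>

typedef struct {
    uint8_t  vpxcc;      /* version (2 bits), padding, extension, CSRC count */
    uint8_t  mpt;        /* marker bit and 7-bit payload type (0 = PCMU, i.e. G.711 mu-law) */
    uint16_t seq;        /* sequence number, network byte order */
    uint32_t timestamp;  /* sampling-clock timestamp, network byte order */
    uint32_t ssrc;       /* synchronization source identifier */
} rtp_header_t;          /* an optional CSRC list and the payload follow on the wire */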
Deployment VoIP services based on the PacketCable architecture are being widely deployed by operators: B.net (Croatia) Cable One (System wide) Cabletica (Costa Rica) Cablevision – Optimum Voice (System wide) Charter (System wide) Claro TV (Guatemala, El Salvador, Nicaragua, Honduras) Cogeco - Cogeco Home Phone (Canada) Comcast - Comcast Digital Voice (System-wide) Cox – Cox Digital Telephone (System-wide) GCI (Alaska) Izzi Telecom (México) KabloNet (Turkey) Liberty Global (Puerto Rico) Net Serviços de Comunicação - NET Serviços de Comunicação (Brasil) NetUno (Venezuela) NOS (Portugal) ONO (Spain) Optus - SingTel Optus Pty Ltd (Australia) Rogers Telecom (Canada wide (Major cities and towns serviceable with rogers high-speed internet are eligible, still expanding, St John's, NL to Vancouver, BC, serviceable as of July 2007)) Shaw Communications (Canada: Calgary, Edmonton, Winnipeg and Victoria) Shentel (United States: Virginia, West Virginia, Maryland.) Telecentro Argentina (Argentina) TIGO (Honduras, El Salvador, Costa Rica) The United Group ("Serbia Broadband" - "Telemach Slovenia" - "Telemach Bosnia and Herzegovina) Unitymedia (UPC Germany) UPC Broadband (Across Europe) Vidéotron (Canada: Quebec) Virgin Media Ireland (Ireland) Ziggo (The Netherlands) References PacketCable 1.5 Specifications Audio/Video Codecs - PKT-SP-CODEC1.5-I01-050128 PacketCable 1.5 Specifications Network-Based Call Signaling Protocol - PKT-SP-NCS1.5-I01-050128 PSTN Gateway Call Signaling Protocol Specification - PKT-SP-TGCP1.5-I01-050128 Further reading Riddel, Jeff. (2007). PacketCable Implementation. Cisco Press. 1061 pages. . External links Digital cable Voice over IP
9129709
https://en.wikipedia.org/wiki/CommunityViz
CommunityViz
CommunityViz is the name of a group of extensions to ArcGIS Geographic Information System software. CommunityViz is an analysis tool used for, among other applications, urban planning, land use planning, geodesign, transportation planning and resource management applications. It also provides options for 3D visualization in the Scenario 3D and Scenario 360 plugins. CommunityViz also allows users to export and view their work in ArcGIS Online, Google Earth and other KML/KMZ viewers such as ArcGIS Explorer. The software was originally produced by the Orton Family Foundation and in 2005 was handed off to Placeways LLC. In 2017, the software was purchased by City Explained, Inc. where its development continues. History CommunityViz began as an idea in the late 1990s when Noel Fritzinger and his friend Lyman Orton, proprietor of the Vermont Country Store and long-term member of his town’s local planning board, first envisioned a software tool that would make the planning process more accessible to ordinary citizens. After forming the Orton Family Foundation, they recruited a consultant team to develop the idea. The initial team included The Environmental Simulation Center, Fore Site Consulting, PricewaterhouseCoopers, Multigen-Paradigm, and Green Mountain GeoGraphics. The first commercial version was released in late 2001 and included three components: Scenario Constructor for interactive analysis, SiteBuilder 3D (OEMed from MultiGen-Paradigm) for 3D visualization, and Policy Simulator for agent-based modeling of future outcomes resulting from present-day policy decisions. By 2003 there was enough experience and research to guide a complete redevelopment that resulted in CommunityViz Version 2. The new version was built for the new architecture of ArcGIS 8.x. It changed Scenario Constructor to Scenario 360 and gave it a new, intuitive interface. SiteBuilder 3D was updated, but Policy Simulator was dropped from the package. The new design quickly gained popularity. In 2005, CommunityViz development and operations were spun off from the Foundation into a new company called Placeways LLC. With continuing guidance and financial support from the Foundation, Placeways continued with new research and development, introducing Version 3 and its “decision tool” architecture in the fall of 2005, Version 4 with new 3D technology in 2009, and Version 5 with new web publishing technology in 2015, with numerous interim releases and feature upgrades on a continuing basis. In 2017, development of the software was handed off to City Explained, Inc. The current release of CommunityViz is Version 5.1. The software is sold using different price tiers for commercial, government/non-profit and educational users. Version 5.1 is compatible with ArcGIS 10.3, 10.4, 10.5 and 10.6. Analysis Capabilities In CommunityViz Scenario 360, users can create their own analyses across multiple scenarios using custom formulas, indicators, and charts which all update dynamically in real time as the user makes changes on the map or to the calculations. Because CommunityViz is an extension to ArcGIS, users can bring in GIS data and use CommunityViz while maintaining access to extensive ArcGIS Desktop and ArcGIS Online functionality. Data from other external models can be brought in as well. The CommunityViz dynamic analysis engine provides a versatile modeling framework. 
It includes over 90 analysis functions ranging from simple arithmetic to complex geospatial calculations, and the functions themselves can be assembled into compound formulas that reference one another to create a complete model. Model calculations typically run in real time, so that as a user experiments with edits to a map, changes to modeling inputs (called assumptions), scenarios, or alternate data inputs, results appear immediately. Modeling results are displayed in a variety of visual forms including color-changing maps, dynamically changing charts, tables and reports, and potentially 3D visualizations. Tools CommunityViz contains additional analysis features including several for creating indicators, such as the 360 Indicators Wizard which can produce up to 101 indicators, the Custom Impacts Wizard to aid in designing your own indicators, and the ability to freely design as many of your own formulas and indicators as you desire. Other tools include the Land Use Designer which allows you to paint desired land uses and analyze the effects, the Build-Out Wizard which calculates the development capacity for your land, a Suitability Wizard, and TimeScope which allows you to visualize change through time. Allocator 5 and Allocator Wizard (Allocator 4) are decision tools that helps you model patterns of future growth. It distributes a user-specified quantity of new buildings across the map according to the capacity and desirability of each land use feature. Allocation, sometimes known by the acronym LUAM (Land Use Allocation Model), is a key tool in many long-range transportation and land use planning processes. Linking the Build-Out, Suitability and Allocator 5 tools creates a powerful and flexible urban growth modeling suite, used by cities, counties and regional governments. Users CommunityViz is used primarily for land-use planning and natural resource management, but because it allows its users to create custom analyses, it can be applied to almost any geographic decision-making process. The largest user groups comprise government planners (local, regional, and national), private planning and design firms, and universities. Most users are already somewhat familiar with GIS or have access to a GIS department. CommunityViz has been used extensively by academics and researchers. A selected list of papers and publications can be downloaded from CommunityViz Selected Publications on the City Explained website. 3D Tools Current 3D capabilities include: Automatically export to Google Earth and create scenes using SketchUp models. Create highly realistic, interactive 3D scenes using Scenario 3D with 3D models in common CAD and SketchUp formats (.KMZ, .3DS, and COLLADA interchange (.DAE)). Works as an extension to ArcGIS ArcScene. Awards At the 2011 Esri Business Partners Conference, Placeways received the Extension to ArcGIS Desktop award for CommunityViz. Book In 2011, The Planners Guide to CommunityViz: The Essential Tool for a New Generation of Planning, by Doug Walker and Tom Daniels, was published and made available for purchase through the APA Planners Press. The book, through visuals, examples, and case studies, demonstrates CommunityViz applications across many disciplines and the many ways it can be applied including for analysis, visualization, and public participation. Common misspellings "CommunityVis," "Community Viz," "communityviz," "Communitybiz" and "Communityviz" are some of the most common ways of misspelling the name. 
The correct spelling contains no spaces, a capital "V," and a "z." The abbreviations "CViz" and "CV" are sometimes used. See also ArcGIS - ESRI's software. External links CommunityViz web site City Explained web site Orton Family Foundation web site GIS software ArcGIS Extension
42368650
https://en.wikipedia.org/wiki/Pleco%20Software
Pleco Software
Pleco Software (pronounced Pl-ee-ko) provides an English and Chinese dictionary application for iOS and Android devices. The Pleco Software company was founded in May 2000 by Michael Love. Features Pleco supports several input methods, including Pinyin, English words, handwriting recognition and optical character recognition. It has many sets of dictionaries (including the Oxford, Longman, FLTRP, and Ricci), audio recordings from two different native speakers, flashcard functionality, and a document reader that can look up words in a document. Pleco is a free application; in-app purchases add further functions and large dictionaries (including English, French, German, Mandarin, Cantonese, classical Chinese, and a traditional Chinese medicine reference). History Pleco was started by Mike Love in May 2000 when he was 18 years old. The application was first launched on the Palm Pilot in 2001. In 2013, Pleco 3.0 was released. In November 2017, Endymion Wilkinson's Chinese History: A New Manual was added. Reception As of July 2021, Pleco Chinese Dictionary had 4.7 stars out of 5 on the iOS App Store, based on 1,300 ratings, and 4.6 stars out of 5 on Google Play, based on over 40,000 ratings. In a 2013 opinion article for the New York Times, the British chef Fuchsia Dunlop wrote, "Pleco has absolutely changed my life", and "it's completely brilliant for traveling." In 2018, New York Times columnist Lucas Peterson said he found Pleco to be a "useful translation app". See also List of flashcard software Chinese language References Language learning software Chinese-language education Android (operating system) software iOS software
44610768
https://en.wikipedia.org/wiki/National%20Cybersecurity%20Center%20of%20Excellence
National Cybersecurity Center of Excellence
The National Cybersecurity Center of Excellence (NCCoE) is a US government organization that builds and publicly shares solutions to cybersecurity problems faced by U.S. businesses. The center, located in Rockville, Maryland, was established in 2012 through a partnership with the National Institute of Standards and Technology (NIST), the state of Maryland, and Montgomery County. The center is partnered with nearly 20 market-leading IT companies, which contribute hardware, software and expertise. The NCCoE asks industry sector members about their cybersecurity problems, then selects issues that affect an entire sector or reaches across sectors. The center forms a team of people from cybersecurity technology companies, other federal agencies and academia to address each problem. The teams work in the center's labs to build example solutions using commercially available, off-the-shelf products. For each example solution, the NCCoE publishes a practice guide, a collection of the materials and information needed to deploy the example solution, and makes it available to the general public. The center's goal is to “accelerate the deployment and use of secure technologies” that can help businesses improve their defenses against cyber attack. History NIST The NCCoE is part of NIST, a non-regulatory federal agency within the U.S. Department of Commerce that develops measurement standards and conducts research in measurement science. According to the NIST website, the Federal Information Security Management Act of 2002 (FISMA) “reaffirmed NIST’s role of developing information security standards (Federal Information Processing Standards) and guidelines for non-national security federal information systems and assigned NIST some specific responsibilities, including the development of: Standards to be used by Federal agencies to categorize information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; Guidelines recommending the types of information and information systems to be included in each category; and Minimum information security requirements (management, operational and technical security controls) for information and information systems in each category.” Many private sector organizations voluntarily adopt these standards, guidelines and security requirements. As a NIST center, the NCCoE is an applied space for the demonstration of standards-based approaches to cybersecurity. Executive Order 13636, “Improving Critical Infrastructure Cybersecurity” President Barack Obama issued Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” in February 2013 tasking NIST to create a cybersecurity framework that helps organizations mitigate risks to the nation's essential systems such as power generation and distribution, the financial services sector, and transportation. NIST released the Framework for Improving Critical Infrastructure Cybersecurity in February 2014, which “consists of standards, guidelines and practices to promote the protection of critical infrastructure.” The NCCoE demonstrates how the framework can be implemented in real-world environments. When an industrial sector approaches the center with a cybersecurity problem, the center maps the solution's hoped-for capabilities to the Cybersecurity Framework, as well as to other standards, controls and best practices. Media coverage The NCCoE's launch was formally announced on February 21, 2012 by U.S. 
Senator Barbara Mikulski (D-Md.), Maryland Lt. Governor Anthony Brown, Montgomery County Executive Isiah Leggett and Under Secretary of Commerce for Standards and Technology and NIST Director Patrick D. Gallagher. NIST issued a press release the same day stating that the center was created to “work to strengthen U.S. economic growth by supporting automated and trustworthy e-government and e-commerce.” The NCCoE will “host multi-institutional, collaborative efforts that build on expertise from industry and government,” according to the press release. Federally funded research and development center In September 2014, the National Institute of Standards and Technology (NIST) awarded a contract to the MITRE Corporation to operate the Department of Commerce’s first Federally Funded Research and Development Center (FFRDC), the National Cybersecurity FFRDC, which supports the NCCoE. According to the press release on the NIST website, “this FFRDC is the first solely dedicated to enhancing the security of the nation’s information systems.” The press release states that the FFRDC will help the NCCoE “expand and accelerate its public-private collaborations” and focus on “boosting the security of U.S. information systems.” “FFRDCs operate in the public interest and are required to be free from organizational conflicts of interest as well as bias toward any particular company, technology or product—key attributes given the NCCoE’s collaborative nature…The first three task orders under the contract will allow the NCCoE to expand its efforts in developing use cases and building blocks and provide operations management and facilities planning.” Collaborators Founding Partners The partners that founded the NCCoE are the National Institute of Standards and Technology (NIST), the state of Maryland and Montgomery County. This partnership was instrumental in establishing the center as a nationally recognized cybersecurity resource that has the potential to increase the number of local cybersecurity companies, local workforce development and provide local companies with exposure to NIST's expertise. National Cybersecurity Excellence Partners National Cybersecurity Excellence Partners (NCEPs) offer technology companies the opportunity to develop long-term relationships with the NCCoE and NIST. As core partners, NCEPs can provide hardware, software, or personnel who collaborate with the NCCoE on current projects. Industry representatives Sector representatives approach the NCCoE on behalf of their industry to share business problems that can be solved through a cybersecurity solution. These representatives can also provide insight during the project build process and help validate the center's approach to developing an example solution. Experts from government and academia Members of government agencies and academic institutions can discuss their cybersecurity challenges with the NCCoE, provide insight and feedback on existing center projects, or collaborate with technology companies in the center's labs. Users Other users, such as businesses working to improve their cybersecurity, have the opportunity to test the NCCoE's example solutions, evaluate their effectiveness, and provide feedback. See also National Cyber Security Centre References Computer security organizations Government agencies of the United States
22497
https://en.wikipedia.org/wiki/OpenGL
OpenGL
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering. Silicon Graphics, Inc. (SGI) began developing OpenGL in 1991 and released it on June 30, 1992; applications use it extensively in the fields of computer-aided design (CAD), virtual reality, scientific visualization, information visualization, flight simulation, and video games. Since 2006, OpenGL has been managed by the non-profit technology consortium Khronos Group. Design The OpenGL specification describes an abstract API for drawing 2D and 3D graphics. Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware. The API is defined as a set of functions which may be called by the client program, alongside a set of named integer constants (for example, the constant GL_TEXTURE_2D, which corresponds to the decimal number 3553). Although the function definitions are superficially similar to those of the programming language C, they are language-independent. As such, OpenGL has many language bindings, some of the most noteworthy being the JavaScript binding WebGL (API, based on OpenGL ES 2.0, for 3D rendering from within a web browser); the C bindings WGL, GLX and CGL; the C binding provided by iOS; and the Java and C bindings provided by Android. In addition to being language-independent, OpenGL is also cross-platform. The specification says nothing on the subject of obtaining and managing an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason, OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing. Development OpenGL is an actively developed API. New versions of the OpenGL specifications are regularly released by the Khronos Group, each of which extends the API to support various new features. The details of each version are decided by consensus between the Group's members, including graphics card manufacturers, operating system designers, and general technology companies such as Mozilla and Google. In addition to the features required by the core API, graphics processing unit (GPU) vendors may provide additional functionality in the form of extensions. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. Vendors can use extensions to expose custom APIs without needing support from other vendors or the Khronos Group as a whole, which greatly increases the flexibility of OpenGL. All extensions are collected in, and defined by, the OpenGL Registry. Each extension is associated with a short identifier, based on the name of the company which developed it. For example, Nvidia's identifier is NV, which is part of the extension name GL_NV_half_float, the constant GL_HALF_FLOAT_NV, and the function glVertex2hNV(). If multiple vendors agree to implement the same functionality using the same API, a shared extension may be released, using the identifier EXT. In such cases, it could also happen that the Khronos Group's Architecture Review Board gives the extension their explicit approval, in which case the identifier ARB is used. 
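As an illustration of how these named constants and vendor-suffixed extensions are used in practice, the hedged C sketch below checks at run time whether the driver advertises a particular extension (GL_NV_half_float from the example above) before its entry points are called. It assumes an OpenGL 3.0 or newer context is already current and that the glGetStringi entry point has been loaded; the helper name has_extension is the author's own.

/* Minimal sketch (C): checking whether the driver advertises a given
 * extension, e.g. GL_NV_half_float mentioned above.  Assumes an OpenGL 3.0+
 * context is already current and that core entry points have been loaded
 * (for example via an extension-loading library). */
#include <string.h>
#include <GL/gl.h>   /* header name varies by platform; shown for illustration */

int has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);          /* core since OpenGL 3.0 */
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage: only call the vendor-suffixed entry points if the extension is present,
 * e.g.  if (has_extension("GL_NV_half_float")) { ... glVertex2hNV(...); ... }   */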
The features introduced by each new version of OpenGL are typically formed from the combined features of several widely implemented extensions, especially extensions of type ARB or EXT. Documentation The OpenGL Architecture Review Board released a series of manuals along with the specification which have been updated to track changes in the API. These are commonly referred to by the colors of their covers: The Red Book OpenGL Programming Guide, 9th Edition. The Official Guide to Learning OpenGL, Version 4.5 with SPIR-V The Orange Book OpenGL Shading Language, 3rd edition. A tutorial and reference book for GLSL. Historic books (pre-OpenGL 2.0): The Green Book OpenGL Programming for the X Window System. A book about X11 interfacing and OpenGL Utility Toolkit (GLUT). The Blue Book OpenGL Reference manual, 4th edition. Essentially a hard-copy printout of the Unix manual (man) pages for OpenGL. Includes a poster-sized fold-out diagram showing the structure of an idealised OpenGL implementation. The Alpha Book (white cover) OpenGL Programming for Windows 95 and Windows NT. A book about interfacing OpenGL with Microsoft Windows. OpenGL's documentation is also accessible via its official webpage. Associated libraries The earliest versions of OpenGL were released with a companion library called the OpenGL Utility Library (GLU). It provided simple, useful features which were unlikely to be supported in contemporary hardware, such as tessellating, and generating mipmaps and primitive shapes. The GLU specification was last updated in 1998 and depends on OpenGL features which are now deprecated. Context and window toolkits Given that creating an OpenGL context is quite a complex process, and given that it varies between operating systems, automatic OpenGL context creation has become a common feature of several game-development and user-interface libraries, including SDL, Allegro, SFML, FLTK, and Qt. A few libraries have been designed solely to produce an OpenGL-capable window. The first such library was OpenGL Utility Toolkit (GLUT), later superseded by freeglut. GLFW is a newer alternative. These toolkits are designed to create and manage OpenGL windows, and manage input, but little beyond that. GLFW – A cross-platform windowing and keyboard-mouse-joystick handler; is more game-oriented freeglut – A cross-platform windowing and keyboard-mouse handler; its API is a superset of the GLUT API, and it is more stable and up to date than GLUT OpenGL Utility Toolkit (GLUT) – An old windowing handler, no longer maintained. Several "multimedia libraries" can create OpenGL windows, in addition to input, sound and other tasks useful for game-like applications Allegro 5 – A cross-platform multimedia library with a C API focused on game development Simple DirectMedia Layer (SDL) – A cross-platform multimedia library with a C API SFML – A cross-platform multimedia library with a C++ API and multiple other bindings to languages such as C#, Java, Haskell, and Go Widget toolkits FLTK – A small cross-platform C++ widget library Qt – A cross-platform C++ widget toolkit. It provides many OpenGL helper objects, which even abstract away the difference between desktop GL and OpenGL ES wxWidgets – A cross-platform C++ widget toolkit Extension loading libraries Given the high workload involved in identifying and loading OpenGL extensions, a few libraries have been designed which load all available extensions and functions automatically. 
Examples include the OpenGL Easy Extension library (GLEE), the OpenGL Extension Wrangler Library (GLEW) and glbinding. Extensions are also loaded automatically by most language bindings, such as JOGL and PyOpenGL. Implementations Mesa 3D is an open-source implementation of OpenGL. It can do pure software rendering, and it may also use hardware acceleration on BSD, Linux, and other platforms by taking advantage of the Direct Rendering Infrastructure. As of version 20.0, it implements version 4.6 of the OpenGL standard. History In the 1980s, developing software that could function with a wide range of graphics hardware was a real challenge. Software developers wrote custom interfaces and drivers for each piece of hardware. This was expensive and resulted in duplication of effort. By the early 1990s, Silicon Graphics (SGI) was a leader in 3D graphics for workstations. Their IRIS GL API became the industry standard, used more widely than the open standards-based PHIGS. This was because IRIS GL was considered easier to use, and because it supported immediate mode rendering. By contrast, PHIGS was considered difficult to use and outdated in functionality. SGI's competitors (including Sun Microsystems, Hewlett-Packard and IBM) were also able to bring to market 3D hardware supported by extensions made to the PHIGS standard, which pressured SGI to release a version of IRIS GL as an open public standard, called OpenGL. However, SGI had many customers for whom the change from IRIS GL to OpenGL would demand significant investment. Moreover, IRIS GL had API functions that were irrelevant to 3D graphics. For example, it included a windowing, keyboard and mouse API, in part because it was developed before the X Window System and Sun's NeWS. In addition, the IRIS GL libraries could not be opened up because of licensing and patent issues. These factors required SGI to continue to support the advanced and proprietary Iris Inventor and Iris Performer programming APIs while market support for OpenGL matured. One of the restrictions of IRIS GL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature natively, then the application could not use it. OpenGL overcame this problem by providing software implementations of features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems. OpenGL standardized access to hardware, pushed the development responsibility of hardware interface programs (device drivers) to hardware manufacturers, and delegated windowing functions to the underlying operating system. With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher-level platform for 3D-software development. In 1992, SGI led the creation of the OpenGL Architecture Review Board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification in the future. In 1994, SGI explored the idea of releasing something called "OpenGL++", which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product. Microsoft released Direct3D in 1995, which eventually became the main competitor of OpenGL. Over 50 game developers signed an open letter to Microsoft, released on June 12, 1997, calling on the company to actively support OpenGL.
On December 17, 1997, Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project. It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and a general lack of industry support, it was abandoned in 1999. In July 2006, the OpenGL Architecture Review Board voted to transfer control of the OpenGL API standard to the Khronos Group. Version history The first version of OpenGL, version 1.0, was released on June 30, 1992, by Mark Segal and Kurt Akeley. Since then, OpenGL has occasionally been extended by releasing a new version of the specification. Such releases define a baseline set of features which all conforming graphics cards must support, and against which new extensions can more easily be written. Each new version of OpenGL tends to incorporate several extensions which have widespread support among graphics-card vendors, although the details of those extensions may be changed. OpenGL 2.0 Release date: September 7, 2004 OpenGL 2.0 was originally conceived by 3Dlabs to address concerns that OpenGL was stagnating and lacked a strong direction. 3Dlabs proposed a number of major additions to the standard. Most of these were, at the time, rejected by the ARB or otherwise never came to fruition in the form that 3Dlabs proposed. However, their proposal for a C-style shading language was eventually completed, resulting in the current formulation of the OpenGL Shading Language (GLSL or GLslang). Like the assembly-like shading languages it was replacing, it allowed replacing the fixed-function vertex and fragment pipe with shaders, though this time written in a C-like high-level language. The design of GLSL was notable for making relatively few concessions to the limits of the hardware then available. This harked back to the earlier tradition of OpenGL setting an ambitious, forward-looking target for 3D accelerators rather than merely tracking the state of currently available hardware. The final OpenGL 2.0 specification includes support for GLSL. Longs Peak and OpenGL 3.0 Before the release of OpenGL 3.0, the new revision had the codename Longs Peak. At the time of its original announcement, Longs Peak was presented as the first major API revision in OpenGL's lifetime. It consisted of an overhaul to the way that OpenGL works, calling for fundamental changes to the API. The draft introduced a change to object management. The GL 2.1 object model was built upon the state-based design of OpenGL. That is, to modify an object or to use it, one needs to bind the object to the state system, then make modifications to the state or perform function calls that use the bound object. Because of OpenGL's use of a state system, objects must be mutable. That is, the basic structure of an object can change at any time, even if the rendering pipeline is asynchronously using that object. A texture object can be redefined from 2D to 3D. This requires any OpenGL implementations to add a degree of complexity to internal object management. Under the Longs Peak API, object creation would become atomic, using templates to define the properties of an object which would be created with one function call. The object could then be used immediately across multiple threads. 
Objects would also be immutable; however, they could have their contents changed and updated. For example, a texture could change its image, but its size and format could not be changed. To support backwards compatibility, the old state-based API would still be available, but no new functionality would be exposed via the old API in later versions of OpenGL. This would have allowed legacy code bases, such as the majority of CAD products, to continue to run while other software could be written against or ported to the new API. Longs Peak was initially due to be finalized in September 2007 under the name OpenGL 3.0, but the Khronos Group announced on October 30 that it had run into several issues that it wished to address before releasing the specification. As a result, the spec was delayed, and the Khronos Group went into a media blackout until the release of the final OpenGL 3.0 spec. The final specification proved far less revolutionary than the Longs Peak proposal. Instead of removing all immediate mode and fixed functionality (non-shader mode), the spec included them as deprecated features. The proposed object model was not included, and no plans have been announced to include it in any future revisions. As a result, the API remained largely the same, with a few existing extensions being promoted to core functionality. Among some developer groups this decision caused something of an uproar, with many developers professing that they would switch to DirectX in protest. Most complaints revolved around the lack of communication by Khronos to the development community and multiple features being discarded that were viewed favorably by many. Other frustrations included the requirement of DirectX 10 level hardware to use OpenGL 3.0 and the absence of geometry shaders and instanced rendering as core features. Other sources reported that the community reaction was not quite as severe as originally presented, with many vendors showing support for the update. OpenGL 3.0 Release date: August 11, 2008 OpenGL 3.0 introduced a deprecation mechanism to simplify future revisions of the API. Certain features, marked as deprecated, could be completely disabled by requesting a forward-compatible context from the windowing system. OpenGL 3.0 features could still be accessed alongside these deprecated features, however, by requesting a full context. Deprecated features include: all fixed-function vertex and fragment processing; immediate-mode rendering using glBegin and glEnd; display lists; indexed-color rendering targets; and OpenGL Shading Language versions 1.10 and 1.20. OpenGL 3.1 Release date: March 24, 2009 OpenGL 3.1 fully removed all of the features which were deprecated in version 3.0, with the exception of wide lines. From this version onwards, it is not possible to access new features using a full context, or to access deprecated features using a forward-compatible context. An exception to the former rule is made if the implementation supports the ARB_compatibility extension, but this is not guaranteed. Hardware: Mesa supports ARM Panfrost GPUs as of version 21.0. OpenGL 3.2 Release date: August 3, 2009 OpenGL 3.2 further built on the deprecation mechanisms introduced by OpenGL 3.0, by dividing the specification into a core profile and a compatibility profile. Compatibility contexts include the previously removed fixed-function APIs, equivalent to the ARB_compatibility extension released alongside OpenGL 3.1, while core contexts do not. OpenGL 3.2 also included an upgrade to GLSL version 1.50.
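To make the deprecation and profile mechanisms concrete, the following is a minimal, hypothetical C sketch that requests a 3.2 core-profile, forward-compatible context. It assumes GLFW 3.2 or later is used for window and context creation, which is only one of several ways to obtain such a context; the driver may refuse the requested version or profile.

/* Hypothetical sketch: requesting an OpenGL 3.2 core-profile,
   forward-compatible context with GLFW 3.2+. */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);   /* no compatibility-profile features */
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);           /* drop deprecated features entirely */

    GLFWwindow *window = glfwCreateWindow(640, 480, "Core profile", NULL, NULL);
    if (!window) {
        fprintf(stderr, "A 3.2 core-profile context is not available\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    /* ... load function pointers and render ... */
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}

In such a context, fixed-function calls such as glBegin would fail, whereas a compatibility-profile context would still accept them.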
OpenGL 3.3 Release date: March 11, 2010 Mesa supports it with the software drivers SWR and softpipe, and on older Nvidia cards through the NV50 driver. OpenGL 4.0 Release date: March 11, 2010 OpenGL 4.0 was released alongside version 3.3. It was designed for hardware able to support Direct3D 11. As in OpenGL 3.0, this version of OpenGL contains a high number of fairly inconsequential extensions, designed to thoroughly expose the abilities of Direct3D 11-class hardware. Only the most influential extensions are listed below. Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer. OpenGL 4.1 Release date: July 26, 2010 Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer. The minimum "maximum texture size" is 16,384 × 16,384 for GPUs implementing this specification. OpenGL 4.2 Release date: August 8, 2011 New features include: support for shaders with atomic counters and load-store-atomic read-modify-write operations to one level of a texture; drawing multiple instances of data captured from GPU vertex processing (including tessellation), to enable complex objects to be efficiently repositioned and replicated; and support for modifying an arbitrary subset of a compressed texture, without having to re-download the whole texture to the GPU, for significant performance improvements. Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), and Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge and newer) OpenGL 4.3 Release date: August 6, 2012 New features include: compute shaders leveraging GPU parallelism within the context of the graphics pipeline; shader storage buffer objects, allowing shaders to read and write buffer objects like image load/store from 4.2, but through the language rather than function calls; image format parameter queries; ETC2/EAC texture compression as a standard feature; full compatibility with OpenGL ES 3.0 APIs; debug abilities to receive debugging messages during application development; texture views to interpret textures in different ways without data replication; and increased memory security and multi-application robustness. Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Haswell processors and newer (Linux Mesa: Ivy Bridge without stencil texturing, Haswell and newer), Nvidia GeForce 400 series and newer. VirGL emulation for virtual machines supports OpenGL 4.3+ as of Mesa 20. OpenGL 4.4 Release date: July 22, 2013 New features include: enforced buffer object usage controls; asynchronous queries into buffer objects; expression of more layout controls of interface variables in shaders; and efficient binding of multiple objects simultaneously. Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1. OpenGL 4.5 Release date: August 11, 2014 New features include: Direct State Access (DSA) – object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility;
Flush Control – applications can control flushing of pending commands before context switching, enabling high-performance multithreaded applications; Robustness – providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset from affecting any other running applications; and OpenGL ES 3.1 API and shader compatibility – to enable the easy development and execution of the latest OpenGL ES applications on desktop systems. Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1, and Tegra X1. OpenGL 4.6 Release date: July 31, 2017 New features include: more efficient, GPU-side geometry processing; more efficient shader execution; more information through statistics, overflow query and counters; higher performance through contexts without error handling; clamping of the polygon offset function, which solves a shadow rendering problem; SPIR-V shaders; and improved anisotropic filtering. Hardware support: AMD Radeon HD 7000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel Haswell and newer, Nvidia GeForce 400 series and newer. Driver support: Mesa 19.2 on Linux supports OpenGL 4.6 for Intel Broadwell and newer. Mesa 20.0 supports AMD Radeon GPUs, while support for Nvidia Kepler+ is in progress. The Zink emulation driver supports it as of Mesa 21.1, and the LLVMpipe software driver as of Mesa 21.0. AMD Adrenalin 18.4.1 graphics driver on Windows 7 SP1 and Windows 10 version 1803 (April 2018 update) for AMD Radeon HD 7700+, HD 8500+ and newer; released April 2018. Intel 26.20.100.6861 graphics driver on Windows 10; released May 2019. NVIDIA GeForce 397.31 graphics driver on Windows 7, 8, and 10 (x86-64 only, no 32-bit support); released April 2018. Alternative implementations Apple deprecated OpenGL in iOS 12 and macOS 10.14 Mojave in favor of Metal, but it is still available as of macOS 11 Big Sur (including Apple silicon devices). The latest OpenGL version supported is 4.1, from 2011. A proprietary library called MoltenGL, from Molten (the authors of MoltenVK), can translate OpenGL calls to Metal. There are several projects which attempt to implement OpenGL on top of Vulkan. The Vulkan backend for Google's ANGLE achieved OpenGL ES 3.1 conformance in July 2020. The Mesa3D project also includes such a driver, called Zink. The future of OpenGL In June 2018, Apple deprecated OpenGL APIs on all of their platforms (iOS, macOS and tvOS), strongly encouraging developers to use their proprietary Metal API, which was introduced in 2014. Stadia and the Google operating system Fuchsia only support Vulkan. In 2016, id Software released an update for the id Tech 6 game engine that added support for Vulkan, while retaining support for OpenGL. id Tech 7 eliminated support for OpenGL. On September 17, 2021, Valve announced that it would be removing OpenGL from Dota 2 in a future update. Atypical Games, with support from Samsung, updated their game engine to use Vulkan, rather than OpenGL, across all non-Apple platforms. Compared with Vulkan, OpenGL does not support ray tracing or APIs for video decoding on the GPU, and mesh shaders are supported only through an Nvidia extension. Vulkan Vulkan, formerly named the "Next Generation OpenGL Initiative" (glNext), is a ground-up redesign effort to unify OpenGL and OpenGL ES into one common API that will not be backwards compatible with existing OpenGL versions.
The initial version of the Vulkan API was released on February 16, 2016. See also: ARB assembly language – OpenGL's legacy low-level shading language; Comparison of OpenGL and Direct3D; Direct3D – main competitor of OpenGL; Glide API – a graphics API once used on 3dfx Voodoo cards; List of OpenGL applications; Metal (API) – a graphics API for iOS, macOS, tvOS, watchOS; OpenAL – cross-platform audio library, designed to resemble OpenGL; OpenGL ES – OpenGL for embedded systems; OpenSL ES – API for audio on embedded systems, developed by the Khronos Group; OpenVG – API for accelerated 2D graphics, developed by the Khronos Group; RenderMan Interface Specification (RISpec) – Pixar's open API for photorealistic off-line rendering; VOGL – a debugger for OpenGL; Vulkan – low-overhead, cross-platform 2D and 3D graphics API, the "next generation OpenGL initiative"; Graphics pipeline; WebGPU. References Further reading External links OpenGL Overview and OpenGL.org's Wiki with more information on OpenGL Language bindings SGI's OpenGL website Khronos Group, Inc. 1992 software 3D graphics APIs Application programming interfaces Cross-platform software Graphics libraries Graphics standards Video game development Video game development software Virtual reality
36573433
https://en.wikipedia.org/wiki/2012%20Yahoo%21%20Voices%20hack
2012 Yahoo! Voices hack
Yahoo! Voices, formerly Associated Content, was hacked in July 2012. The hack is believed to have leaked approximately half a million email addresses and passwords associated with the Yahoo! Contributor Network. The suspected hacker group, D33ds, used SQL injection to penetrate the Yahoo! Voices servers. Security experts said that the passwords were not encrypted and that the website did not use HTTPS, citing these as major reasons for the data breach. The email addresses and passwords were made available for download in a plaintext file on the hackers' website. The hacker group described the hack as a "wake-up call" for Yahoo! security experts. Joseph Bonneau, a security researcher and a former product analysis manager at Yahoo, said "Yahoo can fairly be criticized in this case for not integrating the Associated Content accounts more quickly into the general Yahoo login system, for which I can tell you that password protection is much stronger." Reaction by communities and users D33ds, the suspected hacker group, said that the hack was a "wake-up call". They said that it was not a threat to Yahoo!, Inc. The IT security firm TrustedSec.net said that the leaked data contained email addresses from Gmail, AOL, Yahoo, and other such services. Response from Yahoo Immediately after the hack, Yahoo!, in a written statement, apologized for the breach. Yahoo! did not disclose how many of the leaked passwords were still valid, because, according to the company, one to three passwords are changed on its site every minute. Yahoo! said that only 5% of its passwords were stolen during the hack. The hackers' website, d33ds.co, was not available later on Thursday, after the hack. Yahoo! said in a written statement that it takes security very seriously and was working to fix the vulnerability in its site. Yahoo! said that it was in the process of changing the passwords of the hacked accounts and notifying other companies of the hack. Controversy Yahoo!'s silence about the data breach sparked controversy. After the servers were hacked, Yahoo! did not email the affected users, although it had earlier promised to do so. There were no site-wide notifications about the hack, nor did victims receive personal messages from Yahoo! detailing how to reset their account passwords. References External links http://ycorpblog.com/2012/07/13/yahoo-0713201/ 2012 crimes Hacking in the 2010s Hacking of Yahoo! Data breaches
47646403
https://en.wikipedia.org/wiki/List%20of%20Hawaii%20Five-0%20%282010%20TV%20series%29%20characters
List of Hawaii Five-0 (2010 TV series) characters
This is a list of fictional characters in the television series Hawaii Five-0, which aired on CBS from 2010 to 2020. The article deals with the series' main, recurring, and minor characters. Main Steve McGarrett Danny "Danno" Williams Chin Ho Kelly Kono Kalakaua Officer Kono Kalakaua is a fresh HPD academy graduate who was recruited by Steve for the new task force in the pilot episode. She is the cousin of Chin Ho Kelly. Despite her slight frame, she is well-versed in martial arts and is a skilled marksman (she is usually the designated sniper when the situation requires one). Kono was formerly a professional surfer, but a serious knee injury ended her career. She decides to join the "family trade" and was days away from graduating from the HPD academy when McGarrett recruited her for the new "Governor's Task Force", as Five-0 was initially called. In the first several episodes of season 1, she works under the watchful eye of her cousin and quickly wins the trust and confidence of the other team members. She is detained and questioned by the HPD, having been accused of stealing $10 million from an HPD asset forfeiture locker. While on suspension, she helps Chin and Danny go after Wo Fat, who was responsible for framing McGarrett for the murder of the Governor. To the members of the Five-0 task force and to the general public, Kono had been stripped of her badge by the Internal Affairs Department of the HPD, but it was later revealed to be a ploy for her to go undercover to bust a string of dirty ex-cops. In Season 2, Kono began a relationship with Hiro Noshimuri's son, Adam. When Adam's brother, Michael, is released from prison, he kills someone with Kono's gun to frame her. She is cleared, but Adam has to kill Michael to protect her, and Kono goes into hiding with Adam, skipping from one place to another to evade capture by vengeful yakuza members. At the beginning of Season 5, Adam starts to talk about marriage, and in the episode "Blackout" Kono accepts his proposal. They marry at the end of the season, but in the Season 6 pilot they are tortured by Chin's brother-in-law, Gabriel, who runs away with Adam's money. The money was meant to pay off the yakuza and free Adam from his past. Without the money, the yakuza's men pursue Adam, and he is forced to kill them before discovering that Gabriel has paid off his debt with the yakuza. Upset, Adam goes to the police with Kono and agrees to an 18-month prison sentence. At the end of Season 7, Kono left Hawaii for Carson City, Nevada, where she joined a multi-agency task force combating sex trafficking. In the season 8 episode I Ka Wa Ma Mua, I Ka Wa Ma Hope (The Future is in the Past) Danny dreams of the future and Kono has a child with Adam. However, midway through season 9, Adam returns to Hawaii and tells the team he and Kono have broken up. Mary Ann McGarrett Mary Ann McGarrett is the younger sister of Steve McGarrett. She and her brother were both sent to the mainland after their mother was (presumably) murdered. Steve went to the Army and Navy Academy in Carlsbad, California, while she lived with their Aunt Deb several hours away. As a result, both siblings had harbored resentment over their father splitting up the family and drifted apart over the years; in the episode "Lanakila", she comments to her brother that the last time they met in person was at their mom's funeral over fifteen years ago.
She is the "black sheep" of the family, wandering from job to job, and was said to be living in Los Angeles when Steve returns to Hawaii for good. In a scene deleted from the Pilot episode, Steve mentions bailing her out of trouble more than once and keeping it from their father to "let Dad go to his grave believing that you were his perfect little girl". She takes a job as a flight attendant but quit after her so-called friend Angela had betrayed her and took advantage of her naivete to be a mule for trafficking blood diamonds. In season 3 Mary returned to Hawaii working as a caretaker, originally not wanting to reconnect with Doris she later meets her after strong encouragement from Steve and the person she was caring after. In season 4, Mary adopts a baby girl and names her Joan, after her father. Steve was initially against it but he comes to accept Joan after being forced to babysit her for the day. Since then, Mary and Steve have reconnected and she regularly sends him videos of Joan. In the original show, Mary Ann only appeared in a two-episode arc. She was also estranged from her brother and was married and had an infant son who died of cancer. She was portrayed by Nancy Malone. In the reboot, the character features more in Steve's life and reconnected with him. Dr. Max Bergman Dr. Max Bergman is a former medical examiner who belonged to the City and County of Honolulu. Bergman was initially billed as a recurring character but Oka joined as a series regular and has been part of the main cast since Season 2. Dr. Bergman was first introduced to Five-0 by Governor Jameson. He was playing the piano and then wordlessly proceeded to explain the victim's cause of death to Danny and McGarrett, before finally greeting them and introducing himself. He has the tendency to rattle off trivia, prompting the Five-0 team to cut him off and tell him to get to the point. Danny often mocks him with medical jokes. Despite his lack of social skills, he does get along with the other members of Five-0. He has a Halloween tradition of dressing up as a character from a Keanu Reeves film. In the season 2 episode "Haʻalele", it is revealed that his biological mother, Machiyo Takeshita, was murdered by the notorious "Trashman", a serial killer in Hawaii from the 1980s who earned the nickname because his victims were all found in trash bags or the dumpster. He was born to a Japanese-American mother but adopted by a Jewish family, the Bergmans, after a short stint in foster care, hence his last name. As an undergraduate at Arizona State University, he was a self-confessed party boy and earned the nickname "Beerman". Dr. Bergman is dating Sabrina Lane, a bank teller at Hawaii National Bank. He first saw her in the episode "Haʻawe Make Loa / Death Wish" while at the bank and had a crush on her at first sight. A botched bank robbery took place minutes later resulting in Sabrina being shot and Dr. Bergman having to put his medical expertise to use while being held hostage. Following the incident, he eventually works up the nerve to ask her out. It is implied that they are in a relationship (as of Season 6 Episode 14) as she is mentioned by Dr. Bergman numerous times in conversations with other members of Five-0; Rumer Willis, who portrays Sabrina, has only appeared twice on the show. It is revealed in the season 7 episode "Ka hale ho'okauwel / House of Horrors" that he married Sabrina during his sabbatical. 
Max retired from the Honolulu Medical Examiner's Office and from Five-0 in the season 7 episode "Ua ho'i ka 'opua i Awalua" and returned to Africa as a member of Doctors Without Borders. Lori Weston Special Agent Lori Weston is a former DHS agent with a background as a profiler. Due to the circumstances of her assignment, McGarrett initially views her with suspicion and assigns her to "babysit" the victim's parents in a kidnapping case on her first day on the job. A graduate of Pennsylvania State University, her background as a federal agent proves to be an asset, as she is able to take down suspects and quickly proves her worth. She returns to the DHS at Governor Denning's request when Five-0 gets tangled in an incident with an employee of the Russian consulate. As Catherine Rollins was deployed, she spends a lot of time with McGarrett, but the latter regards her as a good friend rather than a romantic interest. She had developed a crush on him, but realizes that McGarrett has always been in love with Catherine. Before her departure, she hands him the UH season tickets she had bought. The character is based on Lori Wilson, portrayed by Sharon Farrell in the original 1968 series, who became a main character in its twelfth season. Catherine Rollins Lieutenant Catherine "Cath" Rollins, USN (Ret.) is the on-off lover of Steve McGarrett. How or when they met is never fully explained, but it has been implied that they have known each other for a long time. Their first date was revealed to have been in 2002. It is likely they would have had to keep their relationship a secret due to strict fraternization rules in the U.S. military and the fact that McGarrett outranked her. Cath is a "Navy brat" and moved around frequently due to her father's various assignments. During Pro Bowl weekend, when asked why she supported the Dallas Cowboys, she explained that she had never stayed in any one place long enough to feel an affinity for an NFL team. She herself joined the Navy and is an Intelligence Officer. Like McGarrett's, her military service record remains vague. She previously dated Billy Harrington, one of McGarrett's SEAL buddies whom she had worked with. Cath is first introduced in the Season 1 episode "Lanakila", when McGarrett calls her for "a favor". Later in the season, she stays with him while on leave, resulting in Danny commenting on how McGarrett had "that stupid smile" despite being called in on a Saturday to work a murder-abduction case. In the season 4 episode "Makani ʻolu a holo malie/Fair Winds and Following Seas", Cath persuades McGarrett to go with her to Afghanistan to find a young boy named Najib. Najib's father had saved her when she was injured and separated from her unit while deployed to Kabul. McGarrett is captured by Taliban insurgents and is nearly beheaded, but a team of Navy SEALs rescues him in the nick of time. As a result, he escapes a court-martial and returns stateside with a stern warning that "these rogue ops of yours are over". Cath decides to remain in Afghanistan to continue her search for Najib and bids a tearful goodbye to McGarrett over the satellite phone. It was implied that McGarrett never really moved on, as he never indicated any interest in any of the women Danny or Ellie Clayton attempted to set him up with. Cath makes a surprise return in the season 5 finale for Kono and Adam's wedding. Steve welcomes her with a hug when she surprises him in his backyard.
She remains for the first three episodes of the series' sixth season; however, she leaves once again before Steve is able to propose, saying that she had something to do. A heartbroken Steve tells her that he cannot wait for her any longer if she decides to leave again, and she departs in tears. However, she makes a call to an unidentified person stating that Steve believed her story and that she was "ready". She returns once again in the series' seventh season, in the show's 150th episode, to inform Steve that his mother had been detained following an attempt to break Wo Fat's father out of prison. She assists Steve and the rest of the Five-0 Task Force in rescuing Doris McGarrett and Wo Fat's father. She returns again in the eighth season's twentieth episode, getting Steve's help once again to find someone making dirty bombs out of an unused military bunker's depleted uranium. In the series finale, Catherine cracks the show's final mystery: a cipher Steve's late mother Doris had left for him. This aids in the capture of the show's final villain, the wife of McGarrett's long-time nemesis Wo Fat. In the final moments of the series, Steve and Catherine are reunited as they get ready to depart Hawaii. Awards and decorations The following are the awards and decorations worn by Lt. Rollins. In "Kaʻoia iʻo Ma Loko", Catherine is honorably discharged from the United States Navy and is awarded her second Navy and Marine Corps Achievement Medal as an end-of-tour award. Lou Grover Captain Louis Purnell "Lou" Grover is the former head of the Honolulu Police Department SWAT Team. He was often at loggerheads with McGarrett over what he felt was Five-0's tendency to be trigger-happy with armed suspects and refusal to obey a "wait for SWAT" order, to the point where he lodges an official complaint with Governor Denning. The Governor dismisses Grover's complaint and then promptly orders the duo on an assignment to serve a warrant for a computer hacker. By the latter half of Season 4, he is fully accepted into the Five-0 ohana. In the season 4 finale, Grover's actions to rescue his daughter result in forced early retirement from HPD, but also allow McGarrett to recruit him to be part of the Five-0 team, which he accepts. The character was named after a Captain Grover (Scott Brady) from the original series. A 25-year veteran of the force, Grover graduated from the Chicago Police Academy in 1989, and was promptly recruited by the FBI to go undercover in the Philadelphia Black Mafia. Upon the completion of the assignment, he returned to the Chicago Police Department. He left Chicago around 2012 or 2013, after an incident in which he blamed himself for having failed to save a boy who was taken hostage by his father, which ultimately resulted in a murder-suicide. He and his wife Renée (portrayed by Michelle Hurd) have two adolescent children: daughter Samantha (portrayed by Paige Hurd) and son Will (portrayed by Chosen Jacobs). The Grovers are a close-knit family and Lou is shown to be a doting father who is fair but firm with his children. As Danny also has a daughter, Grover sometimes gives him tips and unsolicited advice on dealing with preteens. Although he loves his wife and vice versa, he has a tendency to forget important dates such as birthdays, anniversaries and Valentine's Day. In season 6, he forgets about Valentine's Day and hastily makes plans, but Renée sees through it and he ends up in the doghouse as a result. The Grovers live in the Honolulu neighborhood of Manoa.
Despite Grover's rocky introduction to the Five-0 team, he has since become a valued member. He has an ongoing good-natured rivalry with McGarrett. McGarrett would light-heartedly joke about Grover being a "city boy" out of his element on a tropical island while Grover would poke fun about McGarrett getting himself into trouble because of his refusal to ask for help. Unlike McGarrett, Danny Williams took longer to accept Grover, but they bond over the fact that they are the only members of the team with children and would often vent their frustrations to one another about parenting issues. Their friendship becomes slightly awkward after both discover that Will and Grace had been dating behind their backs. Grover's weapon of choice is the Kimber Warrior. Jerry Ortega Jerry Ortega is a conspiracy theorist who lives in his mother's basement (until she moves to Maui) and regularly assists the Five-0 Task Force with various cases. He was first introduced in the episode "Ka 'oia'i'o ma loko / The Truth Within". Jerry was classmates with Chin Ho Kelly at Kukui High and they were in the school band. A running gag in the show is the fact that he is either resistant or reluctant to use cellular technology to communicate with McGarrett (e.g. he once used a rotary phone to directly call McGarrett about updates for a case) and tends to ramble about conspiracies over everything. The Five-0 team largely tolerate and humor him as, more often than not, his conspiracy theorist ramblings provide the team with a lead or valuable insight into a case. After much pestering, in season 6, McGarrett finally lets Jerry have his own "office", an empty file storage room in the basement, and officially hires him as a "consultant". Jerry is a fan of Elvis Presley, which was discovered when Five-0 was called to investigate a murder of an Elvis impersonator in the episode "Ua heleleʻi ka hoku / Fallen Star". In the same episode he shows off his musical talent by singing at the open mic when hanging out with the team for drinks after the case. While his official role is that of a consultant, he operates behind the scenes as a technical operator, often providing information about the cases Five-0 investigates or technical support when they are in the field. In the episode 'Ua Malo'o Ka Wai', Jerry is awarded his very own Five-0 badge after he coordinates with Duke Lukela and HPD to rescue Five-0 from the Yakuza. In the season 10 premiere, it is revealed that Jerry had been shot, continuing the cliffhanger left in the Season 9 finale. Two weeks later, while recovering in a hospital, Jerry begins thinking about moving on to other things after his near-death experience. After helping Five-0 with a new case, Jerry ultimately decides to move on from Five-0 and write a book that he no longer wished to put off until it was "too late". Despite the fact that he no longer appears on the series, the producers have confirmed that he will be appearing in an episode of MacGyver. Tani Rey Tani Rey was first introduced in the season 8 premiere as a lifeguard who was kicked out of the police academy for cheating. Steve asks her to go undercover to assist Five-0 on a case. She originally turned him down. However, she later changed her mind and eventually became an officer on the Five-0 Task Force. It was revealed that she has a delinquent brother named Koa and that Tani repeatedly bails him out of trouble, including saving him from an HPD raid moments before he would have been arrested.
Despite her best efforts to help him, Koa ends up overdosing and would have died if Adam and Noelani had not intervened. He is sent to rehab to get clean, but decides to stay on to work as a counselor. She and Junior Reigns became a couple in Season 10 episode 17. Kamekona Tupuola Kamekona Tupuola is the owner of Waiola Shave Ice, Kamekona's Shrimp Truck and Kamekona's Helicopter Tours. Kamekona comes from a typically large Polynesian family and often mentions various relatives whenever one of the members of Five-0 needs a favor, whether for a personal matter or for an investigation. His late cousin, Thomas Hoapili, was a master of Kapu Kuʻialua, an ancient Hawaiian martial art, as is Thomas's daughter, Maggie (portrayed by Summer Glau). Another cousin, Flippa (Shawn Mokuahi Garnett), often helps Kamekona at the shrimp truck. Kamekona was introduced in the Pilot as Chin's confidential informant and owner of a shave ice stall on the beach. His full background and how he first met Chin were only revealed in the season 6 episode "Kuleana / One's Personal Sense of Responsibility". As the elder son in an impoverished family, he would skip school to earn extra money to help his single mother. He gradually became a drug kingpin and did some time at Halawa. As a result of his younger brother's involvement with a noted drug dealer, whom Chin Ho Kelly was coincidentally investigating, he agreed to be a CI for the HPD on the condition that his brother be let go without any charges. Chin visits Kamekona in prison and gives him Napoleon Hill's book Think and Grow Rich, beginning a friendship that continues to the formation of Five-0. He started his shave ice stand while getting back on his feet after being released. In the season 5 finale, he mentioned that his shrimp truck business is now in doubt, after Steve and Danny dropped a nuke into the sea to prevent it from blowing up on land. In season 3 he qualifies as a helicopter pilot with the help of McGarrett. Members of the Five-0 Task Force and their families are often seen patronizing his shrimp truck or asking him for favors (e.g. he babysat Grace Williams, Danny's daughter, and chaperoned Steve's sister Mary while they were at work). Duke Lukela Sergeant Duke Lukela is a veteran HPD officer who often acts as a liaison to Five-0. He was one of the few HPD cops who was not antagonistic towards Danny or the other Five-0 members from the beginning, as he was colleagues with Steve's father and many of Chin and Kono's family members (they come from a family of cops) and had known the three of them since they were children. In "Hookman", he was shot by Curt Stoner (Peter Weller), but survived and recovered. He is married to Nalani (portrayed by Laura Mellow, the real-life partner of Dennis Chun, who plays Duke). They have a granddaughter. Dennis Chun is the son of Kam Fong Chun, who played Chin Ho Kelly on the original Hawaii Five-O. The character Duke Lukela was portrayed by Herman Wedemeyer in the original series. Chun himself also played various minor characters throughout the original series. Noelani Cunha Noelani was first introduced in season 7 working with Dr. Max Bergman as a medical examiner. Following his departure in season 7, Noelani took his place. She also appeared in numerous episodes of Magnum PI, reluctantly helping the titular character in some of his cases. Junior Reigns Junior Reigns was first introduced in the second episode of season 8. Junior came to Steve looking for a job on the Task Force immediately after being discharged from the U.S. Navy.
Steve initially turned him down. However, after Junior insists, Steve tells him that before becoming a part of Five-0 he must train at the police academy. Junior eventually receives his badge and joins Five-0. Unlike the other members, Junior rarely questions Steve's tactics or methods due to their shared military background as Navy SEALs and has occasionally had to explain to Danny Williams and Lou Grover the rationale behind Steve's actions. Other characters have noted similarities between Steve (when he first started Five-0) and Junior, in particular Junior's instructor at the HPD Academy. In season 8, it is revealed that Junior had been staying at a homeless shelter despite having family on the island (Steve later finds out and asks him to move in with him until he finds an apartment). Throughout season 9, it is gradually revealed that he has avoided going home as he was estranged from his father and they only communicated through his mother. It has been implied that Junior had grown up in a loving and stable home, unlike some of his peers at school who mixed with the wrong crowd. Junior had initially planned to return to marry his high school sweetheart Layla after a tour but re-enlisted without consulting her, leading to them breaking up. He left the Navy with the rank of Special Warfare Operator 2nd Class. Junior had a sister, Maya, who was killed in a car accident when he was young, and their father never fully got over it, leading to him vehemently opposing his only remaining child enlisting in the Navy. However, Junior went against him and followed in his footsteps into the military, which deepened their estrangement to the point where they were not on speaking terms for years. When he returned stateside, he attempted to make amends with his father. Their relationship hits a rough patch when Junior decides to forgive the man who was driving the car that killed Maya, and his grief-stricken father disowns him in the season 9 finale as a result. In season 10 he reaches out to his father again after discovering that his father had actually been regularly communicating with his CO while he was deployed in Afghanistan. At the end of the episode, they reconcile and Junior brings his parents to Steve's Thanksgiving barbecue. He and Tani Rey became a couple in Season 10 episode 17. Awards and decorations The following are the awards and decorations worn on SO2 Reigns' Class-A uniform, as seen in "A'ohe Kio Pohaku Nalo i Ke Alo Pali". Adam Noshimuri Adam Noshimuri is Kono's husband, who becomes the new head of the Japanese Yakuza after Wo Fat murders his father, Hiro. First introduced in the season 2 episode "Pahele / Trap", he has been trying to legitimize and clean up his organization with the intention of distancing himself from his father's criminal past. When he and Kono first started dating, the Five-0 team were extremely suspicious of him, particularly Steve and Chin – the former due to his father's history with the yakuza and the latter due to him seeing Kono as a younger sister. He is accepted into the Five-0 ohana after proving his love for Kono. After Adam kills his brother Michael (Daniel Henney) to protect Kono, he and Kono leave Hawaii and go into hiding. He comes out of hiding after his brother's loyalists are no longer a threat. At the beginning of Season 5, Adam proposes to Kono and at the end of that season they marry. In "Mai ho`oni i ka wai lana mālie", Adam is shot by Gabriel Waincroft, who runs away with the money that Adam intended to use to distance himself from the Yakuza.
Almost immediately after Adam killed two Yakuza thugs in "Piko Pau 'iole" to save himself and Kono, Gabriel buys off Adam's debt in order for his cartel to team up with the head of the Hawaiian Yakuza. He subsequently turns himself in to the Honolulu Police Department. After a short legal battle, he is sentenced to 18 months in prison, a term agreed upon since he had been acting in self-defense. He has been released on parole as of the season 7 episode "Ka 'Aelike / The Deal". Adam is the elder son of Hiro Noshimuri and, with the introduction of Adam's brother Michael, it becomes apparent that Adam was the brains who ran the legitimate business front of their father's operation out of New York while Michael was the brawn who usually took care of the "dirty" side. For example, when the Five-0 team uncover a yakuza burial site, it is revealed that he knew nothing about it. He is implied to be a Nisei (a second-generation Japanese American, part of the first generation born in America) and his grandfather served in the Imperial Japanese Navy. Ian Anthony Dale was initially confirmed for a four-episode arc but, following positive fan response, became a recurring guest star. His character had been written as a potential antagonist for McGarrett and the team, as he saw them as a stumbling block in his quest for answers about his father's death. Like his character, Dale is of Japanese descent. In the season 8 episode I Ka Wa Ma Mua, I Ka Wa Ma Hope (The Future is in the Past) Danny dreams of the future and Adam has a child with Kono. In the Season 8 episode Ka Hopu Nui 'Ana (The Round Up), after a devastating hit on Hawaii's crime bosses by a new crime syndicate, Steve asks Adam to head up a new special division within the Five-0 Task Force to focus specifically on organized crime. Through his investigation, he discovers he has a half-sister, Noriko, who tries to manipulate him by threatening Kono, Chin, Abby and Sarah. However, Adam ends up leaving the island after his informant, Jessie, is killed; around the same time, Noriko is killed. In the season 8 finale, Tani discovers a gun in Adam's house, leading her to believe Adam may have been responsible for Noriko's murder. In the season 9 episode "A'ohe Mea 'Imi A Ka Maka", Tani tests the gun's ballistics and confirms it was the murder weapon. In "Aia I Hi'Ikua; I Hi'Ialo", Adam returns home, depressed by his break-up with Kono. However, Danny helps him through his pain and, as a thank-you for everything he has done, Adam is offered a spot on Five-0, which he accepts. After HPD got an anonymous tip about Adam's gun but was unable to find it, he confronts Tani about it, but she reveals she did not call it in after Adam helped save her brother's life, leading them both to discover a third party is trying to frame him. Quinn Liu Quinn Liu first appears in the season 10 premiere as a staff sergeant in the CID who has recently been demoted for insubordination. Recurring Family members John McGarrett Portrayed by William Sadler and Ryan Bittle Sergeant John McGarrett, HPD (Ret.) (1942-2010) is the father of Steve and Mary Ann McGarrett and (presumed) widower of Doris McGarrett. His own father, Ensign Steve McGarrett, was killed in action on the U.S.S. Arizona (BB-39) during the attack on Pearl Harbor, and John was born three months later, on March 15. John would follow in his father's footsteps into the Navy, serving in the Vietnam War and reaching the rank of Lieutenant before becoming a Honolulu Police Department detective.
After Doris' presumed death, he launched his own investigation and sent Steve, then a high school junior, to board at the Army and Navy Academy in Carlsbad, California, and Mary to live with his older sister Deb, who lived several hours away from Steve's new school, for their safety. It is presumed that this was the last time the siblings ever saw him in person as they became estranged from their father for sending them away; Steve only learns the real reason why his father divided the family when he returns to Hawaii to head the new task force following John's murder. In light of the revelation, Steve and Mary were able to let go of their animosity and are often seen visiting his grave site. John has been mentioned a number of times as Steve regularly works with HPD officers, many of whom were John's former colleagues, and is implied to have been a much respected figure within the police force. Even renowned diamond smuggler and fence August March (Ed Asner) paid tribute to John's sense of integrity by telling Steve about how John turned down his $100,000 bribe over three decades ago. John McGarrett was murdered by Victor Hesse on September 20, 2010, as witnessed by Steve, who was overseas on a highly classified mission in South Korea, via satellite phone. He was buried at the National Cemetery of the Pacific with full military honors. In the 100th episode "Ina Paha" ("If Perhaps"), which depicts an alternate timeline, he is never murdered by Hesse and is able to reconnect with Steve when he returns home. Doris McGarrett Portrayed by Christine Lahti Doris McGarrett is the wife of the late John McGarrett and mother of Steve and Mary Ann McGarrett. She was presumably murdered by a car bomb when Steve was aged fifteen or sixteen. It is revealed in Season 3 that she was a former CIA operative who went by the alias "Shelburne" and had faked her own death as a cover to escape underground in order to protect her family. In the Season 3 premiere, Joe White brings Steve to Suruga Bay in Japan where mother and son are reunited. Steve was hostile to his mother, even refusing to call her "Mom", due to the years of resentment over how the family was torn apart by her supposed death and watching his father grieve over her. Mary also initially refused to see her after learning the truth from Steve. For the rest of the season she constantly refuses to tell Steve everything, and he does not fully trust her. He sends Cath to "babysit" her at a safehouse, but Wo Fat bypasses the security system and slips past Cath into the house to confront Doris. Although she was armed and a crack shot, ballistic evidence showed that she had intentionally shot away from him; she attributed this to her nerves and shock over Wo Fat's sudden appearance, but Steve refuses to believe her story. In the third-season finale she leaves Hawaii with Kono and Adam to help them evade the Yakuza, promising Steve that her story is "to be continued". Over the next two seasons, bits and pieces (and rumors) of Doris' connection to Wo Fat are revealed. Steve only learns the full truth after he is abducted and tortured by Wo Fat in the 100th episode. She is mentioned in the episode "Pono Kaulike / Justice for All" where she uses her contacts to keep her son safe from a rogue CIA agent who attempts to have Danny and Chin framed to cover his tracks. Steve was able to use the information she provided to Joe White to exonerate his co-workers. In the seventh season's 150th episode, Steve learns from Catherine that Doris was captured after attempting to break Wo Fat's father out of prison.
Steve and Catherine, along with the rest of the team (excluding Danny), go to Morocco to break her and Wo Fat's father out. Doris is seen again in season 10, episode 7, where she is briefly reunited with Steve before she is stabbed and dies. Aunt Deb Portrayed by Carol Burnett Debra "Deb" McGarrett is the late older sister of John McGarrett and aunt of Steve and Mary Ann McGarrett. She raised Mary when John had sent the kids away from Hawaii for their safety after Doris was presumed to be murdered by a car bomb. Steve and Mary both see her as a maternal figure and she, having never married or had children, treats them as her own. Deb had been a struggling singer on the cusp of a big break when she left a promising career to raise Mary. In season 4, she pays Steve a surprise visit for Thanksgiving and reveals that she has stage four cancer. In season 5, she agrees to go for chemotherapy and her brain tumor shrinks. She marries Leonard Cassano (Frankie Valli), a retired defense lawyer with stage four leukemia whom she met during chemo. In the episode "Ua Ola Loko I Ke Aloha / Love Gives Life Within" she returns to Hawaii with Leonard's ashes on the pretext of spending time with Steve, Mary, and Joan, but does not tell them that she is actually dying and trying to complete her bucket list. She passes away peacefully in Steve and Mary's childhood home (which Steve has lived in since his father's death) after fulfilling all but one item on her bucket list: climbing a tall mountain. Steve and Mary complete her list by scattering her ashes on the mountaintop. Grace Williams Grace Williams is the daughter of Danny Williams and Rachel Edwards. Her parents divorced before the events of the pilot, and she moved to Hawaii with her mother and her new millionaire stepfather. She attends the Academy of the Sacred Heart, a fictional private school, and participates in the cheerleading team. She calls her father "Danno" and is the only person he unconditionally allows to call him that; McGarrett would mock Danny by using the nickname or the tagline "Book 'em, Danno", much to Danny's exasperation. Grace is named after a former partner of her father's who was killed in the line of duty by a criminal. Grace is well-liked by the rest of the Five-0 ohana. She took to "Uncle Steve", as she calls McGarrett, who affectionately calls her "Gracie". Whenever Danny was unavailable due to extenuating circumstances, McGarrett would be the one to pick her up from school or personally reassure her. Grace was often the source of Danny's disputes with his ex-wife over his visitation rights. Rachel would use their daughter as leverage and make empty threats about rescinding or lessening his visitation days. Eventually, Danny takes the matter to court and is awarded joint custody, meaning that Grace cannot move away from Hawaii without Danny's approval. As of season 7 Grace is dating Will Grover, Lou's son. In the season 8 episode "I Ka Wa Ma Mua, I Ka Wa Ma Hope" ("The Future is in the Past") Grace is shown as an adult marrying Will. Charlie Williams Charlie is the second child of Danny and Rachel. Charlie was originally thought to be the son of Stan; however, it was revealed in season 5 that Danny is his true father. In the season 8 episode I Ka Wa Ma Mua, I Ka Wa Ma Hope (The Future is in the Past) Charlie is an adult and has become a member of the Honolulu Police Department. He is then offered a job by Tani Rey, now the leader of the Five-0 Task Force. Will Grover Will Grover is the son of Lou Grover.
Although not seen until season 6, he moved to Hawaii with his father, mother, and sister. He attends Academy of the Sacred Heart with Grace Williams. Will and Grace are often seen spending time together, doing homework, eating breakfast, and dancing. As of season 7 Will is dating Grace. In the season 8 episode I Ka Wa Ma Mua, I Ka Wa Ma Hope (The Future is in the Past) an adult Will is marrying Grace. Rachel Edwards/Hollander Portrayed by Claire van der Boom Danny's ex-wife and mother of Grace and Charlie. She moves to Hawaii after marrying millionaire Stan Edwards. Early in Season 1, she and Danny are often seen bitterly arguing on the phone, to the point where the whole team knew about the feud even before they had met Rachel or Grace in person. She often used Grace as leverage and threatened to further limit his visitation rights when his job prevented him from being punctual to their father-daughter dates, but Danny successfully files for joint custody, meaning that Grace cannot leave Hawaii without his consent. They are now on friendly terms, particularly after her marriage to Stan hits a rocky patch and Danny was there to help with Charlie's birth (Charlie, he later discovers, is actually his son, not Stan's). In the seventh season of the series Rachel divorces Stan and takes her maiden name once again. Later in seasons 8 and 9, Rachel and Danny reconcile. In season 10, Danny says the reconciliation did not go well. Malia Waincroft Portrayed by Reiko Aylesworth Chin's wife, Dr. Malia Waincroft, was previously engaged to Chin before he broke up with her after losing his badge. The two reconnected after he joined Five-0, and then resumed their relationship. She marries Kelly in the episode "Alaheo Pauʻole / Gone Forever", but dies in "La O Na Makuahine", the third-season premiere episode, from injuries sustained after she was shot, leaving Chin devastated. She had a brother named Gabriel Waincroft who was revealed to be responsible for the murder of Chin's father 15 years earlier. Lynn Downey Lynn Downey is seen dating Steve McGarrett in season 6 and season 7. They are presumably still together through season 8, as per comments early in the season. The cause of their break-up is not known, nor is it made clear when it happened. Other Jenna Kaye Portrayed by Larisa Oleynik Jenna Kaye is an ex-CIA analyst and assistant to Five-0. She is later revealed to be an associate of Wo Fat. At the end of "Ha'i'ole", she is seen driving Wo Fat away from the prison after he murdered Victor Hesse. She leaves Five-0 to follow up on a lead that her fiancé, Josh, who she claimed had been killed by Wo Fat, might in fact be alive and asks Steve to go with her to North Korea to get him. It turns out to be a trap for McGarrett: Wo Fat had told Jenna he would release Josh if she brought him McGarrett, which explains her association with him. Jenna later discovers that Josh had been dead the entire time and that it was also a trap for her. She is able to tell McGarrett she is sorry for setting him up and gives him a pin from Josh's knee repair to help him escape, moments before Wo Fat shoots and kills her. She also appears in the 100th episode, alongside Victor Hesse, looking for her fiancé, who had been in a motorcycle accident. Detective Abby Dunn Portrayed by Julie Benz Abby Dunn is a detective for the HPD and the girlfriend of Chin Ho Kelly.
She held the rank of Inspector at the San Francisco Police Department and was temporarily assigned to the Five-0 task force as the SFPD intended to start a similar task force. However, it is revealed that she had been forced by an FBI agent with a grudge against Five-0 to tail them and find a reason to shut the task force down. She discloses this information to Chin after starting a relationship with him and resigns from the SFPD as well as Five-0, transferring to the HPD to become a detective. Governor Pat Jameson Portrayed by Jean Smart Pat Jameson was the Governor of Hawaii in Season 1. When Steve McGarrett returned home to Hawaii to bury his father, who was murdered by arms dealer Victor Hesse, she asks him to set up a task force with "full immunity and means" to apprehend criminals such as Hesse. McGarrett flatly turns her down as he felt she only asked him to do so for political reasons and to boost her ratings. Once he realizes he has no jurisdiction over the investigation, he quickly accepts the job and sets up the "Governor's Task Force", which came to be called "Five-0". Over the course of season 1 it is revealed that Governor Jameson was actually colluding with Wo Fat and other criminals such as Adam's father Hiro. In the season 1 finale it is revealed that she was responsible for the murder of Laura Hills. McGarrett confronts her about it in her office and records her confession on his phone. Wo Fat sneaks in, tasers McGarrett and then shoots and kills the governor after she deletes the recording. Wo Fat attempts to frame McGarrett for the murder by placing the murder weapon in his hand. Laura Hills Portrayed by Kelly Hu Laura Hills, Gov. Jameson's public safety liaison. In the season 1 finale it is revealed that she was sending McGarrett evidence from his father's "Champ" toolbox. She is killed by a car bomb arranged by Wo Fat and Jameson. McGarrett is also framed for her murder after his fingerprints are found in her house. Michael Noshimuri Portrayed by Daniel Henney Michael Noshimuri is Adam's brother who was released from prison during season three, and appears apprehensive about any plans to remake the Yakuza. He attempted to frame Kono for a murder he committed using her gun. In the season three finale, he attempts to kill his brother Adam and Adam's girlfriend Kono, but Adam kills him in self-defense. Joe White Portrayed by Terry O'Quinn Lieutenant Commander Joe White, USN (Ret.) (born July 9, 1954) was Steve McGarrett's former SEAL trainer, a long-time friend of John McGarrett, and a mentor to Steve. A Wisconsin native, he is a "mustang", having been a master chief petty officer before gaining a commission and retiring as a lieutenant commander. He appears to be well known amongst SEALs from his time as a trainer at the Naval Special Warfare Center, as McGarrett's former SEAL teammate Billy Harrington and SO2 Graham Wilson, who was a person of interest in the episode "Hoʻapono", both knew of him. In the season 2 episode "Kiʻilua / Deceiver", he is forced into retirement after leading an unsanctioned mission into North Korea to rescue McGarrett. White was one of the few people who knew about Shelburne, but he refused to disclose it to McGarrett as he knew it would only hurt him even more emotionally. He would repeatedly give excuses or evade McGarrett's persistent questions about the true identity of Shelburne. As a result, McGarrett, despite seeing him as a father figure, regards him with some distrust. Joe is killed in season 9 as a result of the treachery of Greer, Steve's ex-lover.
Mamo Kahike Portrayed by Al Harrington Mamo owns a surf rental shop at Waikiki Beach. During the off-season he works as a bus driver. A long-time friend of McGarrett's father and of Chin and Kono's family, he taught Steve, Mary, and Kono how to surf as children and is seen as a fatherly figure by them. He is also a history enthusiast and participates in historical reenactments of Ancient Hawaiian society. Al Harrington is one of several actors who appeared on the original series. He had a recurring role as Detective Ben Kokua. Governor Sam Denning Portrayed by Richard T. Jones Governor Samuel "Sam" Denning was the Lieutenant Governor of Hawaii and replaced Gov. Jameson after her death. He is much more "by the book" and withdraws his predecessor's assurance to the Five-0 Task Force of "full immunity and means". Instead, he brings in former federal agent Lori Weston as a "spy" to keep McGarrett and the rest of Five-0 in line, much to their displeasure. In season 3, he and McGarrett come to a mutual agreement to be open with one another about investigations, with McGarrett pointing out that Governor Jameson's dishonesty was what got her killed. Although he does not always agree with McGarrett's style, Governor Denning vouches for and backs him when necessary. For example, he dismisses Lou Grover's complaint against McGarrett in season 4 and instead forces the both of them to serve a warrant together and work out their differences themselves. Nahele Huikala Portrayed by Kekoa Kekumano In the episode "Poina ʻOle / Not Forgotten", Nahele was first introduced as a homeless juvenile delinquent who stole McGarrett's prized vintage Marquis and stripped it for scrap metal. McGarrett decides not to file charges after learning that Nahele was living on the streets after running away from his temporary guardian, his negligent uncle, and that his only parent, his father, has been in prison since he was eight. Kamekona agrees to let Nahele work for him as a waiter at the shrimp truck as part of the agreement that McGarrett will not file charges if Nahele keeps his end of the bargain to stay out of trouble. Since then he has looked up to McGarrett as a mentor and an older brother figure. In season 6, his father is released from prison and successfully files for custody of him. Distraught, Nahele confesses a long-buried secret to McGarrett explaining why he still refuses to see his father even though the latter turned his life around. Dr. Charles "Charlie" Fong Portrayed by Brian Yang Charlie Fong is a forensic scientist with the Honolulu Police Department Crime Lab. He provides evidence analysis for the Five-0 Task Force. He attended the prestigious Punahou School and is a childhood friend of Kono. Ellie Clayton Portrayed by Mirrah Foulkes Ellie Clayton is a lawyer and Deputy District Attorney introduced in the season 5 episode "Hoʻoilina / Legacy". She emigrated from Australia with her father Paul as a child. Her father owned a bar called "Aces High" but was shot and killed in a botched robbery nineteen years ago. She was upstairs in their flat and heard the kill shot. A homeless man was prosecuted but the case eventually went cold due to the lack of credible witnesses and evidence. She graduated from law school and returned to Hawaii. In several episodes she aided the Five-0 Task Force by securing arrest warrants or providing advice. McGarrett meets her by chance when he sees her putting flowers on his father's grave and accosts her.
She reveals that John was the uniformed officer who responded to the scene of her father's murder. Every Christmas since then he would visit her and give her a present. According to Chin, John made an effort to keep in contact with her, partly out of guilt and regret for sending his two children away. Ellie reveals that prior to his death, John had called her up to tell her that he had found new evidence. McGarrett recalls finding an "Aces High" matchbook in his father's Champ toolbox and decides to reopen the case in memory of his father. Five-0 solves the case with the help of a young drug runner whom Paul had tried (unsuccessfully) to help back then and who wanted to repay Paul for his kindness. Danny proposes Ellie as a potential love interest, but McGarrett quickly shoots down the idea, saying that due to her connection to his father, they could only be friends. She tries to set a reluctant McGarrett up with various women out of concern that he was too consumed with his work. McGarrett eventually begins a relationship with one of the women, Lynn. Eric Russo Portrayed by Andrew Lawrence Eric Russo (born 1984 or 1985) is the nephew of Detective Danny Williams; his mother Stella is Danny's older sister. He was first introduced in the episode "Kapu / Forbidden" when Stella sends him to his "Uncle D" in hopes that Danny would be able to set him straight. He dropped out of college and was still undecided about his future. In the episode, he follows McGarrett and the team around as they solve a case involving a college professor and his students. Eric provides some valuable insights which helped solve the case. To Danny's surprise, he decides to pursue forensics and, once he finishes college, joins the HPD forensics lab as an assistant. Madison Gray Portrayed by Elisabeth Röhm Madison Gray is a serial killer whom Alicia Brown helps capture. She later dies at the hands of Brown. Minor Commander Wade Gutches Portrayed by David Keith, commanding officer of SEAL Team 9 and friend of Cmdr. White. He is stationed at Naval Station Pearl Harbor as an instructor. Kawika Portrayed by Kala Alexander, leader of the Kapu gang/civic pride group. Clara Williams Portrayed by Melanie Griffith, mother of Danny Williams. The character was brought back from the original series, in which she was portrayed by Helen Hayes, the real-life mother of James MacArthur, who portrayed Danny Williams. Eddie Williams Portrayed by Tom Berenger, father of Danny Williams. A retired Newark, NJ firefighter. He appears in one episode, Ma lalo o ka 'ili, traveling to Hawaii to salvage his marriage to Clara. Leilani Portrayed by Lindsay Price, Chin Ho Kelly's ex-girlfriend. Officer Pua Kai Portrayed by Shawn Anthony Thomsen, an HPD rookie cop. He is often seen teasing Kono and is on friendly terms with the Five-0 team. Odell Martin Portrayed by Michael Imperioli, a New York City native and former attorney with a shady past who now owns a barber shop. McGarrett befriends him and becomes a regular customer. Cmdr. Harry Langford Portrayed by Chris Vance, a suave MI-6 agent who appears in four episodes and helps the team with cases relating to the UK. Criminals and antagonists Main recurring characters Wo Fat Wo Fat was the principal antagonist and nemesis of Steve McGarrett. He is well known in the criminal world for his ruthlessness and ability to wriggle his way around the law. In the 100th episode, he kidnaps McGarrett and tortures him with a heated cattle rod and truth serum.
After a bitter fight to the death, McGarrett kills him with a bullet to the head. Victor and Anton Hesse Victor and Anton Hesse (James Marsters and Norman Reedus) are brothers and criminals heavily involved in the illegal arms trafficking trade. During his days as a SEAL, McGarrett was part of a task force which had been investigating and tracking the Hesse brothers for several years. He and his friend Freddie Hart were sent into North Korea on a classified operation to capture Anton Hesse; Hart was killed during the mission, and Anton later died when the convoy transporting him and McGarrett was attacked. In the pilot, McGarrett personally shoots Victor Hesse, but his body is never found. Victor later resurfaces and is eventually caught and sent to prison, where he is killed by Wo Fat, who was posing as a corrections officer. Gabriel Waincroft Gabriel Waincroft (Christopher Sean) is the younger brother of Malia Waincroft, the late wife of Chin Ho Kelly. As an adolescent he repeatedly got into trouble with the law and Malia went to Chin, then her boyfriend, for help in setting him straight. Gabriel was initiated into a gang after killing Chin's father and rose through the ranks of the (fictional) Culiacan cartel based in Sinaloa, Mexico. With Wo Fat's death, Waincroft became the team's primary antagonist. He has been described as daring and ambitious, particularly after he had his men attempt to assassinate the heads of three different criminal syndicates during a high-profile boxing match in the episode "Ka Makau kaa kaua / The Sweet Science" (the attempt was discreetly foiled by Five-0). In "Pa'a Ka 'ipuka I Ka 'Upena Nananana", he was badly wounded and died in surgery from cardiac arrest. His daughter Sarah is currently being cared for by Chin as her legal guardian. Michelle Shioma Michelle Shioma (Michelle Krusiec) is the daughter of the late Goro Shioma and his successor as head of the Hawaiian Yakuza. Introduced in season 6, she is seeking revenge after Gabriel Waincroft has her father assassinated in an attempt to gain control of the criminal underworld in Hawaii. She is married with two daughters. It is mentioned in season 8 that she had been murdered in prison. Sang Min Soo Sang Min Soo (Will Yun Lee) is an informant and former snakehead who was associated with Wo Fat and imprisoned on the mainland for his safety after he testified against his former associates. He chooses to be transferred to Halawa to be nearer to his young son. His appearances are often met with annoyance by the Five-0 team as he would mock McGarrett and Chin, try to rile Danny up or crack inappropriate jokes at Kono (he calls her "Spicy Hot"). The team generally tolerates him as he has provided valuable and much-needed intel on several occasions. Toast Charles Adam "Toast" Charles (Martin Starr) is one of Danny Williams' confidential informants (CIs). He is a highly skilled hacker who sometimes helps the Five-0 team with cases related to the cyber-crime underworld that require a more underhanded approach. He was killed in the Season 8 episode "E uhi wale no 'a'ole e nalo, he imu puhi" / "No Matter How Much One Covers a Steaming Imu, The Smoke Will Rise" after his identity as a CI was leaked. August March August March (Ed Asner) is a renowned diamond smuggler who was jailed for over three decades and, according to Captain Fryer in the season 2 episode "Kalele / Faith", had been out of prison for six months. He committed suicide in Season 3.
Ed Asner portrayed the character in the original series and was one of several actors who returned to the 2010 reboot. JC Dekker Jason "JC" Dekker (Xzibit, credited by his real name Alvin Joiner) is a former gang leader and small-time arms dealer who was re-arrested for violating his parole. The Five-0 Task Force crossed paths with him while investigating the murder of an undercover ATF agent in the episode "Ua Nalohia / In Deep". Detective Kaleo Detective Kaleo (Jason Scott Lee) is a corrupt former Honolulu Police Department cop who was imprisoned for his involvement in the murder of fellow cop Detective Meka Hanamoa, who was the partner of Detective Danny Williams when the latter first joined the HPD. He was initially housed in protective custody due to his former occupation but was sent to "general population" after he aided his brother Daryl in the trafficking of conflict diamonds, indirectly causing McGarrett's sister to be coerced into becoming a mule. His brother is killed in Season 2, and Kaleo is finally killed in Season 3 by Chin Ho Kelly after Kaleo tries to kill him. Frank Delano Frank Delano (William Baldwin) is a corrupt former HPD homicide detective who founded his own criminal syndicate composed exclusively of former cops. After Kono was "fired", Delano recruited her into the syndicate. Unbeknownst to him, it was part of Captain Fryer's plot to arrest Delano and his associates. Delano was killed by Chin after Five-0 cornered him and his crew in the season 3 premiere. His brother Paul (portrayed by William Baldwin's real-life brother Daniel) later seeks revenge by drugging and kidnapping Chin. Gerard Hirsch Gerard Hirsch (Willie Garson) is a known conman who did time in Halawa after being arrested in the episode "Uaʻaihue" for stealing a Van Gogh painting and for forgery. The team encounters him again in season 6 during an investigation into a case involving a dead counterfeiter and his stash of counterfeit hundred-dollar bills found on several murder victims. Hirsch currently runs his own crime scene clean-up company as part of his plan to "reform" and live an honest life. His expertise as a highly skilled former forger and his extensive knowledge of art history have come in handy at times. Jason Duclair Jason Duclair (Randy Couture) is a serial arsonist and murderer who was notorious for picking married couples and setting their houses on fire, after ensuring that their doors, windows, and any exits were locked. He was wanted for a series of deaths by arson throughout California but went into hiding for a period before resurfacing in Hawaii. He is imprisoned in the maximum security section of Halawa. He committed suicide in the Season 8 premiere. Greer Greer was a CIA agent who had previously met Steve McGarrett after he got his Trident, but before he started dating Catherine Rollins. They spent two days of R&R in a fleabag hotel in Marrakesh before Steve was deployed and left her a note. She is later revealed to be a mole inside the CIA; her treason causes the deaths of Joe White and many of Steve's SEAL friends. Greer was finally killed by Catherine Rollins. Aaron Wright Aaron Wright is Ian's brother. Ian Wright Ian Wright was a hacker who was originally arrested by a reluctantly paired McGarrett and Grover for unpaid parking tickets, only for Wright to be kidnapped by a group of bank robbers. It is later revealed that Wright was the mastermind behind the robberies, and he escapes before he can be arrested.
He later reappears, kidnapping Samantha Grover to blackmail Grover into providing a group of mercenaries with SWAT gear to commit a robbery of $100 million. He is unable to get his hands on the money as he is killed by Wo Fat, who had escaped from his supermax prison and released Samantha to send McGarrett a message. Daiyu Mei Daiyu Mei is Wo Fat's widow and an arms dealer who seeks revenge against McGarrett for her husband's death. She is eventually killed by McGarrett in the series finale. References Fictional characters from Hawaii Hawaii Five-O characters Lists of American drama television series characters Lists of action television characters Hawaii-related lists
47681906
https://en.wikipedia.org/wiki/Illusive%20Networks
Illusive Networks
Illusive Networks is a cybersecurity firm headquartered in Tel Aviv, Israel, and New York. The company produces technology that stops cyber attackers from moving laterally inside networks by finding and eliminating errant credentials and connections, planting deceptive information about a given network's resources, emulating devices, and deploying high-interactivity decoys. Network administrators are alerted when cyber attackers use security deceptions in an attempt to exploit the network. Illusive Networks is the first company launched by the Tel Aviv-based incubator Team8. In June 2015, Illusive Networks received $5 million in Series A funding from Team8. To date, it has raised over $54M. History Illusive Networks was founded in 2014 by Team8 and Ofer Israeli. In June 2015, Illusive received $5 million in Series A funding from the cybersecurity incubator Team8. Team8 is funded by a group of investors, including Google Chairman Eric Schmidt's venture capital fund, Innovation Endeavors, Alcatel-Lucent, Cisco Systems, Marker LLC, Bessemer Venture Partners, and others. While in Israel at the 5th Annual International Cybersecurity Conference in 2015, Schmidt paid a visit to Illusive Networks' headquarters on June 9 during the company's official launch. The company was named one of Gartner's Cool Vendors in Security and Intelligence for 2015. After receiving $5 million in Series A funding back in June 2015, Illusive Networks announced on October 20, 2015, a $22 million Series B round from New Enterprise Associates. In 2018, Illusive expanded its product offering beyond deception through the Attack Surface Manager solution, which continuously analyzes and removes unnecessary credentials and pathways that allow attackers to escalate privileges and move laterally. In that same year, Illusive also unveiled Attack Intelligence System, which delivers human-readable, on-demand telemetry about current attacker activities to speed investigation and remediation. Along with the deception solution Attack Detection System, these three solutions make up the Active Defense Suite that seeks to paralyze attackers and eradicate in-network threats. In 2020, Illusive raised another round of funding from new investors alongside existing investors, including Spring Lake Equity Partners, Marker, New Enterprise Associates, Bessemer Venture Partners, Innovation Endeavors, Cisco, Microsoft, and Citi. Technology The agentless technology produced by Illusive Networks is designed to stop cyber attackers from moving laterally inside networks, to provide early detection of cyber attackers who have penetrated a given network, and to provide forensics to threat intelligence teams and incident responders. The software blocks intruders from advancing their attacks by eliminating credentials and connections left behind by normal business processes, by providing false and misleading information that appears alongside real, valuable information, and by deploying devices and decoys to attract and distract attackers. The software is designed to thwart attacks and advanced persistent threats. Instead of targeting just malware, the Illusive software targets actual human beings (cyber attackers) who must make decisions at each step in the process in order to advance further into a network. If the attackers use the deceptive lures during the attack, network administrators will be alerted and given the option to shut the attack down immediately or observe the hacker while accruing real-time breach forensics.
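To make the deception approach concrete, the following is a minimal, hypothetical Python sketch of a honeytoken-style decoy: it plants fake credentials on an endpoint and raises an alert if one of them ever appears in an authentication log. The account names, file names, and log format are invented for illustration; this is a sketch of the general technique only, not Illusive Networks' actual product or APIs.

import json
import time

# Decoy identities planted on an endpoint. Legitimate users never touch them,
# so any authentication attempt that mentions one is treated as an intrusion signal.
DECOY_ACCOUNTS = {"svc_backup_admin", "fileserver02_admin"}   # invented names

def plant_decoys(path="decoy_creds.json"):
    """Write decoy credentials where a credential-harvesting attacker would look."""
    fake = {name: "P@ss-" + name[::-1] for name in DECOY_ACCOUNTS}
    with open(path, "w") as f:
        json.dump(fake, f, indent=2)

def watch_auth_log(logfile="auth.log", poll_seconds=1.0):
    """Tail an authentication log and alert the first time a decoy account is used."""
    with open(logfile, "r") as f:
        f.seek(0, 2)                      # start at the end of the log; only new events matter
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)  # wait for more log data
                continue
            hits = [name for name in DECOY_ACCOUNTS if name in line]
            if hits:
                print(f"ALERT: decoy identity {hits[0]!r} used; possible lateral movement")
                return line               # hand the raw event to responders for forensics

if __name__ == "__main__":
    plant_decoys()
    # watch_auth_log()   # would block, tailing the (hypothetical) auth.log file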
The software has more than 50,000 users at Fortune 500 companies, healthcare companies, insurance companies, legal firms, and others. References External links Official Website Companies based in Tel Aviv Software companies established in 2014 Information technology companies of Israel Israeli companies established in 2014
20609862
https://en.wikipedia.org/wiki/AACS%20encryption%20key%20controversy
AACS encryption key controversy
A controversy surrounding the AACS cryptographic key arose in April 2007 when the Motion Picture Association of America and the Advanced Access Content System Licensing Administrator, LLC (AACS LA) began issuing cease and desist letters to websites publishing a 128-bit (16-byte) number, represented in hexadecimal as 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0 (commonly referred to as 09 F9), a cryptographic key for HD DVDs and Blu-ray Discs. The letters demanded the immediate removal of the key and any links to it, citing the anti-circumvention provisions of the United States Digital Millennium Copyright Act (DMCA). In response to widespread Internet postings of the key, the AACS LA issued various press statements, praising those websites that complied with their requests for acting in a "responsible manner" and warning that "legal and technical tools" were adapting to the situation. The controversy was further escalated in early May 2007, when aggregate news site Digg received a DMCA cease and desist notice and then removed numerous articles on the matter and banned users reposting the information. This sparked what some describe as a digital revolt or "cyber-riot", in which users posted and spread the key on Digg, and throughout the Internet en masse, thereby leading to a Streisand effect. The AACS LA described this situation as an "interesting new twist". Background Hexadecimal is a base-16 numeral system used in the fields of computer programming and mathematics. The key is an ordinary number most widely known by its hexadecimal representation; in decimal notation, when interpreted as an integer, it is 13,256,278,887,989,457,651,018,865,901,401,704,640. Because the encryption key may be used as part of circumvention technology forbidden by the DMCA, its possession and distribution have been viewed as illegal by the AACS, as well as by some legal professionals. Since it is a 128-bit numerical value, it was dubbed an illegal number. Opponents of the expansion of the scope of copyright criticize the idea of making a particular number illegal. Commercial HD DVDs and Blu-ray discs integrate copy protection technology specified by the AACS LA. There are several interlocking encryption mechanisms, such that cracking one part of the system does not necessarily crack other parts. Therefore, the "09 F9" key is only one of many parts that are needed to play a disc on an unlicensed player. AACS can be used to revoke the key of a specific playback device after it is known to have been compromised, as was done for WinDVD. The compromised players can still be used to view old discs, but not newer releases without encryption keys for the compromised players. If other players are then cracked, further revocation would lead to legitimate users of compromised players being forced to upgrade or replace their player software or firmware in order to view new discs. Each playback device comes with a binary tree of secret device and processing keys. The processing key in this tree, a requirement to play the AACS encrypted discs, is selected based on the device key and the information on the disc to be played. As such, a processing key such as the "09 F9" key is not revoked, but newly produced discs cause the playback devices to select a different valid processing key to decrypt the discs. Timeline of AACS cracking 2006 On December 26, 2006, a person using the alias muslix64 published a utility named BackupHDDVD and its source code on the DVD decryption forum at the website Doom9.
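As the background paragraph above notes, the contested key is simply a 128-bit integer. A minimal Python sketch of the hexadecimal-to-decimal conversion described there (the spacing of the published hex string is stripped before parsing):

key_hex = "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0".replace(" ", "")
key_int = int(key_hex, 16)     # parse the base-16 digits as one integer
print(key_int)                 # 13256278887989457651018865901401704640
print(len(key_hex) * 4)        # 128 (bits: 32 hex digits, 16 bytes)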
BackupHDDVD can be used to decrypt AACS protected content once one knows the encryption key. muslix64 claimed to have found title and volume keys in main memory while playing HD DVDs using a software player, and that finding them was not difficult. 2007 On January 1, 2007, muslix64 published a new version of the program, with volume key support. On January 12, 2007, other forum members detailed how to find other title and volume keys, stating they had also found the keys of several movies in RAM while running WinDVD. On or about January 13, a title key was posted on pastebin.com in the form of a riddle, which was solved by entering terms into the Google search engine. By converting these results to hexadecimal, a correct key could be formed. Later that day, the first cracked HD DVD, Serenity, was uploaded on a private torrent tracker. The AACS LA confirmed on January 26 that the title keys on certain HD DVDs had been published without authorization. Doom9.org forum user arnezami found and published the "09 F9" AACS processing key on February 11; this key is not specific to any playback device or DVD title. Doom9.org forum user jx6bpm claimed on March 4 to have revealed the key of CyberLink's PowerDVD, and that it was the key in use by AnyDVD. The AACS LA announced on April 16 that it had revoked the decryption keys associated with certain software high-definition DVD players, which would not be able to decrypt AACS encrypted discs mastered after April 23 without an update to the software. On May 17, one week before any discs with the updated processing key had reached retail, it was reported that the new keys had been retrieved from a preview disc of The Matrix Trilogy. On May 23, the key was posted on Edward Felten's Freedom to Tinker Blog and confirmed a week later by arnezami on Doom9 as the new processing key (MKB v3). DMCA notices and Digg As early as April 17, 2007, AACS LA had issued DMCA violation notices, sent by Charles S. Sims of Proskauer Rose. Following this, dozens of notices were sent to various websites hosted in the United States. On May 1, 2007, in response to a DMCA demand letter, technology news site Digg began closing accounts and removing posts containing or alluding to the key. The Digg community reacted by creating a flood of posts containing the key, many disguising it in creative ways: inserting the number directly or indirectly in songs or images (either representing the digits pictorially or representing bytes of the key as colors), or printing it on merchandise. At one point, Digg's "entire homepage was covered with links to the HD-DVD code or anti-Digg references." Eventually the Digg administrators reversed their position, with founder Kevin Rose announcing that Digg would no longer remove stories containing the key. Legal opinions Lawyers and other representatives of the entertainment industry, including Michael Ayers, an attorney for Toshiba Corporation, expressed surprise at Digg's decision, but suggested that a suit aimed at Digg might merely spread the information more widely.
The American Bar Association's eReport published a discussion of the controversy, in which Eric Goldman at Santa Clara University's High Tech Law Institute noted that the illegality of putting the code up is questionable (that Section 230 of the Communications Decency Act may protect the provider when the material itself is not copyrighted), although continuing to allow posting of the key may be "risky", and entertainment lawyer Carole Handler noted that even if the material is illegal, laws such as the DMCA may prove ineffective in a practical sense. Impact In a response to the events occurring on Digg and the call to "Spread this number", the key was rapidly posted to thousands of pages, blogs and wikis across the Internet. The reaction was an example of the Streisand effect. Intellectual property lawyer Douglas J. Sorocco noted, "People are getting creative. It shows the futility of trying to stop this. Once the information is out there, cease-and-desist letters are going to infuriate this community more." Outside the Internet and the mass media, the key has appeared in or on T-shirts, poetry, songs and music videos, illustrations and other graphic artworks, tattoos and body art, and comic strips. On Tuesday afternoon, May 1, 2007, a Google search for the key returned 9,410 results, while the same search the next morning returned nearly 300,000 results. On Friday, the BBC reported that a search on Google shows almost 700,000 pages have published the key, despite the fact that on April 17, the AACS LA sent a DMCA notice to Google, demanding that Google stop returning any results for searches for the key. Widespread news coverage included speculation on the development of user-driven websites, the legal liability of running a user-driven website, the perception of acceptance of DRM, the failure as a business model of "secrecy based businesses ... in every aspect" in the Internet era, and the harm an industry can cause itself with harshly-perceived legal action. In an opposing move, Carter Wood of the National Association of Manufacturers said they had removed the "Digg It"-link from their weblog. Media coverage initially avoided quoting the key itself. However, several US-based news sources have run stories containing the key, quoting its use on Digg, though none are known to have received DMCA notices as a result. Later reports have discussed this, quoting the key. Current TV broadcast the key during a Google Current story on the Digg incident on May 3, 2007, displaying it in full on screen for several seconds and placing the story on the station website. Wikipedia, on May 1, 2007, locked out the page named for the number "to prevent the former secret from being posted again. The page on HD DVD was locked, too, to keep out The Number." This action was later reversed. No one has been arrested or charged for finding or publishing the original key. AACS LA reaction On May 7, 2007, the AACS LA announced on its website that it had "requested the removal solely of illegal circumvention tools, including encryption keys, from a number of web sites", and that it had "not requested the removal or deletion of any ... discussion or commentary". The statement continued, "AACS LA is encouraged by the cooperation it has received thus far from the numerous web sites that have chosen to address their legal obligations in a responsible manner." 
BBC News had earlier quoted an AACS executive saying that bloggers "crossed the line", that AACS was looking at "legal and technical tools" to confront those who published the key, and that the events involving Digg were an "interesting new twist". See also DVD Copy Control Association DeCSS FCKGW (Microsoft Windows) PlayStation 3 private key compromised HDCP master key release Texas Instruments signing key controversy Security through obscurity Streisand effect References External links Doom9's Forum, original focus of the controversy 09 f9: A Legal Primer — Electronic Frontier Foundation (EFF) Original images posted Some of the images that accompanied the Digg articles on the front page from the day of the user revolt. Advanced Access Content System Compact Disc and DVD copy protection History of cryptography Internet memes Key management Motion Picture Association Digital Millennium Copyright Act takedown incidents Cryptography law
56880776
https://en.wikipedia.org/wiki/Marienbad%20%28video%20game%29
Marienbad (video game)
Marienbad was a 1962 Polish puzzle mainframe game created by Elwro engineer Witold Podgórski in Wrocław, Poland, for the company's Odra 1003. It was an adaptation of the logic game nim. Inspired by the discussion in the magazine Przekrój of a variant of nim in the 1961 film Last Year at Marienbad (L'Année dernière à Marienbad), named "Marienbad" by the magazine, Podgórski programmed the game for the in-development 1003 mainframe, released in 1963. The game had players opposing the computer in alternating rounds of removing matches from a set, with the last player to take a match being the loser. As the computer always played the optimal moves, it was essentially unbeatable. Like many games in the early history of video games, Marienbad did not spread far beyond the initial location. Elwro did not produce or advertise the game, though Podgórski recreated it at the Wojskowa Akademia Techniczna (Military University of Technology in Warsaw). The game fell into obscurity, with no pictures or documentation surviving to recreate it in its original form; as there is only one known Odra 1003 remaining and no way of recreating the game exists, it is considered lost. Despite its simplicity, it is considered possibly the first Polish computer or video game. Gameplay In nim, players take turns removing at least one object from a set of objects, traditionally matchsticks, with the goal of either being or not being the player who removes the last object. The gameplay options can be modeled mathematically. In Marienbad's default game mode, four rows of matches were generated, with either one, three, five or seven matches within each row. The side that was left with the last match lost. The computer printout showed the player the current layout of matches. Only one player could play the game at a time, alternating turns with the computer. Regardless of which side started the game, the computer was almost certain to be the winner, as it always made perfect moves. On its maximum settings, the game consisted of 8,000 rows containing up to 1 trillion matches, requiring an hour for the computer to choose its next move. The game did not support video output, as the Odra 1003 did not have a screen. Instead, the game was played via a teletypewriter and card perforator, on which the machine printed the results. Development Elwro was a Polish company established in 1959 and based in Wrocław, Poland, that designed and manufactured mainframes and microcomputers. Its first release was the Odra 1001 mainframe in 1960, followed by the Odra 1002 in 1962 and the Odra 1003 in 1963. Witold Podgórski was a recent graduate of the Wrocław University of Technology, having majored in electronic engineering and specialized in mathematical machines. Having first heard about the existence of "electronic brains" in 1955 in high school, he embarked on five years of study toward the automation of "digital machines". He completed his master's thesis at Elwro, where he was employed on 10 October 1961, working with the Odra 1001. He developed a way to manipulate the computer's memory to allow content to be saved, thus paving the way for the Odra 1003. The new machine used modern transistor switches that allowed for 500 operations per second, and the Odra 1003's program drum memory had a capacity of 40 kilobytes. While working for Elwro in 1961, Podgórski discovered the logic game nim in an issue of the weekly magazine Przekrój.
The magazine described a two-player game where players remove any number of items from one of four rows, with the player holding the last item losing. Przekrój named this variant of nim "Marienbad" after the 1961 French film Last Year at Marienbad (L'Année dernière à Marienbad), in which characters frequently play these mathematical duels. Podgórski was inspired by the use of the game nim in the film after reading about its principles. While sitting in a multi-hour lecture for his obligatory military study class, Podgórski decided to decipher the Marienbad algorithm and translate it into a binary form that the computer could understand. Podgórski would later assert that the algorithm was extremely simple to implement on a computer and could be expressed in two words, adding that if nobody was interested in it he would take it to the grave. He programmed the game for the first prototype of the Odra 1003, then in development and scheduled to be installed in the Board of the Topographic General Staff of the Polish Army in Wrocław. The game, intended for logical duels, was developed solely by Podgórski by creating the algorithm, writing a list of instructions which were printed on sheets the size of postcards, and manually setting the initial state of each memory cell. He designed the game to be unbeatable: if the human player made a single mistake, the computer would win. While employees in the Elwro factory knew that the result of the game was predetermined, many volunteered to play it. They could not beat the standard 16-match version of Marienbad, let alone higher settings of the game.
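The strategy Podgórski encoded is the classical one for misère nim, the variant described above in which the player who takes the last match loses: play the normal nim-sum strategy until a move would leave only single-match rows, then leave an odd number of them. The Python sketch below is a reconstruction of that standard algorithm for the rules described in this article; Podgórski's original code is lost, so this is illustrative only.

from functools import reduce
from operator import xor

def misere_nim_move(piles):
    """Return (row_index, new_size) for an optimal move in misère nim,
    where whoever takes the last match loses. Assumes some row is non-empty."""
    big = [i for i, p in enumerate(piles) if p > 1]
    ones = sum(1 for p in piles if p == 1)
    if len(big) <= 1:
        if big:
            # One row of 2+ matches remains: shrink it to 1 or 0 so that an
            # odd number of single-match rows is left for the opponent.
            return (big[0], 1 if ones % 2 == 0 else 0)
        # Only single-match rows remain: take one (winning when their count
        # is even; nothing helps when it is odd).
        return (piles.index(1), 0)
    # Two or more rows of 2+ matches: play as in normal nim by making the
    # xor ("nim-sum") of the row sizes zero.
    nim_sum = reduce(xor, piles)
    if nim_sum == 0:
        # Already lost against perfect play; make any legal move.
        i = next(i for i, p in enumerate(piles) if p > 0)
        return (i, piles[i] - 1)
    for i, p in enumerate(piles):
        if (p ^ nim_sum) < p:
            return (i, p ^ nim_sum)

# Example: from the default 1-3-5-7 layout, the human takes the whole 7-match
# row; the computer answers by leaving 2 matches in the 5-match row,
# restoring a zero nim-sum.
print(misere_nim_move([1, 3, 5, 0]))   # -> (2, 2)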
A television presenter who would later become the two-decade-long host of Wheel of Fortune proposed to Polish television in the 1970s a game show in which players would compete against a computer in nim on a Momik 8b minicomputer. Meanwhile, Podgórski continued to work for Elwro, co-creating Odra computers, though he did not stop working on games. After colleagues brought a variant of the Mancala board game from Egypt, he was inspired to adapt it for the computer. This time he did not want to create an unbeatable game, but included adjustable difficulty levels to create a fun experience regardless of the player's expertise. Writing in Polish Bytes (Bajty Polskie), Bartłomiej Kluska asserts that as the only copy of the Odra 1003 is kept at the Museum of Technology in Warsaw in an inactive state, the original game of Marienbad no longer exists outside of the memory of players. No recreations of the game have been made for more modern computers, and there are no known photos or documentation for how the game was played or created. In the research paper Gry komputerowe jako dziedzictwo kulturowe (Computer games as a cultural heritage), Maria Garda notes that the game should be written about only in the past tense as the original elements have not been preserved, adding that while the Marienbad algorithm can be recreated in a new programming environment, reconstruction of the physical transistors of the original computer would be nearly impossible. The paper further compared the challenges of playing this title in the modern era to playing Super Mario Bros. on the originally intended equipment in 50 years' time. Marienbad is considered one of the first Polish video games, developed more than 20 years before later candidates OiX (1984), Gąsienica (1985), and Puszka Pandory (1986), and well before the first well-known Polish game, Tajemnica Statuetki (1993). Łódź game historian Bartek Kluska made this assertion in his foundational book on the Polish video gaming industry, Polish Bytes; his research showed it to be 24 years older than the game that had previously held the title. Kluska notes, however, that it was preceded by "Kółko i krzyżyk", a version of tic-tac-toe written by Department of Mathematical Apparatus programmer Bogdan Miś in 1960 for the XYZ computer, using a chess-sized grid. Michał Nowicki of Gram.pl asserted that Kluska's research, in contrast with the previous and more imprecise Polish video gaming text Dawno temu w grach (Once upon a time in games), allows Marienbad's claim as the first Polish computer or video game to be made with almost 100% certainty. An article by retailer Empik further asserts that Marienbad's simplicity and lack of video output mean that the term "computer game" is a somewhat exaggerated term to describe it, much less "video game". Garda's paper claims, however, that regardless of its simplicity the game has importance as one of the earliest computer or video games from the region, although Marcin Kosman of Gamezilla notes that this first attempt at creating a Polish computer game went largely unnoticed. Jacek Głowacki of Gry Online stated that despite its obscurity, it should be considered the ancestor of the modern Polish video gaming industry, and that its existence and creation are worth remembering. The 2018 Ars Independent Festival held an exhibition entitled "From Marienbad to Novigrad" which explored the history of the Polish video gaming industry from Marienbad to The Witcher 3 (2015).
See also Nimrod, a 1951 mainframe computer built to play nim Early history of video games Early mainframe games References Further reading Początek – Historia Polskich Gier #1 (History of Polish Games # 1) by Polskie Gry Wideo 1962 video games Lost works Mainframe games Puzzle video games Video games developed in Poland
15817099
https://en.wikipedia.org/wiki/VAW-125
VAW-125
Airborne Command & Control Squadron 125 (VAW-125), known as the "Torch Bearers" or "Tigertails", was established on 1 October 1968, at Naval Air Station Norfolk. The squadron's initial supporting command was Carrier Air Wing Three (CVW-3) deploying aboard . The squadron is equipped with the E-2 Hawkeye. It was the first East Coast squadron with E-2Bs in 1968, among the first to operate the E-2C in 1975, receiving the E-2C 2000 in its first operational year in 2003, and the first unit to operate the E-2D Advanced Hawkeye in 2014. Squadron History 1970s In December 1976, Vice Admiral Howard E. Greer, COMNAVAIRLANT, presented VAW-125 with the COMNAVAIRLANT Battle "E" for readiness, the CINCLANTFLT "Golden Anchor" Award for career retention, and the CNO Safety "S" Award. VAW-125 is believed to be the first Navy unit to win all three awards in the same year. On 14 January 1978, the squadron suffered the loss of an aircraft (BuNo 159107) and the deaths of three aviators. In June, VAW-125 took the E-2C's newest weapons system upgrade, the Advanced Radar Processing System (ARPS), to sea for the first time. The squadron was then assigned to Carrier Air Wing Seventeen (CVW-17) in November 1979, making its eighth Mediterranean Sea deployment, this time aboard . 1980s While deployed in August 1981, VAW-125 participated in Freedom of Navigation (FON) Operations & Open Ocean Missile (OOM) Exercise in the Central Mediterranean Sea and Gulf of Sidra, during which two Libyan Su-22s were destroyed after attacking Battle Group aircraft. Upon return from deployment, the squadron provided range control services for the second launch of the NASA space shuttle, STS-2, and detection and monitoring services in the first E-2C Counter-Drug tasking, Operation Thunderbolt. While on a routine deployment in October 1985, the squadron assisted in the successful intercept of the Egyptian airliner carrying the hijackers of the Italian cruise ship, MS Achille Lauro. Squadron aircrew spoke directly to the hijackers, convincing them that the communications were coming from the two VF-103 and VF-74 F-14s on their wing and persuading the airliner to divert into NAS Sigonella, Sicily. From January to March 1986, the squadron participated in "Freedom of Navigation" operations off the coast of Libya, which escalated with the Action in the Gulf of Sidra in March. In August 1988, the squadron deployed aboard for an "Around the Horn" cruise to San Diego, California. 1990s In August 1990, CVW-17 aboard USS Saratoga responded to the invasion of Kuwait by deploying to the Red Sea. VAW-125 and VAW-126 E-2Cs flew around the clock as the force build-up of Operation Desert Shield continued. VAW-125 flew over 890 combat hours controlling strikes on Iraqi targets while providing AEW coverage for the Red Sea Battle Group. On a 17 January 1991 strike, squadron aircrew detected two Iraqi MiG-21s threatening the strike group. Controllers vectored two VFA-81 F/A-18s toward the MiGs; the Hornets recorded the only Navy fixed-wing air-to-air kills of Operation Desert Storm. In January 1994, the squadron deployed aboard USS Saratoga for her final cruise. During the deployment, the squadron joined NATO forces flying in support of Operations Deny Flight and Provide Promise. The squadron conducted operational tests of the Navy's newest Mini-DAMA Satellite Communication Suite; using this new system, the squadron functioned for the first time as an Airborne Battlefield Command and Control Center (ABCCC).
With the decommissioning of USS Saratoga, VAW-125 and CVW-17 were deployed aboard . After completing a two-month Counter-Drug assignment at NS Roosevelt Roads, the squadron deployed to the Mediterranean Sea aboard USS Enterprise in June 1996. In July, the squadron again joined NATO forces in the former Yugoslavia, this time in support of Operation Joint Endeavor. In September, USS Enterprise moved to respond to mounting tensions in Southwest Asia, supporting Operation Southern Watch over the next three months. Squadron pilots earned the CVW-17 "Top Hook" Award for carrier landing performance, and the squadron was recognized for its achievements in 1996, being awarded the COMNAVAIRLANT Battle Efficiency Award, the CNO Safety Award, and the VAW community's AEW Excellence Award. 2000s Within hours of the September 11 attacks, squadron personnel were embarked at sea, leaving NS Norfolk to deploy on to support Operation Noble Eagle. Squadron aircraft flew numerous command and control missions in the New York City vicinity in the days following the attacks as commercial air traffic slowly resumed. During this cruise, the squadron surpassed a 32-year Class "A" mishap-free milestone with over 64,000 flight hours. In April 2003, the squadron became the first East Coast squadron to transition to the E-2C Hawkeye 2000, which boasted improved electrical and vapor cycle systems, mission computer and display stations, and Cooperative Engagement Capability (CEC). The squadron participated in the Operational Evaluation of the AN/USG-3 airborne node of the Navy's net-centric CEC sensor fusion system. While deployed aboard in the Central Persian Gulf, North Arabian Sea, and Western Indian Ocean, VAW-125 played roles in the US War on Terror, Operation Enduring Freedom, and operations off the coast of Somalia. 2010s In January 2010, the squadron was deployed to Naval Station Guantanamo Bay, Cuba, in support of Operation Unified Response, providing humanitarian assistance following the 2010 Haiti earthquake. The squadron flew missions to provide communications relay, command and control, and general airborne radar services allowing forces afloat and ashore to distribute thousands of tons of rations, water, and medical supplies. The squadron then joined on its trip around South America as it returned from Norfolk to its home port in San Diego. In March 2015, the squadron departed with to the Middle East as part of the first deployment of Naval Integrated Fire Control-Counter Air (NIFC-CA) Carrier Strike Group. On 2 February 2017, VAW-125 arrived at Marine Corps Air Station Iwakuni, Japan. It replaced VAW-115 in Carrier Air Wing Five aboard the aircraft carrier . The squadron made its first deployment aboard Ronald Reagan from 16 May to 9 August 2017. 2020s Deployments and awards Shipborne deployments and assignments Awards VAW-125 has been presented with the following unit awards and campaign medals: See also History of the United States Navy List of United States Navy aircraft squadrons References External links US Navy Patrol Squadrons Official VAW-125 Website Globalsecurity.org – VAW-125 Early warning squadrons of the United States Navy
61002621
https://en.wikipedia.org/wiki/Elden%20Ring
Elden Ring
Elden Ring is an action role-playing game developed by FromSoftware and published by Bandai Namco Entertainment. The game was made in collaboration with fantasy novelist George R. R. Martin, who provided material for the game's setting. It was released for Microsoft Windows, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S on February 25, 2022. Elden Ring received critical acclaim, and is among the highest-reviewed games of 2022. Praise was directed at its open world design and gameplay, which were considered an improvement on the Souls formula, though the PC version received minor criticism for technical issues. Premise Elden Ring takes place in the realm of the Lands Between, sometime after the destruction of the titular Elden Ring and the scattering of its shards, the Great Runes. Once graced by the Ring and the Erdtree which symbolizes its presence, the realm is now ruled over by the demigod offspring of Queen Marika the Eternal, each possessing a shard of the Ring that corrupts and taints them with power. As Tarnished, exiles from the Lands Between who lost the Ring's grace and are summoned back after the Shattering, players must traverse the realm to ultimately find all the Great Runes, restore the Elden Ring, and become the Elden Lord. Gameplay Elden Ring is an action role-playing game played from a third-person perspective with gameplay focusing on combat and exploration; it features elements similar to those found in other games developed by FromSoftware, such as the Souls series, Bloodborne, and Sekiro: Shadows Die Twice. Director Hidetaka Miyazaki explained that players start with a linear opening but eventually progress to freely explore the Lands Between, including its six main areas, as well as castles, fortresses, and catacombs scattered throughout the open world map. These main areas are interconnected through a central hub that players can access later in the game's progression—similar to Firelink Shrine from Dark Souls—and are explorable using the character's mount as the main mode of transport, although a fast travel system is an available option. Throughout the game, players encounter non-player characters (NPCs) and enemies alike, including the demigods who rule each main area and serve as the game's main bosses. Combat in Elden Ring relies heavily on character-building elements found in previous Souls games and related intellectual properties, such as calculated, close-range melee combat using skills and magic abilities, as well as blocking and dodging mechanics. Elden Ring introduces mounted combat and a stealth system, the latter being a core gameplay element from Sekiro; these features are intended to encourage players to strategize their combat approach for each unique enemy they encounter. The game brings back the player-character stamina bar, which was absent from Sekiro, although its overall influence over combat is reduced compared to previous FromSoftware games that utilized it. Unlike in Sekiro, resurrection mechanics after in-game death are not available; however, some elements were added to ensure players' progression within the game. Miyazaki stated that the customization in Elden Ring would be richer, as players are able to discover different skills through their exploration of the map instead of unlocking skill trees as in Sekiro, in contrast to the pre-fixed weapon skills of FromSoftware's previous games.
These skills are interchangeable with a large variety of weapons which, alongside equipment, magic abilities, and items players can craft using materials found within the world, can be used to customize the player character. The game also features summoning mechanics, where players can summon a large variety of collectible spirits hidden throughout the game's world map, including previously defeated enemies, as allies to assist them in battle. Similar to the Souls series, the game's multiplayer allows other players to be summoned for cooperative play. Development and release Elden Ring was developed by FromSoftware and published by Bandai Namco Entertainment. It was announced at E3 2019 originally for Microsoft Windows, PlayStation 4, and Xbox One. No further information was revealed until a June 2021 trailer announced a release date of January 21, 2022, with additional releases on PlayStation 5 and Xbox Series X/S. In October 2021, it was announced that the game would be delayed to February 25, 2022. Elden Ring is directed by Hidetaka Miyazaki with worldbuilding by fantasy novelist George R. R. Martin, best known for his A Song of Ice and Fire novel series. A fan of Martin's work, Miyazaki contacted him with an offer to work together on a project, giving him the creative freedom to write the overarching backstory of the game's universe. Miyazaki used his contributions as the foundation of the game's narrative, comparing the process to that of using a "dungeon master's handbook in a tabletop RPG". Some staff from the Game of Thrones television series adaptation of A Song of Ice and Fire also assisted with the game's development. As with many of Miyazaki's previous games, the story was designed to not be clearly explained as FromSoftware intended for players to interpret it for themselves via flavor text and optional discussions with non-player characters (NPCs). Miyazaki hoped Martin's contributions would produce a more accessible narrative than the studio's previous games. Work began on the game in early 2017 following the release of The Ringed City, a piece of downloadable content for Dark Souls III, and was developed alongside Sekiro. As with games in the Souls series, players have the ability to create their own custom character instead of using a fixed protagonist. Miyazaki also considered Elden Ring to be a more "natural evolution" to the Souls series, featuring an open world with new gameplay mechanics such as horseback riding. Unlike many other open-world games, Elden Ring does not feature populated towns with NPCs, with the world having numerous dungeon-like ruins in their place. Miyazaki cited Fumito Ueda's work such as Shadow of the Colossus (2005) as an influence that "inspired him to create" Elden Ring, while also drawing influence from "the design and the freedom of play" in The Legend of Zelda: Breath of the Wild (2017). Yuka Kitamura, who has composed for many of Miyazaki's previous games, contributed to the game's soundtrack. Reception Elden Ring received "universal acclaim" according to review aggregator Metacritic. The game is among the highest-reviewed of 2022. On Twitch, it drew nearly 900,000 viewers within 24 hours of release, making it the third largest Twitch debut of all time after Lost Ark and Cyberpunk 2077. The Guardian praised the sense of exploration present in the game, writing "[There] is a deep sense of wonder and excitement whenever you happen upon a cave mouth on some out-of-the-way shoreline that leads to a warren of trinkets". 
GamesRadar+ liked the changes Elden Ring made to the soulslike formula, "Bows and arrows no longer suck, which was a nice surprise, weapon upgrades have been simplified a bit... and there's the power to summon helpful NPCs by spending mana". Destructoid felt the story was better told than in previous Souls games: "I enjoyed the effort to pack in more lore via textual ruins, as it’s way easier to piece things together that are actually in the game, and not in an interview or wiki". While criticizing the game's map for not properly conveying height changes, Ars Technica enjoyed the game's horse, Torrent, "Not only does Torrent make traveling long distances faster and less tedious than simply running, but the mount also comes with an incredibly satisfying double jump that adds some welcome light platform challenges and vertical exploration to the proceedings". Eurogamer praised the world's visual and level design, stating, "the Lands Between still feel meticulously hand-crafted; dark corners hide lonely secrets and forgotten tales richly woven into the tapestry of the world itself, through environmental storytelling". Polygon liked how the title gave players more options to fight enemies, but still kept a sense of challenge intact, "Elden Ring’s toolset for growing more capable against daunting odds is deep and impressively varied — but it has not made me a god". IGN praised the lack of objective markers in Elden Ring, but felt that the game should have some sort of quest log or journal to note plot threads, "it becomes very easy to forget about certain plot threads and accidentally leave them unresolved by the end". While enjoying the open world, Game Informer highlighted the level design of the dungeons, "the legacy dungeons are the stuff of legend, often connecting to the outside from multiple points. These curated sprawls are a dream to dissect, and Elden Ring often offers different ways to maneuver through these areas". PCGamesN felt the UI improvements made the game vastly more accessible, "your character sheet is crystal clear about what your strengths and weaknesses are, item descriptions aren’t needlessly obscured with esoteric lore snippets, and tutorial prompts explain every core mechanic in the game". The PC version of the game was criticized for having framerate issues such as "stutters and bizarre slowdowns", with Digital Foundry finding that the port "has a number of issues that will affect all hardware configurations on all graphical settings presets". Bandai Namco promised a future patch that would address these issues. The Steam release received mixed reviews on release day due to these issues, with 60% positive reviews. Sales The PC version drew 734,000 concurrent players on Steam within minutes of its launch, surpassing Dark Souls III and Sekiro to become FromSoftware's largest launch on the service. It topped the Steam weekly sales chart upon launch, with various editions of the game occupying four positions in the top five: the top three spots and number five. Steam Spy estimates the game to have sold more than units on Steam, . In the United Kingdom, Elden Ring debuted at the top of the physical sales chart, with launch week sales surpassing Dark Souls III to make it the largest UK launch for a soulslike title and the third largest physical launch of early 2022 after Pokémon Legends: Arceus and Horizon: Forbidden West.
Awards References Notes External links 2022 video games Action role-playing video games Bandai Namco games Dark fantasy role-playing video games FromSoftware games Open-world video games PlayStation 4 games PlayStation 5 games Soulslike video games Video games developed in Japan Video games directed by Hidetaka Miyazaki Video games featuring protagonists of selectable gender Windows games Works by George R. R. Martin Xbox One X enhanced games Xbox One games Xbox Series X and Series S games
160216
https://en.wikipedia.org/wiki/RC6
RC6
In cryptography, RC6 (Rivest cipher 6) is a symmetric key block cipher derived from RC5. It was designed by Ron Rivest, Matt Robshaw, Ray Sidney, and Yiqun Lisa Yin to meet the requirements of the Advanced Encryption Standard (AES) competition. The algorithm was one of the five finalists, and was also submitted to the NESSIE and CRYPTREC projects. It was a proprietary algorithm, patented by RSA Security. RC6 proper has a block size of 128 bits and supports key sizes of 128, 192, and 256 bits up to 2040 bits, but, like RC5, it may be parameterised to support a wide variety of word-lengths, key sizes, and number of rounds. RC6 is very similar to RC5 in structure, using data-dependent rotations, modular addition, and XOR operations; in fact, RC6 could be viewed as interweaving two parallel RC5 encryption processes, although RC6 does use an extra multiplication operation not present in RC5 in order to make the rotation dependent on every bit in a word, and not just the least significant few bits. Encryption/decryption Note that the key expansion algorithm is practically identical to that of RC5. The only difference is that for RC6, more words are derived from the user-supplied key.

// Encryption/Decryption with RC6-w/r/b
//
// Input:  Plaintext stored in four w-bit input registers A, B, C & D
//         r is the number of rounds
//         w-bit round keys S[0, ... , 2r + 3]
//
// Output: Ciphertext stored in A, B, C, D
//
// Encryption Procedure:

B = B + S[0]
D = D + S[1]
for i = 1 to r do
{
    t = (B * (2B + 1)) <<< lg w
    u = (D * (2D + 1)) <<< lg w
    A = ((A ^ t) <<< u) + S[2i]
    C = ((C ^ u) <<< t) + S[2i + 1]
    (A, B, C, D) = (B, C, D, A)
}
A = A + S[2r + 2]
C = C + S[2r + 3]

// Decryption Procedure:

C = C - S[2r + 3]
A = A - S[2r + 2]
for i = r downto 1 do
{
    (A, B, C, D) = (D, A, B, C)
    u = (D * (2D + 1)) <<< lg w
    t = (B * (2B + 1)) <<< lg w
    C = ((C - S[2i + 1]) >>> t) ^ u
    A = ((A - S[2i]) >>> u) ^ t
}
D = D - S[1]
B = B - S[0]

Possible use in NSA "implants" In August 2016, code reputed to be Equation Group or NSA "implants" for various network security devices was disclosed. The accompanying instructions revealed that some of these programs use RC6 for confidentiality of network communications. Licensing As RC6 was not selected for the AES, it was not guaranteed that RC6 is royalty-free. A web page on the official web site of the designers of RC6, RSA Laboratories, stated the following: "We emphasize that if RC6 is selected for the AES, RSA Security will not require any licensing or royalty payments for products using the algorithm". The emphasis on the word "if" suggests that RSA Security Inc. may have required licensing and royalty payments for any products using the RC6 algorithm. RC6 was a patented encryption algorithm; however, the patents expired between 2015 and 2017. Notes References External links Block ciphers
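For readers who want to experiment, the following is a minimal Python sketch of the encryption procedure above using the common RC6-32/20/b parameters (32-bit words, 20 rounds). It assumes the round keys S[0..2r+3] have already been produced by the RC5-style key schedule the article mentions; it is an illustrative transcription of the pseudocode, not a vetted or constant-time implementation.

W = 32                      # word size in bits (RC6-32/r/b)
MASK = (1 << W) - 1
LG_W = 5                    # log2(W); rotation amounts use only the low 5 bits

def rol(x, n):
    """Rotate a W-bit word left by n bit positions (n taken modulo W)."""
    n &= W - 1
    return ((x << n) | (x >> (W - n))) & MASK

def rc6_encrypt_block(A, B, C, D, S, r=20):
    """Encrypt one block of four W-bit words using round keys S[0 .. 2r+3]."""
    B = (B + S[0]) & MASK
    D = (D + S[1]) & MASK
    for i in range(1, r + 1):
        t = rol((B * (2 * B + 1)) & MASK, LG_W)   # t = (B*(2B+1)) <<< lg w
        u = rol((D * (2 * D + 1)) & MASK, LG_W)   # u = (D*(2D+1)) <<< lg w
        A = (rol(A ^ t, u) + S[2 * i]) & MASK
        C = (rol(C ^ u, t) + S[2 * i + 1]) & MASK
        A, B, C, D = B, C, D, A                   # (A, B, C, D) = (B, C, D, A)
    A = (A + S[2 * r + 2]) & MASK
    C = (C + S[2 * r + 3]) & MASK
    return A, B, C, D

Decryption follows the same pattern in reverse, as the pseudocode shows, undoing each addition, rotation, and XOR with the corresponding round key.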
38275
https://en.wikipedia.org/wiki/Tru64%20UNIX
Tru64 UNIX
Tru64 UNIX is a discontinued 64-bit UNIX operating system for the Alpha instruction set architecture (ISA), currently owned by Hewlett-Packard (HP). Previously, Tru64 UNIX was a product of Compaq, and before that, Digital Equipment Corporation (DEC), where it was known as Digital UNIX (originally DEC OSF/1 AXP). As its original name suggests, Tru64 UNIX is based on the OSF/1 operating system. DEC's previous UNIX product was known as Ultrix and was based on BSD. It is unusual among commercial UNIX implementations, as it is built on top of the Mach kernel developed at Carnegie Mellon University. (Other UNIX and UNIX-like implementations built on top of the Mach kernel are GNU Hurd, NeXTSTEP, MkLinux, macOS and Apple iOS.) Tru64 UNIX required the SRM boot firmware found on Alpha-based computer systems. DEC OSF/1 AXP In 1988, Digital Equipment Corporation (DEC) joined with IBM, Hewlett-Packard, and others to form the Open Software Foundation (OSF). A primary aim was to develop a version of Unix, named OSF/1, to compete with System V Release 4 from AT&T Corporation and Sun Microsystems. After DEC's first release (OSF/1 Release 1.0) in January 1992 for their line of MIPS-based DECstation workstations, DEC ported OSF/1 to their new Alpha AXP platform (as DEC OSF/1 AXP), and this was the first version (Release 1.2) of what is most commonly referred to as OSF/1. DEC OSF/1 AXP Release 1.2 was shipped in March 1993. OSF/1 AXP was a full 64-bit operating system and the native UNIX implementation for the Alpha architecture. From OSF/1 AXP V2.0 onwards, UNIX System V compatibility was also integrated into the system. Digital UNIX In 1995, starting with release 3.2, DEC renamed OSF/1 AXP to Digital UNIX to reflect its conformance with the X/Open Single UNIX Specification. Tru64 UNIX After Compaq's purchase of DEC in early 1998, with the release of version 4.0F, Digital UNIX was renamed to Tru64 UNIX to emphasise its 64-bit-clean nature and de-emphasise the Digital brand. In April 1999, Compaq announced that Tru64 UNIX 5.0 successfully ran on Intel's IA-64 simulator. However, this port was cancelled a few months later. A Chinese version of Tru64 UNIX named COSIX was jointly developed by Compaq and China National Computer Software & Technology Service Corporation (CS&S). It was released in 1999. TruCluster Server From release V5.0 Tru64 UNIX offered a clustering facility named TruCluster Server. TruCluster utilised a cluster-wide filesystem visible to each cluster member, plus member-specific storage and an optional quorum disk. Member-specific file paths were enhanced symbolic links incorporating the member id of the owning member. Each member had one or zero votes, which, combined with a possible quorum disk, implemented a cluster formation algorithm similar to that found in OpenVMS. End of Life With their purchase of Compaq in 2002, HP announced their intention to migrate many of Tru64 UNIX's more innovative features (including its AdvFS, TruCluster, and LSM) to HP-UX. In December 2004, HP announced a change of plan: they would instead use the Veritas File System and abandon the Tru64 advanced features. In the process, many of the remaining Tru64 developers were laid off. The last maintenance release, 5.1B-6, was released in October 2010. In October 2010, HP stated that they would continue to support Tru64 UNIX until 31 December 2012. In 2008, HP contributed the AdvFS filesystem to the open-source community. Versions These versions were released for Alpha AXP platforms.
References External links Tru64 UNIX - HP's official Tru64 UNIX site Tru64 FAQ from UNIXguide.net comp.unix.tru64 - Newsgroup on running, owning and administering Tru64 UNIX (web-accessible via Google Groups) comp.unix.osf.osf1 - Newsgroup on running, owning and administering OSF/1 (web-accessible via Google Groups) HP Tru64 Unix man pages and shell accounts provided by Polarhome Computer-related introductions in 1992 DEC operating systems Mach (kernel) Microkernel-based operating systems Unix variants HP software Compaq Discontinued operating systems
61188838
https://en.wikipedia.org/wiki/L3Harris%20Technologies
L3Harris Technologies
L3Harris Technologies (L3Harris) is an American technology company, defense contractor, and information technology services provider that produces C6ISR systems and products, wireless equipment, tactical radios, avionics and electronic systems, night vision equipment, and both terrestrial and spaceborne antennas for use in the government, defense, and commercial sectors. It specializes in surveillance solutions, microwave weaponry, and electronic warfare. It was formed from the merger of L3 Technologies (formerly L-3 Communications) and Harris Corporation on June 29, 2019, and was expected to be the sixth-largest defense contractor in the United States. History The "Harris Automatic Press Company" was founded by Alfred S. Harris in Niles, Ohio, in 1895. The company spent the next 60 years developing lithographic processes and printing presses before acquiring typesetting company Intertype Corporation. In 1967, it merged with Radiation, Inc. of Melbourne, Florida, a developer of antenna, integrated circuit, and modem technology used in the space race. The company headquarters was moved from Cleveland to Melbourne in 1978. On May 29, 2015, Harris finalized the purchase of competitor Exelis Inc., almost doubling the size of the original company. L-3 Communications was formed in 1997 to acquire certain business units from Lockheed Martin that had previously been part of Loral Corporation. These units had belonged to Lockheed Corporation and Martin Marietta, which had merged in 1995. The company was founded by (and named for) Frank Lanza and Robert LaPenta in partnership with Lehman Brothers. Lanza and LaPenta had both served as executives at Loral and Lockheed. The company continued to expand through mergers and acquisitions to become one of the top ten U.S. government contractors. At the end of 2016, the company changed its name from L-3 Communications Holdings, Inc. to L3 Technologies, Inc. to better reflect the company's wider focus since its founding in 1997. In October 2018, Harris and L3 announced an all-stock "merger of equals". The merger was completed on June 29, 2019, and the new company, L3Harris Technologies, Inc., is based in Melbourne, Florida, where Harris was headquartered. The new company was led by former Harris CEO William M. Brown as the Chairman and CEO, with former L3 CEO Chris Kubasik as the President and COO. On June 29, 2021, Brown turned over the role of CEO to Kubasik, retaining the title of Executive Chair, while Kubasik added the title of Vice Chair. In January 2022, L3Harris reorganized its business structure, eliminating the Aviation Systems business segment and distributing its divisions between the remaining three Integrated Mission Systems, Space & Airborne Systems, and Communications Systems segments. Business organization L3Harris is organized under three business segments: Integrated Mission Systems, Space & Airborne Systems, and Communications Systems. It is led by a 13-member board of directors, including Executive Chair William M. Brown (former Harris CEO) and Vice Chair and CEO Chris Kubasik (former L3 CEO). According to the merger documents, Kubasik will become both chairman and CEO in 2022.
Integrated Mission Systems Headquartered in Palm Bay, Florida, Integrated Mission Systems specializes in intelligence, surveillance, and reconnaissance (ISR) and signals intelligence systems; electrical and electronic systems for maritime use; electro-optical systems including infrared, laser imaging, and targeting systems; defense aviation systems including weapons systems and UAVs; and commercial aviation solutions, including avionics, collision avoidance systems, flight recorders, flight simulators, and pilot training. It comprises divisions, including some of those formerly in the Aviation Systems segment and Wescam, that had a combined revenue of $7.0 billion in 2021. Space & Airborne Systems Headquartered in Palm Bay, Florida, Space & Airborne Systems specializes in space mission, payloads, and sensors for satellite navigation, ISR, weather, and missile defense; ground systems for space command and control and tracking; optical and wireless networking for situational awareness and air traffic management; defense avionics; and electronic warfare countermeasures. It comprises divisions, including some of those formerly in the Aviation Systems segment, that had a combined revenue of $6.0 billion in 2021. Communications Systems Headquartered in Rochester, New York, Communications Systems specializes in tactical communications, broadband communications, night vision, and public safety. It comprises divisions that had a combined revenue of $4.3 billion in 2021. Controversies Arms Export Control Act (AECA) and International Traffic in Arms Regulations (ITAR) Violations In 2019, L3Harris paid $13 million to settle allegations that Harris, before the merger, violated AECA and ITAR regulations. According to a proposed charging letter, Harris Corporation violated the AECA (22 U.S.C. 2751 et seq.) and ITAR (22 CFR parts 120–131) for a total of 131 separate violations. The proposed charging letter outlines the following nine categories of violations: Unauthorized Exports of Technical Data in the form of Software Unauthorized Exports of Tactical Radios Unauthorized Exports of Military Electronics to Canada Unauthorized Exports of the T7 Remote Controlled Vehicle, the AN/PLM-4 Radar Signal Simulator, and Jagwire Software Plugin Unauthorized Exports of Technical Data Related to Night Vision Equipment and Tactical Radios Providing a False Part 130 Statement on a Technical Assistance Agreement Violation of License Provisos Violation of the Terms or Conditions of Licenses and Agreements Violations Caused by Systemic Administrative Issues Products AVCATT, a mobile aviation training simulator StingRay and Hailstorm phone trackers. OpenSky wireless communication system hC2 L3Harris Command and Control Battle Management Suite—former "Harris Command and Control" Integrated Core Processor, main computer in F-35 Lightning II and in C-130J Super Hercules GPNVG-18, a night vision device that utilises four night vision tubes to give the user a wider field of view See also 36th Electronic Warfare Squadron 55th Wing ADM-160 MALD Association of Old Crows Battle of Latakia Carnivore (FBI) Cyberwarfare DARPA DCSNET ECHELON Electromagnetic interference Electromagnetic pulse Electronic countermeasure Electronic harassment Electronic-warfare aircraft Electronic warfare officer Electronic Warfare Squadron (JASDF) Electronic warfare support measures Fleet Electronic Warfare Center Global surveillance disclosures (2013–present) Havana syndrome Hepting v.
AT&T Joint Functional Component Command – Network Warfare Krasukha (electronic warfare system) L3Harris Electron Devices Lawful interception Magic Lantern (software) Microwave transmission NSA ANT catalog National Electronics Museum No. 100 Group RAF Radar warning receiver Radio Reconnaissance Platoon SORM Samyukta electronic warfare system Secrecy of correspondence Secure communication Sky Shadow (radar) Surveillance Telecommunications Intercept and Collection Technology Unit Telephone tapping Total Information Awareness USACEWP Verint References American companies established in 2019 Electronics companies of the United States Defense companies of the United States Multinational companies headquartered in the United States Companies based in Brevard County, Florida Manufacturing companies based in Florida Melbourne, Florida Electronics companies established in 2019 2019 establishments in Florida Military equipment of the United States Companies listed on the New York Stock Exchange Avionics companies Security equipment manufacturers
1905277
https://en.wikipedia.org/wiki/Thymoetes
Thymoetes
In Greek mythology, there were at least three people named Thymoetes (Ancient Greek: Θυμοίτης Thumoítēs). Thymoetes, one of the elders of Troy (also spelled Thymoetus) and a Trojan prince, a son of King Laomedon. A soothsayer had predicted that, on a certain day, a boy would be born by whom Troy would be destroyed. On that very day Paris was born to King Priam of Troy, and Munippus to Thymoetes. Priam ordered Munippus and his mother Cilla to be killed in order to prevent the prophecy from being fulfilled while sparing his own son. It is believed that Thymoetes, in order to avenge his family, advised that the wooden horse be drawn into the city. Thymoetes, an Athenian hero, son of Oxyntes, king of Attica. He was the last Athenian king descended from Theseus. He was succeeded by Melanthus (according to Pausanias, overthrown by him). Thymoetes, a Trojan and a companion of Aeneas, who was slain by Turnus. Notes References Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. John Tzetzes, Book of Histories, Book I translated by Ana Untila from the original Greek of T. Kiessling's edition of 1826. Online version at theio.com Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Trojans People of the Trojan War Kings of Athens Kings in Greek mythology Attican characters in Greek mythology Characters in the Aeneid Characters in Greek mythology
5312691
https://en.wikipedia.org/wiki/HACS
HACS
High Angle Control System (HACS) was a British anti-aircraft fire-control system employed by the Royal Navy from 1931 onwards and used widely during World War II. HACS calculated the necessary deflection required to place an explosive shell in the location of a target flying at a known height, bearing and speed. Early history The HACS was first proposed in the 1920s and began to appear on Royal Navy (RN) ships in January 1930, when HACS I went to sea in . HACS I did not have any stabilization or power assist for director training. HACS III, which appeared in 1935, had provision for stabilization, was hydraulically driven, featured much improved data transmission, and introduced the HACS III table. The HACS III table (computer) had numerous improvements including raising maximum target speed to 350 knots, continuous automatic fuze prediction, improved geometry in the deflection screen, and provisions for gyro inputs to provide stabilization of data received from the director. The HACS was a control system and was made possible by an effective data transmission network between an external gun director, a below decks fire control computer, and the ship's medium calibre anti-aircraft (AA) guns. Development Operation The bearing and altitude of the target were measured directly on the UD4 Height Finder/Range Finder, a coincidence rangefinder located in the High Angle Director Tower (HADT). The direction of travel was measured by aligning a binocular graticule with the target aircraft fuselage. The early versions of HACS, Mk. I through IV, did not measure target speed directly, but estimated this value based on the target type. All of these values were sent via selsyn to the HACS in the High Angle Calculating Position (HACP) located below decks. The HACS used these values to calculate the range rate (often called rate along in RN parlance), which is the apparent target motion along the line of sight. This was also printed on a paper plot so that a range rate officer could assess its accuracy. This calculated range rate was fed back to the UD4 where it powered a motor to move prisms within the UD4. If all the measurements were correct, this movement would track the target, making it appear motionless in the sights. If the target had apparent movement, the UD4 operator would adjust the range and height, and in so doing would update the generated range rate, thereby creating a feedback loop which could establish an estimate of the target's true speed and direction. The HACS also displayed the predicted bearing and elevation of the target on indicators in the Director tower, or, on later variants, the HACS could move the entire Director through Remote Power Control so that it could continue to track the target if the target became obscured. The angle measured by the graticule also caused a metal wire to rotate around the face of a large circular display on one side of the HACS, known as the Deflection Display. The measured value of altitude and range, and estimated value of target speed, caused optics to focus a lamp onto a ground glass screen behind the wire, displaying an ellipse whose shape changed based on these measurements. The deflection operator used two controls to move additional wire indicators so they lay on top of the intersection of the outer edge of the ellipse where it was crossed by the rotating metal wire. The intersection of the ellipse and the target direction was used as a basis for calculating elevation and training of the guns.
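The mechanical arrangement above is difficult to reproduce faithfully in software, but the geometric problem it solved, pointing the guns so that shell and target arrive at the same point, reduces to a small intercept calculation. The following is only a hedged sketch, assuming a constant average shell speed and a target in straight, level flight (simplifications; the real system also generated fuze timings, applied stabilisation, and corrected itself from observed bursts), with all figures invented for illustration.

#include <math.h>
#include <stdio.h>

/* Solve |P + V*t| = s*t for the shell's time of flight t: the moment a shell
   of average speed s can meet a target now at P moving with constant velocity V.
   Returns a negative value if no intercept exists. */
static double intercept_time(const double P[3], const double V[3], double s) {
    double a = V[0]*V[0] + V[1]*V[1] + V[2]*V[2] - s*s;
    double b = 2.0 * (P[0]*V[0] + P[1]*V[1] + P[2]*V[2]);
    double c = P[0]*P[0] + P[1]*P[1] + P[2]*P[2];
    if (a == 0.0) return (b < 0.0) ? -c / b : -1.0;   /* degenerate case: equal speeds */
    double disc = b*b - 4.0*a*c;
    if (disc < 0.0) return -1.0;
    double t1 = (-b - sqrt(disc)) / (2.0*a);
    double t2 = (-b + sqrt(disc)) / (2.0*a);
    /* Return the smallest positive root, i.e. the earliest feasible intercept. */
    if (t1 > 0.0 && (t2 <= 0.0 || t1 < t2)) return t1;
    return t2 > 0.0 ? t2 : -1.0;
}

int main(void) {
    const double RAD2DEG = 180.0 / acos(-1.0);
    double P[3] = {6000.0, 0.0, 2000.0};    /* target: 6,000 m out, 2,000 m up */
    double V[3] = {0.0, 100.0, 0.0};        /* crossing at 100 m/s             */
    double t = intercept_time(P, V, 500.0); /* assumed average shell speed     */
    if (t > 0.0) {
        double x = P[0] + V[0]*t, y = P[1] + V[1]*t, z = P[2] + V[2]*t;
        printf("time of flight %.1f s, training %.1f deg, elevation %.1f deg\n",
               t, atan2(y, x) * RAD2DEG, atan2(z, hypot(x, y)) * RAD2DEG);
    }
    return 0;
}

The training and elevation printed here correspond to the orders the HACP passed to the guns, and the time of flight to the fuze setting; HACS arrived at an equivalent answer continuously, by mechanical means.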
The ellipse method had the advantage of requiring very little in the way of mechanical computation and essentially modelled target position in real-time with a consequent rapid solution time. Information flow The HADT provides target direction, range, speed, altitude and bearing data to the HACP, which transmits direction and fuze timing orders to the guns. The HACP transmits the computer-generated range rate and generated bearing back to the HADT, creating a feedback loop between the HADT and HACP, so that the fire control solution generated by the computer becomes more accurate over time if the target maintains a straight-line course. The HADT also observes the accuracy of the resulting shell bursts and uses these bursts to correct target speed and direction estimates, creating another feedback loop from the guns to the HADT and thence to the HACP, again increasing the accuracy of the solution, if the target maintains a straight-line course. Most guns controlled by the HACS had Fuze Setting Pedestals or Fuze Setting Trays where the correct fuze timing was set on a clockwork mechanism within the AA shell warhead, so that the shell would explode in the vicinity of the target aircraft. Target drones The HACS was the first naval AA system to be used against radio-controlled aircraft, and achieved the first AA kill against these targets in 1933. In March 1936, six Queen Bee targets were destroyed by the RN Mediterranean Fleet during intensive AA practice at a time of extreme tension between the UK and Italy. Target practice against target drones was carried out using special shells which were designed to minimize the possibility of destroying expensive targets. The RN allowed media coverage of AA target practice and a 1936 newsreel has footage of an actual shoot. In 1935 the RN also began to practice HACS-controlled shoots of target aircraft at night. Tachometric and radar additions The RN moved quickly to add true tachometric target motion prediction and radar ranging to the HACS by mid 1941. The RN was the first navy to adopt dedicated FC AA radars. However, the system, in common with all World War II-era mechanical AA fire control systems, still had severe limitations, as even the highly advanced United States Navy (USN) Mk 37 system in 1944 needed an average of 1,000 rounds of ammunition fired per kill. In 1940 the Gyro Rate Unit (GRU), an analogue computer capable of directly calculating target speed and direction, was added to the HACS system, converting the HACS into a tachymetric system. Also in 1940, radar ranging was added to the HACS. The GRU and its associated computer, the "Gyro Rate Unit Box" (GRUB), no longer assumed straight and level flying on the part of the target. GRU/GRUB could generate target speed and position data at angular rates of up to 6 degrees per second, which was sufficient to track a crossing target at a range of . The Fuze Keeping Clock RN destroyers were hampered by the lack of good dual-purpose weapons suitable for ships of destroyer size; for much of the war 40° was the maximum elevation of the guns equipping such ships, which were consequently unable to engage directly attacking dive bombers, although they could provide "barrage" and "predicted fire" to protect other ships from such attacks. Destroyers did not use HACS, but rather the Fuze Keeping Clock (FKC), a simplified version of HACS. Starting in 1938 all new RN destroyers, from the onwards, were fitted with an FKC and continuous prediction fuze setting trays for each main armament gun.
WWII experience from all navies showed that dive bombers could not be engaged successfully by any remote computer-predictive AA system using mechanical fuzes, due to the lag time in the computer and the minimum range of optical rangefinders. In common with other contemporary navies, prewar-designed RN destroyers suffered from a lack of short-range, rapid-fire AA with which to engage dive bombers. The Auto Barrage Unit The Auto Barrage Unit, or ABU, was a specialized gunnery computer and radar ranging system that used the Type 283 radar. It was developed to provide computer prediction and radar anti-aircraft fire control to main and secondary armament guns that did not have inherent anti-aircraft capability. The ABU was designed to allow the guns to be pre-loaded with time-fused ammunition; it then tracked incoming enemy aircraft, aimed the guns continuously to track the aircraft, and fired the guns automatically when the predicted aircraft position reached the preset fuze range of the previously loaded shells. The ABU was also used with guns that were nominally controlled by the HACS to provide a limited blind fire capability. Wartime experience By May 1941, RN cruisers, such as , were engaging the Luftwaffe with stabilized HACS IV systems with GRU/GRUB and Type 279 radar with the Precision Ranging Panel, which gave +/- 25 yd accuracy out to 14,000 yds. HMS Fiji was sunk in the Battle of Crete after running out of AA ammunition, but her HACS IV-directed 4-inch AA gun battery fended off Luftwaffe attacks for many hours. Demonstrating the RN's rapid strides in naval AA gunnery, in May 1941, HMS Prince of Wales went to sea with HACS IVGB, with full radar ranging systems, and nine AA-associated fire control radars: four Type 285 radars, one on each High Angle Director Tower (HADT); four Type 282 radars, one on each Mk IV director for the QF 2 pdr (40mm) "pom pom" mounts; and a long-range Type 281 Warning Air (WA) radar, which also had precision ranging panels for aerial and surface targets. This placed HMS Prince of Wales at the forefront of naval HA AA fire control systems at that time. In August and September 1941, HMS Prince of Wales demonstrated excellent long range radar-directed AA fire during Operation Halberd. Although the shortcomings of HACS are often blamed for the loss of Force Z, the scope of the Japanese attack far exceeded anything the HACS had been designed to handle in terms of aircraft numbers and performance. The failure of anti-aircraft gunnery to deter the Japanese bombers was also influenced by unique circumstances. The HACS was originally designed with Atlantic conditions in mind; Prince of Wales's AA FC radars had become unserviceable in the extreme heat and humidity of Malayan waters, and her 2-pdr ammunition had deteriorated badly as well. The RN made the following claims for ship-borne anti-aircraft fire against enemy aircraft, from September 1939 up to 28 March 1941: Certain kills: 234, Probable kills: 116, Damage claims: 134 The RN made the following claims for ship-borne anti-aircraft fire against enemy aircraft, from September 1939 up to 31 December 1942: Major warships (ships likely to have HACS or FKC fire control systems) Certain kills: 524. Probable kills: 183. Damage claims: 271. Minor warships and merchant vessels (most having no AA fire control systems) Certain kills: 216. Probable kills: 83. Damage claims: 177. Total kill claims: 740. Total probable claims: 266.
Total damage claims: 448 Radar and the Mark VI Director HACS used various director towers that were generally equipped with Type 285 as it became available. This metric wavelength system employed six yagi antennas that could take ranges of targets, and take accurate readings of bearing using a technique known as "lobe switching" but only crude estimates of altitude. It could not, therefore, "lock on" to aerial targets and was unable to provide true blindfire capabilities, which no other navy was able to do until the USN developed advanced radars in 1944 using technology transfers from the UK. This situation was not remedied until the introduction of the HACS Mark VI director in 1944 that was fitted with centimetric Type 275 radar. Another improvement was the addition of Remote Power Control (RPC), in which the anti-aircraft guns automatically trained with the director tower, with the necessary changes in bearing and elevation to allow for convergent fire. Previously the gun crews had to follow mechanical pointers that indicated where the director tower wanted the guns to train. HACS systems in use or planned in August 1940 HACS Directors fitted to ships in a document dated as "revised Aug 1940": HACS III: ABC transmission, AV cradle for 15 ft HF/RF. Introduced Mk III table. HMS Ajax, Galatea, Arethusa, Coventry, HMAS Hobart, Sydney, Perth HACS III*: Similar to MarkIII but with larger windscreen and space for a rate officer. HMS Penelope, Southampton, Newcastle, Malaya, Hood*, Australia*, Nelson*, Royal Sovereign*, Barham*, Resolution*, Cairo*, Excellent (gunnery training school)*, Revenge*, Calcutta*, Carlisle*, Curacoa*, Exeter*, Adventure*, Warspite*. Ships marked with * had roll stabilization for layer. HACS III*G as mark III but fitted with GRU and roll stabilization for the layer. HACS IV: Similar to MkIII but with circular screen, magslip transmission and roll stabilization for the layer. Introduced Mk IV table. HMS Birmingham, Sheffield, Glasgow, Aurora, Liverpool, Manchester, Gloucester, Dido, and Fiji classes, Forth, Maidstone, Renown, Valiant, Illustrious, Formidable and Ark Royal. HACS IV G: Mk IV with Gyro rate unit. and es. HACS IV GB: Mk IV and fitted with GRU and complete stabilization in laying and training, Keelavite system of power training. HMS King George V and Prince of Wales, Dido and Fiji classes. HACS V: Improved design, partially enclosed, complete stabilization for elevation and training. Keelavite system of power training, and GRU. Duplex 15 ft HF/RF. Uses Mk IV table. HMS Duke of York, Anson and Howe. HACS V* :As Mk V but single HF/RF and raised HF/RF compared to Mk V. HMS Indomitable, Implacable and Indefatigable. See also Argo clock Ship gun fire-control system References External links Newsreel video of HACS controlled guns in action More Newsreel footage of HACS guns engaging high level bombers HACS: A Debacle or Just-in-Time?) Illustration of the HACS Deflection Screen Appendix one, Classification of Director Instruments HACS III Operating manual Part 1 HACS III Operating manual Part 2 USS Enterprise Action Log The Gunnery Pocket Book, B.R. 224/45, 1945 The true experiences of Mr. 
Leonard Charles HACS III*G with Type 285 Radar and the MkIII table British Mechanical Gunnery Computers of World War II Progress in Naval Gunnery, 1914–1936 Memoirs of Sub-Lieutenant Robert Hughes, Gunnery Control Officer on HMS Scylla Anti-aircraft artillery Military computers Naval artillery Artillery operation Royal Navy Fire-control computers of World War II Military equipment introduced in the 1930s
34794178
https://en.wikipedia.org/wiki/Derek%20Corneil
Derek Corneil
Derek Gordon Corneil is a Canadian mathematician and computer scientist, a professor emeritus of computer science at the University of Toronto, and an expert in graph algorithms and graph theory. Life When he was leaving high school, Corneil was told by his English teacher that doing a degree in mathematics and physics was a bad idea, and that the best he could hope for was to go to a technical college. His interest in computer science began when, as an undergraduate student at Queen's University, he heard that a computer had been purchased by the London Life insurance company in London, Ontario, where his father worked. As a freshman, he took a summer job operating the UNIVAC Mark II at the company. One of his main responsibilities was to operate a printer. An opportunity for a programming job with the company sponsoring his college scholarship appeared soon after. It was a chance that Corneil jumped at after being denied a similar position at London Life. There was an initial mix-up at his job, as his overseer thought that he knew how to program the UNIVAC Mark II and so would easily transition to doing the same for the company's newly acquired IBM 1401 machine. However, Corneil did not have the assumed programming background. Thus, in the two-week window that Corneil had been given to learn to program the IBM 1401, he learned how to write code from scratch by relying heavily on the instruction manual. This experience pushed him further on his way, as did a number of projects he worked on in that position later on. Corneil went on to earn a bachelor's degree in mathematics and physics from Queen's University in 1964. Initially he had planned to do his graduate studies before becoming a high school teacher, but his acceptance into the brand new graduate program in computer science at the University of Toronto changed that. At the University of Toronto, Corneil earned a master's degree and then in 1968 a doctorate in computer science under the supervision of Calvin Gotlieb. (His post-doctoral supervisor was Jaap Seidel.) It was during this time that Corneil became interested in graph theory. He and Gotlieb eventually became good friends. After postdoctoral studies at the Eindhoven University of Technology, Corneil returned to Toronto as a faculty member in 1970. Before his retirement in 2010, Corneil held many positions at the University of Toronto, including Chair of the Department of Computer Science (July 1985 to June 1990), Director of Research Initiatives of the Faculty of Arts and Science (July 1991 to March 1998), and Acting Vice President of Research and International Relations (September to December 1993). During his time as a professor, he was also a visiting professor at universities such as the University of British Columbia, Simon Fraser University, the Université de Grenoble and the Université de Montpellier. Work Corneil did his research in algorithmic graph theory and graph theory in general. He has overseen 49 theses and published over 100 papers on his own or with co-authors. These papers include: A proof that recognizing graphs of small treewidth is NP-complete, The discovery of the cotree representation for cographs and of fast recognition algorithms for cographs, Generating algorithms for graph isomorphism. Algorithmic and structural properties of complement reducible graphs. Properties of asteroidal triple-free graphs. An algorithm to solve the problem of determining whether a graph is a partial graph of a k-tree.
Results addressing graph theoretic, algorithmic, and complexity issues with regard to tree spanners. An explanation of the relationship between tree width and clique-width. Determining the diameter of restricted graph families. Outlining the structure of trapezoid graphs. As a professor emeritus, Corneil still does research and is also an editor of several publications such as Ars Combinatoria and SIAM Monographs on Discrete Mathematics and Applications. Awards He was inducted as a Fields Institute Fellow in 2004. References External links Interview with Corneil, Stephen Ibaraki, 13 June 2011 List of publications at DBLP 1942 births Living people Canadian mathematicians Canadian computer scientists Graph theorists Queen's University at Kingston alumni University of Toronto alumni University of Toronto faculty
4888909
https://en.wikipedia.org/wiki/Systems%20Programming%20Language
Systems Programming Language
Systems Programming Language, often shortened to SPL but sometimes known as SPL/3000, was a procedurally-oriented programming language written by Hewlett-Packard for the HP 3000 minicomputer line and first introduced in 1972. SPL was used to write the HP 3000's primary operating system, Multi-Programming Executive (MPE). Similar languages on other platforms were generically referred to as system programming languages, confusing matters. Originally known as Alpha Systems Programming Language, named for the development project that produced the 3000-series, SPL was designed to take advantage of the Alpha's stack-based processor design. It is patterned on ESPOL, a similar ALGOL-derived language used by the Burroughs B5000 mainframe systems, which also influenced a number of 1960s languages like PL360 and JOVIAL. Through the mid-1970s, the success of the HP systems produced a number of SPL offshoots. Examples include ZSPL for the Zilog Z80 processor, and Micro-SPL for the Xerox Alto. The latter inspired Action! for the Atari 8-bit family, which was fairly successful. Action! more closely followed Pascal syntax, losing some of SPL's idiosyncrasies. SPL was widely used during the lifetime of the original integrated circuit-based versions of the HP 3000 platform. In the 1980s, the HP 3000 and MPE were reimplemented in an emulator running on the PA-RISC-based HP 9000 platforms. HP promoted Pascal as the favored system language on PA-RISC and did not provide an SPL compiler. This caused code maintenance concerns, and third-party SPL compilers were introduced to fill this need. History Hewlett-Packard introduced their first minicomputers, the HP 2100 series, in 1967. The machines had originally been designed by an external team working for Union Carbide and intended mainly for industrial embedded control uses, not the wider data processing market. HP saw this as a natural fit with their existing instrumentation business and initially pitched it to those users. In spite of this, HP found that the machine's price/performance ratio was making them increasingly successful in the business market. During this period, the concept of time sharing was becoming popular, especially as core memory costs fell and systems began to ship with more memory. In 1968, HP introduced a bundled system using two 2100-series machines running HP Time-Shared BASIC, which provided a complete operating system as well as the BASIC programming language. These two-machine systems, collectively known as HP 2000s, were an immediate success. HP BASIC was highly influential for many years, and its syntax can be seen in a number of microcomputer BASICs, including Palo Alto TinyBASIC, Integer BASIC, North Star BASIC, Atari BASIC, and others. Designers at HP began to wonder "If we can produce a time-sharing system this good using a junky computer like the 2116, think what we could accomplish if we designed our own computer." To this end, in 1968 the company began putting together a larger team to design a new mid-sized architecture. New team members included those who had worked on Burroughs and IBM mainframe systems, and the resulting concepts bore a strong resemblance to the highly successful Burroughs B5000 system. The B5000 used a stack machine processor that made multiprogramming simpler to implement, and this same architecture was also selected for the new HP concept. Two implementations were considered, a 32-bit mainframe-scale machine known as Omega, and a 16-bit design known as Alpha.
Almost all effort was on the Omega, but in June 1970, Omega was canceled. This led to an extensive redesign of Alpha to differentiate it from the 2100s, and it eventually emerged with plans for an even more aggressive operating system design. Omega had been intended to run in batch mode and use a smaller computer, the "front end", to process interactions with the user. This was the same operating concept as the 2000 series. However, yet-another-2000 would not be enough for Alpha, and the decision was made to have a single operating system for batch, interactive and even real-time operation. To make this work, it needed an advanced computer bus design with extensive direct memory access (DMA) and required an advanced operating system (OS) to provide quick responses to user actions. The B5000 was also unique, for its time, in that its operating system and core utilities were all programmed in a high-level language, ESPOL. ESPOL was a derivative of the ALGOL language tuned to work on the B5000, a concept that was highly influential in the 1960s and led to new languages like JOVIAL, PL/360 and BCPL. The HP team decided they would also use an ALGOL-derived language for their operating systems work. HP's similar language was initially known as the Alpha Systems Programming Language. Alpha took several years to develop before emerging in 1972 as the HP 3000. The machine was on the market for only a few months before it was clear it simply wasn't working right, and HP was forced to recall all 3000s already sold. It was reintroduced in late 1973 with most of its problems having been fixed. A major upgrade to the entire system, the CX machine, and MPE-C to run on it, reformed its image and the 3000 went on to be another major success during the second half of the 1970s. This success made SPL almost as widespread as the 2000 series' BASIC, and like that language, SPL resulted in a number of versions for other platforms. Notable among them was Micro-SPL, a version written for the Xerox Alto workstation. This machine had originally used BCPL as its primary language, but dissatisfaction with its performance led Henry Baker to design a non-recursive language that he implemented with Clinton Parker in 1979. Parker would then further modify Micro-SPL to produce Action! for the Atari 8-bit family in 1983. HP reimplemented the HP 3000 system on the PA-RISC chipset, running a new version of the operating system known as MPE/iX. MPE/iX had two modes: in "native mode" it ran applications that had been recompiled for the PA-RISC using newer Pascal compilers, while under "compatible mode" it could run all existing software via emulation. HP did not supply a native mode compiler for MPE/iX, so it was not an easy process to move existing software to the new platform. To fill the need, Allegro Consultants wrote an SPL-compatible language named "SPLash!" that could compile to original HP 3000 code to run within the emulator, or to native mode. This offered a porting pathway for existing SPL software. Language Basic syntax SPL generally follows ALGOL 60 syntax conventions, and will be familiar to anyone with experience in ALGOL or its descendants, like Pascal and Modula-2. Like those languages, program statements can span multiple physical lines and end with a semicolon. Comments are denoted with the keyword, or by surrounding the comment text in << and >>. Statements are grouped into blocks using BEGIN and END, although, as in Pascal, the END of a program must be followed by a period.
The program as a whole is surrounded by BEGIN and END., similar to Pascal, but lacking a PROGRAM keyword or similar statement at the top. The reason for this is that SPL allows any block of code to be used as a program on its own, or compiled into another program to act as a library. The creation of code as a program or subprogram was not part of the language itself, handled instead by placing the compiler directive at the top of the file. The language used the INTRINSIC keyword to allow external code to be called directly by giving it a local name. For instance, a machine language library exposing a function to run the console bell could be imported to an SPL program as and then the bell could be operated by using the keyword as if it was a native command. In contrast to Pascal, where procedures and functions were separate concepts, SPL uses a more C-like approach in which any procedure can be prefixed with a type to turn it into a function. In keeping with the syntax of other ALGOL-like languages, the types of the parameters were listed after the name, not part of it. For instance:
INTEGER PROCEDURE FACT(N);
   VALUE N;
   INTEGER N;
Declares a function FACT that takes a value N that is an integer. The indicates that this variable is also the return value for the procedure. Although the practice was frowned upon, ALGOL and Pascal allowed code to be labeled using a leading name ending with a colon, which could then be used as the target of loops and statements. One minor difference is that SPL required the label names to be declared in the variable section using the keyword. SPL added to this concept with the statement which allowed these labels to be further defined as "entry points" that could be accessed from the command line. Labels named in the entry statement(s) were exposed to the operating system and could be called from the RUN command. For instance, one could write a program containing string functions to convert to uppercase or lowercase, and then provide ENTRY points for these two. This could be called from the command line as . Data types Where SPL differs most noticeably from ALGOL is that its data types are very machine-specific, based on the 3000's 16-bit big-endian word format. The type is a 16-bit signed type, with 15 bits of value and the most significant bit as the sign. is a 32-bit integer, not a floating-point type. is a 32-bit floating-point value with 22 bits for the mantissa and 9 for the exponent, while is a 64-bit floating-point value with 54 bits of mantissa and 9 bits of exponent. is used for character processing, consisting of a 16-bit machine word holding two 8-bit characters. is a boolean type that stores a single bit in the most significant bit. There is no equivalent of a modifier as found in Pascal, so is somewhat wasteful of memory. Like C, data is weakly typed, memory locations and variable storage are intermixed concepts, and one can access values directly through their locations. For instance, the code:
INTEGER A,B,C
LOGICAL D=A+2
defines three 16-bit integer variables, A, B and C, and then a LOGICAL, also a 16-bit value. The , like Pascal, means "is equivalent to", not "gets the value of", which uses in Algol-like languages. So the second line states "declare a variable D that is in the same memory location as A+2", which in this case is also the location of the variable C. This allows the same value to be read as an integer via C or a logical through D.
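For readers coming from C, the aliasing above can be imitated with an explicit word array and a pointer. This is only an analogy with invented names, not SPL or HP code; unlike SPL, C does not guarantee that separately declared locals occupy consecutive words, so an array stands in for the HP 3000's word-oriented storage.

#include <stdint.h>
#include <stdio.h>

/* Rough C analogue of
       INTEGER A,B,C
       LOGICAL D=A+2
   where D names the word two past A, i.e. the same storage as C. */
int main(void) {
    int16_t words[3] = {0, 0, -1};          /* words[0]=A, words[1]=B, words[2]=C */
    uint16_t *d = (uint16_t *)&words[2];    /* "D": another view of C's word      */
    printf("C = %d, D = %u\n", (int)words[2], (unsigned)*d);   /* -1 vs 65535 */
    return 0;
}

One 16-bit word is read in two different ways, which is the effect the SPL declaration achieves without any copying.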
This syntax may seem odd to modern readers where memory is generally a black box, but it has a number of important uses in systems programming where particular memory locations hold values from the underlying hardware. In particular, it allows one to define a variable that points to the front of a table of values, and then declare additional variables that point to individual values within the table. If the table location changes, only a single value has to change, the initial address, and all of the individual variables will automatically follow in their proper relative offsets. Pointers were declared by adding the modifier to any variable declaration, and the memory location of a variable dereferenced with the . Thus declares a pointer whose value contains the address of the variable A, not the value of A. can be used on either side of the assignment; puts the value of A into P, likely resulting in a dangling pointer, makes P point to A, while puts the value of A into the location currently pointed to by P. In a similar fashion, SPL includes C-like array support in which the index variable is a number-of-words offset from the memory location set for the initial variable. Unlike C, SPL only provided one-dimensional arrays, and used parentheses as opposed to brackets. Variables could also be declared , in which case no local memory was set aside for them and the storage was assumed to be declared in another library. This mirrors the keyword in C. Literals can be specified with various suffixes, and those without a suffix are assumed to be . For instance, would be interpreted as an , while was a . denoted a and a . String constants were delimited by double-quotes, and double-quotes within a line were escaped with a second double-quote. Variable declarations could use constants to define an initial value, as in . Note the use of the assign-to rather than is-a. Additionally, SPL had a keyword that allowed a string of text to be defined as a variable, and then replaced any instances of that variable in the code with the literal string during compiles. This is similar to the keyword in C. Memory segmentation As was common in the era, the HP 3000 used a byte-oriented segmented memory model in which an address was a single 16-bit word, allowing code to access up to 65,536 bytes (or as they termed it, "half-words"). To allow larger amounts of memory to be accessed, a virtual memory system was used. When memory was accessed, the 16-bit address was prefixed with one of two 8-bit segment values, one for the program code (PB) and another for variable data. The result was a 24-bit address. Thus, while each program had access to a total of 128 kB at any one time, it could swap the segments to access a full 16 MB memory space. SPL included a variety of support systems to allow programs to be easily segmented and then make that segmentation relatively invisible in the code. The primary mechanism was to use the compiler directive which defined which segment the following code should be placed in. The default was , but the programmer could add any number of additional named segments to organize the code into blocks. Other features SPL included a "bit-extraction" feature that allowed simplified bit fiddling. Any bit, or string of bits, in a word could be accessed using the syntax, where x and y were the start and end bit positions from 0 to 15. Thus returned the lower byte of the word storing A. This format could be used to split and merge bits as needed. 
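In C terms, the bit extraction just described amounts to a shift and a mask. A small sketch follows; the helper name is invented, and the numbering convention (bit 0 as the most significant bit of the 16-bit word) follows the description above rather than any HP documentation.

#include <stdint.h>
#include <stdio.h>

/* Extract the field running from bit position x through bit position y of a
   16-bit word, with bit 0 taken as the most significant bit, as described above. */
static uint16_t extract_bits(uint16_t a, int x, int y) {
    int width = y - x + 1;                                /* field width in bits */
    return (uint16_t)((a >> (15 - y)) & (0xFFFFu >> (16 - width)));
}

int main(void) {
    uint16_t a = 0xABCD;
    printf("%04X\n", extract_bits(a, 8, 15));   /* lower byte: prints 00CD */
    return 0;
}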
Additional operations were also provided for shifts and rotates, and could be applied to any variable with the , for instance . Example This simple program, from the 1984 version of the reference manual, shows most of the features of the SPL language. The program as a whole is delimited between the and . It begins with the definition of a series of global variables, A, B and C, defines a single procedure and then calls it twenty times. Note that the procedure does not have a BEGIN and END of its own because it contains only one line of actual code; the is not considered part of the code itself, as it indicates the type of the three parameters being passed in on the line above and is considered part of that line.
BEGIN
INTEGER A:=0, B, C:=1;
PROCEDURE N(X,Y,Z);
   INTEGER X,Y,Z;
   X:=X*(Y+Z);
FOR B:=1 UNTIL 20 DO
   N(A,B,C);
END.
References Citations Bibliography Systems programming languages HP software
7855048
https://en.wikipedia.org/wiki/PP3
PP3
PP3 is free software that produces sky charts, focussing on high-quality graphics and typography. It is distributed under a license based on the MIT License, but with an added restriction. Sky charts are produced as LaTeX files, so an installation of LaTeX and Ghostscript is required to obtain results in PostScript or PDF formats. Knowledge of command-line syntax for these packages is, however, not required, as PP3 can run the conversions automatically. Initially, Wikipedia's own star charts were produced by PP3. PP3 generates maps in the azimuthal equidistant projection. See also Space flight simulation game List of space flight simulation games Planetarium software List of observatory software References External links Official website Free astronomy software Science software for Windows Free software programmed in C++
70055217
https://en.wikipedia.org/wiki/1996%E2%80%9397%20USC%20Trojans%20men%27s%20basketball%20team
1996–97 USC Trojans men's basketball team
The 1996–97 USC Trojans men's basketball team represented the University of Southern California during the 1996–97 NCAA Division I men's basketball season. Led by head coach Henry Bibby, they played their home games at the L. A. Sports Arena in Los Angeles, California as members of the Pac-10 Conference. The Trojans finished the season with a record of 17–11 (12–6 Pac-10) and received an at-large bid to the NCAA Tournament. Roster Schedule and results |- !colspan=9 style=| Regular season |- !colspan=9 style=| NCAA Tournament Rankings Team Players in the 1997 NBA Draft References Usc Trojans USC Trojans men's basketball seasons USC USC Trojans USC Trojans
64705488
https://en.wikipedia.org/wiki/Tejas%20Thackeray
Tejas Thackeray
Tejas Thackeray (born 10 October 1995) is an Indian conservationist and wildlife researcher. He is the son of Uddhav Thackeray, an Indian politician serving as the 19th and current Chief Minister of Maharashtra and leader of the Shiv Sena, and a grandson of Bal Thackeray. Early life Thackeray was born to Rashmi Thackeray and Uddhav Thackeray in Mumbai. His elder brother Aditya Thackeray is an Indian politician serving as Cabinet Minister of Tourism and Environment in the Government of Maharashtra. Career Gubernatoriana thackerayi – In February 2016, Thackeray discovered five new species of freshwater crabs, one of which was named after his surname. These species of freshwater crabs, native to the Western Ghats, were discovered and described, and the paper was published in Zootaxa on 23 February 2016. Tejas, then a 21-year-old city college student, had discovered and collected the type specimens of Ghatiana atropurpurea (Amboli), Ghatiana splendida (Chakul) and Gubernatoriana thackerayi (Raghuveer Ghat). The various specimens were named by Thackeray, Dr. Pati and Anil Khaire. These are: Ghatiana atropurpurea (collected near Amboli) Ghatiana splendida (found near Chakul) Gubernatoriana thackerayi (discovered near Raghuveer Ghats) Gubernatoriana alcocki Gubernatoriana waghi Along with Dr. S.K. Pati of ZSI and Anil Khaire, he was a part of the naming and description of Ghatiana atropurpurea and Ghatiana splendida. The three Gubernatoriana species (alcocki, thackerayi and waghi) were described and named by Dr. Pati. Cnemaspis thackerayi – In May 2019, a newly discovered species of gecko, a nocturnal and often highly vocal lizard usually found in warm regions, discovered in Tamil Nadu by Indian researchers, was named after Thackeray. It was named Cnemaspis thackerayi in recognition of his contributions to systematic zoology through the discovery and naming of freshwater crab species in Maharashtra. Boiga thackerayi – In September 2019, a new species of snake was discovered in the Western Ghats in Maharashtra and named after Thackeray for his contribution to the discovery. Boiga thackerayi sp. nov., Thackeray's cat snake, is a new species with tiger-like stripes on its body, found in the Sahyadri Tiger Reserve in Maharashtra. In October 2020, Thackeray was part of a team that discovered a new species of fish named Schistura hiranyakeshi. In 2021, together with other members, he founded the Thackeray Wildlife Foundation References Indian conservationists Living people 21st-century Indian zoologists People from Mumbai Thackeray family (Maharashtra) 1995 births
18352206
https://en.wikipedia.org/wiki/Lawrence%20J.%20Rosenblum
Lawrence J. Rosenblum
Lawrence Jay Rosenblum (born 1944) is an American mathematician and Program Director for Graphics and Visualization at the National Science Foundation. Work Rosenblum received his Ph.D. in Mathematics from the Ohio State University in 1971. From 1992 to 1994, he was Liaison Scientist for Computer Science at the Office of Naval Research European Office. From 1994 he was Director of Virtual Reality (VR) Systems and Research at the Information Technology Division of the Naval Research Laboratory (NRL) and, for ten years, Program Officer for Visualization and Computer Graphics at the Office of Naval Research (ONR). Since 2004 he has been Program Director for Graphics and Visualization at the National Science Foundation. Rosenblum is on the editorial boards of IEEE CG&A and Virtual Reality. He has guest edited special issues/sections of IEEE Computer Graphics and Applications (CG&A), Computer, and Presence on visualization, VR, and AR. He also has served on both the editorial board and advisory board of the IEEE Transactions on Visualization and Computer Graphics. He was the elected Chairman of the IEEE Technical Committee on Computer Graphics from 1994 to 1996 and is currently Director of the IEEE Visualization and Graphics Technical Committee. Rosenblum received an IEEE Outstanding Contribution Certificate for initiating and co-founding the IEEE Visualization conference. He serves on the program, conference, and steering committees of numerous international conferences. He is a senior member of the IEEE and a member of the IEEE Computer Society, ACM, and Siggraph. Work Rosenblum's research interests include mobile augmented reality (AR), scientific and uncertainty visualization, VR displays, and applications of VR/AR systems. His research group has produced advances in mobile augmented reality (AR), scientific and uncertainty visualization, VR displays, applications of VR/AR systems, and understanding human performance in graphics systems. The emergence of scientific visualization In the 1990s scientific visualization developed as an emerging research discipline. According to Rosenblum (1994), "new algorithms are just beginning to effectively handle the recurring scientific problem of data collected at nonuniform intervals. Volume visualization today is being extended from examining scientific data to reconstructing scattered data and representing geometrical objects without mathematically describing surfaces. Fluid dynamics visualization affects numerous scientific and engineering disciplines. It has taken its place with molecular modeling, imaging remote-sensing data, and medical imaging as a domain-specific visualization research area". Much of the progress in the field of scientific modeling, according to Rosenblum (1994), came "from using algorithms with roots in both computer graphics and computer vision. One important research thread has been the topological representation of important features. Volume and hybrid visualization now produce 3D animations of complex flows. However, while impressive 3D visualizations have been generated for scalar parameters associated with fluid dynamics, vector and especially tensor portrayal has proven more difficult. Seminal methods have appeared, but much remains to do. Great strides have also occurred in visualization systems. The area of automated selection of visualizations especially requires more work. Nonetheless, the situation has much improved, with these tools increasingly accessible to scientists and engineers".
Research trends in Visualization The field of visualization has undergone considerable changes since its founding in the late 1980s. From its origins in scientific visualization, new areas have arisen in the new millennium. These include information visualization and, more recently, mobile visualization (including location-aware computing) and visual analytics. Several new trends are emerging. The most important is the fusion of visualization techniques with other areas such as computer vision, data mining, and databases to promote broad-based advances. Another trend, which has not been well met to date by visualization researchers, is for algorithms to be combined with usability studies to assure that techniques and systems are well designed and that their value is quantified. This presentation will discuss current research trends in visualization as well as briefly discuss trends in U.S. research funding. Foundations of Data and Visual Analytics Rosenblum's current program responsibility at the NSF in 2008 is the "Foundations of Data and Visual Analytics (FODAVA)" project. Those involved with science, engineering, commerce, health, and national security all increasingly face the challenge of synthesizing information and deriving insight from massive, dynamic, ambiguous and possibly conflicting digital data. The goal of collecting and examining these data is not to merely acquire information, but to derive increased understanding from it and to facilitate effective decision-making. To capitalize on the opportunities provided by these data sets, a new, interdisciplinary field of science is emerging called "Data and Visual Analytics", which is defined as the science of analytical reasoning facilitated by interactive visual interfaces. Data and Visual Analytics requires interdisciplinary science, going beyond traditional scientific and information visualization to include statistics, mathematics, knowledge representation, management and discovery technologies, cognitive and perceptual sciences, decision sciences, and more. This solicitation is concerned only with a subset of the overall problem, namely the creation of the mathematical and computational sciences foundations required to transform data in ways that permit visual-based understanding. To facilitate visual-based data exploration, it is necessary to discover new algorithms that will represent and transform all types of digital data into mathematical formulations and computational models that will subsequently enable efficient, effective visualization and analytic reasoning techniques. Publications Rosenblum has published over eighty scientific articles and has edited two books, including Scientific Visualization: Advances & Challenges. 1990. Visualization in scientific computing. Edited with Gregory M. Nielson and Bruce Shriver. 1991. Visualization '91, October 22–25, 1991, San Diego, California : proceedings / sponsored by IEEE Computer Society Technical Committee on Computer Graphics, in cooperation with ACM/SIGGRAPH. Edited with Gregory M. Nielson. 1994. Scientific Visualization : Advances and challenges. Academic Press. 1999. IEEE virtual reality : March 13–17, 1999, Houston, Texas : proceedings. Edited with Peter Astheimer, and Detlef Teichmann ; sponsored by IEEE Computer Society Technical Committee on Visualization and Graphics. References External links Lawrence Rosenblum homepage at the NSF.
1949 births 20th-century American mathematicians 21st-century American mathematicians Living people Information visualization experts United States National Science Foundation officials Ohio State University Graduate School alumni Senior Members of the IEEE Virtual reality pioneers
8604018
https://en.wikipedia.org/wiki/National%20University%20of%20Management
National University of Management
The National University of Management (NUM; ) is a business school in Phnom Penh, Cambodia, located near Phnom Penh Railway Station. The university provides training programmes to all people in the areas of management, economics, commerce, IT, business law, tourism, and foreign languages, accompanied by research and development in response to the needs of the job market. History The National University of Management was founded in 1983 as the Economics Science Institute (ESI) and, until 1991, received assistance from the National Economics University in Hanoi, Vietnam. During this period, students were enrolled in a five-year undergraduate programme with the Vietnamese language serving as the main language of instruction. The curriculum, set by the visiting faculty from Hanoi, included major fields in finance, commerce, agriculture, industry, and socialist planning. With the opening up of Cambodia to the international community during the early 1990s, the ESI was renamed the Faculty of Business (FOB). Initial support for the FOB was provided by the Asia Foundation and later, through a three-year USAID grant (1994 to 1997), by Georgetown University and the University of San Francisco. This support provided teacher training and institutional development, and encouraged transformation of the curriculum along the lines of an international or American-style business school. Marketing and accounting majors were introduced during this period, and the length of the undergraduate programme was reduced from five to four years of study. Commercial law courses were added to the curriculum by the University of San Francisco Law School. In 2004, the FOB was transformed into the National University of Management (NUM); programme offerings were expanded to include the fields of tourism and hospitality management, finance and banking, and MIS. NUM opened the first MBA program in Cambodia in cooperation with the University Utara Malaysia (UUM), a state-sponsored university in northern Malaysia. NUM maintains a five-year faculty exchange and research program with the University of Antwerp, Belgium and has recently opened a Center for Entrepreneurship and Development in partnership with Fisk University (Nashville, United States) and Tennessee State University, sponsored by UNCF/USAID. Currently, more than 10,000 students attend courses at NUM's main campus in Phnom Penh. NUM operates a full Bachelor of Business Administration degree programme in Battambang with more than 700 students attending courses at NUM's provincial campus. Academic programmes Bachelor's degrees (four-year): Management, Marketing, Accounting and Finance, Finance and Banking, Tourism and Hospitality, English Literature, Eco-Business, Law, Faculty of Information Technology Master's degrees (two-year): Business Administration, Accounting and Finance, Economics, Information Technology (IT), Tourism and Hospitality Ph.D. (three-year): Business Administration, Tourism and Hospitality, Economics Certificate programs: Business Administration-Marketing, Economics, Information Technology, Accounting and Finance, Finance and Banking, Foreign Language (Korean, Japanese) References External links Universities in Cambodia Educational institutions established in 1983 Education in Phnom Penh 1983 establishments in Cambodia
857605
https://en.wikipedia.org/wiki/Havok%20%28software%29
Havok (software)
Havok is a middleware software suite developed by the Irish company Havok. Havok provides a physics engine component and related functions to video games. In September 2007, Intel announced it had signed a definitive agreement to acquire Havok Inc. In 2008, Havok was honored at the 59th Annual Technology & Engineering Emmy Awards for advancing the development of physics engines in electronic entertainment. In October 2015, Microsoft announced it had acquired Havok. Products The Havok middleware suite consists of the following modules: Havok Physics: It is designed primarily for video games, and allows for real-time collision and dynamics of rigid bodies in three dimensions. It provides multiple types of dynamic constraints between rigid bodies (e.g. for ragdoll physics), and has a highly optimized collision detection library. By using dynamical simulation, Havok Physics allows for more realistic virtual worlds in games. The company was developing a specialized version of Havok Physics called Havok FX that made use of ATI and NVIDIA GPUs for physics simulations; however, the goal of GPU acceleration did not materialize until several years later. Havok AI: In 2009, Havok released Havok AI, which provides advanced pathfinding capabilities for games. Havok AI provides navigation mesh generation, pathfinding and path following for video game environments. Havok Cloth: Released in 2008, Havok Cloth deals with efficient simulation of character garments and soft body dynamics. Havok Destruction (discontinued): Also released in 2008, Havok Destruction provides tools for creation of destructible and deformable rigid body environments. Havok Animation Studio (discontinued): Havok Animation Studio was formerly known as Havok Behavior and Havok Animation. Havok Behavior is a runtime SDK for controlling game character animation at a high level using finite state machines. Havok Animation provides efficient playback and compression of character animations in games, and features such as inverse kinematics. Havok Script (discontinued): Havok Script is a Lua-compatible virtual machine designed for video game development. It is shipped as part of the Havok Script Studio. Havok Vision Engine (discontinued): On August 8, 2011, Havok announced its acquisition of the German game engine development company Trinigy and its Vision Engine and toolset. Platforms Version 1.0 of the Havok SDK was unveiled at the Game Developers Conference (GDC) in 2000. The Havok SDK is multi-platform by design and is regularly updated to run on the majority of current platforms. Licensees are given access to most of the C/C++ source code, giving them the freedom to customize the engine's features or port it to different platforms, although some libraries are only provided in binary format. In March 2011, Havok demonstrated a version of the Havok physics engine designed for use with the Sony Xperia Play, or more specifically, Android 2.3. During Microsoft's //BUILD/ 2012 conference, Havok unveiled a full technology suite for Windows 8, Windows RT, Windows Phone 8 and later Windows 10. Usage Video games Since the SDK's launch in 2000, it has been used in over 600 video games.
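The core of the rigid-body simulation that a physics engine such as Havok Physics supplies to these games is a per-frame update: integrate forces into velocities and positions, then detect and resolve collisions and constraints. The following minimal sketch in C illustrates that loop in a generic form; it is not Havok code, does not use Havok's proprietary API, and all names in it are invented for illustration.

#include <stdio.h>

/* Minimal rigid-body state: position, velocity, inverse mass. */
typedef struct { double x[3], v[3], inv_mass; } Body;

/* One semi-implicit Euler step with gravity and a ground plane at y = 0. */
static void step(Body *b, double dt) {
    const double g = -9.81;              /* gravity along y */
    if (b->inv_mass > 0.0)               /* static bodies have inv_mass == 0 */
        b->v[1] += g * dt;
    for (int i = 0; i < 3; ++i)
        b->x[i] += b->v[i] * dt;
    if (b->x[1] < 0.0) {                 /* crude collision with the ground */
        b->x[1] = 0.0;
        b->v[1] = -0.5 * b->v[1];        /* restitution coefficient 0.5 */
    }
}

int main(void) {
    Body ball = { {0.0, 10.0, 0.0}, {1.0, 0.0, 0.0}, 1.0 };
    for (int frame = 0; frame < 180; ++frame)   /* 3 seconds at 60 Hz */
        step(&ball, 1.0 / 60.0);
    printf("final position: %.2f %.2f %.2f\n", ball.x[0], ball.x[1], ball.x[2]);
    return 0;
}

A production engine replaces the hard-coded ground-plane test above with a general collision detection library and solves many constraints simultaneously each frame.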
Other software Havok can also be found in: Futuremark's 3DMark2001 and 03 benchmarking tools a plug-in for Maya animation software Valve's Source game engine uses VPhysics, which is a physics engine modified from Havok Havok supplies tools (the "Havok Content Tools") for export of assets for use with all Havok products from Autodesk 3ds Max, Autodesk Maya, and (formerly) Autodesk Softimage. Havok was also used in the virtual world Second Life, with all physics handled by its online simulator servers, rather than by the users' client computers. An upgrade to Havok version 4 was released in April 2008, and an upgrade to version 7 began in June 2010. Second Life resident Emilin Nakamori constructed a weight-driven, pendulum-regulated mechanical clock driven entirely by Havok Physics in March 2019. References External links 2000 software Computer physics engines Microsoft software Middleware for video games Video game development software Video game engines Virtual reality
36205861
https://en.wikipedia.org/wiki/Aemulor
Aemulor
In computing, Aemulor is an emulator of the earlier 26-bit addressing-mode ARM microprocessors. It runs on ARM processors under 32-bit addressing-mode versions of RISC OS. It was written by Adrian Lees and released in 2003. An enhanced version is available under the name Aemulor Pro. The software allows Raspberry Pi, Iyonix PC and A9home computers running RISC OS to make use of some software written for older hardware. Compatibility with the BeagleBoard single-board computer was also under development. Development The software's existence was first reported around the time of the announcement of the Iyonix in October 2002. A demo version was released in February 2003, with the commercial release in March of that year. Aemulor Pro was released in 2004. This added enhancements, including support for low colour modes, required by the scorewriter Sibelius and many games. A version for the A9home was released in 2005. The software was exhibited at the 2006 Wakefield Show. In 2009, author Adrian Lees posted on The Icon Bar, showing an early prototype of the software running on the BeagleBoard. Progress on further compatibility for the Raspberry Pi single-board computer was announced by Lees on the RISC OS Open forum in 2012. Developer R-Comp was reported in May 2012 to be hoping to make Aemulor available for its BeagleBoard-xM-based ARMini computer. Features The software provides full 26-bit emulation for applications written in C and ARM assembly language. It employs an XScale-optimised ARM code interpreter and supports SWI emulation from earlier versions of RISC OS through to RISC OS 5, flag preservation, and the creation of dynamic areas in low memory. Support for running A310Emu is included, allowing users to further emulate earlier versions of the OS, going back to Arthur. Due to the memory remapping employed, native 32-bit applications are restricted to a maximum size of 28 MB while Aemulor is running. The original release included an Easter egg, with a prize of an upgrade to the Pro version for the person who found it. Aemulor Pro adds support for low-bpp screen modes, sound, hardware emulation of VIDC/IOC, an altered memory map and filing systems. Some software, such as Sibelius, can be run both in the desktop and in full screen. Compatible software References RISC OS emulators RISC OS emulation software Proprietary software
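For background on what "26-bit" means here: on the original 26-bit ARM processors, register R15 held the condition flags, interrupt masks, program counter and processor mode packed into a single word, whereas 32-bit ARM separates the program counter from the status register. An emulator such as Aemulor therefore has to maintain this packed layout on behalf of the emulated application. The sketch below, in C, unpacks the documented 26-bit R15 layout; it is an illustrative example of the register format only, not code from Aemulor.

#include <stdint.h>
#include <stdio.h>

/* In 26-bit ARM mode, R15 combines flags, interrupt masks, PC and mode:
   bit 31 N, bit 30 Z, bit 29 C, bit 28 V,
   bit 27 I (IRQ disable), bit 26 F (FIQ disable),
   bits 25..2 word-aligned program counter, bits 1..0 processor mode. */
typedef struct {
    uint32_t pc;            /* byte address of the current instruction */
    int n, z, c, v;         /* condition flags */
    int irq_off, fiq_off;   /* interrupt disable bits */
    unsigned mode;          /* 0 = USR, 1 = FIQ, 2 = IRQ, 3 = SVC */
} ArmState26;

static ArmState26 unpack_r15(uint32_t r15) {
    ArmState26 s;
    s.n = (r15 >> 31) & 1;
    s.z = (r15 >> 30) & 1;
    s.c = (r15 >> 29) & 1;
    s.v = (r15 >> 28) & 1;
    s.irq_off = (r15 >> 27) & 1;
    s.fiq_off = (r15 >> 26) & 1;
    s.pc   = r15 & 0x03FFFFFCu;   /* 24-bit word-aligned PC field */
    s.mode = r15 & 0x3u;
    return s;
}

int main(void) {
    /* Example value: Z and C set, PC = 0x8000, SVC mode. */
    ArmState26 s = unpack_r15(0x60008003u);
    printf("pc=0x%06X Z=%d C=%d mode=%u\n", (unsigned)s.pc, s.z, s.c, s.mode);
    return 0;
}

Because the PC field is only 24 bits of word address, programs in this mode are confined to a 64 MB address space, which is why the emulated applications see the older, smaller memory map.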
58827999
https://en.wikipedia.org/wiki/1996%20Troy%20State%20Trojans%20football%20team
1996 Troy State Trojans football team
The 1996 Troy State Trojans football team represented Troy State University in the 1996 NCAA Division I-AA football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama and competed in the Southland Conference. Troy State finished the season ranked No. 5 in the Sports Network Poll. Schedule References Troy State Troy Trojans football seasons Southland Conference football champion seasons Troy State Trojans football
37921496
https://en.wikipedia.org/wiki/Alibre%20Design
Alibre Design
Alibre Design is a parametric computer-aided design (CAD) software suite developed by Alibre for Microsoft Windows. Alibre is a brand of Alibre, LLC, a company based in Texas. About Founded in 1997, Alibre began working closely with Microsoft in 1998 to develop the first web-based collaborative 3D design environment. The environment operated in a web browser and allowed multiple users to work on the same design simultaneously. Following this development, Alibre received a patent for "System and method for solid modeling," protecting their technologies for generating 3D geometries across a high-bandwidth, distributed network. Alibre's purported aim in this development was to give businesses a cost-effective way to geographically distribute teams by enabling networked design environments without incurring large capital expenditures. Alibre Design is based on the ACIS modeling kernel from Spatial, and a 2D constraint system from Siemens PLM, among other technologies. It allows users to create modeled representations of concepts to facilitate design and manufacturing, with 2D and 3D functionality. Parametric solid modeling is driven by intelligent dimensions, meaning that the software automatically recomputes a design to accommodate a change to a single dimension, maintaining the design's dimensional accuracy without requiring manual adjustment of every other dimension. Products and features Alibre's products fall into two categories intended for different users and applications. Alibre Design Professional has a basic set of features intended for users getting started with CAD, whereas Alibre Design Expert is a 3D and 2D modeling application intended for professional use. Design tools Some of Alibre's key design tools include: Part modeling to define the geometry of individual components Sheet metal modeling to define the geometry of individual components created from sheeted materials, such as sheet metal. The software adheres to the real-world constraints of sheeted goods Assembly modeling to define relationships between individual components for final assembled designs. The software analyzes the relation of components to assess real-world constraints and conditions, such as tangency or alignment Surface modeling to create organic surface models 2D drafting to convert previously created 3D designs into 2D engineering drawings for manufacturing, patents and design communication Technical support and training Alibre includes free training through a built-in help section in the software. Free training is also available via online tutorials and videos. To get direct technical assistance for Alibre products, customers must buy a software maintenance plan, which gives access to support via telephone or an online ticket system.
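As an illustration of the parametric recompute behaviour described above, the sketch below shows, in C, how a trivial dimension-driven model might update dependent geometry when a single driving dimension changes. This is a hypothetical, generic example; it does not use Alibre's actual API, and every name in it is invented for illustration.

#include <stdio.h>

/* A toy parametric model: a rectangular plate with a centred hole.
   "width" is the driving dimension; the other values are derived. */
typedef struct {
    double width;        /* driving dimension, set by the user      */
    double height;       /* derived: kept at half the width         */
    double hole_diam;    /* derived: kept at a quarter of the width */
} PlateModel;

/* Recompute every derived dimension from the driving one. */
static void recompute(PlateModel *m) {
    m->height    = 0.5  * m->width;
    m->hole_diam = 0.25 * m->width;
}

/* Changing one intelligent dimension triggers a model-wide update. */
static void set_width(PlateModel *m, double w) {
    m->width = w;
    recompute(m);
}

int main(void) {
    PlateModel plate = {0};
    set_width(&plate, 100.0);
    printf("w=%.1f h=%.1f hole=%.1f\n", plate.width, plate.height, plate.hole_diam);
    set_width(&plate, 140.0);   /* one edit; the rest of the model follows */
    printf("w=%.1f h=%.1f hole=%.1f\n", plate.width, plate.height, plate.hole_diam);
    return 0;
}

In a real parametric modeller the "recompute" step replays a full history of sketches, constraints and features rather than two fixed formulas, but the principle of deriving the whole model from a small set of driving dimensions is the same.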
Compatibility Supported files for import STEP AP203/214 (*.stp, *.step, *.ste) IGES (*.igs) ACIS (*.sat) Rhino (*.3dm) DXF (*.dxf), DWG (*.dwg) SolidWorks Files (*.sldprt, *.sldasm) Autodesk Inventor (*.ipt, *.iam) ProEngineer (*.prt, *.asm, *.xpr, *.xas) Catia (*.CATPart, *.CATProduct) Parasolid (*.x_t, *.x_b, *.xmt_txt, *xmt_bin) Solid Edge (*.par, *.psm, *.asm) Various image formats (bmp, dib, rle, gif, tif, tiff, png, jpg, jpeg, jfif, emf, wmf) Supported files for export STEP AP203/214 (*.stp, *.step, *.ste) IGES (*.igs) ACIS (*.sat) Stereolithography (*.stl) DXF (*.dxf), DWG (*.dwg) SolidWorks (*.sldprt, *.sldasm) Parasolid (*.x_t, *.x_b, *.xmt_txt, *xmt_bin) OBJ (*.obj) - via an add-on in Alibre Design Expert SketchUp (*.skp) - via an add-on in Alibre Design Expert See also Comparison of computer-aided design editors References Further reading Computer-aided design software 3D graphics software Windows-only software Product design Computer-aided design software for Windows Proprietary software
18075484
https://en.wikipedia.org/wiki/Modern%20Operating%20Systems
Modern Operating Systems
Modern Operating Systems is a book written by Andrew Tanenbaum, a version (which does not target implementation) of his book Operating Systems: Design and Implementation. It is now in its 4th edition, published in March 2014 and written together with Herbert Bos. Modern Operating Systems (commonly known as MOS) is a widely used textbook that covers the fundamentals of operating systems, illustrated with small amounts of code written in C. MOS describes many scheduling algorithms. Books on operating systems Computer science books 1992 non-fiction books
30818680
https://en.wikipedia.org/wiki/Abbe%20Mowshowitz
Abbe Mowshowitz
Abbe Mowshowitz (born 13 November 1939, Liberty, New York) is an American academic, a professor of computer science at the City College of New York and a member of the Doctoral Faculty in Computer Science at The City University of New York who works in the areas of the organization, management, and economics of information systems; social and policy implications of information technology; network science; and graph theory. He is known in particular for his work on virtual organization, a concept he introduced in the 1970s; on information commodities; on the social implications of computing; and on the complexity of graphs and networks. Before joining the faculty at The City College of New York, Mowshowitz was a faculty member at the University of Toronto (Departments of Computer Science and Industrial Engineering, 1968–1969); the University of British Columbia (Department of Computer Science, 1969–1980); and was research director in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute (1982–1984). In addition, he was a visiting professor at the Graduate School of Management, Delft, The Netherlands (1979–1980); held the Tinbergen Chair in the Graduate School of Management at Erasmus University, Rotterdam, The Netherlands (1990–1991); was a professor in the Department of Social Science Informatics at the University of Amsterdam, The Netherlands (1991–1993, 1994–1997); and was the CeTim professor of Technology Innovation Management at the Rotterdam School of Management, Rotterdam, The Netherlands (2001–2002). Mowshowitz received a Ph.D. in Computer Science from the University of Michigan in 1967 (under the direction of Professor Anatol Rapoport), and a BS in Mathematics from the University of Chicago in 1961. His research on the structural complexity of graphs (published in 1968) was based on a paper by Professor Nicolas Rashevsky who first introduced the idea of measuring the information content of a graph using Shannon's entropy measure. Mowshowitz formalized and extended Rashevsky's idea and characterized the structural complexity of various classes of graphs and binary operations on graphs. Two measures of structural complexity were defined, both relative to a partition of the vertices of a graph. One of the measures, based on a partition related to independent sets, stimulated Körner's development of graph entropy. Mowshowitz was an early and persistent advocate of and contributor to studies of the social relations of computing. He introduced an undergraduate course on that topic at the University of British Columbia in 1973; published a comprehensive text in 1976; served as vice-chairman (1983–1985) and chairman (1985–1987) of the ACM's Special Interest Group on Computers and Society; and was a member of IFIP Working Group 9.2 (Computers and Social Accountability) from 1977 to 1997. As the title of his book The Conquest of Will suggests, Mowshowitz aimed to extend the idea of conquest of the material world - theme of many inquiries into the implications of technology - to the realm of behavior and culture. He called attention to the threats posed by computer technology to personal privacy, political freedom and human identity, and, like Professor Joseph Weizenbaum in Computer Power and Human Reason (published in the same year), he pointed to the danger of excessive reliance on computers in areas traditionally requiring human judgment. 
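The graph-entropy measures mentioned above can be stated compactly. For a graph $G$ on $n$ vertices whose vertex set is partitioned into classes $V_1, \ldots, V_k$ (in the first of Mowshowitz's measures, the orbits of the automorphism group of $G$), the information content is the Shannon entropy of that partition; a standard formulation, given here for illustration, is

$$ I(G) \;=\; -\sum_{i=1}^{k} \frac{|V_i|}{n} \log_2 \frac{|V_i|}{n}. $$

Under this measure a vertex-transitive graph, whose automorphism group has a single orbit, has information content 0, while a graph with trivial automorphism group, in which every vertex lies in its own orbit, attains the maximum value $\log_2 n$.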
As an extension of the last chapter of The Conquest of Will he produced a study-anthology of computers in fiction in an effort to stimulate further discussion of the social consequences of computer technology. In recent years he has (together with colleague Professor Akira Kawaguchi) developed and applied a quantitative measure of the bias of search engines on the World Wide Web. He also worked on the ethical implications of computing and, as a participant in a workshop held at SRI International in 1977 (organized by Mr. Donn Parker), developed a taxonomy of ethical issues that informed the later discussion leading to the ACM code of ethics adopted in 1992. As well as conducting research on ethical implications, he contributed to policy discussions surrounding computer technology. In 1979 he consulted (together with Professor Rob Kling) for the Rathenau Commission of the Dutch Ministry of Science Policy on the societal implications of microelectronics, and from 1980 until it closed in 1995, he consulted regularly for the U.S. Congressional Office of Technology Assessment, producing a variety of background reports on the social impact of information technology. He conceived the idea of virtual organization in the late 1970s, drawing on an analogy between the structure and function of global companies, on the one hand, and virtual memory in computer systems, on the other. This analogy led eventually to the formal definition presented in a paper that appeared in 1994 and elaborated in his book on virtual organization published in 2002. During the year 1979–1980, he was stimulated to develop and codify the idea of virtual organization through discussions with Henk van Dongen and his colleagues at the Graduate School of Management in Delft, The Netherlands. In the course of elaborating the concept and its implications for society, he introduced the notion of information commodity to explain a key part of the economic foundation of virtual organization and developed mathematical models for pricing information commodities, both from the supply and the demand perspective. His work in network science combined an interest in the complexity of graphs and networks with practical experience in designing networks to support administrative functions. While at the University of Amsterdam, The Netherlands in the 1990s, he worked on the design and development of a network to support information sharing on drug-related issues among member states of the European Union. This work contributed to the formation of the European Monitoring Centre for Drugs and Drug Addiction, which was eventually established in Lisbon, Portugal. In recent years he has resumed his earlier research on the analysis of complex networks. Selected publications Matthias Dehmer, Abbe Mowshowitz, Frank Emmert-Streib, Connections between Classical and Parametric Network Entropies, PLoS ONE 6(1): 2011, e15733. Matthias Dehmer and Abbe Mowshowitz, A history of graph entropy measures, Information Sciences 181, 2011, pp. 57–78. Abbe Mowshowitz, Technology as excuse for questionable ethics. AI & Society 22, 2008, pp. 271–282. Abbe Mowshowitz and Murray Turoff, eds. The digital society. Communications of the ACM 48(10), 2005, pp. 32–74. Abbe Mowshowitz and Akira Kawaguchi, Bias on the Web. Communications of the ACM, 45 (9), 2002, pp. 56–60. Abbe Mowshowitz, Virtual Organization: Toward a Theory of Societal Transformation Stimulated by Information Technology (Westport: Quorum Books, 2002) Abbe Mowshowitz, ed.
Virtual organization, Communications of the ACM, 40(9), 1997, pp. 30–37. Abbe Mowshowitz, Virtual feudalism. In: P.J. Denning and R.M. Metcalf, eds. Beyond Calculation: the Next Fifty Years of Computing New York: Copernicus (Springer-Verlag, 1997, pp. 213–231). Abbe Mowshowitz, On the theory of virtual organization, Systems Research and Behavioral Science, 14(6),1997, pp. 373–384. Abbe Mowshowitz, On the market value of information commodities: I. The nature of information and information commodities. Journal of the American Society for Information Science, 43, 1992, pp. 225–232. Abbe Mowshowitz, On the market value of information commodities: II. Supply price. Journal of the American Society for Information Science, 43, 1992, pp. 233–241. Abbe Mowshowitz, On the market value of information commodities: III. Demand Price. Journal of the American Society for Information Science, 43, 1992, pp. 242–248. Abbe Mowshowitz, Information, Globalization, and National Sovereignty. Report, Office of Technology Assessment, Congress of the United States, 1987. Abbe Mowshowitz, On approaches to the study of social issues in computing, Communications of the ACM 24, 1981, pp. 146‑155. Abbe Mowshowitz, ed. Human Choice and Computers, 2. Proceedings of the Second IFIP Conference on Human Choice and Computers, Baden, Austria, June 4‑8, 1979 (Amsterdam: North‑Holland, 1980) Abbe Mowshowitz, Inside Information: Computers in Fiction (Reading: Addison-Wesley, 1977) Abbe Mowshowitz, The Conquest of Will: Information Processing in Human Affairs (Reading: Addison-Wesley, 1976) Abbe Mowshowitz, Entropy and the complexity of graphs: I. An index of the relative complexity of a graph, Bulletin of Mathematical Biophysics 30, 1968, pp. 175‑204. Abbe Mowshowitz, Entropy and the complexity of graphs: II. The information content of digraphs and infinite graphs, Bulletin of Mathematical Biophysics 30, 1968, pp. 225‑240. Abbe Mowshowitz, Entropy and the complexity of graphs: III. Graphs with prescribed information content, Bulletin of Mathematical Biophysics 30, 1968, pp. 387‑414. Abbe Mowshowitz, Entropy and the complexity of graphs: IV. Entropy measures and graphical structure, Bulletin of Mathematical Biophysics 30, 1968, pp. 533‑546. References 1939 births Living people American computer scientists City College of New York faculty University of Amsterdam faculty University of Chicago alumni University of Michigan alumni People from Liberty, New York Scientists from New York (state)
51784737
https://en.wikipedia.org/wiki/Medical%20device%20hijack
Medical device hijack
A medical device hijack (also called medjack) is a type of cyber attack. The weakness targeted is the medical devices of a hospital. This was covered extensively in the press in 2015 and in 2016. Medical device hijacking received additional attention in 2017, both because of an increase in identified attacks globally and because of research released early in the year. These attacks endanger patients by allowing hackers to alter the functionality of critical devices such as implants, exposing a patient's medical history, and potentially granting access to the prescription infrastructure of many institutions for illicit activities. MEDJACK.3 seems to have additional sophistication and is designed to not reveal itself as it searches for older, more vulnerable operating systems only found embedded within medical devices. Further, it has the ability to hide from sandboxes and other defense tools until it is in a safe (non-VM) environment. There was considerable discussion and debate on this topic at the RSA 2017 event during a special session on MEDJACK.3. Debate ensued between various medical device suppliers, hospital executives in the audience and some of the vendors over ownership of the financial responsibility to remediate the massive installed base of vulnerable medical device equipment. Further, notwithstanding this discussion, FDA guidance, while well intended, may not go far enough to remediate the problem. Mandatory legislation as part of new national cyber security policy may be required to address the threat of medical device hijacking, other sophisticated attacker tools that are used in hospitals, and the new variants of ransomware which seem targeted to hospitals. Overview In such a cyberattack the attacker places malware within the networks through a variety of methods (malware-laden website, targeted email, infected USB stick, socially engineered access, etc.) and then the malware propagates within the network. Most of the time existing cyber defenses clear the attacker tools from standard servers and IT workstations (IT endpoints), but the cyber defense software cannot access the embedded processors within medical devices. Most of the embedded operating systems within medical devices run on Microsoft Windows 7 and Windows XP. The security in these operating systems is no longer supported, so they are relatively easy targets in which to establish attacker tools. Inside these medical devices, the cyber attacker then finds safe harbor in which to establish a backdoor (command and control). Since medical devices are FDA certified, hospital and cybersecurity team personnel cannot access the internal software without perhaps incurring legal liability, impacting the operation of the device or violating the certification. Given this open access, once the medical devices are penetrated, the attacker is free to move laterally to discover targeted resources such as patient data, which is then quietly identified and exfiltrated. Organized crime targets healthcare networks in order to access and steal patient records. Impacted devices Virtually any medical device can be impacted by this attack. In one of the earliest documented examples, testing identified malware tools in a blood gas analyzer, magnetic resonance imaging (MRI) system, computerized tomogram (CT) scan, and x-ray machines. In 2016 case studies became available that showed attacker presence also in the centralized PACS imaging systems which are vital to hospital operations.
At the annual Black Hat conference in August 2011, representatives from IBM demonstrated how an infected USB device can be used to identify the serial numbers of devices within close range and facilitate fatal dosage injections to patients using an insulin pump. Impacted institutions This attack primarily centers on the largest 6,000 hospitals globally. Healthcare data has the highest value of any stolen identity data, and given the weakness in the security infrastructure within hospitals, this creates an accessible and highly valuable target for cyber thieves. Besides hospitals, this can impact large physician practices such as accountable care organizations (ACOs) and Independent Physician Associations (IPAs), skilled nursing facilities (SNFs) for both acute care and long-term care, surgical centers and diagnostic laboratories. Instances There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. Community Health Systems, June 2014 In an official filing to the United States Securities and Exchange Commission, Community Health Systems declared that its network of 206 hospitals in 28 states was the target of a cyber-attack between April and June 2014. The breached data included sensitive personal information of 4.5 million patients, including social security numbers. The FBI determined that the attacks were facilitated by a group in China and issued a broad warning to the industry, advising companies to strengthen their network systems and follow legal protocols to help the FBI restrain future attacks. Medtronic, March 2019 In 2019 the FDA issued an official warning concerning security vulnerabilities in devices produced by Medtronic, ranging from insulin pumps to various models of cardiac implants. The agency concluded that CareLink, the primary mechanism used for software updates in addition to monitoring patients and transferring data during implantation and follow-up visits, did not possess a satisfactory security protocol to prevent potential hackers from gaining access to these devices. The FDA recommended that health care providers restrict software access to established facilities while unifying the digital infrastructure in order to maintain full control throughout the process. Scope Various informal assessments have estimated that medical device hijacking currently impacts a majority of hospitals worldwide and remains undetected in the bulk of them. The technologies necessary to detect medical device hijacking, and the lateral movement of attackers from command and control within the targeted medical devices, were not installed in the great majority of hospitals as of February 2017. One estimate notes that in a hospital with 500 beds, there are roughly fifteen medical devices (usually internet of things (IoT) connected) per bed. That is in addition to centralized administration systems, the hospital diagnostic labs which utilize medical devices, EMR/EHR systems and CT/MRI/X-ray centers within the hospital. Detection and remediation These attacks are very hard to detect and even harder to remediate. Deception technology (the evolution and automation of honeypot or honey-grid networks) can trap or lure the attackers as they move laterally within the networks. The medical devices typically must have all of their software reloaded by the manufacturer.
Hospital security staff are neither equipped nor able to access the internals of these FDA-approved devices. Devices can become reinfected very quickly, as it only takes one compromised medical device to re-infect the rest in the hospital. Countermeasures On 28 December 2016 the US Food and Drug Administration released its recommendations, which are not legally enforceable, for how medical device manufacturers should maintain the security of Internet-connected devices. The United States Government Accountability Office studied the issue and concluded that the FDA must become more proactive in minimizing security flaws by guiding manufacturers with specific design recommendations instead of exclusively focusing on protecting the networks that are used to collect and transfer data between medical devices. A table provided in the report highlights the design aspects of medical implants and how they affect the overall security of the device in focus. See also References Medical privacy Hijack Malware
50606558
https://en.wikipedia.org/wiki/Rolling%20release
Rolling release
Rolling release, rolling update, or continuous delivery, in software development, is the concept of frequently delivering updates to applications. This is in contrast to a standard or point release development model, which uses software versions that must be reinstalled over the previous version. An example of this difference would be the multiple versions of Ubuntu Linux versus the single, constantly updated version of Arch Linux. Rolling release Rolling release development models are one of many types of software release life cycles. Although a rolling release model can be used in the development of any piece or collection of software, it is often seen in use by Linux distributions, notable examples being GNU Guix System, Arch Linux, Gentoo Linux, openSUSE Tumbleweed, PCLinuxOS, Solus, SparkyLinux and Void Linux. Some modern distributed SQL databases such as YugabyteDB can also support this feature. A rolling release is typically implemented using small and frequent updates. However, simply having updates does not automatically mean that a piece of software is using a rolling release cycle; for this, the philosophy of developers must be to work with one code branch, versus discrete versions. When the rolling release is employed as the development model, software updates are typically delivered to users by a package manager on the user's personal computer, which accesses a remote software repository (often via a download mirror) stored on an internet file server. On Arch Linux, for example, a single command such as pacman -Syu synchronizes the package databases and upgrades every installed package to the latest version in the repositories. See also Continuous delivery References Software distribution Software release
226715
https://en.wikipedia.org/wiki/Surat
Surat
Surat is a city in the western Indian state of Gujarat. Located at the mouth of the Tapti River, it used to be a large seaport. It is now the commercial and economic center in South Gujarat, and one of the largest urban areas of western India. It has well-established diamond and textile industries, and is a shopping centre for apparels and accessories. About 90% of the world's diamonds supply are cut and polished in the city. It is the second largest city in Gujarat after Ahmedabad and the eighth largest city by population and ninth largest urban agglomeration in India. It is the administrative capital of the Surat district. The city is located south of the state capital, Gandhinagar; south of Ahmedabad; and north of Mumbai. The city centre is located on the Tapti River, close to Arabian Sea. Surat will be the world's fastest growing city from 2019 to 2035, according to a study conducted by Economic Times. The city registered an annualised GDP growth rate of 11.5% over the seven fiscal years between 2001 and 2008. Surat was awarded "best city" by the Annual Survey of India's City-Systems (ASICS) in 2013. Surat is selected as the first smart IT city in India which is being constituted by the Microsoft CityNext Initiative tied up with IT services majors Tata Consultancy Services and Wipro. The city has 2.97 million internet users, about 65% of total population. Surat was selected in 2015 for an IBM Smarter Cities Challenge grant. Surat has been selected as one of twenty Indian cities to be developed as a smart city under PM Narendra Modi's flagship Smart Cities Mission. Surat is listed as the second cleanest city of India as of 21 August 2020 according to the Swachh Survekshan 2020 on 20 August. It suffered a major pipeline fire which caused some damage. Surat, famous for its diamond cutting and polishing, is known as the Diamond City of India. The city has various engineering plants like Essar, Larsen and Toubro and RIL. Surat won the Netexplo Smart Cities Award 2019 with UNESCO in the resilience category. Surat's mayor will receive the award at the UNESCO House in Paris, France in March next year. Etymology Surat was founded by a man called Gopi, who named the area Surajpur or Suryapur. Duarte Barbosa described Surat as Suratt. Jacob Peeters referred to Surat as Sourratte which is a Dutch name. There are many other names of Surat in history. Surat is referred to as Surrat, Surate or Soorat in some literature. History Surat before the Mughal Empire From 1297, Gujarat was gradually conquered by Allauddin Khilji, the ruler of the principal state in north India at the time, the Delhi Sultanate. The Delhi Sultanate appointed Governors to control Gujarat, but this had to be forcefully imposed, notably in 1347, when Muhammad bin Tughluq sacked Surat, among other cities. As control from the Delhi Sultanate waned at the end of the 14th century, pressure grew for an independent Gujarat, culminating in the then Governor Zafar Khan declaring independence in 1407. Surat was controlled directly by the nobles of the Rajput kingdom of Baglana who fell either under the Gujarat Sultans or the Deccan sultanates. However, following the fall of the Gujarat Sultanate in 1538 it was controlled by more local nobles starting with Chengiz Khan who enjoyed absolute authority over Surat, Broach, Baroda and Champaner. However, in 1637, Aurangzeb fully annexed Baglana into the Mughal Empire. 
In 1514, the Portuguese traveler Duarte Barbosa described Surat as an important seaport, frequented by many ships from Malabar and various parts of the world. By 1520, the name of the city had become Surat. It was burned by the Portuguese (1512 and 1530) and conquered by the Mughals (1573) and was twice raided by the Maratha king Shivaji (17th century). During the Mughal Empire It was the most prosperous port in the Mughal empire. Despite being a rich city, Surat looked like a typical "grubby" trader's town with mud-and-bamboo tenements and crooked streets, although along the riverfront there were a few mansions and warehouses belonging to local merchant princes and the establishments of Turkish, Armenian, English, French and Dutch traders. There were also hospitals for cows, horses, flies and insects run by religious Jains, which puzzled travelers. Some streets were narrow while others were of sufficient width. In the evening, especially near the Bazaar (marketplace), the streets became crowded with people and merchants (including Banyan merchants) selling their goods. Surat was a populous city during the Mughal era but also had a large transient population: during the monsoon season, when ships could come and go from the ports without danger, the city's population would swell. In 1612, England established its first Indian trading factory in Surat. The city was looted twice by the Maratha king Shivaji, with the first sacking occurring in 1664. Shivaji's raids scared trade away and caused ruin to the city. Later, Surat became the emporium of India, exporting gold and cloth. Its major industries were shipbuilding and textile manufacture. The coast of the Tapti River, from Athwalines to Dumas, was specially meant for shipbuilders, who were usually Rassis. The city continued to be prosperous until the rise of Bombay (present-day Mumbai). Afterward, Surat's shipbuilding industry declined and Surat itself gradually declined throughout the 18th century. During 1790–1791, an epidemic killed 100,000 Gujaratis in Surat. The British and Dutch both claimed control of the city, but in 1800, the British took control of Surat. By the middle of the 19th century, Surat had become a stagnant city with about 80,000 inhabitants. When India's railways opened, the city started becoming prosperous again. Silks, cotton, brocades, and objects of gold and silver from Surat became famous and the ancient art of manufacturing fine muslin was revived. Geography Surat is a port city situated on the banks of the Tapi river. Damming of the Tapi caused the original port facilities to close; the nearest port is now in the Magadalla and Hazira area of Surat Metropolitan Region. It has a famous beach called 'Dumas Beach' located in Hazira. The city is located at . It has an average elevation of 13 metres. The Surat district is surrounded by the Bharuch, Narmada, Navsari, to the west is the Gulf of Cambay and the surrounding districts. The climate is tropical and monsoon rainfall is abundant (about 2,500 mm a year). According to the Bureau of Indian Standards, the town falls under seismic zone-III, in a scale of I to V (in order of increasing vulnerability to earthquakes). Climate Surat has a tropical savanna climate (Köppen: Aw), moderated strongly by the Sea to the Gulf of Camboy. The summer begins in early March and lasts until June. April and May are the hottest months, the average maximum temperature being . 
Monsoon begins in late June and the city receives about of rain by the end of September, with the average maximum being during those months. October and November see the retreat of the monsoon and a return of high temperatures until late November. Winter starts in December and ends in late February, with average mean temperatures of around , and negligible rain. Since the 20th century, Surat has experienced some 20 floods. In 1968, most parts of the city were flooded and in 1994 a flood caused a country-wide plague outbreak, Surat being the epicenter. In 1998, 30 per cent of Surat had gone under water due to flooding in Tapti river following release of water from Ukai dam located 90 km from Surat and in Aug 2006 flood more than 95 per cent of the city was under Tapti river waters, killing more than 120 people, stranding tens of thousands in their homes without food or electricity and closing businesses and schools for weeks. The city is expected to experience more flooding and extreme weather as climate change becomes worse, so has invested in flood protection and climate resilience infrastructure. Economy Surat ranked 9th in India with a GDP of 2.60 lakh crore in fiscal year 2016 ($40 billion in 2016). Surat GDP in 2020 will be around $57 billion estimated by The City Mayors Foundation, an international think tank on urban affairs. Surat is a major hub of diamond cutting and polishing. The first diamond workshops in Gujarat appeared in Surat and Navasari in the late 1950s. The major group working in this industry is people from the Saurashtra region of Gujarat. Because of demand in the American market from the early 1970s to the mid-1980s (with only a brief recession in 1979), Surat's diamond industry grew tremendously. Currently, most of the diamond polishing workshops are running in the Varachha area of Surat, mostly by the people of the Patel community. Around the world, 8 out of 10 diamonds on the market were cut and polished in Surat. This industry earns India about US$10 billion in annual exports. That declined by about 18% in 2019 due to reduced demand for diamonds. The decline continued in 2020 when the industry closed for some months because of the COVID-19 pandemic in India. A legacy of old Dutch trade links, it began after a Surti entrepreneur returned from East Africa bringing diamond cutters. The rough diamonds are mined in South Africa and other regions of the African continent, and go from here as smooth gems to Antwerp, Belgium where the international diamond trade is run mainly by Hasidic Jews and Jains from Palanpur in North Gujarat. Surat's economy drives from a range of manufacturing and industry fields such as diamonds, textiles, petrochemicals, shipbuilding, automobile, port etc. Since it is known for producing textiles, including silk, Surat is known as the textile hub of the nation or the Silk City of India. It is very famous for its cotton mills and Surat Zari Craft. Surat is the biggest center of MMF (man-made fiber) in India. It has a total of 381 dyeing and printing mills and 41,100 power loom units. There are over a hundred thousand units and mills in total. The overall annual turnover is around 5 billion rupees. There are over 800 cloth wholesalers in Surat. It is the largest manufacturer of clothes in India, and Surti dress material can be found in any state of India. Surat produces 9 million meters of fabric annually, which accounts for 60% of the total polyester cloth production in India. Now the city is focusing on increasing the exports of its textile. 
There are many SME Domestic IT Companies present in Surat. MNC IT companies like IBM, HCL have satellite or virtual branches in Surat. On 14 February 2014, Government of Gujarat DST had handover STPI Surat at Bhestan-Jiav Road, Bhestan Near Udhana-Sachin BRTS Route. Surat city administration will demand for setting up of an information technology (IT) hub and an Indian Institute of Information Technology (IIIT) on the outskirts of the city. Microsoft CityNext initiative has tied up with IT services majors Tata Consultancy Services and Wipro to leverage technology for sustainable growth of cities in India. The first smart IT city in India is being constituted by the Microsoft CityNext Initiative in Surat, Gujarat. In 2011, Surat hosted India's first Microsoft DreamSpark Yatra (a tech event) with speakers from Microsoft Headquarters at Redmond, Washington. The event was organised by Ex-Microsoft Student Partner Samarth Zankharia. In May 2015, Tech giant IBM has chosen Surat among 16 global locations for its smart cities program to help them address challenges like waste management, disaster management and citizen services. Under the program, IBM will send a team of experts to each of the chosen cities where they will spend three weeks working closely with city staff analysing data about critical issues faced by its local bodies; the co-operation continued into 2016. Surat is being a port city, it has turned as a major commercial and industrial hub in India. It is home for many companies such as Oil and Natural Gas Corporation, Reliance Industries (Hazira Manufacturing Division), Essar Steel, Larsen & Toubro, Krishak Bharati Cooperative, NTPC Limited, Bharat Petroleum, Indian Oil Corporation, UltraTech Cement, Shell, GAIL, GSEG, Gujarat State Petroleum Corporation, Hero MotoCorp etc. Hazira Port is located in Hazira, an industrial suburb where most of the industries are located while other region is Magdalla which is also developed as Port of Magdalla. The government of Gujarat plans another project near Surat similar to Gujarat International Finance Tec-City (GIFT). The Chief Minister has suggested that the government wishes to develop DREAM to have a five-seven star hotel, bank, IT, corporate trading house, entertainment zone and other facilities while the Surat Diamond Bourse (SDB) will be based there. Allotment of Khajod land for the project is convenient for the state government because they have of available land. The Trade Centre, located near Sarsana village, will have a pillar-less air-conditioned hall with a pillar-less dome. Transport Built in 1860, Surat railway station falls under the administrative control of Western Railway zone of the Indian Railways. In early 2016, the Indian Railway Catering and Tourism Corporation rated the facility the best large station in India based on cleanliness. The Sitilink or Surat BRTS is a bus rapid transit system in the city. Initiated by Bharat Shah, additional city engineer of Surat Municipal Corporation. It is operated by Surat Municipal Corporation and as of August 2017, had a network of 245 buses connecting major localities. Surat International Airport located in Magdalla, 11 kilometres (7 mi) southwest of Surat. It is the 2nd busiest airport in Gujarat in terms of both aircraft movements and passenger traffic. 
Currently, airlines such as Air India, Alliance Air, AirAsia India, SpiceJet, IndiGo Airlines, Air Odisha, Ventura AirConnect provide flight services from the Surat to various major cities like New Delhi, Mumbai, Kolkata, Chennai, Bengaluru, Hyderabad, Goa, Jaipur, Visakhapatnam. There are also running international flights for the Sharjah route of Air India Express. Apart from the main city, Surat Airport also caters to various localities of south Gujarat including Navsari, Bardoli, Valsad, Bharuch, Ankleshwar. Surat Metro is a under construction rapid transit rail system for the city. Civic institutions The Surat Municipal Corporation is responsible for maintaining the city's civic infrastructure as well as carrying out associated administrative duties. At present, BJP is the ruling party with a majority. Under the Provisions of Bombay Provincial Municipal Corporations Act, 1949, Section – 4, the powers have been vested in three Distinct Statutory Authorities: the General Board, the Standing Committee, and the Municipal Commissioner. It ranked 7 out of 21 cities for best administrative practices in India in 2014. It scored 3.5 on 10 compared to the national average of 3.3. It is the only city in India to disclose municipal budgets on a weekly basis. Science Center Science Center, Surat is a multi-facility complex built by the Surat Municipal Corporation in 2009, the first of its type in western India. The complex houses a science center, museum, an art gallery, an auditorium, an amphitheater and a planetarium. Pandit Dindayal Upadhyay Indoor Stadium With a seating capacity of 6800, Pandit Dindayal Upadhyay Indoor Stadium is the first of its kind in the Western Region of India. The stadium frequently organizes national and international indoor games such as volleyball, table tennis, gymnastics, handball, boxing, wrestling, badminton, basketball, and tennis. It has a central arena of size 63 m x 33 m, rooms for participants and team officials, and other essential facilities including snack bars. This is also a convenient venue for organizing cultural programs, music concerts, drama, fashion shows, seminars, conferences, and many more. The Indoor Stadium also hosted TEDxSurat 2018 on 7 October 2018 which is the largest TEDx conference in Gujarat and one of the largest TEDx conferences in the world. Lalbhai Contractor Cricket Stadium Lalbhai contractor cricket stadium has a capacity of more than 7000 and hosted several Ranji, Irani and Duleep Trophy matches. The stadium also serves as a primary destination for local budding cricketers and enthusiasts. The stadium has hosted several benefit matches for international cricketers as well. Public Safety Surat began the 'Safe City Project' in 2011 aimed at keeping the city safe using surveillance cameras. The project was headed by Sanjay Srivastava (IPS) who was then the Joint-Commissioner of Surat Police. The 280-square-foot video wall claimed to be the largest surveillance screen in the country, is being installed in the control room of Police Commissioner Mr. Rakesh Asthana (IPS). This will help the police view the entire city live through 10,000 CCTV cameras across the city. Surat police have decided to install 5,000 CCTV cameras at sensitive points across the city. While 1,000 cameras will be night vision cameras, 4,000 others will be simple CCTV cameras. This has been installed on PPP base with the help of the city's businessmen, the city's social persons, Surat Municipal Corporation, and the Surat City Police. 
Demographics A resident of Surat is called Surati. According to the 2011 India census, the population of Surat is 4,467,797. Surat has an average literacy rate of 89%, higher than the national average of 79.5%, male literacy is 93%, and female literacy is 84%. Males constitute 53% of the population and females 47%. In Surat, 13% of the population is under 6 years of age. Education Universities Sardar Vallabhbhai National Institute of Technology, Surat is one of 31 National Institutes of Technology that are recognised as Institutes of National Importance by the Government of India. Indian Institute of Information Technology, Surat started in 2017. Most of the regional colleges are affiliated to Veer Narmad South Gujarat University (VNSGU, named after the poet Veer Narmad), which has headquarters in the Surat Metropolitan Region. Colleges are also affiliated to SNDT, Gujarat Technological University and other universities. Government Medical College, Surat is a more than 50 years old medical school of 250 yearly student admission capacity with attached tertiary care hospital, New Civil Hospital. Surat Municipal Institute of Medical Education and Research (SMIMER) is a Municipal Medical College affiliated with the Veer Narmad South Gujarat University. Auro University has also started to provide education in Surat. Culture Food Surat is known for its food and has its own list of cherished street foods. There is a famous saying in Gujarati language "સુરતનું જમણ અને કાશીનું મરણ", meaning Eat in Surat and Die in Kashi for the ultimate experience of the soul. The unique dishes of surat includes Locho, Ghari (sweet), Surti Bhusu, Alupuri, Kavsa, Ponk, Undhiyu, Dhokla, Khaman, Sev Khamani, and so forth. People's love for food in Surat is so much that there is a lane called as " Khaudra gali" which means foodie's lane which has all stalls of various types of dishes specialty being Mysore Dosa. Surat in Literature The Coffee-House of Surrat - By Leo Tolstoy A Voyage to Surat in the Year 1689 - by John Ovington Gazetteer of the Bombay Presidency: Gujarát Surat and Broach The Land of Malabar - by Duarte Barbosa Plague in Surat: Crisis in Urban Governance- By Archana Ghosh & S. Sami Ahmad Surat In The Seventeenth Century - by Balkrishna Govind Gokhale Surat, Port of the Mughal Empire - by Ruby Maloni Surat, Broach and Other Old Cities of Goojerat - by Theodore Hope People Abbas–Mustan, Bollywood directors Hashim Amla, South African Cricketer Henry Barnes-Lawrence (1815–1896), Anglican clergyman, and founder of the Association for the Protection of Sea-Birds Kiransinh Chauhan, Gujarati poet and scriptwriter Abdulgani Dahiwala, Gujarati poet Ismail Darbar, Bollywood composer Freddy Daruwala, Bollywood actor Harmeet Desai, table-tennis player Prachi Desai, actress in Bollywood Savji Dholakia, an Indian businessman. He is the founder and chairman of Hari Krishna Export. Pratik Gandhi, Bollywood actor Yazdi Karanjia, theatre person - noted as one of the doyens of Parsi theatre Sanjeev Kumar (actual name Haribhai Jariwala), film actor Mareez, 20th century Gujarati poet, popular for his ghazals Narmad, Gujarati poet, playwright, essayist, orator, lexicographer and reformer under the British Raj Dhwanil Parekh, 20th century Gujarati poet Hardik Pandya, Indian international Cricketer Laljibhai Patel, an Indian diamantaire and philanthropic social activist, who is the chairman of Dharmanandan Diamonds Pvt. Ltd.(DDPL) Hendrik van Rheede (1636–1691), Dutch botanist and colonial administrator. 
He died off the coast of Mumbai and was buried at the Dutch Cemetery in Surat. Mufaddal Saifuddin, religious leader of the Dawoodi Bohra Gunvant Shah, educationist and columnist Bhagwatikumar Sharma, author and journalist Farooq Sheikh, actor and television presenter Naval Tata, former chairman of Tata Group Virji Vora, businessman known as "merchant prince" during Mughal era See also List of tourist attractions in Surat Surat Railway Station Surat International Airport Surat BRTS Surat Metro Surat Metropolitan Region References External links Website of Surat Municipal Corporation Pincode list Populated coastal places in India Port cities in India Port cities and towns of the Arabian Sea Cities and towns in Surat district Smart cities in India Former capital cities in India Gulf of Khambhat Metropolitan cities in India 1612 establishments in the British Empire Populated places established in the 2nd millennium
40125682
https://en.wikipedia.org/wiki/OrangeFS
OrangeFS
OrangeFS is an open-source parallel file system, the next generation of Parallel Virtual File System (PVFS). A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. OrangeFS was designed for use in large-scale cluster computing and is used by companies, universities, national laboratories and similar sites worldwide. Versions and features 2.8.5 Server-to-server communication infrastructure SSD option for storage of distributed metadata Full native Windows client support Replication for immutable files 2.8.6 Direct interface for applications Client caching for the direct interface with multi-process single-system coherence Initial release of the webpack supporting WebDAV and S3 via Apache modules 2.8.7 Updates, fixes and performance improvements 2.8.8 Updates, fixes and performance improvements, native Hadoop support via JNI shim, support for newer Linux kernels 2.9 Distributed Metadata for Directory Entries Capability-based security in 3 modes Standard security Key-based security Certificate-based security with LDAP interface support Extended documentation History OrangeFS emerged as a development branch of PVFS2, so much of its history is shared with the history of PVFS. Spanning twenty years, the extensive history behind OrangeFS is summarized in the timeline below. A development branch is a new direction in development. The OrangeFS branch was begun in 2007, when leaders in the PVFS2 user community determined that: Many were satisfied with the design goals of PVFS2 and needed it to remain relatively unchanged for future stability Others envisioned PVFS2 as a foundation on which to build an entirely new set of design objectives for more advanced applications of the future. This is why OrangeFS is often described as the next generation of PVFS2. 1993 Parallel Virtual File System (PVFS) was developed by Walt Ligon and Eric Blumer under a NASA grant to study I/O patterns of parallel programs. PVFS version 0 was based on the Vesta parallel file system developed at IBM's Thomas J. Watson Research Center, and its name was derived from its development to work on Parallel Virtual Machine (PVM). 1994 Rob Ross rewrote PVFS to use TCP/IP, departing significantly from the original Vesta design. PVFS version 1 was targeted to a cluster of DEC Alpha workstations on FDDI, a predecessor to Fast Ethernet networking. PVFS made significant gains over Vesta in the area of scheduling disk I/O while multiple clients access a common file. Late 1994 The Goddard Space Flight Center chose PVFS as the file system for the first Beowulf (early implementations of Linux-based commodity computers running in parallel). Ligon and Ross worked with key GSFC developers, including Thomas Sterling, Donald Becker, Dan Ridge, and Eric Hendricks over the next several years. 1997 PVFS released as an open-source package 1999 Ligon proposed the development of a new PVFS version. Initially developed at Clemson University, the design was completed in a joint effort among contributors from Clemson, Argonne National Laboratory and the Ohio Supercomputer Center, including major contributions by Phil Carns, a PhD student at Clemson.
2003 PVFS2 released, featuring object servers, distributed metadata, accommodation of multiple metadata servers, file views based on MPI (Message Passing Interface, a protocol optimized for high performance computing) for multiple network types, and a flexible architecture for easy experimentation and extensibility. PVFS2 becomes an “Open Community” project, with contributions from many universities and companies around the world. 2005 PVFS version 1 was retired. PVFS2 is still supported by Clemson and Argonne. In recent years, various contributors (many of them charter designers and developers) continued to improve PVFS performance. 2007 Argonne National Laboratory chose PVFS2 for its IBM Blue Gene/P, a supercomputer sponsored by the U.S. Department of Energy. 2008 Ligon and others at Clemson began exploring possibilities for the next generation of PVFS in a roadmap that included the growing needs of mainstream cluster computing in the business sector. As they began developing extensions for supporting large directories of small files, security enhancements, and redundancy capabilities, many of these goals conflicted with development for Blue Gene. With diverging priorities, the PVFS source code was divided into two branches. The branch for the new roadmap became "Orange" in honor of Clemson school colors, and the branch for legacy systems was dubbed "Blue" for its pioneering customer installation at Argonne. OrangeFS became the new open systems brand to represent this next-generation virtual file system, with an emphasis on security, redundancy and a broader range of applications. Fall 2010 OrangeFS became the main branch of PVFS, and Omnibond began offering commercial support for OrangeFS/PVFS, with new feature requests from paid support customers receiving highest development priority. First production release of OrangeFS introduced. Spring 2011 OrangeFS 2.8.4 released September 2011 OrangeFS adds Windows client February 2012 OrangeFS 2.8.5 released June 2012 OrangeFS 2.8.6 released, offering improved performance, web clients and direct-interface libraries. The new OrangeFS Web pack provides integrated support for WebDAV and S3. January 2013 OrangeFS 2.8.7 released May 2013 OrangeFS available on Amazon Web Services marketplace. OrangeFS 2.9 Beta Version available, adding two new security modes and allowing distribution of directory entries among multiple data servers. April 2014 OrangeFS 2.8.8 released, adding shared mmap support and JNI support for Hadoop ecosystem applications, allowing direct replacement of HDFS November 2014 OrangeFS 2.9.0 released, adding support for distributed metadata for directory entries using an extensible hashing algorithm modeled after GIGA+, and POSIX-backward-compatible capability-based security supporting multiple modes. January 2015 OrangeFS 2.9.1 released March 2015 OrangeFS 2.9.2 released June 2015 OrangeFS 2.9.3 released November 2015 OrangeFS included in CloudyCluster 1.0 release on AWS May 2016 OrangeFS supported in Linux Kernel 4.6 October 2017 2.9.6 released January 2018 2.9.7 released; the OrangeFS RPM is now included in the Fedora distribution February 2019 CloudyCluster v2 released on AWS marketplace featuring OrangeFS June 2019 CloudyCluster v2 released on GCP featuring OrangeFS July 2019 OrangeFS is integrated with the Linux page cache in Linux kernel 5.2 January 2020 An OrangeFS interim fix for write-after-open issues is merged into the Linux kernel 5.5 August 2020 A kernel patch that fixes issues with nonstandard block sizes is backported to the 5.4 LTS kernel.
September 2020 2.9.8 released June 2021 Linux 5.13 kernel: OrangeFS readahead in the Linux kernel has been reworked to take advantage of the new xarray and readahead_expand logic. This significantly improved read performance. July 2021 df results bug fix: df on OrangeFS was reporting far less space than was actually available, causing problems for automated installers and confusing users. The fix has been pulled into the latest kernel and backported to several previous kernels. References External links Orange File System - Next Generation of the Parallel Virtual File System Architecture of a Next-Generation Parallel File System (Video archive) Scalable Distributed Directory Implementation on Orange File System Elasticluster with OrangeFS OrangeFS in the AWS Marketplace Free software Distributed file systems supported by the Linux kernel Distributed file systems
5272519
https://en.wikipedia.org/wiki/Error%20hiding
Error hiding
In computer programming, error hiding (or error swallowing) is the practice of catching an error or exception, and then continuing without logging, processing, or reporting the error to other parts of the software. Handling errors in this manner is considered bad practice and an anti-pattern in computer programming. In languages with exception handling support, this practice is called exception swallowing. Errors and exceptions have several purposes: Help software maintainers track down and understand problems that happen when a user is running the software, when combined with a logging system Provide useful information to the user of the software, when combined with meaningful error messages, error codes or error types shown in a UI, as console messages, or as data returned from an API (depending on the type of software and type of user) Indicate that normal operation cannot continue, so the software can fall back to alternative ways of performing the required task or abort the operation. When errors are swallowed, these purposes can't be accomplished. Information about the error is lost, which makes it very hard to track down problems. Depending on how the software is implemented, it can cause unintended side effects that cascade into other errors, destabilizing the system. Without information about the root cause of the problem, it's very hard to figure out what is going wrong or how to fix it. Examples Languages with exception handling In this C# example, even though the code inside the try block throws an exception, it gets caught by the blanket catch clause. The exception has been swallowed and is considered handled, and the program continues.

try
{
    throw new Exception();
}
catch
{
    // do nothing
}

In this PowerShell example, the trap clause catches the exception being thrown and swallows it by continuing execution. The "I should not be here" message is shown as if no exception had happened.

&{
    trap { continue }
    throw
    write-output "I should not be here"
}

Exception swallowing can also happen if the exception is handled and rethrown as a different exception, discarding the original exception and all its context. In this C# example, all exceptions are caught regardless of type, and a new generic exception is thrown, keeping only the message of the original exception. The original stacktrace is lost, along with the type of the original exception, any exception for which the original exception was a wrapper, and any other information captured in the exception object.

try
{
    // do something
}
catch (Exception ex)
{
    // maybe do some local handling of the exception
    throw new Exception(ex.Message);
}

A better way of rethrowing exceptions without losing information is to throw the original exception from the catch clause:

try
{
    // do something
}
catch (Exception ex)
{
    // do some local handling of the exception
    throw;
}

Alternatively, a new exception can be created that wraps the original exception, so any other handlers will have access to both:

try
{
    // do something
}
catch (Exception ex)
{
    // maybe do some local handling of the exception
    throw new CustomException(ex);
}

Other languages In Go, errors are propagated by returning an error value along with the normal function return value. The error can be ignored, as in this example.

f, _ := "I should not be here", errors.New("")
fmt.Print(f)

In the case of C standard library calls such as fopen, errors are indicated by a NULL return value, and error information is stored in the global errno variable. This code makes sure the file is valid before accessing it, but if fopen failed, the error is swallowed.

FILE *file = fopen("", "r");
if (file) {
    // do something with the file
}
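One way to avoid swallowing the error in the C example above is to test for failure and report it, for example by converting errno into a readable message with strerror (or by calling perror, which does much the same in a single call). The following is a minimal illustrative sketch, not code from any particular project; the filename data.txt is hypothetical.

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void) {
    FILE *file = fopen("data.txt", "r");  /* hypothetical filename, for illustration only */
    if (file == NULL) {
        /* report the failure instead of silently continuing */
        fprintf(stderr, "fopen failed: %s\n", strerror(errno));
        return 1;
    }
    /* do something with the file */
    fclose(file);
    return 0;
}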
Causes The most common underlying cause of error swallowing is the lack of good logging tools and processes while the developer is building software. When faced with an error that can't be easily handled, a developer with good logging tools can log an unexpected error without any real cost in time or effort. Logging the error should be straightforward (one method call), quick (with negligible impact on application performance), safe (it does not raise any errors or exceptions), and should ensure that all information is saved, by recording the type of error and any relevant data associated with it, the stacktrace of the error (so the developer can identify exactly where the error occurred and what instructions led up to it), and the timestamp of the error. Temporary exception handlers In languages with checked exceptions, all exceptions raised in a method must be listed in the signature of that method. When prototyping and implementing software, code changes often, which means that the types of exceptions that might be raised in a method also change often. Having to adjust the method signature every time something changes slows down development and can be frustrating, so swallowing exceptions as a temporary measure while doing large code changes is appealing. This temporary exception handling code might end up in the released codebase. Even in languages without checked exceptions, developers may add temporary exception handlers during large code changes to speed up prototyping, which can lead to error swallowing. Preventing crashes In situations where software must not crash for any reason, error swallowing is a practice that a programmer can easily fall into. For example, a plugin that is running inside another application is expected to handle all errors and exceptions in such a way as to not crash the application in which it is embedded. Blanket catching of errors and exceptions is a pattern that is easy to fall into when attempting to prevent crashes at all costs, and when combined with poor logging tools, it can lead to error swallowing. Hiding complexity from users When showing errors to users, it's important to turn cryptic technical errors into messages that explain what happened and what actions the user can take, if any, to fix the problem. While performing this translation of technical errors into meaningful user messages, specific errors are often grouped into more generic errors, and this process can lead to user messages becoming so useless that the user doesn't know what went wrong or how to fix it. As far as the user is concerned, the error got swallowed. See also Error message Log file Tracing (software) References Anti-patterns Articles with example code
37552
https://en.wikipedia.org/wiki/Thetis
Thetis
Thetis is a figure from Greek mythology with varying mythological roles. She mainly appears as a sea nymph, a goddess of water, or one of the 50 Nereids, daughters of the ancient sea god Nereus. When described as a Nereid in Classical myths, Thetis was the daughter of Nereus and Doris, and a granddaughter of Tethys with whom she sometimes shares characteristics. Often she seems to lead the Nereids as they attend to her tasks. Sometimes she also is identified with Metis. Some sources argue that she was one of the earliest deities worshipped in Archaic Greece, the oral traditions and records of which are lost. Only one written record, a fragment, attests to her worship, and an early Alcman hymn identifies Thetis as the creator of the universe. Historical writers such as Pausanias document that worship of Thetis as a goddess persisted in some regions. In the Trojan War cycle of myth, the wedding of Thetis and the Greek hero Peleus is one of the precipitating events in the war, and it also led to the birth of their child Achilles. As goddess Most extant material about Thetis concerns her role as mother of Achilles, but there is some evidence that as the sea-goddess she played a more central role in the religious beliefs and practices of Archaic Greece. The pre-modern etymology of her name, from tithemi (τίθημι), "to set up, establish," suggests a perception among Classical Greeks of an early political role. Walter Burkert considers her name a transformed doublet of Tethys. In Iliad I, Achilles recalls to his mother her role in defending, and thus legitimizing, the reign of Zeus against an incipient rebellion by three Olympians, each of whom has pre-Olympian roots: You alone of all the gods saved Zeus the Darkener of the Skies from an inglorious fate, when some of the other Olympians – Hera, Poseidon, and Pallas Athene – had plotted to throw him into chains ... You, goddess, went and saved him from that indignity. You quickly summoned to high Olympus the monster of the hundred arms whom the gods call Briareus, but mankind Aegaeon, a giant more powerful even than his father. He squatted by the Son of Cronos with such a show of force that the blessed gods slunk off in terror, leaving Zeus free — E.V. Rieu translation Quintus of Smyrna, recalling this passage, does write that Thetis once released Zeus from chains; but there is no other reference to this rebellion among the Olympians, and some readers, such as M. M. Willcock, have understood the episode as an ad hoc invention of Homer's to support Achilles' request that his mother intervene with Zeus. Laura Slatkin explores the apparent contradiction, in that the immediate presentation of Thetis in the Iliad is as a helpless minor goddess overcome by grief and lamenting to her Nereid sisters, and links the goddess's present and past through her grief. She draws comparisons with Eos' role in another work of the epic Cycle concerning Troy, the lost Aethiopis, which presents a strikingly similar relationship – that of the divine Dawn, Eos, with her slain son Memnon; she supplements the parallels with images from the repertory of archaic vase-painters, where Eos and Thetis flank the symmetrically opposed heroes, Achilles and Memnon, with a theme that may have been derived from traditional epic songs.
Thetis does not need to appeal to Zeus for immortality for her son, but snatches him away to the White Island Leuke in the Black Sea, an alternate Elysium where he has transcended death, and where an Achilles cult lingered into historic times. Mythology Thetis and the other deities Pseudo-Apollodorus' Bibliotheke asserts that Thetis was courted by both Zeus and Poseidon, but she was married off to the mortal Peleus because of their fears about the prophecy by Themis (or Prometheus, or Calchas, according to others) that her son would become greater than his father. Thus, she is revealed as a figure of cosmic capacity, quite capable of unsettling the divine order. (Slatkin 1986:12) When Hephaestus was thrown from Olympus, whether cast out by Hera for his lameness or evicted by Zeus for taking Hera's side, the Oceanid Eurynome and the Nereid Thetis caught him and cared for him on the volcanic isle of Lemnos, while he labored for them as a smith, "working there in the hollow of the cave, and the stream of Okeanos around us went on forever with its foam and its murmur" (Iliad 18.369). Thetis is not successful in her role protecting and nurturing a hero (the theme of kourotrophos), but her role in succoring deities is emphatically repeated by Homer, in three Iliad episodes: as well as her rescue of Zeus (1.396ff) and Hephaestus (18.369), Diomedes recalls that when Dionysus was expelled by Lycurgus with the Olympians' aid, he took refuge in the Erythraean Sea with Thetis in a bed of seaweed (6.123ff). These accounts associate Thetis with "a divine past—uninvolved with human events—with a level of divine invulnerability extraordinary by Olympian standards. Where within the framework of the Iliad the ultimate recourse is to Zeus for protection, here the poem seems to point to an alternative structure of cosmic relations." Once, Thetis and Medea argued in Thessaly over which was the most beautiful; they appointed the Cretan Idomeneus as the judge, who gave the victory to Thetis. In her anger, Medea called all Cretans liars, and cursed them never to tell the truth. Marriage to Peleus Zeus had received a prophecy that Thetis's son would become greater than his father, as Zeus had dethroned his father to lead the succeeding pantheon. In order to ensure a mortal father for her eventual offspring, Zeus and his brother Poseidon made arrangements for her to marry a human, Peleus, son of Aeacus, but she refused him. Proteus, an early sea-god, advised Peleus to find the sea nymph when she was asleep and bind her tightly to keep her from escaping by changing forms. She did shift shapes, becoming flame, water, a raging lioness, and a serpent. Peleus held fast. Subdued, she then consented to marry him. Thetis is the mother of Achilles by Peleus, who became king of the Myrmidons. According to classical mythology, the wedding of Thetis and Peleus was celebrated on Mount Pelion, outside the cave of Chiron, and attended by the deities: there they celebrated the marriage with feasting. Apollo played the lyre and the Muses sang, Pindar claimed. At the wedding Chiron gave Peleus an ashen spear that had been polished by Athena and had a blade forged by Hephaestus, while the Olympian goddesses brought him gifts: from Aphrodite, a bowl with an embossed Eros; from Hera, a chlamys; and from Athena, a flute. His father-in-law Nereus endowed him with a basket of the salt called 'divine', which has an irresistible virtue for overeating, appetite and digestion, explaining the expression '...she poured the divine salt'.
Zeus then bestowed the wings of Arce on the newly-wed couple; they were later given by Thetis to her son, Achilles. Furthermore, the god of the sea, Poseidon, gave Peleus the immortal horses Balius and Xanthus. Eris, the goddess of discord, had not been invited, however, and out of spite she threw a golden apple into the midst of the goddesses that was to be awarded only "to the fairest." In most interpretations, the award was made during the Judgement of Paris and eventually occasioned the Trojan War. As is recounted in the Argonautica, written by the Hellenistic poet Apollonius of Rhodes, Thetis, in an attempt to make her son Achilles immortal, would burn away his mortality in a fire at night, and during the day she would anoint the child with ambrosia. When Peleus caught her searing the baby, he let out a cry. Thetis heard him, and catching up the child threw him screaming to the ground, and she like a breath of wind passed swiftly from the hall as a dream and leapt into the sea, exceeding angry, and thereafter returned never again. In a variant of the myth first recounted in the Achilleid, an unfinished epic written between 94 and 95 AD by the Roman poet Statius, Thetis tried to make Achilles invulnerable by dipping him in the River Styx (one of the five rivers that run through Hades, the realm of the dead). However, the heel by which she held him was not touched by the Styx's waters and failed to be protected. (A similar myth of immortalizing a child in fire is seen in the case of Demeter and the infant Demophoon). Some myths relate that because she had been interrupted by Peleus, Thetis had not made her son physically invulnerable. His heel, which she was about to burn away when her husband stopped her, had not been protected. Peleus gave the boy to Chiron to raise. Prophecy said that the son of Thetis would have either a long but dull life, or a glorious but brief one. When the Trojan War broke out, Thetis was anxious and concealed Achilles, disguised as a girl, at the court of Lycomedes, king of Skyros. Achilles was already famed for his speed and skill in battle. Calchas, a priest of Agamemnon, prophesied the need for the great soldier within their ranks. Odysseus was subsequently sent by Agamemnon to try to find Achilles. Skyros was relatively close to Achilles’ home and Lycomedes was also a known friend of Thetis, so it was one of the first places that Odysseus looked. When Odysseus found that one of the girls at court was not a girl, he came up with a plan. Raising an alarm that they were under attack, Odysseus knew that the young Achilles would instinctively run for his weapons and armor, thereby revealing himself. Seeing that she could no longer prevent her son from realizing his destiny, Thetis then had Hephaestus make a shield and armor. Iliad and the Trojan War Thetis played a key part in the events of the Trojan War. Beyond the fact that the Judgement of Paris, which essentially kicked off the war, occurred at her wedding, Thetis influenced the actions of the Olympians and her son, Achilles. Nine years after the beginning of the Trojan War, Homer's Iliad starts with Agamemnon, king of Mycenae and the commander of the Achaeans, and Achilles, son of Thetis, arguing over Briseis, a war prize of Achilles. After initially refusing, Achilles relents and gives Briseis to Agamemnon. However, Achilles feels disrespected for having to give up Briseis and prays to Thetis, his mother, for restitution of his lost honor.
She urges Achilles to wait to rejoin the fighting until she has spoken with Zeus, and Achilles listens. When she finally speaks to Zeus, Thetis convinces him to do as she bids, and he seals his agreement with her by bowing his head, the strongest oath that he can make. Following the death of Patroclus, who wore Achilles' armor in the fighting, Thetis comes to Achilles to console him in his grief. She vows to return to him with armor forged by Hephaestus, the blacksmith of the gods, and tells him not to arm himself for battle until he sees her coming back. While Thetis is gone, Achilles is visited by Iris, the messenger of the gods, sent by Hera, who tells him to rejoin the fighting. He refuses, however, citing his mother's words and his promise to her to wait for her return. Thetis, meanwhile, speaks with Hephaestus and begs him to make armor for Achilles, which he does. First, he makes for Achilles a splendid shield, and having finished it, makes a breastplate, a helmet, and greaves. When Thetis goes back to Achilles to deliver his new armor, she finds him still upset over Patroclus. Achilles fears that while he is off fighting the Trojans, Patroclus' body will decay and rot. Thetis, however, reassures him and places ambrosia and nectar in Patroclus' nose in order to protect his body against decay. After Achilles uses his new armor to defeat Hector in battle, he keeps Hector's body to mutilate and humiliate. However, after nine days, the gods call Thetis to Olympus and tell her that she must go to Achilles and pass him a message, that the gods are angry that Hector's body has not been returned. She does as she is bid, and convinces Achilles to return the body for ransom, thus avoiding the wrath of the gods. Worship in Laconia and other places A noted exception to the general observation resulting from the existing historical records, that Thetis was not venerated as a goddess by cult, was in conservative Laconia, where Pausanias was informed that there had been priestesses of Thetis in archaic times, when a cult centered on a wooden cult image of Thetis (a xoanon) preceded the building of the oldest temple; by the intervention of a highly placed woman, her cult had been re-founded with a temple; and in the second century AD she was still being worshipped with utmost reverence. The Lacedaemonians were at war with the Messenians, who had revolted, and their king Anaxander, having invaded Messenia, took as prisoners certain women, and among them Cleo, priestess of Thetis. The wife of Anaxander asked for this Cleo from her husband, and discovering that she had the wooden image of Thetis, she set up the woman Cleo in a temple for the goddess. This Leandris did because of a vision in a dream, but the wooden image of Thetis is guarded in secret. In one fragmentary hymn by the seventh century Spartan poet, Alcman, Thetis appears as a demiurge, beginning her creation with poros (πόρος) "path, track" and tekmor (τέκμωρ) "marker, end-post". Third was skotos (σκότος) "darkness", and then the sun and moon. A close connection has been argued between Thetis and Metis, another shape-shifting sea-power later beloved by Zeus but prophesied to bear a son greater than his father because of her great strength. Herodotus noted that the Persians sacrificed to "Thetis" at Cape Sepias. By the process of interpretatio graeca, Herodotus identifies a sea-goddess of another culture (probably Anahita) as the familiar Hellenic "Thetis". In other works Homer's Iliad makes many references to Thetis.
Euripides's Andromache, 1232-1272 Apollonius Rhodius, Argonautica IV, 770–879. Bibliotheca 3.13.5. Francesco Cavalli's first opera Le nozze di Teti e di Peleo, composed in 1639, concerned the marriage of Thetis and Peleus WH Auden's poem The Shield of Achilles imagines Thetis's witnessing of the forging of Achilles's shield. In 1939, HMS Thetis (N25) then a new design of submarine, sank on her trials in the River Mersey shortly after she left the dock in Liverpool. There were 103 people on board and 99 died. The cause of the accident was an inspection hole to allow a sailor to look into the torpedo tubes. A special closure for this inspection hole had been painted over. Once submerged the torpedo tube flooded and the bow of the vessel sank. The stern was still above water. Ninety-nine people, half of them dockyard workers, died of carbon monoxide poisoning. In 1981, British actress Maggie Smith portrayed Thetis in the Ray Harryhausen film Clash of the Titans (for which she won a Saturn Award). In the film, she acts as the main antagonist to the hero Perseus for the mistreatment of her son Calibos. In 1999, British poet Carol Ann Duffy published The World's Wife poetry collection, which included a poem based on Thetis In 2004, British actress Julie Christie portrayed Thetis in the Wolfgang Petersen film Troy. In 2011, American novelist Madeline Miller portrayed Thetis in The Song of Achilles'' as a harsh and remote deity. She does not approve of Patroclus and tries to separate the him and Achilles on multiple occasions. The 2018 novel The Silence of the Girls focuses on the character of Briseis in the first person, with interjections giving Achilles' internal state of mind, including his tormented relationship with his mother. In 2019, New Zealand graphic designer Rachel Smythe portrayed Thetis in "Lore: Olympus". She is Zeus' personal secretary whom she also has an affair with. She is also the toxic best friend of Minthe and works with her to bring down Persephone. Gallery Thetis, Peleus and Zeus Wedding of Peleus and Thetis Thetis and Achilles Notes External links THETIS from the Theoi Project Slatkin: The Power of Thetis: a seminal work freely available in the University of California Press, eScholarship collection. Greek goddesses Greek sea goddesses Nereids Nymphs Shapeshifting Women in Greek mythology Characters in Greek mythology Deities in the Iliad Deeds of Zeus Deeds of Poseidon Achilles Metamorphoses characters
1590096
https://en.wikipedia.org/wiki/List%20of%20Apple%20II%20clones
List of Apple II clones
The following is an incomplete list of clones of Apple's Apple II home computer. For more details on some models see Apple II clones. North American clones United States Albert Bell & Howell Apple II Collins Orange+ Two Formula II kit ("Fully compatible with Apple II+") Franklin Ace series InterTek System IV Laser 128 MicroSCI Havac Micro-Craft Dimension 68000 Sekon Syscom 2 Unitronics Sonic Canada Apco Arcomp Super 400 Super 800 CV-777 Golden II (Spiral) Logistics Arrow 1000 Arrow 2000 Mackintosh Microcom II+ Microcom IIe MIPC O.S. Micro Systems OS-21 OS-22 Orange Computers Orangepeel Peach Microcomputer Brazilian clones CCE Exato IIe Exato Pró MC-4000 - Page in Portuguese MC-4000 //e - Page in Portuguese Del MC01 - Page in Portuguese (Unreleased Apple II+ clone) Microcraft Craft II Plus Microdigital Microdigital TK2000 Color (not 100% binary-compatible) Microdigital TK2000 II Color (not 100% binary-compatible) Microdigital TK-3000 IIe - Page in Portuguese Microdigital TK-3000 //e Compact Micronix Dactron E - Page in Portuguese Polymax Maxxi - Page in Portuguese Spectrum Equipam. Eletrônicos Ind.Com.Ltda Spectrum ED - Page in Portuguese (Apple IIe) Spectrum Microengenho I - Page in Portuguese (Apple II) Spectrum Microengenho II - Page in Portuguese (Apple IIe) Unitron Ap II - Page in Portuguese (not to be confused with the Taiwanese Unitron, the makers of the infamous U2000 and the U2200 systems) D-8100 (Dismac) Victor do Brasil Eletrônica Ltda Elppa II (1983) Elppa II Plus TS (1983) Elppa Jr. (1984) Micronix Ind. e Com.de Computadores ltda Dactron Dactron E DGT AT (Digitus Ind.e Com.Serv.de Eletrônica Ltda - 1985) DM II (D.M. Eletrônica Ltda - 1983) Link 323 (Link Tecnologia - 1984) Maneger I (Magenex Eletrônica Ltda - 1983) Maxxi (Polymax Sistemas e Periféricos Ltda - 1982) Ômega - Ind e Com. Ltda MC 100 (1983) MC 400 (1984) MG-8065 (Magenex Eletrônica Ltda - 1983) Apple Laser IIc (Milmar Ind. e Com. Ltda - 1985) Chinese clones China China Education Computer CEC-I CEC-M CEC-G CEC-E CEC-2000 Venus II Series (Apple II+ Clone) Venus IIA Venus IIB ChangJiang-I (Apple II+ Clone) DJS-033 Series (Apple II+ Clone) DJS-033e Series (Apple IIe Clone) Hong Kong ACC 8000 (a.k.a. Accord 8000) Basis Medfly CTC (Computer Technologies Corporation) Wombat Wombat AB Wombat Professional Pineapple Computers Pineapple 48K Color Computer (or "ananas") Pineapple DP-64E Teleco Electronics ATEX 2000 Personal Computer VTech (Video Technology) Laser 128 Laser 3000 Taiwan AP Computer BAT 250 Chia-ma SPS-109 Chin Hsin Industrial RX-8800 Copam Electronics Base 48 Base 64 Base 64A Base 64D Fugu Elite 5 Golden Formosa Microcomputer Golden II Happy Home Computer Co. Multi-System I.H. 
Panda CAT-100 CAT-200 CAT-400 IMC IMC-320 IMC-480 IMC-640 IMC-640E IMC-2001 (with officially licensed DOS 3.3 from Apple; after battle in court IMC Taiwan got an agreement with Apple to officially license them DOS 3.3) IMC Fox IMC Junior IMC Portcom II Lazar II Mitac LIC-2001A/LIC-2001 (Little Intelligent Computer) LIC-3001 (Little Intelligent Computer) Multitech Microprofessor II (MPF II) Microprofessor III (MPF III) Panda 64 Rakoa Computer Rakoa I SMC-II MCAD (Microcomputer Aided Design System) Sages Computer Zeus 2001 Surwave Electronics Amigo 202 Amigo 505 The Jow Dian Enterprise ZD-103 (The ZD 8/16 Personal Computer) Unitron U2000 Unitron U2200 European clones Austria Zema Twin Bulgaria IMKO 2 Pravetz series 8 Pravetz 8A Pravetz 8M Pravetz 8E Pravetz 8C France 3CI Robot (non-Apple II clone, but comes with a dedicated cash register for hairdressing salons) TMS Vela (TMS means Troyes Micro Service) Germany Basis Microcomputer GmbH Basis 108 Basis 208 Blaupunkt Blaupunkt Apple II Citron II CSC Euro 1000 CSC Euro Plus CSC Euro Profi CSC Euro Super ComputerTechnik Space 83 ComputerTechnik SK-747/IBS Space-83 Eurocon II Eurocon II+ ITT 2020 (Europlus) Precision Echo Phase II (Basis 108 with a light milk chocolate brown case) Greece Gigatronics KAT Italy Asem AM-64e Selcom Lemon II Staff C1 The Netherlands AVT Electronics AVT Comp 2 Computer Hobbyvereniging Eindhoven CHE-1 Pearcom Pear II Norway West PC-800 Spain Katson Katson II Yugoslavia Ananas Marta kompjuteri Israel General 48A General 64A RMC Kosmos 285 Spring (sold, inter alia, in Israel) Winner 64K Elite //E East Asian clones Japan Akihabara Japple Honda Computers (also known as Pete Perkins Apple) it used custom Vectorio motherboard with a custom user EPROM socket (shown ThamesTV in 1984). Wakou Marvel 2000 Singapore Creative Labs CUBIC-88 Creative Labs CUBIC-99 Lingo 128 Personal Computer South Korea Hyosung PC-8000 Sambo TriGem20 Sambo Busicom SE-6003 E-Haeng Cyborg-3 Zungwon HART Champion-86XT Sanho ACME 2000 Australian clones Dick Smith Cat (VTech Laser 3000) Soviet clones Agat Agat-4 Agat-7 Agat-8 Agat-9 Unknown models Bannana Banana CB-777 (confiscated by Apple Computer) CV-777 REON TK 8000 (confiscated by Apple Computer) Other models AES easy3 AMI II Aloha 50 Aton II Bimex BOSS-1 Elppa II Energy Control General 64 Iris 8 Ivel Z3 MCP Mango II Mind II Multi-system computer Orange Panasia Shuttle (computer) Space II Tiger TC-80A Plug-in Apple II compatibility boards Apple IIe Card (Macintosh LC) Diamond Trackstar (IBM PC) Trackstar Trackstar 128 Trackstar Plus Trackstar E Mimic Systems Spartan (Commodore 64) Quadram Quadlink (IBM PC) Titan III (Apple III) III Plus II III Plus IIe External links epocalc Apple II clones list References Clones Apple II clones Apple II
77118
https://en.wikipedia.org/wiki/Apple%20Lisa
Apple Lisa
Lisa is a desktop computer developed by Apple, released on January 19, 1983. It is one of the first personal computers to present a graphical user interface (GUI) in a machine aimed at individual business users. Development of the Lisa began in 1978, and it underwent many changes during the development period before shipping at US$9,995 with a five-megabyte hard drive. Lisa was affected by its high price, insufficient software, unreliable Apple FileWare floppy disks, and the immediate release of the cheaper and faster Macintosh. Only 10,000 Lisas were sold in two years. Considered a commercial failure (albeit one with technical acclaim), Lisa introduced a number of advanced features that would later reappear on the Macintosh and eventually IBM PC compatibles. Among these is an operating system with protected memory and a document-oriented workflow. The hardware was more advanced overall than the forthcoming Macintosh 128K; the Lisa included hard disk drive support, capacity for up to 2 megabytes (MB) of random-access memory (RAM), expansion slots, and a larger, higher-resolution display. The complexity of the Lisa operating system and its associated programs (most notably its office suite), as well as the ad hoc protected memory implementation (due to the lack of a Motorola MMU), placed a high demand on the CPU and, to some extent, the storage system as well. As a result of cost-cutting measures designed to bring the system more into the consumer bracket, advanced software, and factors such as the delayed availability of the 68000 and its impact on the design process, Lisa’s user experience felt sluggish overall. The workstation-tier price (albeit at the low end of the spectrum at the time) and lack of a technical software application library made it a difficult sell for much of the technical workstation market. Compounding matters, the runaway success of the IBM PC and Apple's decision to compete with itself, mainly via the lower-cost Macintosh, were further impediments to platform acceptance. In 1982, after Steve Jobs was forced out of the Lisa project by Apple’s Board of Directors, he appropriated the Macintosh project from Jef Raskin, who had originally conceived of a sub-$1,000 text-based appliance computer in 1979. Jobs immediately redefined Macintosh as a less expensive and more focused version of the graphical Lisa. When Macintosh launched in January 1984, it quickly surpassed Lisa’s sluggish sales. Jobs then began assimilating increasing numbers of Lisa staff, as he had done with the Apple II division after assuming control over Raskin’s project. Newer Lisa models were eventually introduced to address its shortcomings but, even after lowering the list price considerably, the platform failed to achieve favorable sales numbers compared to the much less expensive Mac. The final model, the Lisa 2/10, was rebranded as the Macintosh XL to become the high-end model in the Macintosh series. History Development Name Though the documentation shipped with the original Lisa only refers to it as "The Lisa", Apple officially stated that the name was an acronym for "Locally Integrated Software Architecture" or "LISA". Because Steve Jobs' first daughter was named Lisa Nicole Brennan (born in 1978), it was widely inferred that the name also had a personal association, and perhaps that the acronym was a backronym invented later to fit the name.
Andy Hertzfeld states the acronym was reverse engineered from the name "Lisa" in late 1982 by the Apple marketing team, after they had hired a marketing consultancy firm to come up with names to replace "Lisa" and "Macintosh" (at the time considered by Jef Raskin to be merely internal project codenames) and then rejected all of the suggestions. Privately, Hertzfeld and the other software developers used "Lisa: Invented Stupid Acronym", a recursive backronym, while computer industry pundits coined the term "Let's Invent Some Acronym" to fit the Lisa's name. Decades later, Jobs would tell his biographer Walter Isaacson: "Obviously it was named for my daughter." Research and design The project began in 1978 as an effort to create a more modern version of the then-conventional design epitomized by the Apple II. A ten-person team occupied its first dedicated office, which was nicknamed "the Good Earth building" and located at 20863 Stevens Creek Boulevard next to the restaurant named Good Earth. Initial team leader Ken Rothmuller was soon replaced by John Couch, under whose direction the project evolved into the "window-and-mouse-driven" form of its eventual release. Trip Hawkins and Jef Raskin contributed to this change in design. Apple's cofounder Steve Jobs was involved in the concept. At Xerox's Palo Alto Research Center, research had already been underway for several years to create a new humanized way to organize the computer screen, today known as the desktop metaphor. Steve Jobs visited Xerox PARC in 1979, and was absorbed and excited by the revolutionary mouse-driven GUI of the Xerox Alto. By late 1979, Jobs successfully negotiated a payment of Apple stock to Xerox, in exchange for his Lisa team receiving two demonstrations of ongoing research projects at Xerox PARC. When the Apple team saw the demonstration of the Alto computer, they were able to see in action the basic elements of what constituted a workable GUI. The Lisa team put a great deal of work into making the graphical interface a mainstream commercial product. The Lisa was a major project at Apple, which reportedly spent more than $50 million on its development. More than 90 people participated in the design, plus more in the sales and marketing effort, to launch the machine. BYTE credited Wayne Rosing with being the most important person on the development of the computer's hardware until the machine went into production, at which point he became technical lead for the entire Lisa project. The hardware development team was headed by Robert Paratore. The industrial design, product design, and mechanical packaging were headed by Bill Dresselhaus, the Principal Product Designer of Lisa, with his team of internal product designers and contract product designers from the firm that eventually became IDEO. Bruce Daniels was in charge of applications development, and Larry Tesler was in charge of system software. The user interface was designed in a six-month period, after which the hardware, operating system, and applications were all created in parallel. In 1982, after Steve Jobs was forced out of the Lisa project, he appropriated the existing Macintosh project, which Jef Raskin had conceived in 1979 and led to develop a text-based appliance computer. Jobs redefined Macintosh as a cheaper and more usable Lisa, leading the project in parallel and in secret, and substantially motivated to compete with the Lisa team. 
In September 1981, below the announcement of the IBM PC, InfoWorld reported on Lisa, "McIntosh", and another Apple computer secretly under development "to be ready for release within a year". It described Lisa as having a 68000 and 128KB RAM, and "designed to compete with the new Xerox Star at a considerably lower price". In May 1982, the magazine reported that "Apple's yet-to-be-announced Lisa 68000 network work station is also widely rumored to have a mouse." Launch Lisa sold poorly and was quickly overshadowed by the January 1984 launch of the Macintosh. Newer versions of the Lisa were introduced that addressed its faults and lowered its price considerably, but it failed to achieve favorable sales compared to the much less expensive Mac. The Macintosh project assimilated many more Lisa staff. The final revision of the Lisa, the Lisa 2/10, was modified and sold as the Macintosh XL. Discontinuation The high cost and the delays in its release date contributed to the Lisa's discontinuation, although it was repackaged and sold as the Lisa 2 at $4,995. In 1986, the entire Lisa platform was discontinued. In 1987, Sun Remarketing purchased about 5,000 Macintosh XLs and upgraded them. In 1989, with the help of Sun Remarketing, Apple disposed of approximately 2,700 unsold Lisas in a guarded landfill in Logan, Utah, in order to receive a tax write-off on the unsold inventory. Some leftover Lisa computers and spare parts were available until Cherokee Data (which purchased Sun Remarketing) went out of business. Overview Hardware The Lisa was first introduced on January 19, 1983. It is one of the first personal computer systems with a graphical user interface (GUI) to be sold commercially. It uses a Motorola 68000 CPU clocked at 5 MHz and has 1 MB of RAM. It can be upgraded to 2 MB, and later configurations shipped with as little as 512 kilobytes. The CPU speed and model were not changed from the release of the Lisa 1 to the repackaging of the hardware as Macintosh XL. The real-time clock uses a 4-bit integer and the base year is defined as 1980; the software won't accept any value below 1981, so the only valid range is 1981–1995. The real-time clock depends on a 4 x AA-cell NiCd pack of batteries that only lasts for a few hours when main power is not present. Prone to failure over time, the battery packs could leak corrosive alkaline electrolyte and ruin the circuit boards. The integrated monochrome black-on-white monitor has 720 × 364 rectangular pixels. Among the printers supported by Lisa are the Apple Dot Matrix Printer, Apple Daisy Wheel Printer, the Apple ImageWriter dot matrix, and a Canon inkjet printer. Inkjet printing was quite new at the time. Despite having a monochromatic monitor, Apple enabled software to support some color printing, due to the existence of the Canon printer. CPU The use of the slowest-clocked version of Motorola's 68000 was a cost-cutting measure, as the 68000 was initially expensive. By the time the price had come down, Apple had already designed the Lisa software around the timing of the 5 MHz processor. Lisa had been in development for such a long time that it was not initially developed for the 68000 and much of its development was done on a pre-chip form of the 68000, which was much slower than the shipping CPU. Lisa software was primarily coded in Pascal to save development time, given the high complexity of the software.
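The real-time clock limitation described above comes down to simple arithmetic: only a 4-bit offset from the 1980 base year can be stored, and an offset of zero is rejected, which yields the 1981–1995 range. The following is a minimal illustrative sketch of that mapping, written for clarity only; it is hypothetical code, not Apple's actual firmware.

#include <stdio.h>

/* Illustrative sketch: a 4-bit year offset from a 1980 base.
   Offset 0 (1980) is rejected by the software, so valid years are 1981-1995. */
#define LISA_BASE_YEAR 1980

static int lisa_year_from_nibble(unsigned nibble) {
    nibble &= 0xF;                    /* only 4 bits are stored */
    if (nibble == 0)
        return -1;                    /* 1980 is not accepted */
    return LISA_BASE_YEAR + nibble;   /* 1981 .. 1995 */
}

int main(void) {
    for (unsigned n = 0; n <= 0xF; n++)
        printf("nibble %2u -> %d\n", n, lisa_year_from_nibble(n));
    return 0;
}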
The sophistication of the Lisa software (which included a multitasking GUI requiring a hard disk), coupled with the slow speed of the CPU, RAM, lack of hardware graphics acceleration coprocessor, and protected memory implementation, led to the impression that the Lisa system was very slow. However, a productivity study done in 1984 rated the Lisa above the IBM PC and Macintosh, perhaps countering the high degree of focus on UI snappiness and other factors in perceived speed rather than actual productivity speed. RAM Lisa was designed to use slower (albeit more reliable) parity memory, and other features that reduced speed but increased stability and value. Lisa is able to operate when RAM chips failed on its memory boards, unlike later Macintosh systems, reducing the cost to owners by enabling the usage of partially-failed boards. The Lisa system isolates the failed chip or chips and uses the rest of the board's RAM. This was particularly important given the large number of individual RAM chips Lisa used in 1983 for a consumer system (at around $2,500 in cost to Apple per machine). RAM could be upgraded to 2 MB. Drives The original Lisa, later called the Lisa 1, has two Apple FileWare 5.25-inch double-sided variable-speed floppy disk drives, more commonly known by Apple's internal code name for the drive, "Twiggy". They had, for the time, a very high capacity of approximately 871 kB each, but proved to be unreliable and required nonstandard diskettes. Competing systems implementing that level of per-diskette data storage had to utilize much larger 8" floppy disks. These disks were seen as cumbersome and old-fashioned for a consumer system. Apple had worked hard to increase the storage capacity of the minifloppy-size disk by pioneering features that Sony perfected shortly after with its microfloppy drives. Although it used a Twiggy drive in the prototype stage, the first Macintosh was launched the following year with one of the Sony 400 KB 3.5" "microfloppy" disk drives. 1984 also saw the release of the first revision of Lisa, the Lisa 2, which also included a single Sony drive. Apple provided free upgrades for Lisa 1 owners to Lisa 2 hardware, including the replacement of the Twiggy drives with a single Sony drive. The Sony drive, being only single-sided, could not store nearly as much data as a single Twiggy, but did so with greater reliability. The IBM PC shipped with a minifloppy (5.25-inch) drive that stored even less data: 360 KB. It was also slower and did not have the protective shell of the Sony microfloppy drive diskettes, which improves reliability. An optional external 5 MB or, later, a 10 MB Apple ProFile hard drive (originally designed and produced for the Apple III by a third party), was available. With the introduction of the Lisa 2/10, an optional 10 MB compact internal proprietary hard disk manufactured by Apple, known as the "Widget", was also offered. As with the Twiggy, the Widget developed a reputation for reliability problems. The ProFile, by contrast, was typically long-lived. The Widget was incompatible with earlier Lisa models. In an effort to increase the reliability of the machine, Apple included, starting with Lisa 1, several mechanisms involved with disk storage that were innovative and not present on at least the early releases of the Macintosh, nor on the IBM PC. For example, block sparing was implemented, which would set aside bad blocks, even on floppy disks. 
Another feature was the redundant storage of critical operating system information, for recovery in case of corruption. Lisa 2 The first hardware revision, the Lisa 2, was released in January 1984 and was priced between $3,495 and $5,495 US. It was much less expensive than the original model and dropped the Twiggy floppy drives in favor of a single 400k Sony microfloppy. The Lisa 2 has as little as 512 KB of RAM. The Lisa 2/5 consists of a Lisa 2 bundled with an external 5- or 10-megabyte hard drive. In 1984, at the same time the Macintosh was officially announced, Apple offered free upgrades to the Lisa 2/5 to all Lisa 1 owners, by swapping the pair of Twiggy drives for a single 3.5-inch drive, and updating the boot ROM and I/O ROM. In addition, the Lisa 2's new front faceplate accommodates the reconfigured floppy disk drive, and it includes the new inlaid Apple logo and the first Snow White design language elements. The Lisa 2/10 has a 10MB internal hard drive (but no external parallel port) and a standard configuration of 1MB of RAM. Developing early Macintosh software required a Lisa 2. There were relatively few third-party hardware offerings for the Lisa, as compared to the earlier Apple II. AST offered a 1.5 MB memory board, which – when combined with the standard Apple 512 KB memory board – expands the Lisa to a total of 2 MB of memory, the maximum amount that the MMU can address. Late in the product life of the Lisa, there were third-party hard disk drives, SCSI controllers, and double-sided 3.5-inch floppy-disk upgrades. Unlike the original Macintosh, the Lisa has expansion slots. The Lisa 2 motherboard has a very basic backplane with virtually no electronic components, but plenty of edge connector sockets and slots. There are two RAM slots, one CPU upgrade slot, and one I/O slot, all in parallel placement to each other. At the other end, there are three "Lisa" slots in parallel. Macintosh XL In January 1985, following the Macintosh, the Lisa 2/10 (with integrated 10 MB hard drive) was rebranded as Macintosh XL. It was given a hardware and software kit, enabling it to reboot into Macintosh mode and positioning it as Apple's high-end Macintosh. The price was lowered yet again (to $4,000) and sales tripled, but CEO John Sculley said that Apple would have lost money increasing production to meet the new demand. Apple discontinued the Macintosh XL, leaving an eight-month void in Apple's high-end product line until the Macintosh Plus was introduced in 1986. The report that many Lisa machines were never sold and were disposed of by Apple is particularly interesting in light of Sculley's decision concerning the increased demand. Software Lisa OS The Lisa operating system features protected memory, enabled by a crude hardware circuit compared to the Sun-1 workstation (c. 1982), which features a full memory management unit. Motorola did not have an MMU (memory-management unit) for the 68000 ready in time, so third parties such as Apple had to come up with their own solutions. Despite the sluggishness of Apple's solution, which was also the result of a cost-cutting compromise, the Lisa system differed from the Macintosh system which would not gain protected memory until Mac OS X, released eighteen years later. (Motorola's initial MMU also was disliked for its high cost and slow performance.) 
Based, in part, on elements from the Apple III SOS operating system released three years earlier, the Lisa's disk operating system also organizes its files in hierarchical directories, as do UNIX workstations of the time which were the main competition to Lisa in terms of price and hardware. Filesystem directories correspond to GUI folders, as with previous Xerox PARC computers from which the Lisa borrowed heavily. Unlike the first Macintosh, whose operating system could not utilize a hard disk in its first versions, the Lisa system was designed around a hard disk being present. Conceptually, the Lisa resembles the Xerox Star in the sense that it was envisioned as an office computing system. It also resembles Microsoft Office from a software standpoint, in that its software is designed to be an integrated "office suite". The Lisa's office software suite shipped long before the existence of Microsoft Office, although some of the constituent components differ (e.g. Lisa shipped with no presentation package and Office shipped without a project management package). Consequently, Lisa has two main user modes: the Lisa Office System and the Workshop. The Lisa Office System is the GUI environment for end users. The Workshop is a program development environment and is almost entirely text-based, though it uses a GUI text editor. The Lisa Office System was eventually renamed "7/7", in reference to the seven supplied application programs: LisaWrite, LisaCalc, LisaDraw, LisaGraph, LisaProject, LisaList, and LisaTerminal. Apple's warranty said that this software works precisely as stated, and Apple refunded an unspecified number of users, in full, for their systems. These operating system frailties, and costly recalls, combined with the very high price point, led to the failure of the Lisa in the marketplace. NASA purchased Lisa machines, mainly to use the LisaProject program. In 2018, the Computer History Museum announced it would be releasing the source code for Lisa OS, following a check by Apple to ensure this would not impact other intellectual property. For copyright reasons, this release did not include the American Heritage dictionary. Task-oriented workflow With Lisa, Apple presented users with what is generally, but imprecisely, known as a document-oriented paradigm. This is contrasted with program-centric design. The user focuses more on the task to be accomplished than on the tool used to accomplish it. Apple presents tasks, with Lisa, in the form of stationery. Rather than opening LisaWrite, for instance, to begin to do word processing, users initially "tear off stationery", visually, that represents the task of word processing. Either that, or they open an existing LisaWrite document that resembles that stationery. By contrast, the Macintosh and most other GUI systems focus primarily on the program that is used to accomplish a task — directing users to that first. One benefit of task-based computing is that users have less of a need to memorize which program is associated with a particular task. That problem is compounded by the contemporary practice of naming programs with very nonintuitive names such as Chrome and Safari. A drawback of task-oriented design, when presented in document-oriented form, is that the naturalness of the process can be lacking. The most frequently cited example with Lisa is the use of LisaTerminal, in which a person tears off "terminal stationery" — a broken metaphor. 
However, task-based design does not necessarily require characterizing everything as a document, or as stationery specifically. More recently, menus and tabs have been used, rather sparingly, to present more task-based workflows. A "power user" could have somewhat laboriously customized the Apple menu in many versions of Mac OS (prior to Mac OS X) to contain folders that are task-oriented. Tab systems are typically add-ons for contemporary operating systems and can be organized in a task-based manner — such as having a "web browsing" tab that contains various web browser programs. Task-oriented presentation is very helpful for systems that have many programs and a variety of users, such as a language-learning computer lab that caters to those learning a variety of languages. It is also helpful for computer users who have not yet memorized what program name, however unintuitive, is associated with a task. Some Linux desktop systems combine some unintuitive program names (e.g. Amarok) with task-based organization (menus that organize programs by task) — in the desire to make utilizing Linux desktop systems less of a challenge for those switching from the dominant desktop platforms. The desire for emotional marketing reinforcement appears to be a strong factor in the choice, by most companies, to promote the program-centric paradigm. Otherwise, there would be little incentive to give programs obscure unintuitive names and/or to add company names to the program name (e.g. Microsoft Word, Microsoft Excel, etc.). Combining unintuitive names with company names is especially popular today (e.g. Google Chrome and Mozilla Firefox). This is the opposite goal of the Lisa paradigm where the brand name and the program name are intentionally made more invisible to the user. Internationalization Within a few months of the Lisa's introduction in the US, fully translated versions of the software and documentation were commercially available for the British, French, West German, Italian, and Spanish markets, followed by several Scandinavian versions shortly thereafter. The user interface for the OS, all seven applications, LisaGuide, and the Lisa diagnostics (in ROM) can be fully translated, without any programming required, using resource files and a translation kit. The keyboard can identify its native language layout, and the entire user experience will be in that language, including any hardware diagnostic messages. Although several non-English keyboard layouts are available, the Dvorak keyboard layout was never ported to the Lisa, though such porting had been available for the Apple III, IIe, and IIc, and was later done for the Macintosh. Keyboard-mapping on the Lisa is complex and requires building a new OS. All kernels contain images for all layouts, so due to serious memory constraints, keyboard layouts are stored as differences from a set of standard layouts; thus only a few bytes are needed to accommodate most additional layouts. An exception is the Dvorak layout that moves just about every key and thus requires hundreds of extra bytes of precious kernel storage regardless of whether it is needed. Each localized version (built on a globalized core) requires grammatical, linguistic, and cultural adaptations throughout the user interface, including formats for dates, numbers, times, currencies, sorting, even for word and phrase order in alerts and dialog boxes. A kit was provided, and the translation work was done by native-speaking Apple marketing staff in each country. 
This localization effort resulted in about as many Lisa unit sales outside the US as inside the US over the product's lifespan, while setting new standards for future localized software products and for global project coordination. MacWorks In April 1984, following the release of the Macintosh, Apple introduced MacWorks, a software emulation environment which allows the Lisa to run Macintosh System software and applications. MacWorks helped make the Lisa more attractive to potential customers, although it did not enable the Macintosh emulation to access the hard disk until September; initial versions of the Mac OS could not support a hard disk on the Macintosh machines. In January 1985, rebranded as MacWorks XL, it became the primary system application used to turn the Lisa into the Macintosh XL. Third-party software A significant impediment to third-party software on the Lisa was the fact that, when first launched, the Lisa Office System could not be used to write programs for itself. A separate development OS, called Lisa Workshop, was required. During this development process, engineers would alternate between the two OSes at startup, writing and compiling code on one OS and testing it on the other. Later, the same Lisa Workshop was used to develop software for the Macintosh. After a few years, a Macintosh-native development system was developed. For most of its lifetime, the Lisa never went beyond the original seven applications that Apple had deemed enough to "do everything", although UniPress Software did offer UNIX System III for $495. The company known as the Santa Cruz Operation (SCO) offered Microsoft XENIX (version 3), a UNIX-like command-line operating system, for the Lisa 2 — and the Multiplan spreadsheet (version 2.1) that ran on it. Reception BYTE wrote in February 1983 after previewing the Lisa that it was "the most important development in computers in the last five years, easily outpacing [the IBM PC]". It acknowledged that the $9,995 price was high, and concluded "Apple ... is not unaware that most people would be incredibly interested in a similar but less expensive machine. We'll see what happens". The Apple Lisa was a commercial failure for Apple, the largest since the failure of the Apple III in 1980. Apple sold approximately 10,000 Lisa machines at a price of $9,995 each, generating total sales of $100 million against a development cost of more than $150 million. The high price put the Lisa at the bottom of the price range for technical workstations, but without much of a technical application library. Much more expensive competing systems offered features such as hardware graphics coprocessors (which increased perceived system power by improving GUI snappiness) and higher-resolution portrait displays. The Lisa's implementation of the graphical interface paradigm was novel, but many buyers of the time equated UI snappiness with power, a simplistic measure that overlooked overall productivity. The mouse, for example, was dismissed by many critics of the time as a toy, and mouse-driven machines as unserious; the mouse would, of course, go on to displace the pure command-line interface for the vast majority of users. The largest Lisa customer was NASA, which used LisaProject for project management.
The Lisa was not slowed purely by its hardware: a 5 MHz CPU (the lowest clock offered by Motorola), sophisticated parity RAM, a slow hard disk interface (for the ProFile), and the lack of a graphics coprocessor (which would have increased cost). It also had its software written mainly in Pascal, was designed to multitask, and had advanced features such as the clipboard for pasting data between programs. This sophistication came at the price of snappiness, although it added to productivity. The OS even had "soft power", remembering what was open and where desktop items were positioned. Many such features are taken for granted today but were not available on typical consumer systems of the time. The massive brand power of IBM at that time was the largest factor in the PC's eventual dominance. Computing critics complained about the relatively primitive hardware ("off-the-shelf components") of the PC but admitted that it would be a success simply due to IBM's mindshare. By the time the Lisa reached the market, the less expensive and less powerful IBM PC had already become entrenched. The PC also benefited from the ease of porting software from the CP/M operating system to MS-DOS, given that many existing business applications had originally been written for CP/M. Apple had attempted to compete with the PC via the Apple II platform. DOS was very primitive when compared with the Lisa OS, but the CLI was familiar territory for most users of the time. It would be years before Microsoft would offer an integrated office suite. The 1984 release of the Macintosh further eroded the Lisa's marketability, as the public perceived that Apple was abandoning it in favor of the Macintosh. Any marketing of the Macintosh clashed with promotion of the Lisa, since Apple had not made the platforms compatible. The Macintosh was superficially faster than the Lisa (mainly in terms of UI responsiveness) but much more primitive in other key aspects: no protected memory (which led to the infamous system bomb and frozen machines for many years), a very small amount of non-upgradable RAM, no ability to use a hard disk (which drew heavy criticism over frequent disk-swapping), no sophisticated file system, a smaller and lower-resolution display, no numeric keypad, no built-in screensaver, no multitasking, no parity RAM, no expansion slots, no calculator with a paper tape and RPN, more primitive office software, and more. The Macintosh beat the Lisa in having sound support (the Lisa had only a beep), square pixels (which reduced perceived resolution but removed the problem of display artifacts), a nearly 8 MHz CPU, more resources placed into marketing (leading to a large increase in the system's price tag), and an operating system coded primarily in assembly. Some features, like protected memory, remained absent from the Macintosh platform until Mac OS X was released for the desktop in 2001. The Lisa was also designed to readily support multiple operating systems and to make booting between them intuitive and convenient, something that took a very long time to reappear as a standard desktop OS feature. The Lisa 2 and its Mac ROM-enabled sibling the Macintosh XL are the final two releases in the Lisa line, which was discontinued in April 1985. The Macintosh XL was effectively a Lisa converted, by a hardware and software kit, to boot into Macintosh mode.
In 1986, Apple offered all Lisa and XL owners the opportunity to return their computer, with an additional payment of US$1,498, in exchange for a Macintosh Plus and Hard Disk 20. Reportedly, 2,700 working but unsold Lisa computers were buried in a landfill. Legacy The Macintosh project, led by Apple cofounder Steve Jobs, borrowed heavily from the Lisa's GUI paradigm and took on many of its staff, going on to become Apple's flagship platform for the next several decades and the progenitor of the iPhone. The column-based interface, for instance, made particularly famous by Mac OS X, had originally been developed for the Lisa, where it was discarded in favor of the icon view. Apple's culture of object-oriented programming on Lisa contributed to the 1988 conception of Pink, the first attempt to rearchitect the operating system of the Macintosh. See also Macintosh 128K People: Bill Atkinson Rich Page Brad Silverberg Technology: History of the graphical user interface Cut, copy, and paste Xerox Star Visi On Apple ProFile GEMDOS (adaptation for Lisa 2/5) References External links A LISA Filmed Demonstration from 1984 Using Apple's Lisa for Real Work Lisa 2/5 info. mprove: Graphical User Interface of Apple Lisa Apple Lisa Memorial Exhibition at Dongdaemun Design Plaza, Seoul, Korea Computer-related introductions in 1983 Apple Inc. hardware Apple computers Products introduced in 1983 Pascal (programming language) software 68k architecture 68000-based home computers 32-bit computers
6899065
https://en.wikipedia.org/wiki/Wayne%20Stevens%20%28software%20engineer%29
Wayne Stevens (software engineer)
Wayne P. Stevens (1944 - 1993) was an American software engineer, consultant, author, pioneer, and advocate of the practical application of software methods and tools. Life & Work Stevens grew up in Missouri, spent two years in India, where he attended the Woodstock School, and earned his M.S. in Electrical Engineering from MIT in 1967. He eventually became the chief architect of application development methodology for IBM's consulting group. The annual Stevens Award Lecture on Software Development Methods is named after him. He belonged to the IEEE and the ACM as well as the following honorary societies: Tau Beta Pi, Sigma Xi, and Eta Kappa Nu. He wrote a seminal paper on Structured Design, with Larry Constantine and Glenford Myers, and was the author of a number of books and articles on application design methodologies. He also worked with John Paul Morrison to refine and promote the concepts of what is now called Flow-based programming, including descriptions of FBP in several of these references. Publications Stevens published several articles and books, including: 1982. How Data Flow can Improve Application Development Productivity, IBM System Journal, Vol. 21, No. 2. 1981. Using Structured Design: How to make Programs Simple, Changeable, Flexible and Reusable, John Wiley and Sons. 1985. Using Data Flow for Application Development. Byte 1990. Software Design - Concepts and Methods, Practical Software Engineering Series, Ed. Allen Macro, Prentice Hall. Articles, a selection 1988. "Integrating Applications with SAA (Systems Application Architecture)". With L.A. Buchwald & R. W. Davison. In: IBM Systems Journal, Vol 27 No 3, pp 315–324, 1988 1991. "Structured Design, Structured Analysis, and Structured Programming". In: American Programmer, Nov. 1991. 1994. "Data Flow Analysis and Design". In: Encyclopedia of Software Engineering. John J. Marciniak, Editor-in-Chief, Volume 1, pp 242 – 247, John Wiley & Sons, Inc, 1994. References 1944 births 1993 deaths IBM employees American computer scientists American software engineers
2758056
https://en.wikipedia.org/wiki/Crackme
Crackme
A crackme (often abbreviated as cm) is a small program designed to test a programmer's reverse engineering skills. They are programmed by other reversers as a legal way to practice cracking software, since no intellectual property is being infringed upon. Crackmes, reversemes and keygenmes generally have protection schemes and algorithms similar to those found in proprietary software. However, because commercial software tends to rely on widely known packers and protectors, many crackmes are actually more difficult: their custom protection algorithms are harder to find and trace than those in commercial software. Keygenme A keygenme is specifically designed for the reverser not only to find the protection algorithm used in the application, but also to write a small keygen for it in the programming language of their choice. Most keygenmes, when properly manipulated, can be self-keygenning: when validating input, they might generate the correct key internally and simply compare it with the key the user entered, which makes it easy to copy the key-generation algorithm (a minimal illustration follows below). Anti-debugging and anti-disassembly routines are often used to confuse debuggers or make the disassembly useless, and code obfuscation is also used to make reversing even harder. References External links tdhack.com - Includes cryptographic riddles, hackmes and software applications to crack for both Windows and Linux. Polish and English languages are supported. OllyDbg - A debugger used by both beginners and experienced reversers. Computer security Software cracking Reverse engineering
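To make the self-keygenning pattern described in the Keygenme section concrete, here is a minimal sketch in C. The key-derivation formula and all names are invented for illustration; real keygenmes typically wrap a check like this in anti-debugging tricks and obfuscation.

```c
#include <stdio.h>

/* Hypothetical key derivation: a simple rolling hash of the name.
   Real keygenmes use more elaborate (and usually obfuscated) math. */
static unsigned long derive_key(const char *name)
{
    unsigned long key = 0x1505;
    for (size_t i = 0; name[i] != '\0'; i++)
        key = key * 33 + (unsigned char)name[i];
    return key;
}

int main(void)
{
    char name[64];
    unsigned long entered;

    printf("Name: ");
    if (scanf("%63s", name) != 1)
        return 1;
    printf("Key: ");
    if (scanf("%lu", &entered) != 1)
        return 1;

    /* Self-keygenning weakness: the binary computes the expected key
       itself and compares it with the user's input, so a reverser who
       locates derive_key() can lift it directly into a standalone keygen. */
    if (derive_key(name) == entered)
        puts("Valid key.");
    else
        puts("Invalid key.");
    return 0;
}
```

A reverser solving such a keygenme would break on the comparison, trace back to the routine that produced the expected value, and reimplement it; the program is "self-keygenning" because that routine already exists, complete, inside the binary.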
4376698
https://en.wikipedia.org/wiki/Hassan%20Ugail
Hassan Ugail
Professor Hassan Ugail is a mathematician and a computer scientist. He is currently a Professor of Visual Computing at the School of Engineering and Informatics at the University of Bradford. Professor Ugail is the first Maldivian to obtain a PhD in mathematics. He is also the first and, to date, the only Maldivian to receive a professorship in the field of science. Professor Ugail is well known for his work on computer-based human face analysis, including face recognition, face ageing, emotion analysis and lie detection. For example, in 2018 he used his face recognition tools to help unmask the two suspected Russian spies at the heart of the Salisbury Novichok poisoning case. In 2020, he collaborated with BBC News investigators to uncover an alleged Nazi war criminal who had settled in the UK and may have worked for British intelligence during the Cold War. Prof. Ugail's principal research interests are in the area of visual computing, particularly 3D geometric design, 3D imaging, computer-based simulations and machine learning. Prof. Ugail is a leader in the field of visual computing and has contributed greatly to its development by delivering a number of high-profile research and innovation projects, publications and international lectures. He is a member of the UK Engineering and Physical Sciences Research Council (EPSRC) peer review college and also a peer reviewer for several related journals and conferences in his field of research. Early life Hassan Ugail was born in Hithadhoo, Maldives. He completed his primary education at Nooranee School in Hithadhoo. In the 1960s, Hassan Ugail's father, Ahmed Ugail, worked as a clerk at the British Royal Air Force base on Gan Island in the Maldives. Hassan Ugail has said that although his beginnings were humble, he was academically privileged from a young age because his father could speak English and had access to a wide variety of books obtained from the British based on Gan Island. In 1987, he moved to Malé to continue his education at the English Preparatory And Secondary School and at the Centre for Higher Secondary Education. Hassan Ugail was a top student and was considered to be bright. In 1992, he received the opportunity to continue his studies in the UK as a result of a British Council scholarship. Academic life Ugail received a B.Sc. degree with First Class Honours in Mathematics in 1995 and a PGCE in 1996, both from King's College London. He was awarded his PhD by the Department of Applied Mathematics at the University of Leeds in 2000 for his research in geometric design. He then worked as a post-doctoral research fellow at the Department of Applied Mathematics at the University of Leeds until September 2002. Prof. Ugail joined the School of Informatics, University of Bradford, as a lecturer in September 2002. He was appointed as a Senior Lecturer in April 2005. Ugail became a professor in 2009 at the age of 38 and is among the youngest professors at the University of Bradford. He currently serves as the director of the Centre for Visual Computing at the University of Bradford. Research Professor Ugail's principal research interests are in the areas of geometric design, computer-based physical analysis, and machine learning, all of which fall into a broad area of research known as simulation-based design and machine learning.
The focus of his research has been particularly on a novel method for geometric design known as the PDE (partial differential equation) method, developed at the University of Leeds. The PDE method is based on a suitably chosen PDE that makes it possible to model complex shapes in an easy and predictable fashion. Prof. Ugail also developed the method of biharmonic Bézier surfaces for boundary-based smooth surface design with Professor Monterde from the University of Valencia, Spain. His work on computer-based human face analysis using artificial intelligence and machine learning has introduced numerous novel methods which are being used in practice in biometrics as well as in healthcare applications. His research has many practical applications, which include building new application environments for complex interactive computer-aided design and computer animation, design analysis and optimisation for engineering, and biomedical applications such as accurate computer modelling of the shapes of biological membranes, the human heart and artificial limbs. In addition, his research using artificial intelligence and machine learning is applicable to biometric identification such as face recognition, non-invasive human emotion analysis and lie detection, as well as medical image understanding for diagnostic purposes. Achievements His methods for the representation of a three-dimensional object, for the storage and transmission of data representing a three-dimensional object, and for the time-dependent animation of a three-dimensional object were all protected under British and US patent law between 2008 and 2015. His research in visual computing techniques led to the establishment of a university spin-out company, Tangentix Ltd, which worked on defining and manipulating complex digital data applied to the development of computer games. Tangentix subsequently launched GameSessions, which enables users to try or buy PC games online with ease. Professor Ugail was the founder and CSO of Tangentix Ltd. It was a UK-based startup exploring the use of 3D graphics compression. The company raised both Series A and Series B funding, in 2013 and 2015 respectively, and was acquired by Toadman Interactive in 2019. In 2010 Professor Ugail won the University of Bradford's most prestigious award, the 'Vice-Chancellor's Excellence in Knowledge Transfer Award'. In September 2011, Prof Ugail unveiled a new lie detector system that uses two cameras and a computer to observe slight changes in facial expressions and facial temperature profile. The system is a complete step change from the traditional polygraph lie detector, which requires the subject to be wired up to a range of physiological sensors; it is entirely non-invasive and can be used covertly, with the person being monitored potentially knowing nothing about it. In 2011 Professor Ugail received the Maldives National Award for Innovation. He is the first and the only Maldivian to have received this award to date. Prof. Ugail's research work has been funded by a variety of sources, and his research findings have been widely published in related international journals and conference proceedings. In late 2005, the political Maldivian webzine proposed a so-called "Dream Team" to constitute a future government that would bring "democracy and prosperity" to the Maldives. Despite Prof. Ugail's training as a mathematician, the compiler of this list placed him as Ambassador to the UK.
Nevertheless, Prof Ugail remains politically neutral and has said openly on several occasions that he has no interest in becoming politically involved or in running for office in the Maldives. Aside from his academic work as a university professor, Professor Ugail continues to inspire people, especially Maldivians, in the field of science by giving motivational talks, running local television programmes on science and presenting science-related information in enthusiastic and engaging ways. For example, he writes a science column, called Professor Ugail's Opinion, in the local Maldivian language for Mihaaru, the most prominent and widely distributed newspaper in the Maldives. Additionally, Professor Ugail undertakes substantial philanthropic work empowering people by delivering STEM knowledge through the Ugail Foundation, primarily through theCircle by Ugail Foundation, which has imparted coding, critical thinking and leadership skills to many thousands of children and young people. Selected works Books Patents References External links Centre for Visual Computing at University of Bradford Hassan Ugail at Google Scholar Hassan Ugail at LinkedIn Hassan Ugail at Twitter Tangentix Ltd GameSessions 1970 births Maldivian mathematicians Maldivian computer scientists Alumni of King's College London Alumni of the University of Leeds Academics of the University of Bradford Living people
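As mathematical context for the PDE-based design method mentioned in the research section of the article above: the Bloor–Wilson PDE method developed at Leeds is commonly presented as generating a parametric surface X(u, v) by solving a fourth-order elliptic partial differential equation over a parameter domain, subject to boundary curves and derivative conditions. A frequently quoted form, shown here as a general illustration rather than as Ugail's specific formulation, is:

```latex
\left( \frac{\partial^{2}}{\partial u^{2}} + a^{2}\, \frac{\partial^{2}}{\partial v^{2}} \right)^{2} \mathbf{X}(u,v) = \mathbf{0}
```

where a is a smoothing parameter controlling the relative influence of the two parametric directions. The boundary conditions then determine the interior of the surface, which is what makes the method compact and predictable for modelling complex shapes.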
700814
https://en.wikipedia.org/wiki/Alcohol%20120%25
Alcohol 120%
Alcohol 120% is an optical disc authoring program and disk image emulator created by Alcohol Soft. Alcohol 120% can also mount disc images, with support for its proprietary Media Descriptor Image (.mds/.mdf) disc image format. It is capable of converting image files to the ISO format. Alcohol Soft has stated that it will not be developing an image editor for Alcohol 120%. The latest versions of Alcohol 120% contain the A.C.I.D. wizard. A.C.I.D., or "Alcohol Cloaking Initiative for DRM" (based originally on the Y.A.S.U. application), operates as a "SCSI-drive protector" which hides emulated drives from SecuROM 7 and SafeDisc 4. Supported file types Alcohol 120%'s image mounting feature supports a wide range of disc image formats. Alcohol 120%'s image making tool supports the following formats: .mds/.mdf (default) and .iso. The following formats are also supported, except when making disc images of DVDs: .ccd/.img/.sub and .cue/.bin. Copy protection Alcohol 120%'s image recording feature is capable of bypassing certain copy protection schemes, such as SafeDisc and SecuROM, using techniques such as Data Position Measurement (DPM). However, certain copy protection schemes require burner hardware that is capable of reproducing the copy protection. It can also create images of PlayStation and PlayStation 2 file systems. It lacks the ability to back up DVD titles encrypted with the Content Scramble System; due to legal restrictions, Alcohol Soft has opted not to include this feature. Some software manufacturers employ software blacklist methods to prevent Alcohol 120% from copying their software. There are third-party tools available that attempt to counteract the blacklist methods, such as Anti-blaxx and CureROM. These two programs have been replaced by A.C.I.D. Alcohol 52% Alcohol 52% is a version of Alcohol 120% without the burning engine. It can still create image files and mount those images on up to 31 virtual drives. There are two versions of Alcohol 52%, free and 30-day trial. The free version comes bundled with an optional adware toolbar and is limited to 6 virtual drives. Alcohol 68% Alcohol 68% was a version of Alcohol 120% without the media mounting and emulation engine; it provided only the CD/DVD burning functions. It was later discontinued and its functionality integrated into Alcohol 120%. Alcohol 120% Free Edition Alcohol 120% Free Edition is a version of Alcohol 120% that is free for non-commercial use, with certain limitations: it can burn to only one drive at a time, supports up to two virtual drives, and offers no copy protection emulation options. Awards European ShareWare Conference 2006 Epsilon Award Y.A.S.U Y.A.S.U (Yet Another SecuROM Utility) is a utility originally developed for DAEMON Tools that hides virtual drives from SecuROM 7 and SafeDisc 4. Third-party developers later created a version for Alcohol 120%. A.C.I.D A.C.I.D (Alcohol Cloaking Initiative for DRM) is a built-in Alcohol 120% utility based on, and similar to, the Y.A.S.U program. See also Comparison of ISO image software SCSI Pass-Through Direct (SPTD) References External links Alcohol 52% Free Edition Windows-only software Optical disc authoring software Disk image emulators Windows CD/DVD writing software
38914995
https://en.wikipedia.org/wiki/Tuxera
Tuxera
Tuxera Inc. is a Finnish software company that develops and sells file system software. Its most popular products are Tuxera NTFS and Tuxera exFAT, both available on a number of platforms including Linux, Android, QNX and macOS. Tuxera's customers include a number of consumer electronics manufacturers in the mobile phone, tablet, TV, set-top box, automotive infotainment and storage markets. Tuxera NTFS for Mac provides read/write connectivity to Windows-formatted hard drives for macOS. The company was founded in 2008 and is currently headquartered in Espoo, Finland. Tuxera's other offices are located in the US, South Korea, Japan, Germany, Taiwan and China. History The origin of the company dates back to open-source NTFS development in the late 1990s. NTFS had been introduced in 1993 by Microsoft as the file system for Windows NT. At that time Anton Altaparmakov emerged as the lead developer and maintainer of the Linux NTFS kernel driver, while Szabolcs Szakacsits continued to lead a platform-independent project under the name NTFS-3G. In 2006, NTFS-3G became the first such driver to gain full read and write support. Commercial activity started in 2007 and the company was founded the following year. In 2009 the company signed agreements with Microsoft, which was followed by global expansion and collaboration with chipset vendors and software platform companies. In February 2011 Tuxera joined the Linux Foundation, an expected step as Tuxera had contributed to the Linux kernel for many years. In July 2019 Tuxera acquired Datalight to strengthen its internal storage offering and its expertise in flash memory and flash management software. Embedded products Microsoft NTFS by Tuxera (formerly Tuxera NTFS) Tuxera NTFS is a performance-optimized, fail-safe, fully compatible NTFS file system driver. It ships, for example, in smart TVs, set-top boxes, smartphones, tablets, routers, NAS and other devices. It is available for Android and other Linux platforms, QNX, WinCE Series 40, Nucleus RTOS and VxWorks. Supported architectures are ARM, MIPS, PowerPC, SuperH and x86. Microsoft exFAT by Tuxera (formerly Tuxera exFAT) Tuxera exFAT technology is used for SDXC memory card support. Tuxera was the first independent vendor to receive legal access to exFAT and TexFAT specifications, source code and verification tools from Microsoft. Tuxera exFAT can be found in automotive infotainment systems and in Android phones and tablets from ASUS, Fujitsu, Panasonic, Pantech and others. Microsoft FAT by Tuxera (formerly Tuxera FAT) Tuxera FAT software provides interoperability and support for storage types such as SD memory cards, CF cards, Memory Stick, SSDs, HDDs via USB, SATA, eSATA, MMC and others. It is used by chipset and hardware manufacturers, and by software and system integrators, for full compliance with Microsoft patent licenses and the GPL. NTFS-3G NTFS-3G is the original free-software "community edition" driver used widely in Linux distributions, including Fedora, Ubuntu, and others. On April 12, 2011 it was announced that the Ntfsprogs project had been merged with NTFS-3G. VelocityFS by Tuxera (formerly Tuxera Flash File System) Tuxera also develops and commercializes its own proprietary flash file system. Due to its fail-safe technology it can be found, for instance, in vehicles, integrated with the event data recorder to ensure that the data recorded from sensors remains consistent even in the case of a crash.
Tuxera FAT+ In 2017, Tuxera introduced FAT+, a file system implementation for Universal Flash Storage cards and removable storage that is compatible with FAT32 but without the 4 GiB file size limitation. It is royalty-free for UFS card host devices and is a standard recommended by the Universal Flash Storage Association. Consumer products AllConnect AllConnect is a mobile app that allows users to stream music, photos and videos from Android devices to DLNA receivers (smart TVs, set-top boxes, wireless speakers, etc.). It was launched on November 12, 2013 under the name Streambels. Microsoft NTFS for Mac by Tuxera (formerly Tuxera NTFS for Mac) Microsoft NTFS for Mac by Tuxera allows macOS computers to read and write NTFS partitions. By default, macOS provides only read access to NTFS partitions. The latest stable version of the driver is 2020.1, which added support for Apple silicon Macs. With the introduction of System Integrity Protection (SIP) by Apple in OS X El Capitan, the use of third-party software in Disk Utility is no longer possible. As a workaround, Microsoft NTFS for Mac by Tuxera ships together with Tuxera Disk Manager to facilitate the formatting and maintenance of NTFS volumes in macOS. Currently the software supports 13 languages: Arabic, Simplified and Traditional Chinese, English, French, German, Italian, Japanese, Korean, Portuguese, Russian, Spanish and Turkish. The software supports 64-bit kernels, including OS X El Capitan. It supports NTFS extended attributes and works with virtualization and encryption solutions including Parallels Desktop and VMware Fusion. SD Memory Card Formatter Tuxera, in association with the SD Association, developed the official formatting application for Secure Digital memory cards, which is available as a free download for Windows and macOS. See also NTFS exFAT NTFS-3G FAT References Software companies of Finland
60546963
https://en.wikipedia.org/wiki/Amanda%20Randles
Amanda Randles
Amanda Randles is an American computer scientist who is the Alfred Winborne and Victoria Stover Mordecai Assistant Professor of Biomedical Sciences at Duke University. Randles has been an assistant professor of biomedical engineering and computer science at the university and works at the Duke Cancer Institute. Her research interests include biomedical simulation and high-performance computing. Early career and education While in high school, Randles attended the Utica Center for Math, Science, and Technology, where she learned computer programming and its applications in the sciences. She also participated in Science Olympiad and FIRST Robotics. Randles attended Duke University, where she completed a B.A. in physics and computer science in 2005. After working for three years as a software developer on the IBM Blue Gene project, she went to Harvard University to earn an S.M. in computer science (2010) and a PhD in applied physics (2013), advised by Efthimios Kaxiras and Hanspeter Pfister. In 2011, she was awarded a Computational Science Graduate Fellowship by the Krell Institute. She subsequently completed a practicum at Lawrence Livermore National Laboratory and was a visiting scientist at Franziska Michor's laboratory within the Dana–Farber Cancer Institute. Academic career Randles joined the Duke University Biomedical Engineering Department in 2015, where she is currently serving as the Alfred Winborne and Victoria Stover Mordecai Assistant Professor of Biomedical Sciences. She has also held appointments as an assistant professor of biomedical engineering and of computer science, and works at the Duke Cancer Institute. Randles was also an assistant professor of mathematics from 2016 to 2019. Research Randles' research interests are biomedical simulation and high-performance computing; specifically, her focus is developing computational tools that can examine the behavior of different diseases, from atherosclerosis to cancer. Randles and her research group have developed fluid dynamics simulation software, dubbed HARVEY after the physician William Harvey, capable of modeling blood flow throughout the human body based on full-body CT and MRI scans. Possible applications include examining how different medical interventions in cardiovascular disease impact the circulatory system and modeling the flow of individual cancer cells through the system. In 2018, Randles was one of ten researchers selected to test simulation-based projects on the Aurora exascale supercomputer, then expected to debut in 2021, as part of the Aurora Early Science Program at Argonne National Laboratory. She was awarded an NSF CAREER Award in May 2020 to support her work on HARVEY. Awards and honors In 2014, Randles was awarded the NIH Director's Early Independence Award. She was named to the 2015 World Economic Forum Young Scientist List for her work on the "design of large-scale parallel applications targeting problems in physics". In 2017, she was awarded the Grace Murray Hopper Award and was later named to the MIT Technology Review Innovators Under 35, both given for her work on HARVEY. Selected publications References External links Amanda Randles at Duke University Randles Lab at the Pratt School of Engineering Living people American computer scientists American women engineers Duke University alumni Harvard University alumni Duke University faculty 21st-century American engineers 21st-century women engineers Grace Murray Hopper Award laureates Year of birth missing (living people) American women academics 21st-century American women
6324689
https://en.wikipedia.org/wiki/IQVIA
IQVIA
IQVIA, formerly Quintiles and IMS Health, Inc., is an American multinational company serving the combined industries of health information technology and clinical research. IQVIA is a provider of biopharmaceutical development and commercial outsourcing services, focused primarily on Phase I-IV clinical trials and associated laboratory and analytical services, including consulting services. It has a network of more than 88,000 employees in more than 100 countries and a market capitalization of $49 billion as of August 2021. As of 2017, IQVIA was reported to be one of the world's largest contract research organizations. History IQVIA is the result of the 2016 merger of Quintiles, a leading global contract research organization, and IMS Health, a leading healthcare data and analytics provider. The name of the modern company honors the legacy organizations: IQVIA combines I (IMS Health), Q (Quintiles), and VIA (by way of). IMS Health IMS Health was best known for its collection of healthcare information spanning sales, de-identified prescription data, medical claims, electronic medical records and social media. IMS Health's products and services were used by companies to develop commercialization plans and portfolio strategies, to select patient and physician populations for specific therapies, and to measure the effectiveness of pharmaceutical marketing and sales resources. The firm used its data to produce syndicated reports such as market forecasts and market intelligence. The original name of the company was Intercontinental Marketing Statistics, hence the IMS name. IMS Health's corporate headquarters were located in Danbury, Connecticut, United States. Ari Bousbib was the chairman and CEO of IMS Health before the merger. In 1998, the parent company, Cognizant Corporation, split into two companies: IMS Health and Nielsen Media Research. After this restructuring, Cognizant Technology Solutions became a public subsidiary of IMS Health. In 2002, IMS Health acquired Cambridge Pharma Consultancy, a privately held international firm that provided strategic advice to pharmaceutical management. In 2003, it acquired Marketing Initiatives, a specialist in healthcare facility profile data, and Data Niche Associates, a provider of rebate validation services for Medicaid and managed care. Also in 2003, IMS Health sold its entire 56% stake in Cognizant, and the two were separated into independent companies. In 2004, United Research China Shanghai was acquired, providing coverage of China's consumer health market. In 2005, the company acquired PharMetrics, a U.S. provider of patient-centric integrated claims data. In 2006, it acquired the Life Sciences practice of Strategic Decisions Group, a portfolio strategy consultant to the life sciences industry. In 2007, IMS Health acquired IHS and MedInitiatives, providers of healthcare data management analytics and technology services. That same year, ValueMedics Research was acquired, extending IMS Health's health economics and outcomes research capabilities. In 2007, the company was ranked in the Businessweek 50, a list representing "best in class" companies from the ten economic sectors that make up the S&P 500. In 2008, it was named to the World's Most Admired Companies list by Fortune, and it received the recognition again in 2010. Also in 2008, it acquired RMBC, a provider of national pharmaceutical market intelligence and analytics in Russia.
In 2008, it acquired the Skura professional services group, based in Mississauga, Ontario, Canada, which specialized in data integration, consulting, and business intelligence platform services for pharmaceutical and healthcare clients in North America and Europe. In 2009, the company was named to the Dow Jones Sustainability North America Index in recognition of its economic, environmental and social performance among the largest 600 North American companies. In February 2010, IMS Health was taken private by TPG Capital, CPP Investment Board, and Leonard Green & Partners. In 2010, it acquired Brogan, Inc., a privately held market research and consulting firm serving the Canadian healthcare market. In 2011, it expanded its specialty and patient-level data assets in the United States with the acquisition of SDI Health. Also that year, the company acquired Ardentia Ltd in the UK and Med-Vantage in the United States to build on its payer services in those markets. In 2012, it acquired PharmARC Analytic Solutions Pvt. Ltd, a Bangalore-based analytics company, as well as DecisionView, a software solutions company that helps life sciences organizations plan and track patient enrollment for clinical trials, and TTC, a benchmarking solutions and analytics company that helps clients plan for and negotiate the costs of clinical trials. Also in 2012, the company purchased PharmaDeals Ltd. In 2013, it acquired several companies to expand its portfolio of SaaS products: Incential Software, a provider of sales performance management technology services; 360 Vantage, which delivers multi-channel CRM software capabilities; Appature, which offers a relationship marketing platform; and Semantelli, a provider of social media analytics for the global healthcare industry. In May 2015, IMS increased its software development capability by acquiring Dataline Software Ltd, a bespoke software development company and big data research specialist in the UK. In April 2015, IMS Health completed the purchase of Cegedim's Customer Relationship Management (CRM) software and Strategic Data business for €396 million; Cegedim had acquired the software and related business when it purchased Dendrite International in 2007. In August 2015, IMS Health completed the purchase of Boston Biomedical Consultants, a provider of market data and market research covering the in vitro diagnostics market. Quintiles Quintiles was the world’s largest provider of biopharmaceutical development and commercial outsourcing services. The company offered clinical data management, clinical trial execution services, pharmaceuticals, drug development, financial partnering, and commercialization expertise to companies in the biotechnology, pharmaceutical and healthcare sectors. In 1982, Dennis Gillings founded and incorporated Quintiles Transnational in North Carolina. Quintiles Transnational established Quintiles Pacific Inc. and Quintiles Ireland Ltd. in 1990. In 1991 Quintiles GmbH was established in Germany and Quintiles Laboratories Ltd. was established in Atlanta, Georgia. In September 1996, Quintiles purchased Innovex Ltd. of Britain for $747.5 million in stock. Quintiles went public in 1997 and completed a successful secondary stock offering. In 1974, Dennis Gillings signs the first contract to provide statistical and data management consulting for pharmaceutical clients. In 1982, Quintiles, Inc., is incorporated in North Carolina. In 1990, Quintiles Pacific Inc. and Quintiles Ireland Ltd. are established.
In 1991, Quintiles GmbH is established in Germany; Quintiles Laboratories Ltd. is established in Atlanta, Georgia. In 1996, Quintiles buys Innovex Ltd. and BRI International Inc., becoming the world’s largest CRO. In 1997, Quintiles goes public, completing a successful secondary stock offering. In 1998, Quintiles becomes the first company in the industry to break the $1 billion mark, when it reports net revenues of $1.19 billion. In 1999, the company joins the S&P 500 Index. In 2003, the Board of Directors agrees to merge with Pharma Services Holdings Inc; Quintiles becomes a private company. In 2009, Quintiles opens new corporate headquarters in Durham, North Carolina. In 2010, Quintiles opens new European headquarters in the UK and establishes operations in East Africa. In 2011, Quintiles buys Advion Biosciences, a bioanalytical lab based in Ithaca, New York. In 2013, Quintiles files for an IPO on 15 February in order to go public again, and begins trading on the New York Stock Exchange (NYSE) under the ticker symbol Q. IMS Health and Quintiles become IQVIA In May 2016, Quintiles agreed to merge with IMS Health in a deal worth $9 billion. IMS Health shareholders received 0.384 shares of Quintiles common stock for each share of IMS Health common stock they held, leaving the split of ownership at 51.4% IMS and 48.6% Quintiles. The merger was completed in October, and the resulting company was a $17.6 billion business called QuintilesIMS. In November 2017, the company adopted the new name of IQVIA and changed its ticker symbol on the NYSE from Q to IQV. Controversies Throughout its history, the legacy IMS Health business of collecting anonymized pharmaceutical sales data came under scrutiny from both the media and the legal system. IMS Health v. Ayotte was a free speech case involving IMS Health. Sorrell v. IMS Health Inc. was a case about physician-data privacy which went to the U.S. Supreme Court; the Court ruled in favor of the company. IQVIA was contracted by the UK government's Office for National Statistics to provide data on the prevalence of Covid-19 infection in the population. Some participants in the survey reported problems contacting IQVIA and arranging for testing, and New Scientist criticised the way the survey results were collected as potentially leading to biased data. References External links Companies listed on the New York Stock Exchange Companies based in Danbury, Connecticut Companies based in Durham, North Carolina Contract research organizations Consulting firms established in 1982 Life sciences industry 1982 establishments in North Carolina 2013 initial public offerings 1997 initial public offerings International management consulting firms
44336789
https://en.wikipedia.org/wiki/Software%20Ganda
Software Ganda
Software Ganda is a 2014 Indian Kannada-language romantic comedy-drama film directed and co-produced by Venkatesh. The film features Jaggesh and Nikita Thukral in the lead roles, with Sakshi Agarwal and Srinath in other pivotal roles. The film's score and soundtrack are composed by Veer Samarth, while the cinematography is by Nagesh Acharya. The film was released across Karnataka on 5 December 2014. The satellite rights for the film were sold for 16 crore to a leading television channel. The film is a remake of the 2012 Malayalam film My Boss, which itself was based on the 2009 film The Proposal. Premise Manu, a software engineer, faces difficulty working under Priya, his short-tempered NRI boss. Priya is forced to leave India due to visa issues and decides to marry Manu for her own selfish needs. Cast Jaggesh as Manu Nikita Thukral as Priya S Rao Srinath Sakshi Agarwal as Nancy Music The film's score and soundtrack were composed by Veer Samarth, and the audio was released by the Anand Audio label. The lyrics are written by Dr. Nagendra Prasad and Hrudaya Shiva. An Atlanta-based singer, Rekha Pallath, made her debut in playback singing with this film. References External links Puneeth & Sudeep at `Software Ganda` launch Jaggesh Washes His Hands of Software Ganda Jaggesh and Nikita Thukral attend audio release of Software Ganda in Bangalore Atlanta singer Rekha Pallath makes playback debut with ‘Software Ganda’ 2014 films 2010s Kannada-language films Indian comedy-drama films Indian films Kannada remakes of Malayalam films Sham marriage
1124059
https://en.wikipedia.org/wiki/Ferranti
Ferranti
Ferranti or Ferranti International plc was a UK electrical engineering and equipment firm that operated for over a century from 1885 until it went bankrupt in 1993. The company was once a constituent of the FTSE 100 Index. The firm was known for work in the area of power grid systems and defence electronics. In addition, in 1951 Ferranti began selling an early computer, the Ferranti Mark 1. The Belgian subsidiary lives on as Ferranti Computer Systems and as of 1994 is part of the Nijkerk Holding. History Beginnings Sebastian Ziani de Ferranti established his first business Ferranti, Thompson and Ince in 1882. The company developed the Ferranti-Thompson Alternator. Ferranti focused on alternating current power distribution early on, and was one of the few UK experts. To avoid confusion, he is often referred to as Dr Ferranti to distinguish him from the Ferranti company itself. In 1885 Dr Ferranti established a new business, with Francis Ince and Charles Sparks as partners, known as S. Z. de Ferranti. According to J F Wilson, Dr Ferranti's association with the electricity meter persuaded Ince to partner him in this new venture, and meter development was fundamental to the survival and growth of his business for several decades to come. Despite being a prime exponent of Alternating Current, Ferranti became an important supplier to many electric utility firms and power-distribution companies for both AC and DC meters. In 1887, the London Electric Supply Corporation (LESCo) hired Dr Ferranti for the design of their power station at Deptford. He designed the building, the generating plant and the distribution system and on its completion in October 1890, it was the first truly modern power station. It supplied high-voltage AC power at 10,000 volts, which was transformed to a lower voltage for consumer use where required. Success followed and Ferranti started producing electrical equipment (especially transformers) for sale. Soon the company was looking for considerably more manufacturing space. Land prices in the London area were too high, so the company moved to Hollinwood in Oldham in 1896. In July 1901, Ferranti Limited was formed, specifically to take over the assets of S. Z. de Ferranti Ltd and raise equity, but failed to impress potential new investors as it was still dominated by family ownership. Over-optimistic market projections in the boom of 1896–1903, declining revenues and liquidity problems, forced the company bankers Parrs to send the company into receivership in 1903. The business was restructured in 1905, Dr Ferranti's shareholding being reduced to less than 10%. For the next eleven years the company was run by receiver managers and Dr Ferranti was effectively excluded from commercial financial strategies. He spent much of this period working in partnership with the likes of J P Coats of Paisley on cotton spinning machinery and Vickers on re-superheating turbines. Expansion Through the early part of the century power was supplied by small companies, typically as an offshoot of plant set up to provide power to local industry. Each plant supplied a different standard, which made the mass production of domestic electrical equipment inefficient. In 1910, Dr Ferranti made a presidential speech to the IEE addressing this issue, but it would be another sixteen years before the commencement of the National Grid in 1926. In 1912, in a move driven by A B Anderson, the Ferranti Managing Director, Ferranti formed a company in Canada, Ferranti Electric, to exploit the overseas meter market. 
But in 1914, two significant events happened: Anderson drowned in the Empress of Ireland sinking on his return from Canada, and the outbreak of WWI signalled an opportunity for Dr Ferranti to once again become involved in the day-to-day affairs of the company. He wanted to get involved in the manufacture of shells and fuzes, but it was not until 1915 that he finally convinced the board to accept this. As a result of this work, Ferranti was in a healthier financial position at the end of the war. High-voltage power transformers became an important product for Ferranti; some of the largest types weighed over a hundred tons. Dr Ferranti's son Vincent joined the transformer department as manager in 1921 and was instrumental in expanding the work started by his father. After the death of Dr Ferranti in 1930, he became the chairman and chief executive. In 1935, Ferranti purchased a disused wire drawing mill at Moston: from here it manufactured many "brown goods" such as televisions, radios, and electric clocks. The company later sold its radio and television interests to EKCO in 1957. Production of clocks ended in 1957 and other product lines were phased out in 1960. Ferranti Instruments, based at Moston, developed various items for scientific measurement, including one of the first cone and plate viscometers. Ferranti built a new power transformer works at Hollinwood in the mid-1950s, at a time of growth in the power supply distribution industry. By 1974, Ferranti had become an important supplier to the defence industry, but its power transformer division was making losses, creating acute financial problems. This led to the company being bailed out by the government's National Enterprise Board, which took a 65% share of the company in return. Defence electronics During World War II, Ferranti became a major supplier of electronics, fuzes and valves, and, through development of the Identification Friend or Foe (IFF) system, was heavily involved in the early development of radar in the United Kingdom. In the post-war era, this became a large segment of the company, with various branches supplying radar sets, avionics and other military electronics, both in the UK and in the various international offices. In 1943, Ferranti opened a factory at Crewe Toll in Edinburgh to manufacture gyro gunsights for the Spitfire aircraft. After the war they set up Ferranti Research to complement this business, which grew to employ 8,000 staff in eight locations, becoming the birthplace of the Scottish electronics industry and a major contributor to company profitability. Later products included solid-state ring laser gyros. From 1949, Ferranti-Packard assisted the Royal Canadian Navy in developing DATAR (Digital Automated Tracking and Resolving). DATAR was a pioneering computerized battlefield information system that combined radar and sonar information to provide commanders with an "overall view" of a battlefield, allowing them to coordinate attacks on submarines and aircraft. In the 1950s, work focused on the development of airborne radar, with the company subsequently supplying radars to most of the UK's fast jet and helicopter fleets. Today the Crewe Toll site (now part of Leonardo S.p.A.) leads the consortium providing the Euroradar CAPTOR radar for the Eurofighter Typhoon. In the 1960s and 1970s, inertial navigation systems became an important product line for the company, with systems designed for fast jet (Harrier, Phantom, Tornado), space and land applications.
The electro-mechanical inertial navigation systems were constructed at the Silverknowes site in Edinburgh. In addition to their other military and civil applications, they were used in the ESA Ariane 4 and first Ariane 5 launches. Ferranti also produced PADS (the Position and Azimuth Determining System), an inertial navigation system which could be mounted in a vehicle and was used by the British Army. With the invention of the laser in the 1960s, the company quickly established itself in the electro-optics arena. From the early 1970s, it was delivering the Laser Rangefinder and Marked Target Seeker (LRMTS) for the Jaguar and Harrier fleets, and later for the Tornado. It supplied the world's first man-portable laser rangefinder/designator (Laser Target Marker, or LTM) to the British Army in 1974, and had notable successes in the US market, establishing Ferranti Electro-optics Inc in Huntington Beach, California. Its TIALD pod (Thermal Imaging Airborne Laser Designator) has been in almost constant combat operation on the Tornado since it was rushed into service during the first Gulf War. From the 1960s through to the late 1980s, the Bristol Ferranti Bloodhound SAM, for which Ferranti developed radar systems, was a key money earner. In 1970, Ferranti became involved in the sonar field through its work with Plessey on a new series of sonars, for which it designed and built the computer subsystems. This work later expanded when it won a contract for the complete Sonar 2050. The work was originally carried out at the Wythenshawe factory and then at Cheadle Heath. Takeovers of other companies gave it expertise in sonar arrays. This business later became Ferranti Thomson Sonar Systems. The selection of the radar for the project that became the Eurofighter Typhoon grew into a major international issue in the early 1990s. Britain, Italy, and Spain supported the Ferranti-led ECR-90, while Germany preferred the MSD2000 (a collaboration between Hughes, AEG and GEC). An agreement was reached after UK Defence Secretary Tom King assured his German counterpart Gerhard Stoltenberg that the British government would underwrite the project and allow GEC to acquire Ferranti Defence Systems from its troubled parent. Hughes sued GEC for $600 million over its role in the selection of the EFA's radar and alleged that it had used Hughes technology in the ECR-90 when it took over Ferranti. It later dropped this allegation and was awarded $23 million; the court judged that the MSD-2000 "had a real or substantial chance of succeeding had GEC not tortiously intervened ... and had the companies, which were bound by the Collaboration Agreement, faithfully and diligently performed their continuing obligations thereunder to press and promote the case for MSD-2000." Industrial electronics The company began marketing optical position measuring equipment for machine tools in 1956. Moiré fringes produced by diffraction gratings were the basis for the position measurement. In the late 1980s there were several sections of the company involved in non-military areas. These included microwave communications equipment (Ferranti Communications) and petrol (gas) station pumps (Ferranti Autocourt). Both of these departments were based at Dalkeith, Scotland. Computers In the late 1940s Ferranti joined with various university-based research groups to develop computers. Their first effort was the Ferranti Mark 1, completed in 1951, with about nine delivered between 1951 and 1957.
The Pegasus introduced in 1956 was their most popular valve (vacuum tube) system, with 38 units sold. Circa 1956, Ivan Idelson, at Ferranti, originated the Cluff–Foster–Idelson coding of characters on 7-track paper tape for a BSI committee. This also inspired the development of ASCII. In collaboration with the Victoria University of Manchester they built a new version of the famous Mark 1 that replaced valve diodes with solid state versions, which allowed the speed to be increased dramatically as well as increasing reliability. Ferranti offered the result commercially as the Mercury starting in 1957, and eventually sold nineteen in total. Although a small part of Ferranti's empire, the computer division was nevertheless highly visible and operated out of a former steam locomotive factory in West Gorton. Work on a completely new design, the Atlas, started soon after the delivery of the Mercury, aiming to dramatically improve performance. Ferranti continued their collaboration with the University of Manchester, and Plessey Co., plc, became a third partner. The second generation supercomputer first ran in December 1962. Eventually six machines were built, one of which was a stripped-down version that was modified for the needs of the University of Cambridge Mathematical Laboratory; the Titan (or Atlas 2) was the mainstay of scientific computing in Cambridge for nearly 8 years. Atlas was the first computer in the world to implement virtual memory. By the early 1960s their mid-size machines were no longer competitive, but efforts to design a replacement were bogged down. Into this void stepped the Canadian division, Ferranti-Packard, who had used several of the ideas under development in England to very quickly produce the Ferranti-Packard 6000. By this time Ferranti's management had tired of the market and were looking for someone to buy the entire division. Eventually it was merged into International Computers and Tabulators (ICT) in 1963, becoming the Large Systems Division of ICL in 1968. After studying several options, ICT selected the FP 6000 as the basis for their ICT 1900 series line which sold into the 1970s. The deal setting up ICT excluded Ferranti from the commercial sector of computing, but left the industrial field free. Some of the technology of the FP 6000 was later used in its Ferranti Argus range of industrial computers which were developed in its Wythenshawe factory. The first of these, simply Argus, was initially developed for military use. Meanwhile, in Bracknell the Digital Systems Division was developing a range of mainframe computers for naval applications. Early computers using discrete transistors were the Hermes and Poseidon and these were followed by the F1600 in the mid 1960s. Some of these machines remained in active service on naval vessels for many years. The FM1600B was the first of the range to use integrated circuits and was used in many naval and commercial applications. The FM1600D was a single-rack version of the computer for smaller systems. An airborne version of this was also made and used aboard the RAF Nimrod. The FM1600E was a redesigned and updated version of the FM1600B, and the last in the series was the F2420, an upgraded FM1600E with 60% more memory and 3.5 times the processing speed, still in service at sea in 2010. 
Semiconductors Ferranti had been involved in the production of electronic devices, including radio valves, cathode-ray tubes and germanium semiconductors for some time before it became the first European company to produce a silicon diode, in 1955. In 1972 they launched the ZN414, a single-chip AM radio integrated circuit in a 3-pin package. Ferranti Semiconductor Ltd. went on to produce a range of silicon bipolar devices, including, in 1977, the Ferranti F100-L, an early 16-bit microprocessor with 16-bit addressing. An F100-L was carried into space on the amateur radio satellite UoSAT-1 (OSCAR 9). Ferranti's ZTX series bipolar transistors gave their name to the inheritor of Ferranti Semiconductor's discrete semiconductor business, Zetex Semiconductors plc. In the early 1980s, Ferranti produced some of the first large uncommitted logic arrays (ULAs), used in home computers such as the Sinclair ZX81, Sinclair ZX Spectrum, Acorn Electron and BBC Micro. The microelectronics business was sold to Plessey in 1988. Acquisition of International Signal and Control In 1987 Ferranti purchased International Signal and Control (ISC), a United States defence contractor based in Pennsylvania. The company subsequently changed its name to Ferranti International plc. and restructured the combined business into the following divisions: Ferranti Computer Systems, Ferranti Defence Systems, Ferranti Dynamics, Ferranti Satcomms, Ferranti Telecoms, Ferranti Technologies and International Signal and Control. Collapse Unknown to Ferranti, ISC's business primarily consisted of illegal arms sales started at the behest of various US clandestine organizations. On paper the company looked to be extremely profitable on sales of high-priced "above board" items, but these profits were essentially non-existent. With the sale to Ferranti all illegal sales ended immediately, leaving the company with no obvious cash flow. In 1989 the UK's Serious Fraud Office started criminal investigation regarding alleged massive fraud at ISC. In December 1991 James Guerin, founder of ISC and co-Chairman of the merged company, pleaded guilty before the federal court in Philadelphia to fraud committed both in the US and UK. All offences which would have formed part of any UK prosecution were encompassed by the US trial and as such no UK trial proceeded. The financial and legal difficulties that resulted forced Ferranti into bankruptcy in December 1993. Operations The company had factories in Greater Manchester at Hollinwood, Moston, Chadderton (Gem Mill), Waterhead (Cairo Mill), Derker, Wythenshawe, Cheadle Heath, West Gorton, and Poynton. Eventually it set up branch-plants in Edinburgh (Silverknowes, Crewe Toll, Gyle, Granton and Robertson Avenue factories, plus its own hangar facility at Turnhouse Airport), Dalkeith, Aberdeen, Dundee, Kinbuck (near Dunblane), Bracknell, Barrow in Furness and Cwmbran as well as Germany and the United States (inc. Ferranti International Controls Corporation in Sugar Land, Texas) and several British Commonwealth countries including Canada, Australia and Singapore. Ferranti Australia was based in Revesby, Sydney NSW. There was also a primarily defence-related branch office in South Australia. Products manufactured by Ferranti Defence Systems included cockpit displays (moving map, head-down, head-up) video cameras and recorders, gunsight cameras, motion detectors, pilot's night vision goggles, integrated helmets, and pilot's stick controls. 
On the Tornado aircraft, Ferranti supplied the radar transmitter, inertial navigation system, LRMTS, TIALD pod, mission recording equipment, and cockpit displays. Current ownership of former Ferranti businesses Ferranti Autocourt: Acquired by Wayne Dresser and renamed Wayne Autocourt, before the Autocourt name was dropped Ferranti Communications: Acquired by Thorn and branded Thorn Communications and Telecontrol Systems (CATS). Later acquired by Tyco International and renamed Tyco Communications. Still operating under the name TS Technology Services. Ferranti Computer Systems: The Belgian subsidiary lives on as Ferranti Computer System and as of 1994 is part of the Nijkerk Holding. The remainder was acquired out of administration by SYSECA, the IT arm of Thomson-CSF, and renamed Ferranti-SYSECA Ltd. Later, the Ferranti name was dropped and when Thomson changed its name to Thales Group, SYSECA became Thales Information Systems. Thales Information Systems later sold its German interest to Consinto GmbH. The department dealing with airport systems was bought by Datel in around 1995 and continued to trade under the name Ferranti Airport Systems until it was bought by Ultra Electronics. Other parts of Ferranti Computer Systems were acquired out of administration by GEC-Marconi. When GEC-Marconi sold on its defence-related businesses to BAE Systems, many of these former Ferranti entities became part of the BAE/Finmeccanica joint venture called Alenia Marconi Systems. This JV has now been dissolved and the former Ferranti entities are now part of BAE Systems Integrated System Technologies (Insyte). Ferranti Defence Systems: Acquired by GEC-Marconi out of administration and renamed GEC Ferranti, later becoming part of GEC Marconi Avionics (GMAv). This business was acquired in 2000 by BAE Systems (BAE Systems Avionics). Part of this business, including the heritage Ferranti operation, was acquired by Finmeccanica in 2007 and renamed SELEX Galileo (now Selex ES). At one time there were design offices at Silverknowes, Robertson Avenue, South Gyle 1 and 2, Crewe Toll and Granton. After BAE Systems was formed the remaining factories at South Gyle were sold off and the staff made redundant, despite their groundbreaking work on the avionics and helmet for EFA and on aircraft mission computers. Ferranti Dynamics: Acquired by GEC-Marconi in 1992 Ferranti Electronics (Ceramic Seals division): Acquired by Ceramic Seals Limited in 1990. Ferranti Instrumentation: Dissolved. Some assets acquired by GEC-Marconi and Ravenfield Designs Ferranti Tapchangers Ltd: Independent company, then acquired by UK-based grid control specialists Fundamentals Ltd in 2017 Ferranti Satcomms: Acquired out of administration by Matra Marconi Space in 1994 Ferranti Technologies: Was bought out by management and continues in Oldham specialising in avionics, defence electronics, and electronic power systems. It was acquired by Elbit Systems in 2007. Ferranti Air Systems: Acquired by Datel then turned into an independent company. Later bought by Ultra Electronics Ferranti Thomson Sonar Systems: A 50% share was acquired by GEC-Marconi. Now owned by Thales and renamed Thales Underwater Systems. Ferranti Helicopters: Acquired by British Caledonian Airways in April 1979 to become British Caledonian Helicopters, which was in turn acquired by Bristow Helicopters in 1987 Ferranti Subsea Systems: Management buyout in the early 1990s, renamed FSSL. Kværner bought more shares in 1994 and the company then became Kværner FSSL. 
Kværner is now known as Aker Solutions Ferranti Computer Systems Service Department: This was acquired by the third party maintenance company ServiceTec. The regional Service Centres were rebranded as ServiceTec and all of the service engineers and management were taken on. The support of the Argus computers dominated activities although new (non-Argus) business was added to the regional centres. The repair centre at Cairo Mill also became part of the ServiceTec group, ultimately as a separate entity. Ferranti Semiconductors: Became Zetex Semiconductors after a management buyout in 1989. In 2008 it was acquired by Diodes Inc. Ferranti Photonics Ltd.: Independent, liquidated after bankruptcy in 2005 Other uses of the Ferranti name A number of uses of the Ferranti name remain in use. In Edinburgh, the Ferranti Edinburgh Recreation Club (FERC), the Ferranti Mountaineering Club and the Ferranti Ten-pin Bowling League are still in existence. While these organisations no longer have any formal ties with the companies which subsumed the Ferranti companies which operated in Edinburgh, they still operate under the old names. Ferranti Thistle F.C. was formed in 1943 and joined the Scottish Football League in 1974. Due to strict sponsorship rules it changed its name to Meadowbank Thistle F.C., and later to Livingston F.C. Denis Ferranti Meters Limited is still (2021) owned by a direct descendant of Sebastian de Ferranti but is not directly related to the major Ferranti corporation. The company has over 200 employees that manufacture BT's public phones, oil pumps for large industrial vehicles, electric motors for motorbility solutions, electronics, and small MOD equipment. References Further reading Halton, Maurice J. "The Impact of Conflict and Political Change on Northern Industrial Towns, 1890 to 1990, " MA Dissertation, Faculty of Humanities and Social Science, Manchester Metropolitan University September 2001 (PDF; 326 kB) External links Museum of Science and Industry in Manchester - Timeline of Ferranti's History Ferranti Scotland Apprentices 1970 Community Group Aircraft component manufacturers of the United Kingdom Avionics companies Companies based in Oldham Electronics companies established in 1885 Companies formerly listed on the London Stock Exchange Defunct companies of the United Kingdom Defunct computer hardware companies Former defence companies of the United Kingdom Electrical engineering companies of the United Kingdom Electronics companies of the United Kingdom Missile guidance Radar manufacturers Science and technology in Greater Manchester History of science and technology in the United Kingdom 1885 establishments in England Technology companies disestablished in 1993 1993 disestablishments in England British companies disestablished in 1993 British companies established in 1885 Defunct computer companies of the United Kingdom
98668
https://en.wikipedia.org/wiki/James%20H.%20Clark
James H. Clark
James Henry Clark (born March 23, 1944) is an American entrepreneur and computer scientist. He founded several notable Silicon Valley technology companies, including Silicon Graphics, Inc., Netscape Communications Corporation, myCFO, and Healtheon. His research work in computer graphics led to the development of systems for the fast rendering of three-dimensional computer images. In 1998, Clark was elected a member of the National Academy of Engineering for the development of computer graphics and for technical leadership in the computer industry. Early life and education Clark was born in Plainview, Texas, on March 23, 1944. He dropped out of high school at 16 and spent four years in the Navy, where he was introduced to electronics. Clark began taking night courses at Tulane University's University College where, despite his lack of a high school diploma, he was able to earn enough credits to be admitted to the University of New Orleans. There, Clark earned his bachelor's and a master's degrees in physics, followed by a PhD in computer science from the University of Utah in 1974. Career Academia After completing his PhD, Clark worked at NYIT's Computer Graphics Lab, serving as an assistant professor at the University of California, Santa Cruz, from 1974 to 1978, and then as an associate professor of electrical engineering at Stanford University from 1979 to 1982. Clark's research work concerned geometry pipelines, specialized software or hardware that accelerates the display of three dimensional images. The peak of his group's advancements was the Geometry Engine, an early hardware accelerator for rendering computer images based on geometric models which he developed in 1979 with his students at Stanford. Silicon Graphics, Inc. In 1982, Clark along with several Stanford graduate students founded Silicon Graphics, Inc. (SGI). The earliest Silicon Graphics graphical workstations were mainly terminals, but they were soon followed by stand-alone graphical Unix workstations with very fast graphics rendering hardware. In the mid-1980s, Silicon Graphics began to use the MIPS CPU as the foundation of their newest workstations, replacing the Motorola 68000. By 1991, Silicon Graphics had become the world leader in the production of Hollywood movie visual effects and 3-D imaging. Silicon Graphics focused on the high-end market where they could charge a premium for their special hardware and graphics software. Clark had differences of opinion with Silicon Graphics management regarding the future direction of the company, and departed in late January 1994. Netscape In February 1994, Clark sought out Marc Andreessen who had led the development of Mosaic, the first widely distributed and easy-to-use software for browsing the World Wide Web, while employed at the National Center for Supercomputing Applications (NCSA). Clark and Andreessen founded Netscape, and developed the Netscape Navigator web browser. The founding of Netscape and its IPO in August 1995 launched the Internet boom on Wall Street during the mid-to-late 1990s. Clark's initial investment in Netscape was $4 million in 1994; he exited with $1.2 billion when Netscape was acquired by AOL in 1999. Healtheon/WebMD In 1995, Clark became interested in streamlining the paperwork associated with the health-care industry. The resulting start-up, Healtheon, was founded in early 1996 with backing from Kleiner Perkins and New Enterprise Associates. 
Although Clark's original idea of eliminating the paperwork and bureaucracy associated with medical care was ambitious, it did lead to successes in administrative streamlining of medical records technology. However, an Atlanta, Georgia startup company, WebMD originally focused on medical content was also making similar in-roads. Knowing WebMD had financial backing from Microsoft, Clark decided to merge Healtheon with the original WebMD to form the WebMD Corporation (NASDAQ: WBMD). WebMD is a leader in health information on the Internet. Other affiliations In 1999, Clark launched myCFO, a company formed to help wealthy Silicon Valley individuals manage their fortunes. In late 2002, while Clark served on the board of directors, most of myCFO's operations were sold to Harris Bank and now operate as Harris myCFO. Clark was chairman and financial backer of network-security startup Neoteris, founded in 2000, which was acquired by NetScreen in 2003 and subsequently by Juniper Networks. Clark was a founding director and investor in the biotechnology company DNA Sciences, founded in 1998 to unravel the genetics of common disease using volunteers recruited from the Internet launched August 1, 2000 (see The New York Times). In 2003, the company was acquired by Genaissance Pharmaceuticals Inc. Clark was the subject of the 1999 bestseller The New New Thing: A Silicon Valley Story by U.S. author Michael Lewis. Clark was a notable investor in Kibu.com, an Internet website for teens, which received approximately $22 million in funding. The website shut down in 2000, returning its remaining capital to investors. Clark coproduced the 2009 movie The Cove. His funding made possible the purchase and covert installation of some high-tech camera and sound-recording equipment required to capture the film's climactic dolphin slaughter. The film addresses the problem of whale and dolphin killing in Taiji, Wakayama, Japan. Clark sits on the board and is one of the primary investors in the consumer facing mobile technology company Ibotta. In 2017, Clark announced the launch of CommandScape, a cyber secure building management and automation platform. Awards Clark received the ACM SIGGRAPH Computer Graphics Achievement Award in 1984. In 1996, he received the Golden Plate Award of the American Academy of Achievement. He was a recipient of the 1997 Kilby International Awards, which honored him for his computer graphics vision and for enabling networked information exchange. In 1988, Clark was an Award Recipient of the EY Entrepreneur of the Year Award in the Northern California Region. Clark was awarded an honorary Doctor of Science (ScD) from the University of East Anglia in 1998. Personal life Clark has been married four times and has four children. In 2000, his daughter Kathy married Chad Hurley, a co-founder of YouTube, they were divorced in 2014. The divorce from his third wife of 15 years, Nancy Rutter, a Forbes journalist, is reported to have cost him $125 million in cash and assets in the settlement. Soon afterwards he began dating Australian model Kristy Hinze, 36 years his junior. Hinze became his fourth wife when they married in the British Virgin Islands on March 22, 2009. She gave birth to a daughter, Dylan Vivienne in September 2011, and later, Harper Hazelle, in August 2013. 
Yachting Clark is an enthusiastic yachtsman but cannot sail in rough ocean races such as the Sydney-Hobart due to an arthritic condition in his ankles, and prefers one-day regattas on the smoother waters of the Mediterranean, the Caribbean and off Newport, Rhode Island. In 2012, however, he commented that "after 28 years of owning boats, I'm over it." He is the past owner of two important sailing yachts: Hyperion, the world's largest sloop when she was launched in 1998. She was designed by Germán Frers and built by Royal Huisman, and briefly carried the world's longest carbon fiber spar. Clark developed a custom chartplotter and SCADA system for the yacht to control vessel operation remotely, as well as to automate sailing operations and optimize sailing performance using a large bank of sensors and SGI processors. Clark sold Hyperion in 2004. Comanche, a carbon-fiber maxi yacht designed by VPLP and built by Hodgdon Shipbuilding to contest line honours in offshore races. She lost line honours to Wild Oats XI in the 2014 Sydney-Hobart race but returned and won in 2015. She also won line honours in the 2015 Transatlantic race, in which she set a new 24-hour speed record for monohulls. In 2016, with skipper Ken Read and Stan Honey navigating, she set the Newport to Bermuda Race record, shaving five hours off the previous fastest time recorded in the 635-mile race. In December 2017, Comanche was sold to Australian Jim Cooney. He remains the current owner of two other large sailing yachts: Athena, a three-mast gaff-rigged aluminum schooner built by Royal Huisman. Athena has been listed for sale since July 2012, originally with an asking price of US$95 million, reduced to $59 million as of February 2017. Hanuman, a replica of the J-Class Endeavour II, built by Royal Huisman. Hanuman has been listed for sale since May 2012, with an asking price of US$14.9 million as of 2021. Flying Clark is a passionate pilot who enjoys flying helicopters, gliders (built in Germany) and aerobatic aircraft (the Extra 300). His approach to learning to fly a helicopter was largely trial and error, exploring how the aircraft works. Philanthropy Clark has contributed to Stanford University, where he was an associate electrical engineering professor. In 1999, he pledged $150 million toward construction of the James H. Clark Center for Biomedical Engineering and related programs for interdisciplinary biomedical research. At the time, it was the largest-ever contribution to Stanford, other than the university's founding grant. Construction started in 2001 and was completed in the summer of 2003, as part of Stanford's Bio-X program. In September 2001, Clark rescinded $60 million of his initial pledge, citing anger over President Bush's restrictions on stem cell research. In a New York Times opinion piece, Clark said federal funding is essential for research in the United States, and he was not interested in funding research that could be suppressed for political reasons. President Barack Obama lifted the restrictions in question in 2009. In 2013, Clark pledged an additional $60 million to Stanford for interdisciplinary research in the life sciences, technology, and engineering. The commitment was completely fulfilled in 2020. Clark has donated an additional $10 million to fund fellowships at the Stanford Institute for Theoretical Physics. In 2004, Clark and David Filo of Yahoo! 
each donated $30 million to Tulane University's School of Engineering for merit-based scholarships to provide education to deserving students regardless of financial situation in the discipline of engineering. Clark is a board member for the national council of the World Wide Fund for Nature (WWF) and contributes towards the organization. The Perlman Music Program has recognized Clark for his continued philanthropic efforts towards their organization and their endowment fund. See also Catmull–Clark subdivision surface, a 3D modelling technique Clark invented in collaboration with Edwin Catmull References External links "Jim Clark", Salon.com, November 24, 1999. "James H. Clark", Business Week, 1999. 1944 births Living people People from Plainview, Texas Tulane University alumni University of Utah alumni University of New Orleans alumni American computer scientists American computer businesspeople Computer graphics researchers Computer graphics professionals Stanford University School of Engineering faculty Silicon Graphics people Netscape people Businesspeople from the San Francisco Bay Area Scientists from the San Francisco Bay Area Members of the United States National Academy of Engineering American technology company founders New York Institute of Technology faculty
939133
https://en.wikipedia.org/wiki/Modular%20programming
Modular programming
Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect of the desired functionality. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface. Modular programming is closely related to structured programming and object-oriented programming, all having the same goal of facilitating construction of large software programs and systems by decomposition into smaller pieces, and all originating around the 1960s. While the historical usage of these terms has been inconsistent, "modular programming" now refers to the high-level decomposition of the code of an entire program into pieces: structured programming to the low-level code use of structured control flow, and object-oriented programming to the data use of objects, a kind of data structure. In object-oriented programming, the use of interfaces as an architectural pattern to construct modules is known as interface-based programming. Terminology The term assembly (as in .NET languages like C#, F# or Visual Basic .NET) or package (as in Dart, Go or Java) is sometimes used instead of module. In other implementations, these are distinct concepts; in Python a package is a collection of modules, while in Java 9 the introduction of the new module concept (a collection of packages with enhanced access control) was implemented. Furthermore, the term "package" has other uses in software (for example .NET NuGet packages). A component is a similar concept, but typically refers to a higher level; a component is a piece of a whole system, while a module is a piece of an individual program. The scale of the term "module" varies significantly between languages; in Python it is very small-scale and each file is a module, while in Java 9 it is planned to be large-scale, where a module is a collection of packages, which are in turn collections of files. Other terms for modules include unit, used in Pascal dialects. Language support Languages that formally support the module concept include Ada, Algol, BlitzMax, C++, C#, Clojure, COBOL, Common_Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, Control Language (CL), IBM RPG, Java, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several derivatives of Pascal (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PL/I, PureBasic, Python, R, Ruby, Rust, JavaScript, Visual Basic .NET and WebDNA. Conspicuous examples of languages that lack support for modules are C and have been C++ and Pascal in their original form, C and C++ do, however, allow separate compilation and declarative interfaces to be specified using header files. Modules were added to Objective-C in iOS 7 (2013); to C++ with C++20, and Pascal was superseded by Modula and Oberon, which included modules from the start, and various derivatives that included modules. JavaScript has had native modules since ECMAScript 2015. Modular programming can be performed even where the programming language lacks explicit syntactic features to support named modules, like, for example, in C. 
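As a small illustration of the ideas above, the sketch below shows what a module interface and its use can look like in Python, one of the languages mentioned; it is only a sketch, and the package, module and function names (geometry, circle, area) are invented for the example rather than taken from any real codebase.

# Hypothetical package layout (names invented for illustration):
#   geometry/          a package: a collection of modules
#       __init__.py
#       circle.py      a module: a declared interface plus its implementation
#
# Contents of geometry/circle.py
import math

__all__ = ["area"]        # the module's declared interface: only area is provided to other modules

_PI = math.pi             # leading underscore marks an implementation detail, not part of the interface

def area(radius: float) -> float:
    """Working code behind the interface element declared above."""
    return _PI * radius * radius

# A client module then uses the interface through a dot-qualified name:
#   from geometry import circle
#   print(circle.area(2.0))   # 12.566...

The client depends only on the declared interface, so the implementation behind it can change freely; in C, as noted above, the same separation has to be approximated rather than declared.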
This is done by using existing language features, together with, for example, coding conventions, programming idioms and the physical code structure. IBM i also uses modules when programming in the Integrated Language Environment (ILE). Key aspects With modular programming, concerns are separated such that modules perform logically discrete functions, interacting through well-defined interfaces. Often modules form a directed acyclic graph (DAG); in this case a cyclic dependency between modules is seen as indicating that these should be a single module. In the case where modules do form a DAG they can be arranged as a hierarchy, where the lowest-level modules are independent, depending on no other modules, and higher-level modules depend on lower-level ones. A particular program or library is a top-level module of its own hierarchy, but can in turn be seen as a lower-level module of a higher-level program, library, or system. When creating a modular system, instead of creating a monolithic application (where the smallest component is the whole), several smaller modules are written separately so when they are composed together, they construct the executable application program. Typically these are also compiled separately, via separate compilation, and then linked by a linker. A just-in-time compiler may perform some of this construction "on-the-fly" at run time. These independent functions are commonly classified as either program control functions or specific task functions. Program control functions are designed to work for one program. Specific task functions are closely prepared to be applicable for various programs. This makes modular designed systems, if built correctly, far more reusable than a traditional monolithic design, since all (or many) of these modules may then be reused (without change) in other projects. This also facilitates the "breaking down" of projects into several smaller projects. Theoretically, a modularized software project will be more easily assembled by large teams, since no team members are creating the whole system, or even need to know about the system as a whole. They can focus just on the assigned smaller task. History Modular programming, in the form of subsystems (particularly for I/O) and software libraries, dates to early software systems, where it was used for code reuse. Modular programming per se, with a goal of modularity, developed in the late 1960s and 1970s, as a larger-scale analog of the concept of structured programming (1960s). The term "modular programming" dates at least to the National Symposium on Modular Programming, organized at the Information and Systems Institute in July 1968 by Larry Constantine; other key concepts were information hiding (1972) and separation of concerns (SoC, 1974). Modules were not included in the original specification for ALGOL 68 (1968), but were included as extensions in early implementations, ALGOL 68-R (1970) and ALGOL 68C (1970), and later formalized. One of the first languages designed from the start for modular programming was the short-lived Modula (1975), by Niklaus Wirth. Another early modular language was Mesa (1970s), by Xerox PARC, and Wirth drew on Mesa as well as the original Modula in its successor, Modula-2 (1978), which influenced later languages, particularly through its successor, Modula-3 (1980s). 
Modula's use of dot-qualified names, like M.a to refer to object a from module M, coincides with notation to access a field of a record (and similarly for attributes or methods of objects), and is now widespread, seen in C#, Dart, Go, Java, and Python, among others. Modular programming became widespread from the 1980s: the original Pascal language (1970) did not include modules, but later versions, notably UCSD Pascal (1978) and Turbo Pascal (1983) included them in the form of "units", as did the Pascal-influenced Ada (1980). The Extended Pascal ISO 10206:1990 standard kept closer to Modula2 in its modular support. Standard ML (1984) has one of the most complete module systems, including functors (parameterized modules) to map between modules. In the 1980s and 1990s, modular programming was overshadowed by and often conflated with object-oriented programming, particularly due to the popularity of C++ and Java. For example, the C family of languages had support for objects and classes in C++ (originally C with Classes, 1980) and Objective-C (1983), only supporting modules 30 years or more later. Java (1995) supports modules in the form of packages, though the primary unit of code organization is a class. However, Python (1991) prominently used both modules and objects from the start, using modules as the primary unit of code organization and "packages" as a larger-scale unit; and Perl 5 (1994) includes support for both modules and objects, with a vast array of modules being available from CPAN (1993). Modular programming is now widespread, and found in virtually all major languages developed since the 1990s. The relative importance of modules varies between languages, and in class-based object-oriented languages there is still overlap and confusion with classes as a unit of organization and encapsulation, but these are both well-established as distinct concepts. See also Architecture description language Cohesion (computer science) Component-based software engineering Conway's law Coupling (computer science) David Parnas Information hiding (encapsulation) Library (computing) List of system quality attributes Modular design Plug-in (computing) Snippet (programming) Structured Analysis Structured programming Notes References External links How To Decompose a System into Modules SMC Platform Programming paradigms Programming
54066377
https://en.wikipedia.org/wiki/Software%20as%20a%20Product
Software as a Product
Software as a product (SaaP, also programming product or software product) is software that is made to be sold to users, who pay for a licence that allows them to use it, in contrast to SaaS (software as a service), where users buy a subscription and the software is centrally hosted. One example of software as a product has historically been Microsoft Office, which has traditionally been distributed as a file package on CD-ROM or other physical media, or downloaded over a network. Office 365, on the other hand, is an example of SaaS, where a monthly subscription is required. Development effort estimation In the book The Mythical Man-Month, Fred Brooks notes that when estimating project times, it should be remembered that programming products (which can be sold to paying customers) are three times as hard to write as simple independent in-house programs, because they must work in a wider range of situations, which increases the testing and documentation effort. See also The Mythical Man-Month Minimum viable product Product manager Software as a service Literature References Software distribution
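A minimal sketch of how Brooks's rule of thumb might be applied when sizing an estimate; the factor of three is the figure cited above, while the function name and the two person-month base figure are invented for the example.

# Illustrative only: Brooks's rule of thumb that a saleable programming product
# takes roughly three times the effort of an equivalent in-house program,
# because of the extra generalisation, testing and documentation work.

PRODUCT_FACTOR = 3

def product_effort(in_house_person_months: float) -> float:
    """Scale an in-house estimate up to a programming-product estimate."""
    return in_house_person_months * PRODUCT_FACTOR

print(product_effort(2.0))  # a 2 person-month in-house tool becomes roughly 6 person-months as a product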
1794797
https://en.wikipedia.org/wiki/Dynamic%20enterprise%20modeling
Dynamic enterprise modeling
Dynamic enterprise modeling (DEM) is an enterprise modeling approach developed by the Baan company, and used for the Baan enterprise resource planning system which aims "to align and implement it in the organizational architecture of the end-using company". According to Koning (2008), Baan introduced dynamic enterprise modelling in 1996 as a "means for implementing the Baan ERP product. The modelling focused on a Petri net–based technique for business process modelling to which the Baan application units were to be linked. DEM also contains a supply-chain diagram tool for the logistic network of the company and of an enterprise function modelling diagram". Overview To align a specific company with dynamic enterprise modeling, the organizational structure is blueprinted top-down from high-level business processes to low-level processes. This blueprint is used as a roadmap of the organization, that is compatible with the structural roadmap of the software package. Having both roadmaps, the software package and the organizational structure are alienable. The blueprint of an organizational structure in dynamic enterprise modeling is called a reference model. A reference model is the total view of visions, functions, and organizational structures and processes, which together can be defined as a representative way of doing business in a certain organizational typology. The DEM reference model consists of a set of underlying models that depict the organizational architecture in a top-down direction. The underlying models are: Enterprise structure diagrams: The company site structure is visualized with the dispersed geographic locations, the headquarters, manufacturing plants, warehouses, and supplier and customer locations. Physical as well as logical multi-site organizations for internal logistic or financial flow optimization can be diagrammed. Business control model : The business control model represents the primary processes of the organization and their control, grouped in business functions. The DEM reference model exists of one main Business Control Model, resulting in several other Business Control Models per function area of the organization. Business function model : The business function model is a function model that focuses on the targets of the several functions within the company. Business process model : The business process model focuses on the execution of the functions and processes that originate from the business control model, and the business function model. Processes flows are depicted and processes are detailed out. Business organization model : The business organization model focuses less on the processes and more on the organizational aspects such as roles and responsibilities. Together these models are capable of depicting the total organizational structure and aspects that are necessary during the implementation of the dynamic enterprise modeling. The models can have differentiations, which are based on the typology of the organization (i.e.: engineer-to-order organizations require different model structures than assemble-to-order organizations. To elaborate on the way that the reference model is used to implement software and to keep track of the scope of implementation methods, the business control model and the business process model will be explained in detail. Dynamic enterprise modeling topics Business control model The business control model exists of the business functions of the organization and their internal and external links. 
Basic features in the model are: Request-feedback-loop: A link from, to, or between business functions is called a request-feedback-loop, which consists of 4 states that complete the process and information flows between both business functions. The states are labeled: requested, committed, completed, and accepted. Workflow case. A workflow case is the description of the execution and the target of the process that occurs between two business functions. The most important critical factors of the workflow case are quantity, quality, and time. The 4 states of Request-feedback-loop the together represent the workflow case. Triggers: Business functions are aggregates of business processes and focus mainly on the triggers (control) between processes, thus not on the information flows. Business functions : In an optimal situation for the modeling process, a company has only one business function. Business functions are however subdivided when: The nature and characteristics of workflow cases fluctuate The frequency in underlying processes fluctuate Detail-level fluctuates More than 1 type of request triggers a function Next to interaction between two business functions, interaction can also exist between objects that are not in the scope of the reference model. These objects can be external business functions and agents. External business function : this is a group of processes that are part of the organization (meaning that the organization can control the functions), but that is outside of the scope of the reference model. Agents on the other hand are entities similar to business functions with the exception that they are external of the business (i.e.: customers and suppliers). Processes within or between business functions are executed by triggers, which can be event-driven or time-driven. Exceptions in a system are handled, according to the set handling level in the business process configuration, when the success path of the model is not met in practice. Subroutines of processes can be modeled in the Business Control Model to take care of possible exceptions that can occur during the execution of a process (i.e.: delay handling in the delivery of goods). In addition to business functions that consist of the main processes of the organization, management functions exist. Management business functions: These are functions that manage the business process itself, and that thus, support the execution and triggering of the main business functions. Having this reference, the main processes of the organization can be captured in the Business Control Model. The main functions of the organization are grouped in the business functions, which consist of the processes that are part of the specific business function. Interactions between the business functions are then depicted using the request-feedback loops. Constructing the business control model A business control model is constructed according to a set path. First, the scope of the business is defined. The scope includes scoping what to model and includes the definition of the agents and external business functions that relate to the business. Next, the scope is depicted to a model of the black box with al the agents and external business functions surrounding the black box. The next step is to define the process and information flows (request-feedback flows) between the agents and external business functions to and from the black box of the business control model. 
Defining the request-feedback flows enables the modeler to define what processes are inside the black box. After creating the main business functions within the business control model, the several business functions are detailed out. In case of a production business it is vital to define the customer order decoupling point, referring to the split in the physical process where processes are based on the customer order instead of forecasts. Service based businesses on the other hand do not have a physical goods flow and thus do not require a physical process model. It is however imaginable that the same type of process flow can be utilized to construct a business control model for a service based business, as a service can be interpreted as a product as well. In this way, a business control model can be constructed similarly for a service based business as for a physical goods production business, having intangible goods instead of tangible. Next to the low-level physical production process, the high-level business functions need to be defined as well. In most cases the higher level business functions relate to planning functions and other tactical and strategical business functions, followed by functions as sales and purchase. After high-level detail definitions, the business functions are decomposed to lower-level detail definitions to make the business control model alienable to the lower models within the reference model, for this practice, mainly the Business Process Model. In the Business Process Model the processes are elaborated until the lowest level of detail. Given this level of detail, the Baan software functionality is then projected on the processes, depicted in the Business Process Model. Business process model The modeling of processes in DEM, modeling the business process model is done using Petri net building blocks. DEM uses 4 construction elements: State : A state element represents the state of a job token and is followed by the activity that executes the job token of the state. Processing activity : A processing activity is the activity that processes the job token of a state, transforming the state of the job token to another state. Control activity: A control activity navigates the process activity but does not execute it. Sub-process : A sub-process is a collection of different other processes, aggregated in a single element by means of complexity management. These 4 construction elements enables the modeling of DEM models. The modeling is due to a set collection of modeling constraints, guiding the modeling process in order to have similarly created models by different modelers. Control activities exist in different structures in order to set different possible routes for process flows. The used structures for control activities are: OR-split / XOR-split : This structure creates 2 new states out of 1 state, signaling the creation of 2 job tokens out of 1 job token. If the new state can be both of the output tokens, the split is OR, if not, the split is an exclusive OR split (XOR). AND-join construction : 2 job tokens are both needed to enable the control activity, creating 1 new job token (thus 1 new state). OR-join / XOR-join : 2 job tokens are needed to enable the control activity, creating 1 new job token. OR means one of the two starting job tokens can be used or both, XOR means only one of the tokens can be used to create the output job token. 
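To make these building blocks more concrete, the sketch below shows one minimal way they might be represented in Python. It is an illustration only and not part of Baan's DEM tooling: the State and Activity classes, the firing rule and the choice to chain the four request-feedback states are all invented for the example; the article's own marriage-and-divorce walkthrough follows.

# Minimal, illustrative representation of DEM's Petri-net-style building blocks.
# Not Baan software: all names and the firing rule are invented for this sketch.

class State:
    """A state holds job tokens (here just a count)."""
    def __init__(self, name, tokens=0):
        self.name = name
        self.tokens = tokens

class Activity:
    """A processing or control activity: consumes one token from every input
    state and produces one token in every output state (an AND-join when it
    has several inputs, a plain processing step when it has one)."""
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs
        self.outputs = outputs

    def enabled(self):
        return all(s.tokens > 0 for s in self.inputs)

    def fire(self):
        if not self.enabled():
            raise RuntimeError(f"{self.name} is not enabled")
        for s in self.inputs:
            s.tokens -= 1
        for s in self.outputs:
            s.tokens += 1

# A request-feedback loop as a chain of states and activities:
requested = State("requested", tokens=1)
committed = State("committed")
completed = State("completed")
accepted = State("accepted")

commit = Activity("commit", [requested], [committed])
complete = Activity("complete", [committed], [completed])
accept = Activity("accept", [completed], [accepted])

for activity in (commit, complete, accept):
    activity.fire()

print(accepted.tokens)  # 1: the workflow case has passed through all four states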
An example The example below demonstrates the modeling of the concept of marriage and divorce using Petri net building blocks. The Petri net model expresses the transformation from a single man and woman to a married couple through marriage and back to single individuals through divorce. The model starts with the two states called man and woman. Through an AND-join construction (both man and woman are needed in order to form a couple) the two states are joined within the control activity called coupling to the new state called couple. The couple state is then transformed through the processing activity called marriage, resulting in the transformed state of married couple. The married couple state is then transformed through the processing activity called divorce, resulting in the state called divorced couple. The control activity called decoupling finally splits the divorced couple state into the states of man and woman. Assessments Using an embedded method brings the advantage that the method is designed specifically to implement the software product it comes with. This makes the method simpler to use and easier to support. The drawback of an embedded method is that it can only be used for that specific product software. Engineers and consultants who work with several software products may get more value from a general method, so that they have just one way of working. See also Dynamic enterprise Dynamic enterprise architecture (DYA) Enterprise resource planning SAP R/3 References Further reading Fred Driese and Martin Hromek (1999). "Some aspects of strategic tactical and operational usage of dynamic enterprise modeling". Van Es, R.M., Post, H.A. eds. (1996). Dynamic Enterprise Modelling: A Paradigm Shift in Software Implementation. Kluwer. External links Baan Dynamic Enterprise Management short intro Dynamic Enterprise Modeling presentation 1999. Management Enterprise modelling
522449
https://en.wikipedia.org/wiki/Requirements%20analysis
Requirements analysis
In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and analyzing, documenting, validating and managing software or system requirements. Requirements analysis is critical to the success or failure of a systems or software project. The requirements should be documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Overview Conceptually, requirements analysis includes three types of activities: Eliciting requirements: gathering requirements from sources such as the project charter or definition, business process documentation, and stakeholder interviews. This is sometimes also called requirements gathering or requirements discovery. Recording requirements: requirements may be documented in various forms, usually including a summary list, and may include natural-language documents, use cases, user stories, process specifications and a variety of models including data models. Analyzing requirements: determining whether the stated requirements are clear, complete, unduplicated, concise, valid, consistent and unambiguous, and resolving any apparent conflicts. Analyzing can also include sizing requirements. Requirements analysis can be a long and tiring process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. These may include the development of scenarios (represented as user stories in agile methods), the identification of use cases, the use of workplace observation or ethnography, holding interviews or focus groups (more aptly named in this context as requirements workshops, or requirements review sessions) and creating requirements lists. Prototyping may be used to develop an example system that can be demonstrated to stakeholders. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced. Requirements quality can be improved through these and other methods: Visualization. Using tools that promote better understanding of the desired end-product, such as visualization and simulation. Consistent use of templates. Producing a consistent set of models and templates to document the requirements. Documenting dependencies. Documenting dependencies and interrelationships among requirements, as well as any assumptions and constraints. 
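A small, hypothetical sketch of the recording and analyzing activities listed above: the record fields and checks below are invented for illustration and follow no particular standard or tool, but they show how recorded requirements can be screened mechanically for obvious gaps, duplicates or missing acceptance criteria before the harder human analysis starts.

# Illustrative only: a made-up structure for recording requirements and a few
# mechanical checks. Real analysis (clarity, validity, conflicts) needs people.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    statement: str
    source: str               # traceability: who or what the requirement came from
    acceptance_criteria: str  # how the requirement will be tested or measured

def screen(requirements):
    """Return human-readable issues found by simple mechanical checks."""
    issues = []
    seen_statements = {}
    for r in requirements:
        if not r.acceptance_criteria.strip():
            issues.append(f"{r.req_id}: no acceptance criteria (not testable as written)")
        if not r.source.strip():
            issues.append(f"{r.req_id}: no source recorded (not traceable)")
        key = r.statement.strip().lower()
        duplicate_of = seen_statements.get(key)
        if duplicate_of:
            issues.append(f"{r.req_id}: duplicates {duplicate_of}")
        else:
            seen_statements[key] = r.req_id
    return issues

reqs = [
    Requirement("R-1", "The system shall export reports as PDF.",
                "stakeholder interview (hypothetical)", "A report can be exported and opened as a valid PDF."),
    Requirement("R-2", "The system shall export reports as PDF.", "workshop notes", ""),
]
for issue in screen(reqs):
    print(issue)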
Requirements analysis topics Stakeholder identification See Stakeholder analysis for a discussion of people or organizations (legal entities such as companies, standards bodies) that have a valid interest in the system. They may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders. It is increasingly recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include: anyone who operates the system (normal and maintenance operators) anyone who benefits from the system (functional, political, financial and social beneficiaries) anyone involved in purchasing or procuring the system. In a mass-market product organization, product management, marketing and sometimes sales act as surrogate consumers (mass-market customers) to guide development of the product. organizations which regulate aspects of the system (financial, safety, and other regulators) people or organizations opposed to the system (negative stakeholders; see also Misuse case) organizations responsible for systems which interface with the system under design. those organizations who integrate horizontally with the organization for whom the analyst is designing the system. Joint Requirements Development (JRD) Sessions Requirements often have cross-functional implications that are unknown to individual stakeholders and often missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting JRD sessions in a controlled environment, facilitated by a trained facilitator (Business Analyst), wherein stakeholders participate in discussions to elicit requirements, analyze their details and uncover cross-functional implications. A dedicated scribe should be present to document the discussion, freeing up the Business Analyst to lead the discussion in a direction that generates appropriate requirements which meet the session objective. JRD Sessions are analogous to Joint Application Design Sessions. In the former, the sessions elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of elicited requirements. Contract-style requirement lists One traditional way of documenting requirements has been contract style requirement lists. In a complex system such requirements lists can run to hundreds of pages long. An appropriate metaphor would be an extremely long shopping list. Such lists are very much out of favour in modern analysis; as they have proved spectacularly unsuccessful at achieving their aims; but they are still seen to this day. Strengths Provides a checklist of requirements. Provide a contract between the project sponsor(s) and developers. For a large system can provide a high level description from which lower-level requirements can be derived. Weaknesses Such lists can run to hundreds of pages. They are not intended to serve as a reader-friendly description of the desired application. Such requirements lists abstract all the requirements and so there is little context. The Business Analyst may include context for requirements in accompanying design documentation. This abstraction is not intended to describe how the requirements fit or work together. The list may not reflect relationships and dependencies between requirements. 
While a list does make it easy to prioritize each individual item, removing one item out of context can render an entire use case or business requirement useless. The list doesn't supplant the need to review requirements carefully with stakeholders in order to gain a better shared understanding of the implications for the design of the desired system / application. Simply creating a list does not guarantee its completeness. The Business Analyst must make a good faith effort to discover and collect a substantially comprehensive list, and rely on stakeholders to point out missing requirements. These lists can create a false sense of mutual understanding between the stakeholders and developers; Business Analysts are critical to the translation process. It is almost impossible to uncover all the functional requirements before the process of development and testing begins. If these lists are treated as an immutable contract, then requirements that emerge in the Development process may generate a controversial change request. Alternative to requirement lists As an alternative to requirement lists, Agile Software Development uses User stories to suggest requirements in everyday language. Measurable goals Best practices take the composed list of requirements merely as clues and repeatedly ask "why?" until the actual business purposes are discovered. Stakeholders and developers can then devise tests to measure what level of each goal has been achieved thus far. Such goals change more slowly than the long list of specific but unmeasured requirements. Once a small set of critical, measured goals has been established, rapid prototyping and short iterative development phases may proceed to deliver actual stakeholder value long before the project is half over. Prototypes A prototype is a computer program that exhibits a part of the properties of another computer program, allowing users to visualize an application that has not yet been constructed. A popular form of prototype is a mockup, which helps future users and other stakeholders to get an idea of what the system will look like. Prototypes make it easier to make design decisions, because aspects of the application can be seen and shared before the application is built. Major improvements in communication between users and developers were often seen with the introduction of prototypes. Early views of applications led to fewer changes later and hence reduced overall costs considerably. Prototypes can be flat diagrams (often referred to as wireframes) or working applications using synthesized functionality. Wireframes are made in a variety of graphic design documents, and often remove all color from the design (i.e. use a greyscale color palette) in instances where the final software is expected to have graphic design applied to it. This helps to prevent confusion as to whether the prototype represents the final visual look and feel of the application. Use cases A use case is a structure for documenting the functional requirements for a system, usually involving software, whether that is new or being changed. Each use case provides a set of scenarios that convey how the system should interact with a human user or another system, to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end-user or domain expert. Use cases are often co-authored by requirements engineers and stakeholders. Use cases are deceptively simple tools for describing the behavior of software or systems. 
A use case contains a textual description of the ways in which users are intended to work with the software or system. Use cases should not describe internal workings of the system, nor should they explain how that system will be implemented. Instead, they show the steps needed to perform a task without sequential assumptions. Requirements specification Requirements specification is the synthesis of discovery findings regarding current state business needs and the assessment of these needs to determine, and specify, what is required to meet the needs within the solution scope in focus. Discovery, analysis and specification move the understanding from a current as-is state to a future to-be state. Requirements specification can cover the full breadth and depth of the future state to be realized, or it could target specific gaps to fill, such as priority software system bugs to fix and enhancements to make. Given that any large business process almost always employs software and data systems and technology, requirements specification is often associated with software system builds, purchases, cloud computing strategies, embedded software in products or devices, or other technologies. The broader definition of requirements specification includes or focuses on any solution strategy or component, such as training, documentation guides, personnel, marketing strategies, equipment, supplies, etc. Types of requirements Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management: Business requirements Statements of business level goals, without reference to detailed functionality. These are usually high level (software and/or hardware) capabilities that are needed to achieve a business outcome. Customer requirements Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in the following listing:Operational distribution or deployment: Where will the system be used?Mission profile or scenario: How will the system accomplish its mission objective?Performance and related parameters: What are the critical system parameters to accomplish the mission?Utilization environments: How are the various system components to be used?Effectiveness requirements: How effective or efficient must the system be in performing its mission?Operational life cycle: How long will the system be in use by the user?Environment: What environments will the system be expected to operate in an effective manner? Architectural requirements Architectural requirements explain what has to be done by identifying the necessary systems architecture of a system. Structural requirements Structural requirements explain what has to be done by identifying the necessary structure of a system. Behavioral requirements Behavioral requirements explain what has to be done by identifying the necessary behavior of a system. Functional requirements Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. Functional requirements analysis will be used as the toplevel functions for functional analysis. 
Non-functional requirements
Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.

Performance requirements
The extent to which a mission or function must be executed; generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors, and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.

Design requirements
The "build to", "code to", and "buy to" requirements for products and "how to execute" requirements for processes expressed in technical data packages and technical manuals.

Derived requirements
Requirements that are implied or transformed from a higher-level requirement. For example, a requirement for long range or high speed may result in a design requirement for low weight.

Allocated requirements
A requirement that is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: a 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.

Well-known requirements categorization models include FURPS and FURPS+, developed at Hewlett-Packard.

Requirements analysis issues

Stakeholder issues
Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering:
Users do not understand what they want or do not have a clear idea of their requirements
Users will not commit to a set of written requirements
Users insist on new requirements after the cost and schedule have been fixed
Communication with users is slow
Users often do not participate in reviews or are incapable of doing so
Users are technically unsophisticated
Users do not understand the development process
Users do not know about present technology
This may lead to the situation where user requirements keep changing even when system or product development has been started.

Engineer/developer issues
Possible problems caused by engineers and developers during requirements analysis are:
A natural inclination towards writing code can lead to implementation beginning before the requirements analysis is complete, potentially resulting in code changes to meet actual requirements once they are known.
Technical personnel and end-users may have different vocabularies. Consequently, they may wrongly believe they are in perfect agreement until the finished product is supplied.
Engineers and developers may try to make the requirements fit an existing system or model, rather than develop a system specific to the needs of the client.

Attempted solutions
One attempted solution to communications problems has been to employ specialists in business or system analysis. Techniques introduced in the 1990s, such as prototyping, Unified Modeling Language (UML), use cases, and agile software development, are also intended as solutions to problems encountered with previous methods. A new class of application simulation or application definition tools has also entered the market. These tools are designed to bridge the communication gap between business users and the IT organization, and to allow applications to be 'test marketed' before any code is produced.
The best of these tools offer:
electronic whiteboards to sketch application flows and test alternatives
the ability to capture business logic and data needs
the ability to generate high-fidelity prototypes that closely imitate the final application
interactivity
the capability to add contextual requirements and other comments
the ability for remote and distributed users to run and interact with the simulation

See also
Business analysis
Business Analysis Body of Knowledge (BABOK)
Business process reengineering
Creative brief
Data modeling
Design brief
Functional requirements
Information technology
Model-driven engineering
Model Transformation Language
Non-functional requirements
Process architecture
Process modeling
Product fit analysis
Requirements elicitation
Requirements Engineering Specialist Group
Requirements management
Requirements Traceability
Search Based Software Engineering
Software prototyping
Software requirements
Software Requirements Specification
Systems analysis
System requirements
System requirements specification
User-centered design

References

Bibliography

External links
Peer-reviewed Encyclopedia Entry on Requirements Engineering and Analysis
Defense Acquisition University Stakeholder Requirements Definition Process
MIL-HDBK 520 Systems Requirements Document Guidance
Social media use in politics
Social media use in politics refers to the use of online social media platforms in political processes and activities. Political processes and activities include all activities that pertain to the governance of a country or area, including political organization, global politics, political corruption, political parties, and political values. The internet has created channels of communication that play a key role in circulating news, and social media has the power to change not just the message but the dynamics of political corruption, values, and conflict in politics. Through the use of social media in election processes, global conflict, and extreme politics, diplomacy around the world has become less private and more susceptible to public perception.

Background

Participatory role
Social media have been championed as allowing anyone with an Internet connection to become a content creator and empowering their users. The idea of "new media populism" encompasses how new media can include disenfranchised citizens and allow the public to have an engaged and active role in political discourse. New media, including social media platforms such as Facebook and Twitter, can enhance people's access to political information. Social media platforms and the internet have facilitated the dissemination of political information that counters mainstream media tactics that are often centralized and top-down and include high barriers to entry. Writer Howard Rheingold characterized the community created on social networking sites: "The political significance of computer mediated communication lies in its capacity to challenge the existing political hierarchy's monopoly on powerful communications media, and perhaps thus revitalize citizen-based democracy." Scholar Derrick de Kerckhove described the new technology in media: "In a networked society, the real powershift is from the producer to the consumer, and there is a redistribution of controls and power. On the Web, Karl Marx's dream has been realized: the tools and the means of production are in the hands of the workers." The role of social media in democratizing media participation, which proponents herald as ushering in a new era of participatory democracy with all users able to contribute news and comments, may fall short of the ideals. International survey data suggest online media audience members are largely passive consumers, while content creation is dominated by a small number of users who post comments and write new content. Others argue that the effect of social media will vary from one country to another, with domestic political structures playing a greater role than social media in determining how citizens express opinions about stories of current affairs involving the state. Most people believe that social media platforms censor objectionable political views. In June 2020, users of the social media platform TikTok organised a movement to prank a Trump rally in Tulsa, Oklahoma, by buying tickets and not attending so that the rally appeared empty.

As a news source
See also: Social media and political communication in the United States.
Social media platforms are increasingly used for political news and information by adults in the United States, especially around election time. A November 2019 study by the Pew Research Center found that about one in five US adults (18%) get their political and election news primarily through social media.
The Pew Research Center further found that, of the United States adults who rely on social media for this information, 48% are between the ages of 18 and 29. Reddit, Twitter, and Facebook lead the social media platforms whose users most often use them to acquire news. Of all United States adults, 67% use Facebook, and 44% use it to get news. According to the Reuters Institute Digital News Report in 2013, the percentage of online news users who blog about news issues ranges from 1–5%. Greater percentages use social media to comment on news, with participation ranging from 8% in Germany to 38% in Brazil. But online news users are most likely to just talk about online news with friends offline or use social media to share stories without creating content. The rapid propagation of information on social media, spread by word of mouth, can quickly affect the perception of political figures with information that may or may not be true. When political information is propagated in this manner on purpose, the spread of information on social media for political means can benefit campaigns. On the other hand, the word-of-mouth propagation of negative information concerning a political figure can be damaging. For example, the use of the social media platform Twitter by United States congressman Anthony Weiner to send inappropriate messages played a role in his resignation.

Attention economy
Social media, especially news that is spread through social media sites, plays into the idea of the attention economy, in which content that attracts more attention will be seen, shared, and disseminated far more than news content that does not gain as much traction with the public. Tim Wu of Columbia Law School describes the attention economy as "the resale of human attention." A communication platform such as social media is persuasive and often works to change or influence opinions when it comes to political views because of the abundance of ideas, thoughts, and opinions circulating through the platform. News use has been found to lead to political persuasion; therefore, the more that people use social media platforms as news sources, the more their political opinions will be affected. At the same time, people are expressing less trust in their government and in others because of this media use, so social media directly affects trust in media. Research has found that reading newspapers is associated with an increase in social trust, whereas watching the news on television weakened trust in others and in news sources. Social media, and news media more specifically, play an important role in democratic societies because they allow for participation among citizens. Therefore, when it comes to healthy democratic networks, it is crucial that news remains true so that it does not undermine citizens' levels of trust. A certain amount of trust is necessary for a healthy and well-functioning democratic system. Younger generations are becoming more involved in politics due to the increase of political news posted on various types of social media. Because of their heavier use of social media, younger generations are exposed to politics more frequently, and in a way that is integrated into their online social lives. While informing younger generations of political news is important, there are many biases within the realms of social media.
In May 2016, former Facebook Trending News curator Benjamin Fearnow revealed that his job was to "massage the algorithm," but he dismissed any "intentional, outright bias" by either human or automated efforts within the company. Fearnow was fired by Facebook after being caught leaking several internal company debates about Black Lives Matter and presidential candidate Donald Trump.

As a public utility
A key debate centers on whether or not social media is a public good, based on the premises of non-rival and non-excludable consumption. Social media can be considered an impure public good, as it can be excludable given the rights of platforms such as Facebook and Twitter to remove content, disable accounts, and filter information based on algorithms and community standards. Arguments for treating platforms such as Google as public utilities and public service providers include this statement from Benjamin Barber in The Nation: "For new media to be potential equalizers, they must be treated as public utilities, recognizing that spectrum abundance (the excuse for privatization) does not prevent monopoly ownership of hardware and software platforms and hence cannot guarantee equal civic, educational, and cultural access to citizens." Similarly, Zeynep Tufekci argues that online services are natural monopolies that underwrite the "corporatization of social commons" and the "privatization of our publics." One argument that displays the nature of social media as an impure public good is the fact that control over content remains in the hands of a few large media networks, for example Google and Facebook. Google and Facebook have the power to shape the environment according to their own personal and commercial goals, promoting profitability as opposed to citizen voice and public deliberation.

Government regulation
Calls for regulation of social media are growing, driven by economic concerns about platform monopolies as well as issues of privacy, censorship, network neutrality and information storage. The discussion of regulation is complicated by the fact that Facebook and Google are increasingly becoming at once a service, an information pipeline, and a content provider, and thus centers on how the government would regulate the platforms as both services and information providers. Some proponents therefore advocate for "algorithmic neutrality", the aim for search engines on social media platforms to rank data without human intervention. Opponents of regulation argue that platforms such as Facebook and Twitter do not resemble traditional public utilities and that regulation would harm consumer welfare, as public utility regulation can hinder innovation and competition. They also argue that, given First Amendment considerations around social media platforms, media providers should retain the power to decide how their platforms are configured.

Effect on democracy
Social media has been criticized as being detrimental to democracy. According to Ronald Deibert, "The world of social media is more conducive to extreme, emotionally charged, and divisive types of content than it is to calm, principled considerations of competing or complex narratives". On the contrary, Ethan Zuckerman says that social media presents the opportunity to inform more people, amplify voices, and allow an array of diverse voices to speak. Mari K.
Eder points to failures of the Fourth Estate that have allowed outrage to be disguised as news, contributing to citizen apathy when confronting falsehoods and to further distrust in democratic institutions.

Politicians and social media
Social media has allowed politicians to bypass traditional media outlets by engaging with the general public directly. Donald Trump utilised this when he lost the 2020 presidential election, claiming that the election was fraudulent and that a re-run was therefore needed. The consequences of Trump's online actions were displayed when, on January 6, 2021, the U.S. Capitol was attacked by supporters of the former president. Being a popular presence on social media also boosts a politician's likelihood of coming to power. Take Boris Johnson's 2019 bid to replace Theresa May as Prime Minister: Johnson had more than half a million 'likes' on his page (substantially more than the other candidates), which meant that when he released his launch video it gained more than 130,000 views, which could have been a prominent factor in his eventually winning power. A study by Sounman Hong examined how politicians weigh the likely consequences, positive or negative, of adopting social media; it found that backbenchers, 'underdogs' and opposition politicians were the most likely to increase their use of it, in order to gain recognition and support from the public where they might otherwise go unnoticed.

Democratization

The Arab Spring
During the peak of the Egyptian Revolution of 2011, the Internet and social media played a huge role in facilitating the flow of information. At that time, Hosni Mubarak was the president of Egypt and had headed the regime for almost 30 years. Mubarak was so threatened by the immense power that the Internet and social media gave the people that the government successfully shut down the Internet, using the Ramses Exchange, for a period of time in February 2011. Egyptians used Facebook, Twitter, and YouTube as a means to communicate and organize demonstrations and rallies to overthrow President Hosni Mubarak. Statistics show that during this time the rate of tweets from Egypt increased from 2,300 to 230,000 per day and the top 23 protest videos had approximately 5.5 million views.

Disinformation in relation to US elections
Though fake news can generate some utility for consumers, in terms of confirming far-right beliefs and spreading propaganda in favor of a presidential candidate, it also imposes private and social costs. For example, one social cost to consumers is the spread of disinformation, which can make it harder for consumers to seek out the truth and, in the case of the 2016 election, to choose an electoral candidate. As a Congressional Research Service study summarized in 2017: "Cyber tools were also used [by Russia] to create psychological effects in the American population. The likely collateral effects of these activities include compromising the fidelity of information, sowing discord and doubt in the American public about the validity of intelligence community reports, and prompting questions about the democratic process itself." The marginal social cost of fake news compounds: when an article is first shared it can affect a small number of people, but as the article is circulated more widely on Facebook, the negative externality multiplies.
As a result, the quantity of news demanded can shift up around election season as consumers seek out correct news; however, it can also shift down as people lose trust in mainstream media. A 2016 Gallup poll found that Americans' trust in the mass media "to report the news fully, accurately and fairly" was, at 32%, the lowest in the organization's polling history. Trust in mainstream media is even lower among Republican and far-right viewers, at 14%. About 72% of American adults say that social media firms have too much control and influence over politics today, according to a June 16–22 survey conducted by the Pew Research Center. Only 21% believe that the power held by these social media firms over today's politics is about right, while 6% believe it is not enough. Algorithms can facilitate the rapid spread of disinformation through social media channels. Algorithms use users' past behavior and engagement activity to provide them with tailored content that aligns with their interests and beliefs, and they commonly create echo chambers and sow radicalism and extremist thinking in these online spaces. Algorithms promote social media posts with high 'engagement', meaning posts that receive a lot of 'likes', 'comments' or 'replies'. For better or for worse, engagement and controversy go hand in hand. Controversy attracts attention because it evokes an emotional response; however, "Benford's law" of controversy states that "passion is inversely proportional to the amount of real information available". This means that the less grounded in facts a political tweet is, the more engagement it is likely to receive, and therefore the likelihood of spreading disinformation is high. Twitter has become a battleground for political debate. Psychologist Jordan Peterson spoke of Twitter's radicalising effect in an interview conducted by GQ. He explained that any given tweet that appears on one's 'feed' will have been seen by a far greater number of people than is reflected by its likes and comments. Those who comment tend to be the people with the strongest views on the matter, the ones who want their opinion to be heard. Peterson claims that this creates an environment in which the opinions the average user sees on Twitter do not reflect the views of a random sample of the population; the opinions most commonly seen tend to be those of people at each extreme end of the political ideology spectrum, hence the 'radicalising effect'.

Advertisement
Political advertisements, for example those encouraging people to vote for or against a particular candidate or to take a position on a particular issue, have often been placed on social media. On 22 November 2019, Twitter said it would no longer facilitate political advertising anywhere in the world. Because social media brings different information to different people based on their interests, advertising methods such as "microtargeting" and "black ads" have become prominent on social media and allow advertising to be much more effective for the same price, relative to traditional adverts such as those on cable TV.

Grassroots campaigns
When it comes to political referendums, individuals often gather on social media at the grassroots level to campaign for change.
This is particularly effective when it comes to feminist political issues, as studies have shown that women are more likely to tweet about policy problems and to do so more aggressively than their male counterparts. Like-minded individuals can work together to influence social change and utilise social media as a tool for social justice. An example of this is the referendum to repeal Ireland's eighth amendment. Civil society organisations, such as Together for Yes, utilised Twitter as a tool to bring abortion law into public discussion and to make the harms of the eighth amendment visible and accessible. The positive outcome of the referendum (the amendment's repeal) can be credited in part to the efforts of individuals and advocates coming together at the grassroots level to make the vote visible, as social media goes beyond the local level to create a widespread global political impact, making the issue of strict abortion laws a global one rather than one confined to Ireland. The strength of a political grassroots campaign on social media is the increased mobilisation of participants. Because social media platforms are largely accessible, a political platform can be provided to the voices of those traditionally silenced in the political sphere or in traditional media.

US election interference
The 2016 United States presidential election was an example in which social media was used by the state actor Russia to influence public opinion. Tactics such as propaganda, trolling, and bots were used to spread fake news stories, including claims that an FBI agent had been killed after leaking Clinton's emails and that Pope Francis had endorsed Donald Trump. Studies have found that pro-Trump fake news was as much as four times more plentiful than pro-Clinton fake news, and that a third of the pro-Trump tweets were generated by bots. Social media has also provided the means for large amounts of data to be collected on social media users, allowing analysis and predictions to be made about what information and advertising a user is most likely to be susceptible to. This was highlighted in 2018 when the Cambridge Analytica–Facebook scandal emerged. Data and predictions from the company were used to influence voters in the 2016 Brexit Leave campaign and in the Trump campaign for the 2016 US election. The scandal first appeared in the news in 2016, following both the UK's Brexit referendum result and the US presidential election result, but it reflected an ongoing operation by Cambridge Analytica, conducted with the permission of Facebook and using Aleksandr Kogan's app "This Is Your Digital Life". The methods were exposed on 27 September 2016 during a presentation by Alexander Nix entitled "The Power of Big Data and Psychographics". Nix was the chief executive officer of the market-research firm Cambridge Analytica. After founding the company in 2013, he was suspended on 20 March 2018 following the release of a video in which he admitted to working directly with Donald Trump to gather data on the US electorate. In his 2016 presentation, Nix highlighted his contribution to the 2016 Ted Cruz campaign, arguing that taking the focus of targeted ads away from demographics and geographics and instead using psychographics to target personality traits gives a better understanding of voter demands and is a more effective method of gaining votes.
In 2016, one of Nix's business associates, Steve Bannon, left the company to take over Donald Trump's campaign, and as a result of the video leak that cost Nix his job, it is widely believed that he had a direct influence there too. In addition, Cambridge Analytica staff were heavily involved in the Vote Leave campaign for the 2016 Brexit referendum. Because an organisation specialising in targeted ads was involved in two populist campaigns that produced shock results, many point to this as a potential threat to democracy. This is not the only example of potential election interference using social media. On November 1, 2015, Rodrigo Duterte was announced as a candidate for president of the Philippines, and he went on to be described as 'the first person to make full use of the power of social media'. Facebook had made an astonishing rise since the previous election, and Duterte saw this as an opportunity to get social media influencers to promote his party and create viral content, further showing the power social media can have over democracy. On 18 May 2017, Time reported that the US Congress was investigating Cambridge Analytica in connection with Russian interference in the 2016 United States elections. The report alleged that the company may have coordinated the spread of Russian propaganda using its microtargeting capabilities. In 2018, following disclosures that the company had improperly used the personal information of over 50 million Facebook users while working on Trump's presidential campaign, The Times of Israel reported that the company had used what Nix had called "intelligence gathering" from British and Israeli companies as part of its efforts to influence the election results in Trump's favor. This was the work of one company, and regulation may be able to prevent such activity in the future, but social media is now a medium that makes this kind of interference possible.

Election results
In October 2020, Twitter announced a new policy under which candidates are forbidden to claim victory until their election win has been credibly projected by news outlets or officially certified.

Impact on elections
Social media has a profound effect on elections. Social media often compounds the effect of mass media networks such as cable television. For many individuals, cable television serves as the first point of contact for political information, and its commentary creates partisanship and builds on people's predispositions toward certain parties. Social media takes mass media's messages, amplifies and reinforces them, and perpetuates partisan divides. An article in the Journal of Communication concluded that social media has neither a strong effect nor a minimal effect on people's views or votes. Instead, social media creates a bandwagon effect: when a candidate in an election commits an error or achieves a great success, users on social media greatly amplify the effect of that failure or success. The Pew Research Center finds that nearly one fourth of Americans learn something about the candidates through an internet source such as Facebook. Nearly a fifth of Americans use social media, and two thirds of those Americans are youth between the ages of 18 and 29. The youth's presence on social media often inspires rallies and creates movements.
For instance, in the 2008 presidential election, a Facebook group of 62,000 members was created to support the election of Barack Obama, and within days universities across the country held rallies in the thousands. Rallies and movements such as these are often called the "Facebook effect". However, social media can also have the opposite effect and take a toll on many users. The Pew Research Center found in a poll that nearly 55 percent of social media users in the US indicate that they are "worn out" by the amount of political posts on social media. With the rise of technology and social media continuing, that number has increased by nearly 16 percent since the 2016 presidential election. Nearly 70 percent of individuals say that talking about politics on social media with people on the opposite side is often "stressful and frustrating", compared to 56 percent in 2016. Consequently, the share of people who find these discussions "interesting and informative" decreased from 35% to 26% over the same period. Social media's effect on the youth vote is quite substantial. In the 2018 elections, nearly 31 percent of the youth voted, compared to just 21 percent in 2014. Social media use among the youth continues to grow, as around 90 percent of the youth use at least one social media platform. Of that 90 percent, 47 percent received information about the 2018 elections via a social media platform. The messages shared on these platforms often encourage young people to register to vote and to actually cast their vote; this is in contrast to receiving such messages from a candidate's campaign itself. Of the first-time youth voters in the 2018 election, 68 percent relied on social media to get their information about voting, compared to just 23 percent of first-time voters who were notified to vote through traditional methods. Furthermore, just 22 percent of youth who did not hear about an election via social media or traditional means were very likely to vote, whereas 54 percent of youth who found out about the election via social media or traditional ways were very likely to vote. However, the youth are becoming distrustful of the content they read on social media; Forbes notes that there has been a decline in public trust because many political groups and foreign nations have created fake accounts to spread a great amount of misinformation with the aim of dividing the country. Social media often filters what information individuals see. Since 2008, the share of individuals who get their news via social media has increased to 62 percent. On these social media sites, many algorithms filter what information individual users see. The algorithms learn a user's favorites and dislikes and then begin to cater the feed to those preferences, which creates an echo chamber. For instance, black social media users were more likely to see race-related news, and in 2016 the Trump campaign used Facebook and other platforms to target Hillary Clinton's supporters to drive them out of the election, taking advantage of such algorithms. Whether these algorithms have an effect on people's votes and views is unclear. Iowa State University researchers found that older individuals, even though their access to social media is far lower than that of the youth, were far more likely to change their political views over the 1996–2012 period, which indicates that a myriad of other factors affect political views.
They further note that, based on other literature, Google has a liberal bias in its search results. Consequently, these biased search results can affect an individual's voting preferences by nearly 20 percent. In addition, 23 percent of an individual's Facebook friends hold an opposing political view, and nearly 29 percent of the news they receive on the platform is also in opposition to their political ideology, which indicates that the algorithms on these new platforms do not completely create echo chambers. Washington State University political science professor Travis Ridout explains that in the United Kingdom the popular social media platforms of Twitter, Facebook, Instagram, and YouTube are beginning to play a significant role in campaigns and elections. Unlike the United States, which allows television ads, the United Kingdom bans them, and campaigns are therefore launching huge efforts on social media platforms. Ridout adds that social media ads have in many cases become offensive attacks aimed at politicians. Social media provides many individuals with a sense of anonymity that enables them to get away with such aggressive acts. For example, ethnic minority women politicians are often the targets of such attacks. Furthermore, in the United States, many young conservative voices are often muted; for instance, PragerU, a conservative organization, often has its videos taken down. On a different level, social media can also hamper many political candidates. Mass media and social media often publish stories that are controversial and popular and that will ultimately drive more traffic. A key example is Donald Trump, whose controversial statements in 2016 often attracted the attention of many individuals and thereby increased his popularity while crowding out other candidates. In the 2020 presidential election, social media was very prevalent and used widely by both campaigns. On Twitter, nearly 87 million users followed President Donald Trump while 11 million users followed Joe Biden. Despite the significant gap between the two, Biden's top tweets outperformed Donald Trump's top tweets by nearly double. In terms of mentions of each candidate on Twitter, from October 21 to October 23 there were 6.6 million mentions of Trump and Biden, and Biden held 72% of the mentions. During the 2020 presidential debates, Biden had nearly two times the mentions of Donald Trump, with nearly half of those mentions being negative; roughly half of Trump's mentions were negative as well. In Europe, the influence of social media is less than in the United States. In 2011, only 34% of MEPs used Twitter, while 68% used Facebook. In 2012, the EPP had the highest social media following among the parties, at 7,418. This compares with the roughly 375 million voters in all of Europe. Comparing this to US social media followings, President Obama had over 27 million fans while the highest figure in Europe was French President Nicolas Sarkozy's roughly 700,000 fans, a stark difference. The 2008 US presidential election dramatically increased the use of technology in politics and campaigns, especially social media, and Europe has been following that lead and increasing its use of social media since.
However, just because European politicians do not use social media as much as American politicians does not mean that platforms such as Facebook and Twitter do not play a large role in European politics, and in elections in particular. In the run-up to the 2017 German Bundestag elections, a group of extremists used social media platforms such as Twitter and YouTube in hopes of gaining support for the far-right party Alternative für Deutschland. Despite being limited in numbers, the group was able to publish "patriotic videos" that reached the Trending tab on YouTube, and to get the hashtag "#AfD" trending on Twitter. Though polled to come fifth in the election, Alternative für Deutschland won 13.3% of the vote, making it the third-largest party in the Bundestag and the first far-right party to enter the parliament since 1961. In the UK, Cambridge Analytica was allegedly hired as a consultant company for Leave.EU and the UK Independence Party during 2016, in an effort to convince people to support Brexit. These rumours were the result of leaked internal emails sent between the Cambridge Analytica firm and the British parliament. The datasets in question, composed of data obtained from Facebook, were said to be work done as an initial job deliverable. Although Arron Banks, co-founder of Leave.EU, denied any involvement with the company, he later declared, "When we said we'd hired Cambridge Analytica, maybe a better choice of words could have been deployed." The official investigation by the UK Information Commissioner found that Cambridge Analytica was not involved "beyond some initial enquiries", and the regulator did not identify any "significant breaches" of data protection legislation or privacy or marketing regulations "which met the threshold for formal regulatory action". In early July 2018, the United Kingdom's Information Commissioner's Office announced it intended to fine Facebook £500,000 ($663,000) over the data breach, this being the maximum fine allowed at the time of the breach, saying Facebook had "contravened the law by failing to safeguard people's information". In 2014 and 2015, the Facebook platform allowed an app that ended up harvesting 87 million profiles of users around the world; the data was then used by Cambridge Analytica in the 2016 presidential campaign and in the Brexit referendum. Although Cambridge Analytica was cleared, questions were still raised about how it came to access these Facebook profiles and to target voters who would not necessarily have voted this way in the first place. Dominic Cummings, the prime minister's former aide, is said to have had a major role in involving Cambridge Analytica in the Leave.EU campaign, as depicted in Brexit: The Uncivil War. In terms of the role of fake news on social media, pro-Trump fake news articles were about three times more numerous than pro-Clinton ones: there were 115 pro-Trump fake news articles and only 41 pro-Clinton fake news articles, and pro-Trump articles were shared 30.3 million times on Facebook while pro-Clinton articles were shared 7.6 million times. Each share generates about 20 page visits, which means that the roughly 38 million shares of fake news articles produced about 760 million page views, or roughly three visits to a fake news site for each US adult.
Whether the spread of fake news has an impact on elections remains contested; more research is required, and it is difficult to quantify the effects. However, fake news is more likely to influence individuals who are over 65 and more conservative, and these groups tend to believe fake news more than other groups. College students have difficulty determining whether an article shared on social media is fake news. The same study also concluded that conspiratorial beliefs could be predicted by a person's political party affiliation or ideological beliefs. For example, those who voted Republican or held more conservative beliefs were far more likely to believe baseless theories such as former President Obama having been born outside the United States, while those who voted Democrat or held more liberal beliefs were more likely to believe conspiracies such as former President Bush having played a role in the 9/11 attacks.

Role in conflict
There are four ways social media plays a significant role in conflict:
Social media platforms allow information to be framed in mainstream platforms, which limits communication.
Social media enables news stories to go viral quickly, which can later lead to misinterpretations that cause conflict.
Strategies and the adoption of social media have caused a change in focus among leaders from administrative dynamics to new media technology.
Technological advancements in communication can increase the power of persuasion, leading to corruption, scandals, and violence on social media platforms.
The role of technological communication and social media in the world can lead to political, economic, and social conflict due to its unmonitored system, cheap interface, and accessibility.

Non-state actors and militant groups
As the world becomes increasingly connected via the power of the Internet, political movements, including militant groups, have begun to see social media as a major organizing and recruiting tool. The Islamic State of Iraq and the Levant, also known as ISIL, ISIS, and Daesh, has used social media to promote its cause. ISIS produces an online magazine named the Islamic State Report to recruit more fighters, produces online materials in a number of languages, and uses recruiters to contact potential recruits over the Internet. In Canada, two girls from Montreal left the country to join ISIS in Syria after exploring ISIS on social media and eventually being recruited. On Twitter, there is an app called the Dawn of Glad Tidings that users can download to keep up to date on news about ISIS. Hundreds of users around the world have signed up for the app, which once downloaded posts tweets and hashtags from accounts that support ISIS. As ISIS marched on the northern region of Iraq, tweets in support of its efforts reached a high of 40,000 a day. Support for ISIS online is a factor in the radicalization of youth, although mass media has yet to adopt the view that social media plays a vital role in the radicalization of people. When tweets supportive of ISIS make their way onto Twitter, they result in around 72 retweets of the original, which further spreads the message of ISIS. These tweets have also made their way to the account known as Active Hashtags, which further helps broadcast ISIS's message, as the account sends its followers the most popular hashtags of the day.
Other militant groups such as al-Qaeda and the Taliban are increasingly using social media to raise funds and to recruit and radicalize people, and these efforts have become increasingly effective.

Weaponization by state actors
Social media platforms have been weaponized by state-sponsored cyber groups to attack governments in the United States, the European Union, and the Middle East. Although phishing attacks via email are the most commonly used tactic to breach government networks, phishing attacks on social media rose 500% in 2016. As with email-based phishing attacks, the majority of phishing attacks on social media are financially motivated cyber crimes that install malware. However, cyber groups associated with Russia, Iran, and China have used social media to conduct cyberattacks and undermine democratic processes in the West. During the 2017 French presidential election, for example, Facebook detected and removed fake accounts linked to the Russian cyber group Fancy Bear, which was posing as "friends of friends" of Emmanuel Macron associates to steal information from them. Cyber groups associated with Iran, China, and Russia have used LinkedIn to steal trade secrets, gain access to critical infrastructure, or recruit spies. These social engineering attacks can be multi-platform, with threat actors initiating contact on one platform but continuing communication on a more private channel. The Iranian-backed cyber group COBALT GYPSY created a fake persona across multiple social media platforms and initiated contact on LinkedIn before moving to Facebook and email. In December 2019, a chat and video calling application developed by the United Arab Emirates, called ToTok, was identified as a spying tool by US intelligence. Suspicion over the Emirati app emerged because the UAE bans the use of VoIP calling on applications like WhatsApp, FaceTime and Skype.

See also
After Truth: Disinformation and the Cost of Fake News
Influence of mass media#Political importance of mass media and how mass media influence political decisions
Mass media and American politics
Political communication#Role of social media
Politico-media complex
Propaganda through media
Russian interference in the 2016 United States elections
Timeline of Russian interference in the 2016 United States elections
Timeline of Russian interference in the 2016 United States elections (July 2016–election day)
Social media in the 2016 United States presidential election

References
SCO Group
SCO, The SCO Group, and The TSG Group are the various names of an American software company in existence from 2002 to 2012 that became known for owning Unix operating system assets that had belonged to the Santa Cruz Operation (original SCO), including the UnixWare and OpenServer technologies, and then, under CEO Darl McBride, pursuing a series of high-profile legal battles known as the SCO-Linux controversies. The SCO Group began in 2002 with a renaming of Caldera International, accompanied by McBride becoming CEO and a major change in business strategy and direction. The SCO brand was re-emphasized and new releases of UnixWare and OpenServer came out. The company also attempted some initiatives in the e-commerce space with the SCOBiz and SCOx programs. In 2003, the SCO Group claimed that the increasingly popular free Linux operating system contained substantial amounts of Unix code that IBM had improperly put there. The SCOsource division was created to monetize the company's intellectual property by selling Unix license rights to use Linux. The SCO v. IBM lawsuit was filed, asking for billion-dollar damages and setting off one of the top technology battles in the history of the industry. By a year later, four additional lawsuits had been filed involving the company. Reaction to SCO's actions from the free and open source software community was intensely negative and the general IT industry was not enamored of the actions either. SCO soon became, as Businessweek headlined, "The Most Hated Company In Tech". SCO Group stock rose rapidly during 2003, but then SCOsource revenue became erratic and the stock began a long fall. Despite the industry's attention to the lawsuits, SCO continued to maintain a product focus as well, putting out a major new release of OpenServer that incorporated the UnixWare kernel inside it. SCO also made a major push in the burgeoning smartphones space, launching the Me Inc. platform for mobility services. But despite these actions, the company steadily lost money and shrank in size. In 2007, SCO suffered a major adverse ruling in the SCO v. Novell case that rejected SCO's claim of ownership of Unix-related copyrights and undermined much of the rest of its legal position. The company filed for Chapter 11 bankruptcy protection soon after and attempted to continue operations. Its mobility and Unix software assets were sold off in 2011, to McBride and UnXis respectively. Renamed to The TSG Group, the company converted to Chapter 7 bankruptcy in 2012. A portion of the SCO v. IBM case continued on until 2021, when a settlement was reached for a tiny fraction of what The SCO Group had initially sued for.

Initial history

Background
The Santa Cruz Operation had been an American software company, founded in 1979 in Santa Cruz, California, that found success during the 1980s and 1990s selling Unix-based operating system products for Intel x86-based server systems. SCO built a large community of value-added resellers that eventually became 15,000 strong and many of its sales of its SCO OpenServer product to small and medium-sized businesses went through those resellers. In 1995, SCO bought the System V Release 4 and UnixWare business from Novell (which had two years earlier acquired the AT&T-offshoot Unix System Laboratories) to improve its technology base. But beginning in the late 1990s, SCO faced increasingly severe competitive pressure, on one side from Microsoft's Windows NT and its successors and on the other side from the free and open source Linux.
In 2001, the Santa Cruz Operation sold its rights to Unix and its SCO OpenServer and UnixWare products to Caldera International. Caldera, based in Lindon, Utah, had been in the business of selling its OpenLinux product but had never been profitable. It attempted to make a combined business out of Linux and Unix but failed to make headway and had suffered continuing financial difficulties. By June 2002 its stock was facing a second delisting notice from NASDAQ and the company had less than four months' cash for operations. As Wired magazine later wrote, the company "faced a nearly hopeless situation." On June 27, 2002, Caldera International had a change in management, with Darl McBride, formerly an executive with Novell, FranklinCovey, and several start-ups, taking over as CEO from Caldera co-founder Ransom Love.

Back to a SCO name; SCOBiz and SCOx
Change under McBride happened quickly. On August 26, 2002, he announced at the company's annual Forum conference in Las Vegas that Caldera International was changing its name to The SCO Group. He did this via a multimedia display in which an image of Caldera was shattered and replaced by The SCO Group's logo, which was a slightly more stylized version of the old Santa Cruz Operation logo. The attendees at the conference, most of whom were veteran SCO partners and resellers, responded to the announcement with enthusiastic applause. McBride announced, "SCO is back from the dead," and a story in The Register began, "SCO lives again." As part of this, the company adopted SCOX as its trading symbol. (The final legal aspects of the name change did not become complete until May 2003.) The change back to a SCO-based name reflected recognition of the reality that almost all of the company's revenue was coming from Unix, not Linux, products. For instance, McDonald's had recently expanded its usage of OpenServer from 4,000 to 10,000 stores; indeed, both OpenServer and UnixWare were strong in the replicated sites business. Furthermore, the SCO brand was better known than the Caldera one, especially in Europe, and SCO's large, existing reseller and partner channel was resistant to switching to Caldera's product priorities. McBride emphasized that the OpenServer product was still selling: "What is it with the OpenServer phenomenon? We can't kill it. One customer last month bought $4 million in OpenServer licenses. The customers want to give us money for it. Why don't we just sell it?" As a historical comparison for his strategy of building back up the brand and being more responsive to customers, McBride used a model of the revival of the Harley-Davidson brand in the 1980s. Besides McBride, other company executives, including new senior vice president of technology Opinder Bawa, were heavily involved in the change of direction. The product name Caldera OpenLinux became "SCO Linux powered by UnitedLinux" and all other Caldera branded names were changed as well. In particular, the longstanding UnixWare name – which Caldera had changed to Open UNIX – was restored, such that what had been called Open UNIX 8 was now named in proper sequence as UnixWare 7.1.2. Announcements were made that a new OpenServer release, 5.0.7, and a new UnixWare release, 7.1.3, would appear at the end of the year or beginning of the next. Moreover, through a new program called SCO Update, more frequent updates of capabilities were promised beyond that.
Caldera's Volution Messaging Server product was retained and renamed SCOoffice Server, but the other Caldera Volution products were split off under the names Volution Technologies, Center 7, and finally Vintela.

Software releases and e-commerce initiatives
In addition to reviving SCO's longtime operating system products, The SCO Group also announced a new venture, SCOBiz. This was a collaboration with the Bellingham, Washington-based firm Vista.com, founded in 1999 by John Wall, in which SCO partners could sell Vista.com's online, web-based e-commerce development and hosting service targeted at small and medium-sized businesses. More importantly, as part of SCOBiz, the two companies would develop a SOAP- and XML-based web services interface to enable Vista.com e-commerce front-ends to communicate with existing back-end SCO-based applications. Industry analysts were somewhat skeptical of the chances for SCOBiz succeeding, as the market was already crowded with application service provider offerings and the dot-com bubble had already burst by that point. Finally, SCO announced a new program for partners, called SCOx, the key feature of which was that it included a buy-out option that would allow SCOx solution providers to sell their businesses back to SCO. McBride said that the program gave partners a chance at "living the American dream." The company's financial hole was emphasized when it released its results for the fiscal year ending October 31, 2002 – it lost $25 million on revenues of $64 million. First there was a Linux release. Caldera International had been one of the founders of the United Linux initiative, along with SuSE, Conectiva, and Turbolinux, and the now-named SCO Linux 4 came out in November 2002, in conjunction with each of the other vendors releasing their versions of the United Linux 1.0 base. The SCO product was targeted towards the small-to-medium business market, whereas the SuSE product was aimed at the enterprise segment and the other two were intended mostly for South American and Asian markets. The common United Linux base (which mostly came from a SuSE code origin), and the promise of common certification across all four products, did attract some support from hardware and software vendors such as IBM, HP, Computer Associates, and SAP. An assessment of SCO Linux 4 in eWeek found that it was a capable product, although the reviewer felt that its Webmin configuration tool was inferior to SuSE's YaST. In terms of service and support, SCO pledged to field a set of escalation engineers that would only be handling SCO Linux issues. The new Unix operating system releases then came out. UnixWare 7.1.3 was released in December 2002, featuring improved Java support, an included Apache Web Server, and improvements to the previously developed Linux Kernel Personality (LKP) for running Linux applications. In particular, the SCO Group stated that due to superior multiprocessor performance and reliability, Linux applications could run better on UnixWare via LKP than they could on native Linux itself, a stance that dated back to Santa Cruz Operation/Caldera International days. One review, which found UnixWare 7.1.3 lacking in a number of respects, called LKP "the most impressive of UnixWare's capabilities."
SCO OpenServer 5.0.7 was released in February 2003; the release emphasized enhanced hardware support, including new graphic, network and HBA device drivers, support for USB 2.0, improved and updated UDI support, and support for several new Intel and Intel-compatible processors. The SCOx software framework was announced in April 2003; its aim was to enable SCO's developer and reseller community to connect web services and web-based presentation layers to the over 4,000 different applications that ran small and midsize businesses and branch offices. The web services aspect of SCOx included bundled SOAP/XML support for the Java, C, C++, PHP, and Perl languages. A primary target of the SCOx framework was SCOBiz e-commerce integration, although other uses were possible as well. The planned SCOx architecture overall was composed of layers for e-business services, web services, SSL-based security, a mySCO reseller portal, hosting services, and a software development kit. But by then, these software releases and e-commerce initiatives had become overshadowed.

In the courts

A focus on intellectual property
As soon as McBride became the head of Caldera International, he became interested in what intellectual property the company possessed. He had been a manager at Novell in 1993 when Novell had bought Unix System Laboratories, and all of its Unix assets, including copyrights, trademarks, and licensing contracts, for $335 million. Novell had subsequently sold its Unix business to the Santa Cruz Operation, which had then sold it to Caldera. So in 2002, McBride said he had thought: "In theory, there should be some value to that property – somewhere between a million and a billion [dollars], right? I just wanted to know what real, tangible intellectual property value the company held." Shortly before the name change to SCO, Caldera went through its existing license agreements, found some that were not being collected upon, and came to arrangements with those licensees representing some $600,000 in annual revenue. In particular, from the start of his time as CEO, McBride had considered the possibility of claiming ownership of some of the code within Linux. Outgoing Caldera CEO Ransom Love had told him, "Don't do it. You don't want to take on the entire Linux community." During the August 2002 name change announcement, Bawa stated, "We own the source to UNIX; it's that simple. If we own the source, we are entitled to collect the agreed license fees." But at the time, McBride said he had no intention of taking on Linux. By October 2002, McBride had created an internal organization "to formalize the licensing of our intellectual property"; this effort was provisionally called SCO Tech. Senior vice president Chris Sontag was put in charge of it. By the end of 2002, McBride and SCO had sought out the services of David Boies of the law firm Boies, Schiller and Flexner as part of an effort to litigate against what it saw as unrightful use of its intellectual property. Boies had gained fame in the industry for leading the U.S. federal government's successful prosecution of Microsoft in United States v. Microsoft Corp.; as McBride subsequently said, "We went for the biggest gun we could find." (Boies' record in other cases was mixed, however, including a high-visibility loss in the 2000 Bush v. Gore Florida election dispute.)
News of the SCO Group's intent to take action regarding Linux first broke on January 10, 2003, in a column by technology reporter Maureen O'Gara of Linuxgram that appeared in Client Server News and Linux Business Week. She wrote that a draft press release concerning SCO's plans had been in the works for several weeks and had been quietly circulated to other companies in the industry. The O'Gara report, unconfirmed as it was, caused some amount of consternation in the Linux community. On January 22, 2003, creation of the SCOsource division of the company, to manage the licensing of the company's Unix-related intellectual property, was officially announced, as was the hiring of Boies to investigate and oversee legal protection of that property. As the Wall Street Journal reported, Linux users had generally assumed that Linux was created independently of Unix proprietary code, and Linux advocates were immediately concerned that SCO was going to ask large companies using Linux to pay SCO licensing fees to avoid a lawsuit. The first announced license program within SCOsource was called SCO System V for Linux, which was a set of shared libraries intended to allow SCO Unix programs to be run legally on Linux without a user needing to license all of SCO OpenServer or UnixWare as had theretofore been necessary. The company continued to lose money, on revenues of $13.5 million in the first fiscal quarter of 2003, but McBride was enthusiastic about the prospects for the new SCOsource division, telling investors on a February 26 earnings call that he expected it to bring in $10 million alone in the second fiscal quarter. Lawsuits begin On March 6, 2003, SCO filed suit against IBM, claiming that the computer giant had misappropriated trade secrets by transferring portions of its Unix-based AIX operating system into Linux, and asked for at least $1 billion in damages. (The amount was subsequently raised to $3 billion, and later still to $5 billion. The suit initially coincided with SCO's existing relationship with IBM to sell UnixWare on IBM Netfinity systems.) The complaint also alleged breach of contract and tortious interference by IBM against the Santa Cruz Operation for its part in the failed Project Monterey of the late 1990s. Overall, SCO maintained that Linux could not have caught up to "Unix performance standards for complete enterprise functionality" so quickly without coordination by a large company, and that this coordination could have happened through the taking of "methods or concepts" even if not a single line of Unix code appeared within Linux. The SCO v. IBM case was underway; it would come to be considered one of the top technology battles of all time. Many industry analysts were not impressed by the lawsuit, with one saying, "It's a fairly end-of-life move for the stockholders and managers of that company. ... This is a way of salvaging value out of the SCO franchise they can't get by winning in the marketplace." Other analysts pointed to the deep legal resources IBM had for any protracted fight in the courts, but McBride professed to be nonplussed: "If it takes a couple of years, we're geared to do that." For his part, Boies said he liked David versus Goliath struggles, and his firm would see a substantial gain out of any victory. In mid-May 2003, SCO sent a letter to some 1,500 companies, cautioning them that using Linux could put them in legal jeopardy. 
As part of this, SCO proclaimed that Linux contained substantial amounts of Unix System V source code, stating that, as such, "We believe that Linux is, in material part, an unauthorized derivative of Unix." As CNET wrote, the move "dramatically broaden[ed]" the scope of the company's legal actions. At the same time, SCO announced it would stop selling its own SCO Linux product. A casualty of this stance was SCO's participation in the United Linux effort, and in turn United Linux itself. While the formal announcement that United Linux had ended did not come until January 2004, in reality the project stopped doing any tangible work soon after SCO filed its lawsuit against IBM. A few days later, Microsoft – which had long expressed disdain for Linux – said it was acquiring a Unix license from SCO, in order to ensure interoperability with its own products and to ward off any questions about rights. The action was a boon to SCO, which to this point had received little support in the industry for its licensing initiative. Another major computer company, Sun Microsystems, bought an additional level of Unix licensing from SCO to add to what it had originally obtained a decade earlier. On May 28, 2003, Novell counterattacked, saying its sale of the Unix business to the Santa Cruz Operation back in 1995 did not include the Unix software copyrights, and thus that the SCO Group's legal position was empty. Jack Messman, the CEO of Novell, accused SCO of attempting to extort Linux users and distributors. Unix has a complex corporate history, with the SCO Group a number of steps removed from the Bell Labs origins of the operating system. Novell and the SCO Group quickly fell into a vocal dispute that revolved around the interpretation of the 1995 asset-transfer agreement between them. That agreement had been uncertain enough at the time that an amendment to it had to be signed in October 1996, and even that amendment left enough ambiguity that an extended battle between the two companies was not precluded. In July 2003, SCO began offering UnixWare licenses for commercial Linux users, stating that "SCO will hold harmless commercial Linux customers that purchase a UnixWare license against any past copyright violations, and for any future use of Linux in a run-only, binary format." The server-based licenses were priced at $699 per machine and, if they were to become mandatory for Linux users, would represent a tremendous source of revenue for SCO. The potential for this happening was certainly beneficial to SCO's stock price, which during one three-week span in May 2003 tripled in value. Another counterattack came in August 2003, when Red Hat, Inc. v. SCO Group, Inc. was filed by the largest of the Linux distribution companies. The SCO Group received a major boost in October 2003 when BayStar Capital, a technology-focused venture capital firm, made a $50 million private placement investment in SCO, to be used towards the company's legal costs and general product development efforts. In December 2003, SCO sent letters to 1,000 Linux customers that in essence accused them of making illegal use of SCO's intellectual property. Novell continued to insist that it owned the copyrights to Unix. While Novell no longer had a commercial interest in Unix technology itself, it did want to clear the way for Linux, having recently purchased SuSE Linux, the second largest of the commercial Linux distributions at the time.
On January 20, 2004, the SCO Group filed a slander of title suit against Novell, alleging that Novell had exhibited bad faith in denying SCO's intellectual property rights to Unix and UnixWare and that Novell had made false statements in an effort to persuade companies and organizations not to do business with SCO. The SCO v. Novell court case was underway. Lawsuits against two Linux end users, SCO Group, Inc. v. DaimlerChrysler Corp. and SCO v. AutoZone, were filed on March 3, 2004. The first alleged that DaimlerChrysler had violated the terms of the Unix software agreement it had with SCO, while the second claimed that AutoZone was running versions of Linux that contained unlicensed source code from SCO. As a strategy this move was met with criticism; as Computerworld later sarcastically wrote, "Faced with a skeptical customer base, SCO did what any good business would do to get new customers: sue them for money." In any case, the stage was set for the next several years' worth of court filings, depositions, hearings, interim rulings, and so on. Vultus acquisition and a change in SCOx The SCOsource division got off to a quick start, bringing in $8.8 million during the company's second fiscal quarter, which led to the SCO Group turning a profit for the first time in its history going back to the Caldera days. In July 2003, the SCO Group announced it had acquired Vultus Inc. for an unspecified price. Vultus was a start-up company, also based in Lindon, Utah, and the Lindon-based Canopy Group was a major investor in Vultus just as it was in the SCO Group. Vultus made the WebFace Solution Suite, a web-based application development environment with a set of browser-based user interface elements that provided richer UI functionality without the need for Java applets or other plug-ins. Indeed, in putting together WebFace, Vultus was a pioneer in AJAX techniques before that term was even coined. The acquisition of Vultus resulted in a shift of emphasis in the company's web services initiative, with an announcement being made in August 2003 at SCO Forum that SCOx would now be a web services-based Application Substrate, featuring a combination of tools and APIs from Vultus's WebFace suite and from Ericom Software's Host Publisher development framework. A year later, in September 2004, this idea materialized when the SCOx Web Services Substrate (WSS) was released for UnixWare 7.1.4. Its aim was to give existing SCO customers a way to "webify" their applications via Ericom's tool and then make the functionality of those applications available via web services. However, as McBride later conceded, the SCOx WSS failed to gain an audience, and it was largely gone from company mention a year later. Views on infringement claims In the keynote address at its SCO Forum conference in August 2003, held at the MGM Grand Las Vegas, the SCO Group made an expansive defense of its legal actions. Framed by licensed-from-MGM James Bond music and film clips, McBride portrayed SCO as a valiant warrior for the continuance of proprietary software, saying they were in "a huge raging battle around the globe", that the GNU General Public License that Linux was based on was "about destroying value", and that, like Bond, they would be thrown into many battles but come out the victor in the end. Linux advocates had repeatedly asked SCO to enumerate and show the specific areas of code in Linux that SCO thought were infringing on Unix.
An analyst for IDC said that if SCO were more forthcoming on the details, "the whole discussion might take a different tone." However, SCO was reluctant to show any such code in public, preferring to keep it secret – a strategy that was commonly adopted in intellectual property litigation. During Forum, however, SCO did publicly show several alleged examples of illegal copying of copyrighted code in Linux. Until that time, these examples had only been available to people who signed a non-disclosure agreement, which had prohibited them from revealing the information shown to them. SCO claimed the infringements were divided into four separate categories: literal copying, obfuscation, derivative works, and non-literal transfers. The example used by SCO to demonstrate literal copying became known as the atemalloc example. While the name of the original contributor was not revealed by SCO, quick analysis of the code in question pointed to SGI. At this time it was also revealed that the code had already been removed from the Linux kernel, because it duplicated already existing functions. By early 2004, the small amount of evidence that had been presented publicly was viewed as inconclusive by lawyers and software professionals who were not partisan to either side. As Businessweek wrote, "While there are similarities between some code that SCO claims it owns and material in Linux, it's not clear to software experts that there's a violation." The legal considerations involved were complex, and revolved around subtleties such as how the notion of derivative works should be applied. Furthermore, Novell's argument that it had never transferred copyrights to the Santa Cruz Operation placed a cloud over the SCO Group's legal campaign. Most, but not all, industry observers felt that SCO was unlikely to win. InfoWorld drily noted that Las Vegas bookmakers were not giving odds on the battle, but the three analysts it polled gave odds of 6-to-4 against SCO, 200-to-1 against SCO, and 6-to-4 for SCO. In any case, while Linux customers may not have been happy about the concerns and threats that the SCO Group was raising, it was unclear whether that was slowing their adoption of Linux; some business media reports indicated that it was, or that it might, while others indicated that it was not.
Open source advocate Bruce Perens said of SCO, "They don't care who or what they hurt." Industry analyst and open source advocate Gordon Haff said that SCO had thrown a dirty bomb into the Linux user community. Many Linux enthusiasts approached the issue with a moralistic fervor. By August 2003, McBride said that pickets had been seen at SCO offices. McBride tended to compare Linux to Napster in the music world, a comparison that could be understood by people outside the technology industry. The assault on open source produced intense feelings in people; Ralph Yarro, chairman of SCO and head of the Canopy Group, and the person characterized by some as the mastermind behind SCO v. IBM, reported that back in his home area in Utah, "I have had friends, good friends, tell me they can't believe what we're doing." Internet message boards such as Slashdot saw many outraged postings. The Yahoo! Finance discussion boards, a popular site at the time for investors, were full of messages urging others to sell SCO stock. SCO suffered a distributed denial-of-service attack against its website in early May 2003, the first of several times the website would be shut down by hackers. One that began in late January 2004 became the most prolonged, when a denial-of-service attack coming out of the Mydoom computer worm prevented access to the sco.com domain for over a month. The general IT industry was not pleased with what SCO was doing either. The September 22, 2003 issue of InfoWorld had a dual-orientation cover that, if read right side up, had a thumbs-up picture with the text "If SCO Loses", and if read upside down, had a thumbs-down picture with the text "If SCO Wins". By February of the following year, Businessweek was headlining that the SCO Group was "The Most Hated Company In Tech". A similar characterization was made by the Robert X. Cringely-bylined column in InfoWorld, which in March 2004 called SCO "the Most Despised Technology Company". The cover of a May 2004 issue of Fortune magazine had a photograph of McBride accompanied by the large text "Corporate Enemy No. 1". SCO's actions in suing Linux end users were especially responsible for some of the corporate distaste towards it. The company that had previously held that "most hated" title, Microsoft, had by February 2004 spent a reported $12 million on Unix licenses from SCO. The industry giant said the licenses were taken out as part of normal intellectual property compliance for its Windows Services for UNIX product, which provided a Unix compatibility environment for higher-end Windows systems. Linux advocates, however, saw the move as Microsoft looking for a way to fund SCO's lawsuits in an attempt to damage Linux, a view that was shared by others in the industry such as Oracle Corporation's Larry Ellison. Indeed, Linux advocates had seen Microsoft's hand in the SCO Group's actions from almost the beginning; as Bruce Perens wrote in May 2003: "Who really benefits from this mess? Microsoft, whose involvement in getting a defeated Unix company to take on the missionary work of spreading FUD ... about Linux is finally coming to light." The open source community's antipathy towards Microsoft only increased when it became apparent that Microsoft had played at least some role in introducing the SCO Group to BayStar Capital as a potential investment vehicle (both BayStar and Microsoft said there was no stronger role by Microsoft than that). The distaste for SCO's actions seeped into evaluations of SCO's product line and technical initiatives as well.
Software Development Times acknowledged at one point that "many writers in the tech media, which has a pro-open-source, pro-Linux bias, are subtly or overtly hostile to SCO." As an instance, in July 2003 a columnist for Computerworld examined the SCO Group acquisition of Vultus and concluded that the purpose was not to acquire its technology or staff but rather that Canopy was playing "a shell game ... to move its companies around" in order to exploit and cash in on the SCO Group's rising stock price. As an analyst for RedMonk stated, "Regardless of the technology they have, there are a lot of enterprises that are going to be ticked off with them. Some of them are receiving these letters (demanding license fees for Linux). There's a perception among companies we've spoken to that SCO is really out to get acquired or to make their money off of licensing schemes rather than technologies. That's an obstacle to adoption of their products." This kind of attitude was exemplified by an apologetic review of UnixWare 7.1.3 in OSNews in December 2003 that acknowledged that SCO had "earned their now nefarious reputation of pure evil" but that "SCO does actually sell a product" and that the reviewer had to assess it objectively. Another group of people who found the actions of the SCO Group distasteful were some of those familiar with the Santa Cruz Operation, including those who had worked there and those who had written about it; they became protective of that earlier company's reputation, especially given the possible name confusion regarding the two. In an eWeek column entitled "SCO: When Bad Things Happen to Good Brands", technology journalist David Coursey wrote that "SCO was a good company with a good reputation. In some ways, SCO was Linux before Linux, popularizing Unix on low-cost Intel machines. ... It's a good brand name that deserves better, or at least a decent burial and a wake. But instead, its memory is being trashed by people who don't and maybe can't appreciate the fondness many of us still have for the old Santa Cruz Operation." Science fiction author Charles Stross, who had worked as a tech writer in the original SCO's office in England in the early-mid-1990s, called the SCO Group "the brain-eating zombie of the UNIX world" that had done little more than "play merry hell with the Linux community and take a copious metaphorical shit all over my resumé." More simply, former original SCO employee turned journalist and publisher Sara Isenberg, in writing about the history of tech companies in the Santa Cruz area, wrote about The SCO Group, "I'll spare you the sordid legal details, but by then, it was no longer our SCO." To be sure, not all former original-SCO employees necessarily felt that way. The company still had developers and other staff at the original Santa Cruz location, as well as at the Murray Hill, New Jersey office that dated back not just to the original SCO but to Novell and Unix System Laboratories and AT&T before that. There was also a development office in Delhi, India, as well as regional offices that in many cases came from original SCO. And in 2006, Santa Cruz Operation co-founder Doug Michels made a return to the SCO Forum stage, with McBride presenting him an award for lifetime achievement. A major factor in the SCO–Linux battle was the Groklaw website and its author, paralegal Pamela Jones. 
The site explained in depth the legal principles and procedures that would be involved in the different court cases – giving technology-oriented readers a level of understanding of legal matters they would otherwise not have – and pulled together in an easily browsed form a massive number of official court documents and filings. Additionally, some Groklaw readers attended the court hearings in person and posted their detailed observations afterward. Accompanying these valuable data points on Groklaw was an interpretative commentary, from both Jones and her readers, that was relentlessly pro-open source and anti-SCO, to the point where journalist Andrew Orlowski of The Register pointed out that Groklaw sometimes suffered badly from an online echo chamber effect. In any case, such was Groklaw's influence that SCO made thinly veiled accusations that Jones was, in fact, working at the behest of IBM, something that she categorically denied. The personification of the SCO–Linux battle was no doubt McBride, who was viewed by many as a villain. Columnist Maureen O'Gara, generally seen as at least somewhat sympathetic to SCO's position, characterized McBride as "the most hated man in the computer industry". McBride acknowledged, "I know people want us to go away, but we are not going to go away. We're going to see this through." The Sunday New York Times business section's "Executive Life" feature ran a self-profile of McBride in February 2004, in which he reflected upon his no-nonsense father raising him on a ranch and the difficulties of being a Mormon missionary in Japan and later a Novell executive there, and concluded, "I am absolutely driven by people saying I can't do something." McBride received death threats serious enough to warrant extra security during his public appearances. Asked in May 2004 to reflect upon what the preceding year had been like, McBride said "This is like ... nothing ... nothing compares to what's happened in the last year." Financial aspects SCO's legal campaign coincided with the best financial results it would have, when in fiscal 2003 it had revenues of $79 million and a profit of $3.4 million. The campaign was also initially very beneficial to its stock price. The stock had been under $1.50 in December 2002 and reached a high of $22.29 during mid-October 2003. In some cases jumps in the price occurred when stock analysts initiated coverage of the stock and gave optimistic price targets for it. But the stock began a downward slide soon after that, and by the end of 2003 about a quarter of all outstanding shares were controlled by short sellers. SCOsource revenue was erratic, with the first half of fiscal 2004 being especially poor. The SCO Group had 340 employees worldwide when the lawsuits were first underway in 2003. A year later, this count had fallen somewhat, to 305 employees. During 2004, SCO and BayStar had a falling out, in part due to the investment firm being unhappy with SCO's constant presence in the headlines and the passionate arguments it was involved in with open source advocates, and in part due to the ongoing expenses of running a struggling software products business. Both BayStar and Royal Bank of Canada, which had been part of the initial placement, exited the investment by mid-year. Nevertheless, by the calculation of the Deseret News, SCO had gained a net $37 million out of the arrangement. Legal actions were a large expense, costing the SCO Group several million dollars each quarter and hurting financial results.
For its third quarter of fiscal 2004, for instance, the company reported revenue of $11.2 million and a loss of $7.4 million, of which $7.2 million was legal expenses. To that point, the company had spent a total of some $15 million on such costs. Accordingly, in August 2004, SCO renegotiated its deal with its lawyers to put into place a cap on legal expenses at $31 million, in return for which Boies, Schiller & Flexner would receive a larger share of any eventual settlement. McBride continued to come up with new ideas; at the 2004 Forum show he talked about the SCO Marketplace Initiative, which would set up an online exchange where developers could bid on work-for-hire jobs for SCO Unix enhancements that were otherwise not on the SCO product roadmap. Besides helping SCO out, this would set up an alternative to the open source model, one where programmers could "develop-for-fee" rather than "develop-for-free". McBride ultimately envisioned it becoming "an online distribution engine for business applications from a wide variety of companies and solution providers." The SCO Marketplace began operation a couple of months later, with jobs posted including the writing of device drivers. The stock slide continued, and by September 2004 had fallen below the $4 level. The company had some 230 employees worldwide at that point. During the latter portion of 2004, the California office of the company moved out of Santa Cruz proper, as its longtime 400 Encinal Street office building was mostly empty. The thirty employees still remaining took new space on Scotts Valley Drive in nearby Scotts Valley, California. By early 2005, the SCO Group was in definite financial trouble. Its court case against IBM did not seem to be going well. Results for the full fiscal 2004 year were bad: revenues dropped by 46 percent compared to the year prior, falling to around $43 million, and there was a loss on that of over $28 million. The company had to restate three of its quarterly earnings statements due to accounting mistakes and was at risk of being delisted by NASDAQ. During the previous year it had laid off around 100 people, constituting a third of its workforce, and by August 2005 the headcount had fallen to under 200. The company became independent of The Canopy Group in March 2005, after the settlement of a lawsuit between the Noorda family and Yarro. As part of the settlement, Canopy transferred all of its shares to Yarro. Meanwhile, products Company emphasis While there was an industry impression that the SCO Group was far more focused on lawsuits than on bringing forward new and improved products, throughout this period the large majority of SCO employees were not involved with the legal battle but rather were working on software products. This was a point that McBride never hesitated to point out, for instance saying in August 2005 that the company was spending "98 percent of our resources" on new product development, and only two percent on the active cases in court with AutoZone, IBM, and Novell. The idea of the SCO Group becoming a lawsuits-only company had been proposed by BayStar but it was not something McBride wanted to do. Indeed, McBride expressed at least public optimism that the company could survive on its Unix and other product business even if it lost the court cases.
Nevertheless, there were significant challenges in the product space, as operating system revenue had been falling. SCO still had a market presence in some of its traditional strongholds, such as pharmacy chains and fast-food restaurants. But to some extent, the reliability and stability of products such as OpenServer (and the applications they were typically used for) worked against SCO, as customers did not feel an urgent need to upgrade. UnixWare 7.1.4 was released in June 2004, with major new features including additional hardware support, improved security, and the abovementioned SCOx web services components. A review in Network World found that the operating system showed strength in terms of server performance and support for Apache and related open source components, but suffered in terms of hardware discovery and ease of installation. The Linux Kernel Personality (LKP), which had earlier been a major selling point of UnixWare 7, was now removed from the product due to the ongoing legal complications. But UnixWare 7.1.4 did come with the OpenServer Kernel Personality (OKP), which allowed OpenServer-built binary applications to run on the more powerful UnixWare platform without modification, and which had earlier been released as an add-on to UnixWare 7.1.3. SCO announced a Unix roadmap along with the UnixWare release, intending to convince the market that it was making a strong push in software products. Among the items discussed were Smallfoot, a toolkit for developing customized, small-footprint versions of UnixWare for use as an embedded operating system, and an upgrade to the SCOoffice mail and messaging product. But a constant concern was that SCO had difficulty in attracting independent software vendors to support its operating system platform. Perhaps the biggest such hurdle was the lack of support for current versions of the Oracle Database product. Of the problem in general, a manager at a longtime SCO replicated-site customer, Shoppers Drug Mart in Canada, which was migrating to UnixWare 7.1.4 and was otherwise happy with the product's reliability and performance, said: "[Big ISVs] are pushing SCO down to a tier-three vendor. We need a tier-one or a tier-two vendor that will do current ports and certification. We listen to vendors and watch their roadmaps and when SCO disappears that will be a signal [to move on]." The new SCOoffice release, SCOoffice Server 4.1 for OpenServer 5.0.7, came out in August 2004. SCOoffice consisted of a mixture of proprietary code and open source components and was marketed as a drop-in alternative to Microsoft Exchange Server for small-to-medium businesses, one that would be compatible with Microsoft Outlook (and other common mail clients) but would be less expensive in total cost, be built upon a more reliable operating system, and have a management interface that could be used by non-technical administrators. Some of the specific technology in the product for interacting with Outlook functions came from Bynari. A review of the SCOoffice technology in PCQuest in 2002 found its ease of installation and feature set to be good, calling it "a decent package for companies looking for a mail server solution." When originally built by Caldera International, the messaging product had been based on Linux (and UnixWare via LKP), but following the SCO Group's legal actions against Linux it was changed to be based on OpenServer instead, with some disruption to the components that could be included within it.
The 4.1 release also contained office collaboration tools for meetings, contacts, and the like. SCOoffice remained a consistent part of the SCO Group's product line; at least one, and usually more than one, breakout session about it was held at every Forum conference during the SCO Group era. "Legend" By 2005, more than 60 percent of SCO's revenue was still coming from its OpenServer product line and associated support services. This was despite the fact that there had been no major releases of the product in the time since the Santa Cruz Operation and Caldera Systems had merged in 2000. Accordingly, the SCO Group devoted a large effort, consisting of extensive research and development as well as associated product management activities, to producing the more modern OpenServer Release 6, code-named "Legend". After a couple of slips from announced target dates, it was made generally available in June 2005. The key idea behind Legend was to transplant the UnixWare SVR5 kernel underneath OpenServer while keeping everything else about the OpenServer environment intact. This gave OpenServer 6 the ability to support 1TB file sizes, the lack of which had become a major limitation of OpenServer 5. In addition, OpenServer 6 could support up to 32 processors and up to 64GB of RAM, had various new security capabilities such as SSH, an IPFilter-based firewall, and IPsec for secure VPNs, and had faster throughput for applications that could make use of true multithreading. The launch event was held on June 22, 2005, at Yankee Stadium in New York City. (This prompted a few industry publication headlines of the "SCO Goes To Bat With OpenServer 6" variety.) Hewlett-Packard noted its support for OpenServer 6 on its ProLiant systems. Some SCO partners were quoted as saying they intended to migrate to it. While some analysts, such as those for IDS and Quandt Analytics, expressed the belief that the release could help SCO upgrade and hold onto its existing customer base, an analyst for Illuminata Inc. was not so optimistic, saying, "In a word, no. Looked at in isolation, there's a lot to like about the new OpenServer. It adds a lot of new capabilities and it finally largely merges the OpenServer and UnixWare trees. But OpenServer is in wild decline – the victim of Windows, Linux and years of SCO mismanagement. Today's SCO is a pariah of the IT industry ... OpenServer is a niche product; SCO needs a miracle." In practice, despite the good reviews it got from a technical perspective, sales of OpenServer 6 were modest. The company continued to do poorly financially, with fiscal 2005 producing revenues of $36 million and a loss on that of almost $11 million, while fiscal 2006 saw revenues of $29 million together with a loss of over $16 million. Reductions in staff continued and the Scotts Valley office was shut down in late 2006. Mobility and Me Inc. The SCO Group's biggest initiative to find a new software business came with what it called Me Inc., first announced at a DEMO conference in California in September 2005. Me Inc. sought to capitalize on the emergence of smartphones in that it would provide both mobile apps that would run on the phones and an architecture involving a network "edge processor" that would offload processing and storage from the phones themselves and handle authentication, session management, and aggregation of data requests. In such an approach, Me Inc. represented a hosted software as a service (SaaS) offering, with the edge processor anticipating what would later be referred to as edge computing and mobile backend as a service.
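The edge processor concept described above amounts to what would now be called a mobile back end: the phone authenticates once, and the edge tier fans out to back-end services and returns a single aggregated response. The Python sketch below illustrates that pattern in a generic way; the function names, token scheme, and back-end stubs are hypothetical illustrations, not SCO's actual EdgeClick interfaces.

```python
import json
import secrets
import time

# Hypothetical stand-ins for back-end services an edge processor would front.
def fetch_messages(user_id: str) -> list[str]:
    return [f"shout for {user_id}"]

def fetch_tasks(user_id: str) -> list[str]:
    return [f"open task for {user_id}"]

SESSIONS: dict[str, dict] = {}   # token -> session record (in-memory for the sketch)

def authenticate(user_id: str, device_key: str) -> str:
    """Issue a short-lived session token after checking the device key (stubbed here)."""
    if not device_key:                       # stand-in for a real credential check
        raise PermissionError("unknown device")
    token = secrets.token_hex(16)
    SESSIONS[token] = {"user": user_id, "expires": time.time() + 900}
    return token

def handle_sync(token: str) -> str:
    """Aggregate several back-end calls into one compact payload for the phone."""
    session = SESSIONS.get(token)
    if session is None or session["expires"] < time.time():
        raise PermissionError("expired or invalid session")
    user = session["user"]
    payload = {
        "messages": fetch_messages(user),    # one round trip instead of several
        "tasks": fetch_tasks(user),
    }
    return json.dumps(payload)

if __name__ == "__main__":
    tok = authenticate("demo-user", device_key="dev-123")
    print(handle_sync(tok))
```

A real deployment would sit behind an HTTP server with a persistent session store; the point here is only the shape of the authenticate-then-aggregate flow that such an edge tier was meant to provide for resource-constrained phones.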
Some of the engineering effort behind Me Inc. came from former Vultus staff, following the failure of the prior SCOx efforts to find a market. Me Inc. initially targeted the Palm Treo line of smartphones. Support was subsequently put in place for the Windows Mobile line of smartphones and some others. The first services from Me Inc. were Shout, in which users could broadcast text or voice messages from a phone to large groups; Vote, in which users could post surveys to large groups and quickly receive a tally back; and Action, in which users could post tasks for others to do and monitor their statuses. An early user of the Shout service was Utah State University, which used it for broadcasting messages to members of its sports booster organization. Me Inc. services were subsequently used by other Utah organizations and figures as well, including the Utah Jazz, the BYU Cougars, and Provo Mayor Lewis Billings. In February 2006, SCO announced that the edge processor had the product name EdgeClick. The development environment for it was branded the EdgeBuilder SDK. In addition, a website, EdgeClickPark.com, was announced, which would act as an Internet ecosystem for developing and selling mobile applications and services. As SCO marketing executive Tim Negris said, the idea of EdgeClickPark was to provide a mechanism for "individuals and organizations of all kinds to participate in developing, selling and using digital services." Many of these services would come not from SCO itself but from SCO partners, resellers, and ISVs, a channel it was familiar with from the original SCO era. This was reminiscent of McBride's goal for the pre-lawsuits SCOBiz and the post-lawsuits SCO Marketplace Initiative, and McBride had similarly large ambitions for Me Inc. and EdgeClickPark, envisioning EdgeClickPark having the same role for mobile software that iTunes had at the time for digital music. McBride, who had been looking at various new business opportunities for SCO to enter, saw the company's mobility initiative as something that could become a big success in both the business and consumer spaces, saying "We don't know for sure, but we have a little bit of a spark in our eyes that this will be a big deal." The SCO Group's chief technology officer, Sandy Gupta, stated that for the company, "this is clearly a big switch in paradigm." Industry analysts thought that Me Inc. was aimed at something there was clearly a large market for. As one said, "The operating system market is an increasingly difficult place to compete. SCO Group really does need more diversity [and] these recent pushes represent significant diversification of their product portfolio." Software Development Times commended SCO for coming up with the EdgeClickPark idea, saying that it showed an "interesting flair" in providing a place for partnerships and business development. The company also proposed customized mobile applications for various businesses and organizations, using the Me Inc. platform as a starting point. However, the SCO Group faced somewhat long odds in these efforts, in part because it was up against many kinds of competition in the mobile space, and in part because of the negative feelings about SCO that its campaign against Linux had engendered. Nevertheless, it was all viewed as a positive development; as Software Development Times summarized in a subheading, "Strategy shift to mobile seen as better 'than suing people'".
SCO's mobility initiative was a main theme of the 2006 instance of its SCO Forum conference, held at The Mirage in Las Vegas. McBride said, "Today is the coming out party for Me Inc. Over the next few years, we want to be a leading provider of mobile application software to the marketplace. ... This is a seminal moment for us." The Forum 2006 schedule, subtitled "Mobility Everywhere", included some nineteen different breakout and training sessions related to Me Inc. and EdgeClick, compared to twenty-six sessions for operating system-related topics. Eager to drum up interest in the EdgeClick infrastructure and to get developers to attend the conference, McBride offered a prize to the developer of the best application built from the EdgeBuilder SDK: a 507-horsepower, V10-engined BMW M5 sports sedan. One new mobility offering, HipCheck, which allowed the remote monitoring and administration of business-critical servers from smartphones, was given its debut announcement and demonstration at Forum. The HipCheck service, which gave system administrators the ability to conduct secure actions from their phone to correct some kinds of server anomalies or respond to user requests such as resetting passwords, was officially made available in October 2006, with support for monitoring agents running on various versions of Windows and Unix systems. Several upgrades to HipCheck were subsequently made available. Developed by SCO for FranklinCovey, a Utah-based company that had a line of paper-based planning and organizational products, FCmobilelife was an app for handling personal and organizational task and goal management. (In 2006, SCO had been building a similar app for Day-Timer named DT4, but that collaboration fell through.) In particular, the FCmobilelife app emulated FranklinCovey's methodologies for planning and productivity. Initial versions were released for the Windows Mobile and BlackBerry phones; an app for the iPhone was released in mid-2009. In October 2008, during SCO Tec Forum 2008, the last Forum ever held, the SCO Mobile Server platform was announced, which was a bundling of the EdgeClick server-side functionality and the Me Inc. client development kit on top of a UnixWare 7 or OpenServer 6 system. By then UnixWare itself, the company's flagship product, had not seen a new release in some four years. In the end, despite the company's efforts, the mobile services offerings did not attract much attention or revenue in the marketplace. Life in bankruptcy An adverse ruling On August 10, 2007, SCO suffered a major adverse ruling in the SCO v. Novell case that rejected SCO's claim of ownership of Unix-related copyrights and undermined much of the rest of its overall legal position. Judge Dale A. Kimball of the United States District Court for the District of Utah issued a 102-page summary judgment which found that Novell, not the SCO Group, was the owner of the Unix copyrights; that Novell could force SCO to drop its copyrights-based claims against IBM; and most immediately from a financial perspective, that SCO owed Novell 95 percent of the revenues generated by the licensing of Unix to companies such as Microsoft and Sun. The only SCO claims left intact by Kimball's judgment were ones against IBM related to contractual provisions from Project Monterey. As the Utah Valley-based Daily Herald newspaper subsequently wrote, Kimball's ruling was "a massive legal setback" for SCO. An appeal was filed.
Meanwhile, the company had few options left, as it had not been doing well anyway – by mid-2007, SCO Group stock had fallen to around $1.56 in value – and it now potentially owed Novell more money than it could pay. On September 14, 2007, the SCO Group filed a voluntary petition for reorganization under Chapter 11 of the United States Bankruptcy Code. Development work continued on both the operating system and mobility fronts, but selling a technology product while in bankruptcy was challenging. From this point on, many of SCO's actions were dependent upon the approval of the United States Bankruptcy Court for the District of Delaware. Annual results for fiscal 2007 showed yet another decline for the company, with revenues falling to $22 million and a loss of nearly $7 million. Because of the bankruptcy filing, SCO was delisted from NASDAQ on December 27, 2007. Downsizing continued, and the New Jersey development office was moved to smaller space in Florham Park, New Jersey in late 2008. Potential buyers The interest of Stephen Norris Capital Partners in the SCO Group started in February 2008, when it put forward a $100 million reorganization and debt financing plan for the company, which it would then take private. Stephen L. Norris had been a co-founder of the large and well-known private equity firm The Carlyle Group. There was also an unnamed Middle East partner in the proposed deal; the Associated Press reported that Prince Al-Waleed bin Talal of Saudi Arabia was involved. But after a couple of months of due diligence investigation of SCO's operations, finances, and legal situation, Stephen Norris Capital Partners considered a different course of action, instead proposing to purchase SCO assets outright. Norris appeared on stage at Forum in October 2008, where possible acquisition and investment plans were shown to attendees. The company continued to have declining financial performance; the yearly results for fiscal 2008 showed revenues falling to $16 million and a loss of $8.7 million. In January 2009, the SCO Group asked the bankruptcy court to approve a plan wherein its Unix and mobility assets would be put up for public auction. That plan did not materialize, and instead in June 2009 a new proposal emerged from a combination of Gulf Capital Partners, of which Stephen Norris was an investor, and MerchantBridge, a London-based, Middle East-focused private equity group, to create an entity called UnXis, which would then buy SCO's software business assets for $2.4 million. At that point the SCO Group had fewer than 70 employees left. This latest plan, too, did not move forward. Virtualization The SCO Group's last significant engineering effort revolved around capitalizing on a resurgence of industry interest in hardware virtualization. In this case, such virtualization allowed SCO operating systems to run on newer, more powerful hardware even if SCO did not have support or certification for that hardware, and also allowed SCO customers to take advantage of server consolidation and other benefits of a virtual environment. The initial such release, SCO OpenServer 5.0.7V, came out in August 2009, with support for running on VMware ESX/ESXi hypervisors. The technical changes involved included adding enhanced virtual drivers for storage, networking, and peripherals to the operating system as well as tuning its memory management strategies for the virtual environment.
The virtualization push also included a change in SCO's licensing infrastructure, under which licensing would now be done on an annual subscription basis. The company said it would make similar 'V' releases for UnixWare 7.1.4 and OpenServer 6 in the future, but no such releases took place during the SCO Group's lifetime. However, support for the Microsoft Hyper-V hypervisor was added to OpenServer 5.0.7V in early 2010. Trustee and trial On August 25, 2009, Edward N. Cahn, a former United States District Judge of the United States District Court for the Eastern District of Pennsylvania and a counsel for the law firm of Blank Rome, was appointed Chapter 11 trustee for The SCO Group. In October 2009, a restructuring requested by trustee Cahn led to the termination of McBride and the elimination of the CEO position; the existing COO, Jeff Hunsaker, became the top executive in the company. Perhaps the kindest industry press assessment of McBride's tenure came in a column from Steven J. Vaughan-Nichols in Computerworld, who wrote, "You have to give McBride credit. While I dislike SCO, he did an amazing job of fighting a hopeless battle. It's a pity he was working so hard and so well for such a fundamentally wrong cause." SCO had appealed the August 2007 summary judgment against it in SCO v. Novell, and eventually an appeals court ruled that a trial had to be held on the issue. A three-week trial was held in March 2010, at the conclusion of which the jury reached a unanimous verdict that Novell did not transfer the Unix copyrights to the Santa Cruz Operation in 1995. The decision spelled the end for the large majority of the SCO Group's legal offensive, leaving only contractual claims against IBM to possibly still pursue. Sale of assets In April 2010, SCO's mobility software assets were sold to former CEO McBride for $100,000. In September 2010 the SCO Group finally put up the remainder of its non-lawsuit assets for public auction. Thus in February 2011, another proposal was made, this time for $600,000, with this iteration of a purchasing company being backed by Norris, MerchantBridge, and Gerson Global Advisors. The bankruptcy court approved this proposal, as the only other bid submitted was for $18. The sale was closed on April 11, 2011, with Stephen Norris Capital Partners and MerchantBridge being the final buyers, and UnXis was formed. In particular, UnXis took over the product names, ownership, and maintenance of The SCO Group's flagship operating system products, OpenServer and UnixWare. It also took over some service contracts for existing SCO Group customers; these customers represented some 82 countries and business segments such as finance, retail, fast food, and governmental entities. It would be up to UnXis to hire SCO Group employees, of whom, after years of layoffs and attrition, only handfuls were still left at various locations (for instance, at the Lindon, Utah site, only 7 or 8 people still worked, compared with 115 as recently as February 2008). The SCO Group's litigation rights against IBM and Novell did not transfer, as UnXis said it had no involvement or interest in such activities. What was left of The SCO Group renamed itself to The TSG Group. Aftermath The TSG Group The TSG Group did not have employees per se; any at the Utah site not hired by UnXis were let go. The jury trial verdict was appealed, but in August 2011 the U.S. 10th Circuit Court of Appeals upheld the verdict and the judge's orders following it, thus bringing SCO v. Novell to a final end.
However, in November 2011 the bankruptcy trustee decided to go on with the surviving contractual claims against IBM, saying that "the Novell ruling does not impact the viability of the estate's claims against IBM." The SCO v. IBM case had previously been closed pending the result of the SCO v. Novell case. Nonetheless, there was no actual business being conducted by the TSG Group, and in August 2012 it filed to convert its status from Chapter 11 reorganization to Chapter 7 liquidation, stating that "there is no reasonable chance of 'rehabilitation'". In June 2013, a judge granted the motion of the bankruptcy trustee and reopened consideration of SCO v. IBM. The revived case moved slowly, with a ruling in 2016 going in IBM's favor but one in 2017 favoring the continuation of the SCO claims. Industry publications greeted these developments with headlines of the "What is dead may never die" variety. UnXis changed its name to Xinuos in 2013, and despite SCO v. IBM having been reopened in the courts, reiterated that it had no interest in litigation. Instead Xinuos focused on continuing support for UnixWare and OpenServer customers as well as releasing OpenServer 10, a FreeBSD-based product that legacy customers could migrate to. McBride turned his purchase of SCO's mobility assets into a company called Shout TV Inc., which was founded in late 2011 and provided social media engagement for sports fans during live events by offering trivia games and prize contests. By 2015, Shout TV had experienced some success, especially in partnership with the Spanish football club Real Madrid. The assets of Shout TV were transferred to a company known as MMA Global Inc. in 2018. Final conclusion of lawsuits In August 2021, word came of a possible final settlement in the SCO v. IBM case, wherein documents filed in the case indicated that the bankruptcy trustee for the TSG Group and IBM appeared to be on the verge of settling the outstanding, Project Monterey-based claims in the matter for $14.25 million. While the amount was far less than the SCO Group had originally sought when it began the lawsuits, the trustee recommended accepting the settlement, because "ultimate success of the Trustee's claims against IBM is uncertain", because pursuing the matter further would be expensive, and because "the Settlement Agreement provides an immediate and substantial monetary recovery and creates important liquidity for the benefit of all creditors and claimants." As part of this, the trustee would give up any future related claims against IBM. The matter lay with the U.S. Bankruptcy Court for the District of Delaware, which had been handling the case all along. On November 8, 2021, the settlement was finalized under those terms, with IBM paying the TSG bankruptcy trustee $14.25 million, the trustee giving up all future claims, and each party paying its own legal costs. After 18½ years, SCO v. IBM was finally over. As it happened, another suit against IBM was by then active, from Xinuos, which earlier in 2021 had reversed direction from its past disavowals of interest in litigation and had filed suit against both IBM and Red Hat, re-alleging old SCO claims about IBM and Project Monterey and alleging new claims that IBM and Red Hat had cornered the operating system market for cloud computing. Unlike the SCO–Linux battles, however, in this case few people in the industry paid the Xinuos action much attention. In any case, the story of The SCO Group was complete.
Products SCO UnixWare, a Unix operating system. UnixWare 2.x and below were direct descendants of Unix System V Release 4.2 and were originally developed by AT&T, Univel, Novell, and later the Santa Cruz Operation. UnixWare 7 was sold as a Unix OS combining UnixWare 2 and OpenServer 5 and was based on System V Release 5. SCO OpenServer, another Unix operating system, which was originally developed by the Santa Cruz Operation. SCO OpenServer 5 was a descendant of SCO UNIX, which was in turn a descendant of XENIX. OpenServer 6 is, in fact, an OpenServer compatibility environment running on a modern SVR5-based Unix kernel. Smallfoot, an operating system and GUI created specifically for point of sale applications. SCOBiz, a web-based e-commerce development and hosting site with web services-based integration to existing legacy applications. SCOx Web Services Substrate, a web services-based framework for modernizing legacy applications. WebFace, a development environment for rich-UI browser-based Internet applications. SCOoffice Server, an e-mail and collaboration solution, based on a mixture of open-source and closed-source software. SCO Marketplace Initiative, an online exchange offering pay-per-project development opportunities. Me Inc., a mobile services platform with services including Shout, HipCheck, and FCmobilelife. List of SCO lawsuits SCO v. IBM (The SCO Group, Inc. vs. International Business Machines, Inc., case number 2:03cv0294, United States District Court for the District of Utah) Red Hat v. SCO SCO v. Novell SCO v. AutoZone SCO v. DaimlerChrysler See also SCO Forum References External links The SCO Group, Inc. (archived web site caldera.com from 2002-09-14 to 2004-09-01 and sco.com from 2001-05-08) Groklaw: News and Commentary about SCO lawsuits and Other Related Legal Information SCOX Bankruptcy information and documents Financial information for The SCO Group (SCOXQ) Yahoo! — The SCO Group, Inc. company profile, archive reference History of The SCO Group at Encyclopedia.com, 2006 Caldera (company) Defunct software companies of the United States Defunct companies based in Utah SCO–Linux disputes Software companies established in 2002 2002 establishments in Utah Software companies disestablished in 2011 2012 disestablishments in Utah Companies that have filed for Chapter 7 bankruptcy Companies that filed for Chapter 11 bankruptcy in 2007
https://en.wikipedia.org/wiki/Computerchemist
Computerchemist
computerchemist is the ongoing solo project of Dave Pearson, a British-Hungarian musician who lives in Székesfehérvár, Hungary. Biography Pearson's love of electronic music started when, as a teenager, he first heard Tangerine Dream's Cloudburst Flight. During the early 1980s he played synthesiser in a number of rock bands, including the Lichfield, UK-based Monteagle with founder Mark Thwaite, while simultaneously composing his own solo music. By 2003 he had sold all of his synthesisers and had moved over almost completely to a virtual instrument environment using Cubase SX3, using only the guitar as a "real" instrument. Since 2006 he has issued a number of albums on his own label "Terrainflight". His music has been likened to "Berliner Schule/Berlin School style jazz-rock". Bruce Gall of ARFM's "Sunday Synth" has remarked on the crossover style of his playing, invoking comparisons to electronic artists Tangerine Dream, Jean-Michel Jarre, Klaus Schulze, Kraftwerk, and the progressive sounds of Pink Floyd and David Gilmour's solo work, Ash Ra Tempel, Mike Oldfield, Steve Hackett, Brian Eno and King Crimson. In April 2008, SynGate started producing re-issues of the first two albums, atmospheric (2006) and icon one (2007); however, the agreement was short-lived and SynGate ceased production of the re-issues in June 2009. The original terrainflight editions are still available. Guest musician Robin Hayes played cello on the third release, landform (2008), the first time a guest musician had appeared on a computerchemist solo album. Uwe Cremer, otherwise known as Level Pi, was the guest musician on the fourth album, aqual measure (2009), and played guitar on the title track. A collaboration project started in July 2010 between Computerchemist and "Nemesis", an ex-Hawkwind dancer and singer, produced several tracks, some of which were initially released under Creative Commons licensing as free downloads on the computerchemist website, with a full album, entitled Chronicles of Future Present, promised at a later date. As of August 2011, the link to this project is no longer active. The single and first track, Sky Turned Black, a rework of Mirage from Aqual Measure, was however released in October 2012 on a charity compilation entitled Space Rock: The Compilation, issued on Bandcamp by the "Sound for Good" label. Computerchemist was featured through the month of August 2010 on WDIY's Galactic Travels show with Bill Fox, as the "Special Focus", where his albums were played in succession, one each week. A different approach was taken for his fifth album, Music for Earthquakes (2011), involving the conversion of seismograph readings into musical form. It was inspired by the magnitude 4.8 earthquake in Hungary in early 2011, and was featured on the Hungarian national news channel Hir24 and the Hungarian English-language news site pestiside.hu shortly afterwards. His sixth release, the double album Signatures I. and Signatures II., launched in January 2013 as his second collaboration, features the Hungarian drummer Zsolt Galántai, formerly of the Hungarian metal band Ossian. A single track, entitled The Pink Beams of Light, He Said, was specially composed for the "Sound for Good" charity release The Human Condition – Dedications to Philip K. Dick, published in March 2013. The collaboration project "Audio Cologne Project", again with Uwe Cremer and with Zsolt Galántai featuring on drums, was released in summer 2013.
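The Music for Earthquakes approach mentioned above – converting seismograph readings into musical form – is an instance of data sonification. The Python sketch below shows one generic way such a mapping could work, scaling ground-motion amplitudes onto a pentatonic pitch grid; it is an illustrative example only, not a reconstruction of Pearson's actual method, and the sample readings are invented.

```python
# Map a series of seismograph amplitude readings onto a pentatonic scale of MIDI notes.
PENTATONIC = [0, 2, 4, 7, 9]          # scale degrees (semitone offsets within an octave)
BASE_MIDI_NOTE = 48                   # C3

def amplitude_to_midi(sample: float, lo: float, hi: float, octaves: int = 2) -> int:
    """Scale one reading into a MIDI note drawn from the pentatonic grid."""
    span = max(hi - lo, 1e-9)                        # avoid division by zero for flat data
    position = (sample - lo) / span                  # normalized to 0.0 .. 1.0
    steps = len(PENTATONIC) * octaves
    index = min(int(position * steps), steps - 1)
    octave, degree = divmod(index, len(PENTATONIC))
    return BASE_MIDI_NOTE + 12 * octave + PENTATONIC[degree]

def sonify(samples: list[float]) -> list[int]:
    """Turn a whole trace into a note sequence; louder motion becomes higher pitch."""
    lo, hi = min(samples), max(samples)
    return [amplitude_to_midi(s, lo, hi) for s in samples]

if __name__ == "__main__":
    readings = [0.01, 0.03, 0.20, 0.85, 0.40, 0.12, 0.05]   # invented ground-motion values
    print(sonify(readings))                                  # a rising-then-falling melodic contour
```

The resulting note numbers could then be rendered by any sequencer or software instrument; other sonification choices (mapping to rhythm, dynamics, or timbre instead of pitch) are equally possible.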
Since 2013, Pearson has been producing several one-off tracks for compilations and collaborations, notably with artists such as Cousin Silas and Altocirrus. In 2018, he compiled a series of six CD albums (for which he appeared or produced tracks in various guises on each album) for the radio Spreaker show Aquarian Moons. In 2019, he produced his first analogue-only synth album for over twenty years, Volcan Dreams, using exclusively Korg Volca analogue sound modules and external guitar effects. Zsolt Galántai guested on drums for one track, and Chris Gill from Band Of Rain played guitar on the opening track Volcan Plain. In January 2020, That Which Prevails was released, followed shortly afterwards by three retrospective album compilations Origins I-III which were compiled from surviving tapes from Pearson's output between 1980 and 2003. Pearson's first motion picture soundtrack for the feature film The Fort was released as an album in January 2021. 2021 also introduced a new collaboration partnership with RadioRay, a singer/songwriter who has notably played in Alan Davey's post-Hawkwind band The Psychedelic Warlords. This has led to two albums being released in 2021 and 2022 under the titles Masks and Underneath the Soul. In addition, a further solo album Parallel Thought Experiment was also released in 2021. Discography Studio albums 2006 Atmospheric (terrainflight TF001); (2008, SynGate CD-R 2090 reissue, now withdrawn) 2007 Icon One (terrainflight TF002); (2008, SynGate CD-R 2091 reissue, now withdrawn) 2008 Landform (terrainflight TF003) 2009 Aqual Measure (terrainflight TF004) 2010 The Scheming Machine EP (feat Nemesis) (terrainflight TF005) (no longer available) 2011 Music for Earthquakes (terrainflight TF006) 2013 Signatures I. ft. Zsolt Galántai (terrainflight TF007) 2013 Signatures II. ft. Zsolt Galántai (terrainflight TF008) 2019 Volcan Dreams (terrainflight TF010) 2020 That Which Prevails (terrainflight TF011) 2020 Origins I. 1980-1985, (terrainflight TF012) 2020 Origins II. 1985-1998, (terrainflight TF013) 2020 Origins III. 1981-2003, (terrainflight TF014) 2021 The Fort: Original Motion Picture Soundtrack (terrainflight TF015) 2021 Masks (feat RadioRay) (terrainflight TF016) 2021 Parallel Thought Experiment (terrainflight TF017) 2022 Underneath the Soul (feat RadioRay) (terrainflight TF018) Compilations and collaborations Germany 2009 Schwingungen auf CD No. 164 with "Atmospheric" (Cue Records 164 (Germany)) 2009 Schwingungen auf CD No. 165 with "Timethorns" (Cue Records 165 (Germany)) Germany/Hungary 2013 Audio Cologne Project: 2911. (Album with Uwe Cremer and featuring Zsolt Galántai, Terrainflight TF009) United Kingdom 2010 Musiczeit Sampler CD No. 12 with "Mirage" (Musiczeit Sampler 12 (UK)) 2017 Silas and Friends | vi | part i with "Cousin Silas & computerchemist - Expansion ( For Jaki )" (WAAG waag_rel101, (UK)) 2018 The Emporium Project - 1point2 with "Cousin Silas & Computerchemist - Dronium" (WAAG waag_rel124, (UK)) 2020 Cousin Silas & Friends volume 8 part 2 with "Cousin Silas & computerchemist - Goodbye Home Service" (WAAG waag_rel140, (UK)) United States 2012 Space Rock: The Compilation with "Sky Turned Black" ft. Nemesis (Sound for Good, (US)) 2013 The Human Condition – Dedications to Philip K. Dick with "The Pink Beams of Light, He Said" (Sound for Good, (US)) 2018 Intergalactic Moonloonies Invade Mushroom Hill vol. 1 with "Computerchemist ft. Zsolt Galántai - La Fungo Arpeggiato" (Aquarian Moons, (US)) 2018 Intergalactic Moonloonies Invade Mushroom Hill vol. 
3 with "Altocirrus & Computerchemist - Morcellas Dance" (Aquarian Moons, (US)) 2018 Intergalactic Moonloonies Invade Mushroom Hill vol. 3 playing guitar on "Etherfysh - Low Tide on Mushroom Hill" (Aquarian Moons, (US)) References External links Official computerchemist homepage Computerchemist on Discogs.com Computerchemist on MusicBrainz.org Computerchemist on Bandcamp.com British electronic musicians
42246631
https://en.wikipedia.org/wiki/GaiaEHR
GaiaEHR
GaiaEHR is free and open-source medical practice management and electronic health record software. Technologies The software suite is written as a web application and includes both a server and a client. The server is written in PHP and can be employed in conjunction with a LAMP "stack", though other operating systems are supported as well. The client side does not employ a common web browser, but rather the project's own client software based on the Ext JS JavaScript application framework. GaiaEHR is free and open-source software subject to the terms of the GNU General Public License (GPL). History The GaiaEHR project originally forked from OpenEMR in September 2009 as MitosEHR, but after a few years the project and repository were renamed to GaiaEHR. Active development appears to have stopped in 2016. References Free health care software Electronic health record software Healthcare software for Linux Healthcare software for MacOS Healthcare software for Windows
69229150
https://en.wikipedia.org/wiki/Lucerne%20School%20of%20Computer%20Science%20and%20Information%20Technology
Lucerne School of Computer Science and Information Technology
The Lucerne School of Computer Science and Information Technology (Hochschule Luzern – Informatik) is a professional school for information technology (IT) in Switzerland. Often called simply the School of Information Technology, it is a division of the Lucerne University of Applied Sciences and Arts. The campus is in Rotkreuz in the canton of Zug. History The Lucerne School of Computer Science and Information Technology was formed in 2016 as a separate department of the Lucerne University of Applied Sciences and Arts. It was created by merging the IT department of the School of Engineering and Architecture with the business informatics institute of the School of Business. The campus is in Rotkreuz in the canton of Zug, where a new building, used jointly with the Institute of Financial Services, was opened in 2019. All study programs are offered at this one location. Study programs Research at the School of Information Technology includes studies of constraint satisfaction and discrete optimisation, digital image processing, machine learning, mobile computing, and natural language processing. The school offers bachelor's degrees and master's degrees. The six bachelor study programs are: Artificial Intelligence and Machine Learning, Digital Ideation, Computer Science, Information and Cyber Security, International IT Management, and Business Information Technology. The study program for cyber security was introduced in 2018 as the first such course at a university of applied sciences in Switzerland. Guy Parmelin, who was then the Federal Councillor responsible for defence and security and later served as president of Switzerland, inaugurated the program, which he had promoted. The master's study courses are: Master of Science in Engineering, Master of Science in Business Information Technology, and Master Digital Ideation. A joint master's degree is offered in the field of Specialized Media. Impact In international collaboration, the school works with universities such as Purdue University. It is a partner in international conferences, such as SwissText 2016, a conference for text analytics. In 2017, the school and its division of innovation and technology launched a project aimed at connecting and fostering collaboration between small and medium-sized enterprises (SMEs) and start-ups. In 2018, scientists from the Lucerne School of Information Technology participated in an international research network project to advise politicians of the European Union (EU) regarding blockchain technology. The school is sponsored by large companies in the region, such as Siemens Building Technologies and Roche Diagnostics, as well as smaller companies. The sponsoring companies expect access to qualified personnel and collaboration in education and research. References External links Lucerne School of Information Technology auviso.ch Porträt Hochschule Luzern – Informatik (in German) ausbildung-weiterbildung.ch SUUR / Neubau Hochschule Luzern – Campus Zug – Informatik u. Wirtschaft, Zug (in German) buerokonstrukt.ch Eröffnung Studiengang BSc Information & Cyber Security 2018 (in German) netclose.ch Universities in Switzerland Education in Lucerne
45525749
https://en.wikipedia.org/wiki/Cyber%20Resilience%20Review
Cyber Resilience Review
The Cyber Resilience Review (CRR) is an assessment method developed by the United States Department of Homeland Security (DHS). It is a voluntary examination of operational resilience and cyber security practices offered at no cost by DHS to the operators of critical infrastructure and state, local, tribal, and territorial governments. The CRR takes a service-oriented approach: a foundational principle is that an organization deploys its assets (people, information, technology, and facilities) to support specific operational missions (or services). The CRR is offered in a facilitated workshop format and as a self-assessment package. The workshop version of the CRR is led by a DHS facilitator at a critical infrastructure facility. The workshop typically takes 6–8 hours to complete and draws on a cross section of personnel from the critical infrastructure organization. All information collected in a facilitated CRR is protected from disclosure by the Protected Critical Infrastructure Information Act of 2002. This information cannot be disclosed through a Freedom of Information Act request, used in civil litigation, or used for regulatory purposes. The CRR Self-Assessment Package allows an organization to conduct an assessment without the need for direct DHS assistance. It is available for download from the DHS Critical Infrastructure Cyber Community Voluntary Program website. The package includes an automated answer capture and report generation tool, a facilitation guide, a comprehensive explanation of each question, and a crosswalk of CRR practices to the criteria of the National Institute of Standards and Technology (NIST) Cybersecurity Framework. The questions asked in the CRR and the resulting report are the same in both versions of the assessment. DHS partnered with the CERT Division of the Software Engineering Institute at Carnegie Mellon University to design and deploy the CRR. The goals and practices found in the assessment are derived from the CERT Resilience Management Model (CERT-RMM) Version 1.0. The CRR was introduced in 2009 and received a significant revision in 2014. Architecture The CRR comprises 42 goals and 141 specific practices extracted from the CERT-RMM and organized into 10 domains: Asset Management, Controls Management, Configuration and Change Management, Vulnerability Management, Incident Management, Service Continuity Management, Risk Management, External Dependency Management, Training and Awareness, and Situational Awareness. Each domain is composed of a purpose statement, a set of specific goals and associated practice questions unique to the domain, and a standard set of Maturity Indicator Level (MIL) questions. The MIL questions examine the institutionalization of practices within an organization. The performance of an organization is scored against a MIL scale. This scale depicts capability divided into five levels: MIL1-Incomplete, MIL2-Performed, MIL3-Managed, MIL4-Measured, and MIL5-Defined. Institutionalization means that cybersecurity practices become a deeper, more lasting part of the organization because they are managed and supported in meaningful ways. When cybersecurity practices become more institutionalized, or "embedded," managers can have more confidence in the practices' predictability and reliability. The practices also become more likely to be sustained during times of disruption or stress to the organization.
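The roll-up from individual practice answers to goal and domain results, detailed in the scoring rubric below, can be illustrated with a minimal sketch in Python. The domain name and answer data here are hypothetical, and this is an illustration of the published rubric, not DHS's actual assessment tooling:

def goal_achieved(practice_answers):
    # A goal is achieved only if every practice under it is answered "Yes".
    return all(answer == "Yes" for answer in practice_answers)

def domain_performed(goals):
    # A domain reaches the "performed" state only if all of its goals are achieved;
    # the cumulative MIL questions are then applied on top of that baseline.
    return all(goal_achieved(answers) for answers in goals.values())

# Hypothetical answers ("Yes" / "Incomplete" / "No") for one domain.
asset_management = {
    "Assets are inventoried": ["Yes", "Yes", "Yes"],
    "Assets are prioritized": ["Yes", "Incomplete"],
}
print(domain_performed(asset_management))  # False: one practice is not yet performed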
Maturity can also lead to a tighter alignment between cybersecurity activities and the organization's business drivers. For example, in more mature organizations, managers will provide oversight of the particular domain and evaluate the effectiveness of the security activities the domain comprises. The number of goals and practice questions varies by domain, but the set of MIL questions and the concepts they encompass are the same for all domains. All CRR questions have three possible responses: "Yes," "No," and "Incomplete." The CRR measures the performance of an organization at the practice, goal, domain, and MIL levels. Scores are calculated for each individual model element and as aggregated totals. The scoring rubric establishes the following: practices can be observed in one of three states (performed, incomplete, and not performed); a domain goal is achieved only if all of the practices related to the goal are achieved; and a domain is fully achieved only if all the goals in the domain are achieved. If the above conditions are met, the organization is said to be achieving the domain in a performed state: the practices that define the domain are observable, but no determination can be made about the degree to which these practices are repeatable under varying conditions, consistently applied, able to produce predictable and acceptable outcomes, or retained during times of stress. These conditions are tested for by applying a common set of 13 MIL questions to the domain, but only after MIL1 is achieved. Consistent with the architecture of the MIL scale, MILs are cumulative; to achieve a MIL in a specific domain, an organization must perform all of the practices in that level and in the preceding MILs. For example, an organization must perform all of the domain practices in MIL1 and MIL2 to achieve MIL2 in the domain. Results CRR participants receive a comprehensive report containing results for each question in all domains. The report also provides graphical summaries of the organization's performance at the goal and domain levels, depicted in a heat-map matrix. This detailed representation allows organizations to target improvement at a fine-grained level. Organizations participating in facilitated CRRs receive an additional set of graphs depicting the performance of their organization compared to all prior participants. The CRR report includes a potential path toward improving the performance of each practice. These options for consideration are primarily sourced from the CERT-RMM and NIST special publications. Organizations can also use CRR results to measure their performance in relation to the criteria of the NIST Cybersecurity Framework. This correlation feature was introduced in February 2014. See also Critical infrastructure protection NIST Cybersecurity Framework Cyber Resilience References External links DHS Cyber Resilience Review CERT Resilience Management Model Computer security standards Infrastructure Cyberwarfare
56678574
https://en.wikipedia.org/wiki/Ruiner%20%28video%20game%29
Ruiner (video game)
Ruiner is a cyberpunk shoot 'em up video game developed by Reikon Games and published by Devolver Digital. It was released in 2017 on several platforms. Gameplay Ruiner is a shoot 'em up played from an isometric perspective. The game takes place in 2091 and is set in a cyberpunk metropolis known as Rengkok. The player takes control of a silent, masked protagonist who attempts to rescue their kidnapped brother from a failing conglomerate, known as Heaven, that controls Rengkok. Synopsis Ruiner takes place in 2091 in Rengkok and its surrounding facilities, owned by Heaven, a conglomerate led by a man referred to in-game as the Boss. Initially the player character, a silent protagonist dubbed "Puppy" by another character, is being led by a rogue hacker named Wizard to assassinate the Boss. Before he reaches the Boss's office, the signal from Wizard to the protagonist is overridden by another hacker known only as Her. Her explains to Puppy that Wizard was contracted by another group and that his brother has been kidnapped, and urges him to track Wizard down. She leads him to a territory filled with Creeps, a group of psychotic gangsters led by a swordsman named Nerve. After Puppy fights his way through the Creeps' territory, Nerve challenges him to a duel and is defeated in battle. Having earned the Creeps' leadership, Puppy and Her find Wizard and hack into his brain, killing him in the process and leading them to the Hanza Compound, a factory that manufactures machine parts. Her tracks down a signal leading to Puppy's brother. They are greeted by armed guards and the angry AI that manages the compound, Mother. Puppy fights his way through the Hanza facility and battles mercenaries hired by TrafficKing, a cyborg in the form of a UFO-like device piloted by a human head, who runs the facility alongside Mother and presumably contracted Wizard. As he follows the signal that leads to his brother, Puppy battles Mother and then TrafficKing, hacking his brain to reveal where his brother is being taken: the Imagination Farms, which use human beings as hosts to lend brainpower for running Virtuality, a virtual reality device that Heaven manufactures and sells to civilians. As TrafficKing has clearance for entering these areas, Puppy and Her kidnap him to gain access to the farms. Puppy, led by Her and accompanied by the tamed TrafficKing, scours the Imagination Farms for his brother's signal, and is confronted by Geminus, a dual-personality cyborg that presents itself as twin sisters who act as host to Mother. TrafficKing has been working for Geminus, and it is revealed that they were using Wizard to stage a coup to overthrow the Boss and control Heaven. TrafficKing burns to death after begrudgingly helping Puppy through overheated obstacles. Puppy confronts Geminus but is attacked by the resurrected TrafficKing, now controlling a humanoid cyborg body. Puppy kills him and battles Geminus, killing both sisters and finally finding the mechanical pod that holds his brother. Upon approaching it, a cutscene showing two small children leaving the facility plays, and the game cuts back to Heaven, where Puppy is restrained in a jumpsuit as two guards and the Boss, who wears a mask identical to Puppy's, approach him. The Boss reveals that Puppy was only being used for spare parts due to their biological similarity, and that he was hacked by Wizard because he could potentially go anywhere the Boss could.
Her is an independent hacker contracted by the Boss to hack Puppy and make him believe he had a brother who was kidnapped, thus leading him back to the Boss while also exterminating the people who attempted to overthrow him. The Boss remarks that he will keep Puppy closer to him, when Her makes a reappearance and reveals that she is not human and has been tricking the Boss since their agreement; she has led Puppy to Heaven to kill the Boss. Her advises Puppy to 'meet her where heaven falls' if he survives. He escapes his restraints and kills many of the Boss's guards before finally approaching the Boss himself, and, depending on a dialogue prompt, will either bludgeon him with his weapon or hack his brain. The ending shows Puppy riding away on his motorcycle to an unknown destination. Development and release Ruiner was developed by Polish indie game studio Reikon Games and published by Devolver Digital. The game was released for Linux, PlayStation 4, Microsoft Windows, Xbox One on 26 September 2017. A Nintendo Switch version was released on June 18, 2020. Reception Critical reception towards Ruiner was generally positive. Omri Pettite of PC Gamer stated that "Ruiner is gorgeous, a sensory feast inspired by the works of cyberpunk's 1980s heyday, in which a silent, masked protagonist travels through the nightscapes and industrial jungles of a grit-tech 2091. Underneath, a thumping top-down action game delivers sword-sharp combat, the familiarity of its design offset by the constant urge to simply stand still and drink everything in." In his review for both the game and Cuphead, Ben "Yahtzee" Croshaw of Zero Punctuation said he didn't like Ruiner because it felt like "a cyberpunk ripoff of Hotline Miami with none of what made Hotline Miami interesting [...] Yeah, the fights were hard, but I wasn't getting that all-important sense of payoff; all I felt I was "earning" was more chances to fight boring gang members in murky environments." References External links 2017 video games Cyberpunk video games Devolver Digital games Dystopian video games Linux games MacOS games Multidirectional shooters Nintendo Switch games PlayStation 4 games Shooter video games Single-player video games Video games developed in Poland Video games with isometric graphics Windows games Xbox One games
459212
https://en.wikipedia.org/wiki/FVWM
FVWM
The F Virtual Window Manager is a virtual window manager for the X Window System. Originally a twm derivative, FVWM has evolved into a powerful and highly configurable environment for Unix-like systems. History In 1993, during his work analyzing acoustic signatures for the United States Department of Defense, Robert Nation began hacking twm with the intent of simultaneously reducing memory usage and adding support for virtual desktops. Already known for his rxvt terminal emulator, Nation worked on reducing the memory consumption of his new window manager. Deciding to test FVWM's reception, on June 1, 1993, he bundled it with an rxvt release. In 1994 Rob Nation stopped developing FVWM and made Charles Hines the maintainer. Rob Nation's last release of FVWM was fvwm-1.24r. The post-Rob Nation version of FVWM uses a different configuration file format and has a significantly different architecture. Many Linux distributions, as a result, distributed both fvwm-1.24r and later releases of FVWM as separate programs. fvwm-1.24r still compiles and runs on a modern Linux system without any problems, and a small number of users continue to use the older FVWM release. In late 1998 the office of FVWM maintainer was abolished, and further development has since been conducted by a group of volunteers. Many developers have based their own projects on FVWM in order to benefit from its years of refinement and development. Many of the popular window managers in use today are related to FVWM: Afterstep, Xfce, Enlightenment, Metisse and many more. Name origin Originally, FVWM was the Feeble Virtual Window Manager, as Robert Nation clearly stated in a 1997 Linux Journal interview; he also explained that the name had been chosen because the original releases had almost no user-selectable features, so it really was feeble. However, at some point the meaning of the F was lost. When Google published the old newsgroup archives acquired from DejaNews, the original meaning was re-discovered. However, Chuck Hines, who maintained the official FVWM Frequently Asked Questions, never agreed with the 'feeble' explanation and added alternate possible meanings of the F to the FAQ, with many entries coming from mailing list messages. Features This is a partial list based on the documentation distributed with FVWM. Many of these features can be disabled at runtime or compile time, toggled dynamically for specific windows, or loaded and unloaded as modules, among other possibilities. These are not rigid features: FVWM does not dictate how the user's desktop should work or look, but provides the mechanisms to configure the desktop to work, look and behave the way the user wants it to. Supports any number of virtual desktops, each divided into multiple pages. The viewport (the physical screen) can be moved smoothly (in configurable steps) in the virtual desktop area, independent of pages. The viewport can move automatically when the mouse hits the border of the screen. Full EWMH, ICCCM-2 and GNOME Hints support. Full internationalisation support, including multi-byte characters and bidirectional text. Xft2 font support with anti-aliasing, drop shadows of any size, at any offset and in any direction, text rotation. Any behaviour, action or event is fully configurable. Support of user-defined Window Decoration Styles. Titlebars can be disabled, or rendered on any window edge. This can be done individually for each window type. Titlebars may have up to ten icons including minimize, maximize and close buttons.
Animated Window Shading in all directions. Iconification. Full PNG support, including alpha blending. Perl programming library for extending FVWM using Perl, scripting and pre-processing of configuration files. Can be extended via scripting. Preprocessing allows dynamic configurations. Toolkit to build dialogs, menus and applications at runtime. Configurable desktop panels. Mouse Gestures allow the user to draw shapes with the mouse and bind them to commands. Dynamic menus; utilities to browse the filesystem and fetch headlines from the internet from within menus are included. Session management support. Xinerama extension support to use more than one monitor. Dynamically extensible using modules. Supports focus stealing. Derivatives Notable users Donald Knuth See also Comparison of X window managers FVWM-Crystal, a theme. FVWM95 References External links Official FVWM Web Site. Matt Chapman's Window Managers for X. Announcement of first FVWM release from Rob Nation. #fvwm IRC Channel FAQ. FVWM community forums. FVWM community wiki. FVWM Beginners Guide by Jaimos F Skriletz. Free X window managers Articles containing video clips
1306349
https://en.wikipedia.org/wiki/Todd%20Marinovich
Todd Marinovich
Todd Marvin Marinovich (born Marvin Scott Marinovich on July 4, 1969) is a former American and Canadian football quarterback. He played for the Los Angeles Raiders of the National Football League and also in the Canadian Football League, Arena Football League, and Development Football International. Marinovich is known for the well-documented, intense focus of his training as a young athlete and for his brief professional career, which was cut short primarily because of his addiction to drugs. Early development Marinovich grew up on the Balboa Peninsula of Newport Beach, California. His father, Marv Marinovich, had been a lineman and a captain for the USC Trojans during the 1962 national championship season and played in the 1963 Rose Bowl. Marinovich's mother, Trudi (née Fertig), was a high school swimmer who dropped out of USC to marry Marv. Her brother Craig was a star USC quarterback at this time. After harming his own National Football League lineman career by overtraining and focusing too much on weight and bulk, Marv studied Eastern Bloc training methods and was hired by Oakland Raiders owner Al Davis as the NFL's first strength-and-conditioning coach. Marv later opened his own athletic research center and applied the techniques to his young son, introducing athletic training before Marinovich could leave the crib and continuing it throughout his childhood and adolescence. Marv saw an opportunity to use techniques, focusing on speed and flexibility, that later formed the basis for modern core training. During her pregnancy, Trudi used no salt, sugar, alcohol, or tobacco; as a baby, Todd was fed only fresh vegetables, fruits, and raw milk. Marv Marinovich commented, "Some guys think the most important thing in life is their jobs, the stock market, whatever. To me, it was my kids. The question I asked myself was, How well could a kid develop if you provided him with the perfect environment?" High school career Marinovich had a very successful high school career, becoming the first freshman to start a varsity high school football game in Orange County. He began his career at Mater Dei High School, a large Catholic high school in Santa Ana and the alma mater of quarterbacks such as Matt Barkley and Heisman Trophy winners Matt Leinart, John Huarte, and Bryce Young. After throwing for nearly 4,400 yards and 34 touchdowns in his two years at Mater Dei, Marinovich transferred to Mission Viejo's Capistrano Valley High School due to his parents' divorce. Once there, Marinovich broke the all-time Orange County passing record and later the national high school record by passing for 9,914 yards, including 2,477 in his senior year. He received numerous honors, including being named a Parade All-American, the National High School Coaches Association's offensive player of the year, the Dial Award for the national high school scholar-athlete of the year in 1987, and the Touchdown Club's national high school player of the year. National attention Marinovich's unique development led to growing media attention. In January 1988, he appeared on the cover of California magazine. "Robo Quarterback" became a nickname for Marinovich in the popular media, a label that persisted long after the circumstances that produced it. In February, Sports Illustrated published an article titled "Bred To Be A Superstar" that discussed his unique upbringing under his father, who wanted to turn his son into the "perfect quarterback".
The article declared Marinovich "America's first test-tube athlete", and discussed how his mother encouraged his interest in art, music, and classical Hollywood cinema while banning cartoons as too violent. His father assembled a team of advisers to tutor him on every facet of the game. Long after Marinovich's professional career had ended, an ESPN columnist named the elder Marinovich one of history's "worst sports fathers". Regardless, the Sports Illustrated article was incorrect about his son's self-control. During high school, he started drinking at after-game parties and smoked marijuana daily. His use of marijuana grew to the point that he would meet with a group of friends (athletes, skaters, surfers, and musicians) every day before school to share a bong in what they nicknamed "Zero Period". Having previously dealt with social anxiety, Marinovich found that marijuana relaxed him and did not affect him later during sporting events. The rumors of his use spread to opposing fans, however, who taunted him with chants of "Marijuana-vich" during basketball games. His parents divorced around the time he transferred high schools, and he lived in a small apartment with his father for his final two high school seasons. Marinovich enjoyed the period, noting: "Probably the best part of my childhood was me and Marv's relationship my junior and senior years. After the divorce, he really loosened up. It was a bachelor pad. We were both dating." Almost every major college program recruited Marinovich, who, as a high school freshman, began getting letters from Stanford. Despite the family connections to USC, he was uncertain whether he fit the program's offense. After a positive visit, however, Marinovich chose the university over recent national champions BYU and Miami, as well as Arizona State, Stanford, and Washington. Marinovich took his college selection seriously, noting: "This is the biggest decision of my life. It means not only where I will play football, but most likely, who I will marry, who my best friends for life will be, and where I will live. It means everything. And the one thing I know for sure is I'm too young to make this kind of decision by myself." College career Marinovich entered USC as a Fine Arts major and redshirted the 1988 season behind Rodney Peete. Already under intense pressure as a high school prospect, he was soon overwhelmed by the combination of high expectations and the many new temptations that had been forbidden under his strict upbringing. He was torn between embracing the freedom and following his father's teachings, noting that "I'm finally away from my dad telling me everything to do. And I've got to say I have taken advantage of it. Full advantage. He keeps telling me, 'Come on, you've got the rest of your life to fool around. Not now.' I know he's right. But there are a lot of distractions at SC." At one point Marinovich left school in his freshman year to see his mother, stating "I wish I could go somewhere else and be someone else. I don't want to be Todd Marinovich." Outside of his personal travails, Marinovich's football career at USC had an abrupt start. As a redshirt freshman in 1989, he was the backup to Pat O'Hara after an unimpressive spring practice; in the fall preseason, however, O'Hara suffered a serious leg injury. Although neither his coaches nor his teammates believed that he was ready, Marinovich became the first freshman quarterback to start the first game of the season for USC since World War II.
After an upset loss to Illinois in part due to a coaching decision to minimize his role, Marinovich improved; he completed 197 of 321 passes during the regular season for 16 touchdowns and 12 interceptions with a 61.4% completion percentage, 0.1% behind Bernie Kosar's NCAA freshman record. Against Washington State, Marinovich led a last-minute comeback that became known as "The Drive". He led the offense on a 91-yard march downfield with 11 crucial completions, including a touchdown pass and a two-point conversion, that prompted former President Ronald Reagan to call Marinovich to invite him to his home in Los Angeles. The Trojans went 9–2–1, won the Pac-10 Conference, and defeated Michigan in the 1990 Rose Bowl. UPI and Sporting News named Marinovich the College Freshman of the Year for 1989; he was the only freshman on the All-Pac-10 team and the first freshman quarterback named. Marinovich entered the 1990 season as a Heisman Trophy candidate, with speculation on his leaving school early for the NFL. Head coach Larry Smith set for Marinovich the goal of a 70% pass completion rate. However, his play became erratic due to his personal difficulties. After finding out Marinovich had been skipping numerous classes Smith suspended him from the Arizona State game, but his play against Arizona had been so poor that he might have been kept out of the game regardless. Smith had a difficult relationship with Marinovich, and the relationship worsened when the quarterback began yelling at the coach on national television during a loss in the Sun Bowl. Marinovich was arrested for cocaine possession a month later, and entered the NFL draft after the season. Professional career NFL At the 1991 NFL Draft the Raiders selected Marinovich in the first round; he was the 24th pick overall and the second quarterback taken—ahead of Brett Favre—signing a three-year, $2.25 million deal. Marinovich made his NFL debut on Monday Night Football, in an exhibition game against the Dallas Cowboys on August 12, 1991. Entering the game with 15 minutes remaining, he moved the Raiders downfield, completing three of four passes for 16 yards and a touchdown. He did not start a game until Jay Schroeder was injured before the final week of the season, where he impressed observers with 23 completions in 40 passes for 243 yards against the Kansas City Chiefs in a close loss. Because of this great debut he started the following week against the Chiefs in the playoffs, but was very poor, throwing for just 140 yards with four interceptions in a 10-6 loss and smashing a locker room mirror with his helmet after the game. After the Raiders began 0-2 in 1992 with Schroeder as quarterback, Marinovich became the starter. He threw for 395 yards in a loss in his first start that season and lost the following week as the Raiders started 0-4. He then won three of his next four games before losing to the Dallas Cowboys. Marinovich's best game during that span was against the Buffalo Bills on October 11, 1992, in which he completed 11 of 21 passes for 188 yards and two touchdowns in a 20-3 victory. The following week Marinovich started against the Philadelphia Eagles, seeing three of his first 10 passes intercepted. Schroeder regained the starting job and Marinovich never played again in the NFL. Marinovich had serious substance abuse issues throughout his NFL career. During his rookie season, he increased his partying and drug use beyond marijuana, including taking pharmaceutical amphetamines before games. 
Because of his college arrest for cocaine possession, the NFL required him to submit to frequent drug tests. Marinovich passed the tests using friends' urine, but after using the urine of a teammate who had been drinking heavily, the test registered a blood alcohol content four times the legal limit and caused the Raiders to force him into rehabilitation. The Raiders held an intervention for him after the season, and Marinovich spent 45 days at a rehab facility. In the 1992 season Marinovich shifted to using LSD after games, because it would not show up on the drug test. His play suffered and his coaches complained he was not grasping the complex offense. He failed his second NFL drug test and went back into rehabilitation. In training camp before the 1993 season, Marinovich failed his third NFL drug test, this time for marijuana, and was suspended for the 1993 season. The Raiders released Marinovich on the final cutdown, choosing not to pay for Marinovich's salary while being suspended in the year before the salary cap would go into effect. In 1994, once Marinovich's suspension was lifted, the Pittsburgh Steelers showed some interest in signing him to be their third-string quarterback behind Neil O'Donnell and Mike Tomczak. Marinovich, not liking the culture of the NFL, chose not to return to the league. The Steelers ended up turning to the NFL Draft instead, drafting Jim Miller. Post-NFL After traveling for two years Marinovich attempted to join the Winnipeg Blue Bombers of the Canadian Football League, but blew out his knee on the first day of training camp. During recovery one of his high school friends introduced him to heroin. Soon after Marinovich was arrested for drug possession and served three months in various jails. In April 1999 Marinovich was cleared to reenter the NFL, but suffered a herniated disk playing recreational basketball. That summer he tried out and received interest from the San Diego Chargers and the Chicago Bears, but failed the physical examination so he signed as a backup quarterback with the BC Lions of the CFL. His use of heroin and cocaine increased and his weight dropped, as he would spend almost all of his free time using drugs. At one point Marinovich severely cut his hand with a crack pipe during halftime and had to covertly bandage himself. Despite being asked to stay with the team for another season, he realized he was in a bad situation and left the team. Marinovich returned to Los Angeles in 2000 and joined the expansion Los Angeles Avengers of the Arena Football League. Despite undergoing severe heroin withdrawal he had a strong season, tying the record for most touchdowns in a single game by throwing 10 touchdowns against the Houston Thunderbears. Marinovich was named to the all-rookie team, and as the Avengers' franchise player, but the day he received his signing bonus he was arrested for buying heroin. Marinovich's career continued to fall apart, as he was ejected from subsequent games for throwing things at referees, and eventually was suspended from the team in 2001. Despite flashes of brilliance, Marinovich's professional career is widely considered to be a bust. In 2004 Marinovich was included in ESPN.com's list of The 25 Biggest Sports Flops, coming in at fourth on the ESPN.com editors' list, and seventh on the readers' list. SoCal Coyotes (semi pro) In 2017, Marinovich agreed to continue his rehabilitation under AAA Hall of Fame head coach J. 
David Miller of the six-time champion and Palm Springs-based SoCal Coyotes of Development Football International (DFI). After a successful spring coaching the Coyotes' quarterbacks and volunteering with local youth throughout the Coachella Valley, Marinovich made his case to play for the team in a quest to become the oldest starting quarterback in semi-pro football. In a press conference, Miller agreed to stand by his aging player's commitment to sobriety, and signed Marinovich on July 3, 2017. As the signing was primarily to help the quarterback rehabilitate, Miller noted Marinovich's comeback had "very little to do with football." To learn the intricacies of Miller's run and shoot offense, Marinovich attended mandatory meetings and workouts with the offense's creator, Mouse Davis. In Palm Springs, he was tutored by Michael Karls, a record-setting quarterback at Midland University and the Coyotes' second all-time leading passer, who agreed to sit in favor of Marinovich despite the age gap. Marinovich also battled with 25-year-old Jacob Russell for the starting job, which the elder quarterback won. On September 3, 2017, wearing the number 12 that he had worn at both USC and with the Raiders, a sober, 48-year-old Marinovich stepped back into pro football after a 17-year lay-off. He completed 19 of 28 passes for 262 yards, seven touchdowns, and two interceptions as the Coyotes won 73–0 against the California Sharks. In a post-game interview, he noted it was his first game while sober since he was 15 years old. However, shoulder pain soon led team physicians to withdraw him from action, and he never played again. After football By 2004, Marinovich was broke and again living on the Balboa Peninsula; when he was arrested in 2004 for skateboarding in a prohibited area, police found methamphetamines and syringes on him. In May 2005, he was charged with violating probation, but avoided jail by entering an inpatient treatment program. For the next year, Marinovich was in and out of rehab facilities. He was again arrested on August 26, 2007, for possession of drugs and resisting arrest. He was offered a suspended sentence in exchange for regular drug testing, therapy, and meetings with a probation officer. Marinovich began working several part-time jobs, including scraping barnacles off boats, leading weekly group meetings at a rehab center, painting murals in residential homes, and working as a private quarterback coach. Marinovich still follows USC football and occasionally attends open practices at USC. He is based in Orange County and has an online art gallery featuring original works of impressionist-style paintings, drawings and sculptures, many with sports-related themes. ESPN released a documentary film about Marinovich titled The Marinovich Project, which was shown after the Heisman presentation for 2011. In 2019, Marinovich served as the quarterbacks coach for the San Diego Strike Force of the Indoor Football League. Legal troubles Marinovich has had a number of arrests, many of which have been related to his ongoing drug problems, including nine arrests in Orange County, California, alone. He was arrested in 1991, while still a student at USC, for cocaine possession. In 1997, Marinovich was arrested on suspicion of growing marijuana; he served two months in jail, and a third at a minimum-security facility in Orange County known as the Farm. In April 2000, he was arrested for sexual assault, followed by a 2001 arrest on suspicion of heroin possession.
In August 2004, he was arrested by Newport Beach police for skateboarding in a prohibited zone. Marinovich was arrested in a public bathroom in Newport Beach, California, in May 2005 after being found with apparent drug paraphernalia; he gave his occupation as "unemployed artist" and "anarchist". As a result, Marinovich was ordered to undergo six months of drug rehabilitation followed by six months of outpatient treatment. In August 2007, Marinovich was arrested and charged with felony drug possession and resisting a police order after being stopped for skateboarding near the Newport Pier boardwalk. On October 30, 2007, he pleaded guilty to felony possession of a small amount of methamphetamine and misdemeanor syringe possession and resisting arrest. Orange County Superior Court Commissioner James Odriozola decided to give Marinovich another chance at rehabilitation and released him to a rehab program in Laguna Beach. During a period of sobriety from 2007 to 2008, Marinovich worked with National Drug & Alcohol Treatment Centers, located in Newport Beach, California, to help young athletes overcome addiction and stay clean. In August 2008, after one year of sobriety, Marinovich was hired as a lecturer by Newport Coast Recovery, a drug and alcohol treatment facility in Newport Beach. On April 4, 2009, he was arrested in Newport Beach after he failed to appear in court for a progress review on his rehabilitation related to his 2007 arrest. He was ordered to be held in jail without bail until his May 4 hearing before the Orange County Superior Court. On August 22, 2016, he was arrested in Irvine, California, after being found naked and in possession of drugs in a neighbor's backyard. Authorities said a naked Marinovich tried to open the sliding glass door of an Irvine home. He was cited for trespassing, possession of a controlled substance, possession of drug paraphernalia and possession of marijuana. Later tests concluded the controlled substance to be methamphetamine. Marinovich was sentenced to 90 days in jail, but could avoid jail time if he completed rehabilitation successfully and stayed out of any legal trouble for 36 months. Family Marinovich was married to Alexandria "Alix" Bambas, and they have a son, Baron, and a daughter, Coski. Marinovich met his now ex-wife while the two were in rehab together and first asked her out on a date there, which even she acknowledged was rather unorthodox. Marinovich has since been romantically linked to Ali Smith, the daughter of former USC coach Larry Smith. Marinovich has a younger half-brother, Mikhail Marinovich, who played college football as a defensive end at Syracuse University. Mikhail enrolled in spring 2008 and made news when he and a friend were arrested for breaking into a gym equipment room after drinking; Todd Marinovich warned him: "Don't be stupid. You're a Marinovich. You have a target on your back." Todd's father, Marv Marinovich, died at the age of 81 in December 2020. References External links AFL stats 1969 births Living people People from San Leandro, California American football quarterbacks Canadian football quarterbacks American players of Canadian football American people of Croatian descent BC Lions players Los Angeles Raiders players Los Angeles Avengers players Sportspeople from Mission Viejo, California Sportspeople from Orange County, California Players of American football from California USC Trojans football players Winnipeg Blue Bombers players
3948917
https://en.wikipedia.org/wiki/Intelligence%20amplification
Intelligence amplification
Intelligence amplification (IA) (also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence) refers to the effective use of information technology in augmenting human intelligence. The idea was first proposed in the 1950s and 1960s by cybernetics and early computer pioneers. IA is sometimes contrasted with AI (artificial intelligence), that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot. AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as it needs technology merely as an extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information processing capabilities of the human mind (see extended mind and distributed cognition). Major contributions William Ross Ashby: Intelligence Amplification The term intelligence amplification (IA) has enjoyed a wide currency since William Ross Ashby wrote of "amplifying intelligence" in his Introduction to Cybernetics (1956). Related ideas were explicitly proposed as an alternative to Artificial Intelligence by Hao Wang from the early days of automatic theorem provers. J. C. R. Licklider: Man-Computer Symbiosis "Man-Computer Symbiosis" is a key speculative paper published in 1960 by psychologist/computer scientist J.C.R. Licklider, which envisions that mutually-interdependent, "living together", tightly-coupled human brains and computing machines would prove to complement each other's strengths to a high degree: In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet). Douglas Engelbart: Augmenting Human Intellect Licklider's research was similar in spirit to his DARPA contemporary and protégé Douglas Engelbart. Both had a view of how computers could be used that was both at odds with the then-prevalent views (which saw them as devices principally useful for computations), and key proponents of the way in which computers are now used (as generic adjuncts to humans). Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. He thus set himself to the revolutionary task of developing computer-based technologies for manipulating information directly, and also to improve individual and group processes for knowledge-work. Engelbart's philosophy and research agenda is most clearly and directly expressed in the 1962 research report: Augmenting Human Intellect: A Conceptual Framework The concept of network augmented intelligence is attributed to Engelbart based on this pioneering work. Engelbart subsequently implemented these concepts in his Augmented Human Intellect Research Center at SRI International, developing essentially an intelligence amplifying system of tools (NLS) and co-evolving organizational methods, in full operational use by the mid-1960s within the lab. As intended, his R&D team experienced increasing degrees of intelligence amplification, as both rigorous users and rapid-prototype developers of the system. 
For a sampling of research results, see their 1968 Mother of All Demos. Later contributions Howard Rheingold worked at Xerox PARC in the 1980s and was introduced to both Bob Taylor and Douglas Engelbart; Rheingold wrote about "mind amplifiers" in his 1985 book, Tools for Thought. In "Skin-Close Computing and Wearable Technology" (2021), Andrews Samraj described human augmentation by two varieties of cyborgs: hard cyborgs and soft cyborgs. A humanoid walking machine is an example of the soft cyborg, and a pacemaker is an example of augmenting a human as a hard cyborg. Arnav Kapur, working at MIT, wrote about human-AI coalescence: how AI can be integrated into the human condition as part of the "human self", as a tertiary layer to the human brain that augments human cognition. He demonstrated this using a peripheral nerve-computer interface, AlterEgo, which enables a human user to silently and internally converse with a personal AI. In 2014 the technology of Artificial Swarm Intelligence was developed to amplify the intelligence of networked human groups using AI algorithms modeled on biological swarms. The technology enables small teams to make predictions, estimations and medical diagnoses at accuracy levels that significantly exceed natural human intelligence. Shan Carter and Michael Nielsen introduced the concept of artificial intelligence augmentation (AIA): the use of AI systems to help develop new methods for intelligence augmentation. They contrast cognitive outsourcing (AI as an oracle, able to solve some large class of problems with better-than-human performance) with cognitive transformation (changing the operations and representations we use to think). A calculator is an example of the former; a spreadsheet, of the latter. In science fiction Augmented intelligence has been a repeating theme in science fiction. A positive view of brain implants used to communicate with a computer as a form of augmented intelligence is seen in Algis Budrys's 1976 novel Michaelmas. Fear that the technology will be misused by the government and military is an early theme. In the 1981 BBC serial The Nightmare Man, the pilot of a high-tech mini submarine is linked to his craft via a brain implant but becomes a savage killer after ripping out the implant. Perhaps the best-known writer exploring themes of intelligence augmentation is William Gibson, in works such as his 1981 story "Johnny Mnemonic", in which the title character has computer-augmented memory, and his 1984 novel Neuromancer, in which computer hackers connect to computer systems through brain-computer interfaces. Vernor Vinge, as discussed earlier, looked at intelligence augmentation as a possible route to the technological singularity, a theme which also appears in his fiction. See also Advanced chess Augmented learning Brain–computer interface Charles Sanders Peirce Collective intelligence Democratic transhumanism Emotiv Systems Extelligence Exocortex Knowledge worker Mechanization Neuroenhancement Noogenesis Sensemaking (information science) The Wisdom of Crowds References Further reading Licklider's biography contains discussion of the importance of this paper. External links Intelligence Amplification using speech synthesis technology IT Conversations: Doug Engelbart - Large-Scale Collective IQ 7 December 1951, Ashby first wrote about the possibility of building an 'information amplifier'. 12 August 1953, Ashby mentioned an objection to his 'intelligence-amplifier'.
History of human–computer interaction Cybernetics Biocybernetics Transhumanism Texts related to the history of the Internet Intelligence
18569641
https://en.wikipedia.org/wiki/Apple%20Worm
Apple Worm
The Apple Worm is a computer program written for the Apple computer, and specifically for its 6502 microprocessor, which performs dynamic self-relocation. The source code of the Apple Worm is the first program printed in its entirety in Scientific American. The Apple Worm was designed and developed by James R. Hauser and William R. Buckley. Other example Apple Worm programs are described in the cover story of the November 1986 issue of Call_A.P.P.L.E. Magazine. Because the Apple Worm performs dynamic self-relocation within the single main memory of one computer, it does not constitute a computer virus, even though "virus" might seem an apt if somewhat inaccurate description. Although the analogous behavior of copying code between memories is exactly the act performed by a computer virus, a virus has other characteristics not present in the worm. Such programs do not necessarily cause collateral damage to the computing systems upon which their instructions execute; there is no reliance upon a vector to ensure subsequent execution. This extends to the computer virus; it need not be destructive in order to effect its communication between computational environments. Programs A typical computer program manipulates data which is external to the corporeal representation of the computer program. In programmer-ese, this means the code and data spaces are kept separate. Programs which manipulate data internal to their corporeal representation, such as data held in the code space, are self-relational; in part at least, their function is to maintain their function. In this sense, a dynamic self-relocator is a self-referential system, as defined by Douglas R. Hofstadter. Other examples The instruction set of the PDP-11 computer includes an instruction for moving data which, when constructed in a particular form, causes itself to be moved from higher addresses to lower addresses; the form includes an automatic decrement of the instruction pointer register. Hence, when this instruction includes autodecrement of the instruction pointer, it behaves as a dynamic self-relocator. A more recent example of a self-relocating program is an adaptation of the Apple Worm for the Intel 80x86 microprocessor and its derivatives, such as the Pentium, and corresponding AMD microprocessors. See also Worm memory test References External links The Apple Worm source code Video of executing Apple Worm program Cover Story: The Contiguous Traveler / Simple Worms Worm
31800111
https://en.wikipedia.org/wiki/Sproxil
Sproxil
Sproxil is an American venture capital-backed for-profit company based in Cambridge, Massachusetts, that provides a consumer product verification service (called Mobile Authentication Service or MAS) to help consumers avoid purchasing counterfeit products. The service was the first Mobile Authentication Service (MAS) to launch in Nigeria. The ISO 27001 and ISO 9001 certified company also offers supply chain protection solutions (including Track and Trace), mobile-based loyalty and marketing programs, and advisory services. Sproxil has operations in Nigeria, Mali, Ghana (serving West Africa), Tanzania, Kenya (serving East Africa), India and Pakistan (Asia). Activity Sproxil's service places a security label with a scratch-off panel on all protected products. Consumers scratch off the panel at the point of purchase to reveal a unique one-time use code. This is a form of mass serialization. The code is sent via SMS or mobile app to a country-specific short code, and the consumer receives a reply almost instantly indicating that the product is genuine or suspicious. The company also has a 24/7 call center to provide support during the verification process and take in anonymous reports of suspicious counterfeiting activity. Sproxil's services are currently used by several pharmaceutical companies in the fight against counterfeit drugs. The fake drug market, according to the World Customs Organization, is estimated to be a $200 billion a year industry. The problem of counterfeit drugs is particularly acute in emerging markets; the World Health Organization estimates 30% of drugs in these markets are fake and may be very harmful to consumers. To date, Sproxil protects products across multiple industries, including pharmaceuticals, agribusiness, automotive parts, and beauty and personal care. Recent activity In 2010, NAFDAC, the Nigerian government agency overseeing food and drugs, endorsed the Sproxil platform, and the service has since been widely deployed throughout Nigeria. In April 2011, CNN published a video discussing the role Sproxil played in the fight against counterfeit drugs in Nigeria. Sproxil, along with Pfizer, Vodafone, and WaterHealth International, has committed to the Business Call to Action (BCtA), a global leadership initiative made up of companies that apply their core business expertise to the achievement of the eight internationally agreed Millennium Development Goals (MDGs). Sproxil CEO Ashifi Gogo was praised by former United States President Bill Clinton, who described Gogo's work as a "genuinely remarkable achievement." In February 2011, Sproxil announced that it had received $1.8 million in funding from Acumen Fund. The funding is being used to help the company expand into India. Johnson & Johnson and GSK (GlaxoSmithKline) are using Sproxil services in Africa, as is Biofem, the Nigerian distributor of Merck KGaA. In June 2011, Sproxil launched operations in India, and in July 2011 Kenya's Pharmacy and Poisons Board (PPB) also adopted similar text message-based anti-counterfeiting systems. In early 2012, Sproxil announced that more than one million people in Africa had checked their medicines using its text message-based verification service. In August 2012, Sproxil and Bharti Airtel, a global telecommunications company, announced a partnership to combat the counterfeit drug market in Africa using MPA. Airtel will offer the service free to its users and will not charge for any SMS-based verification.
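The verification flow described above (a one-time scratch code checked against a central record, with a genuine-or-suspicious reply sent back to the consumer) can be illustrated with a minimal Python sketch; the codes and messages here are hypothetical, and Sproxil's actual implementation is not public:

# Hypothetical one-time scratch-code check; illustrative only.
issued_codes = {"4F7K2M9Q", "8B3XW1ZD"}    # codes printed under scratch-off labels
used_codes = set()

def verify(code: str) -> str:
    code = code.strip().upper()
    if code in used_codes:
        return "SUSPICIOUS: this code has already been used"
    if code not in issued_codes:
        return "SUSPICIOUS: code not recognized"
    used_codes.add(code)                    # one-time use: a repeat query is flagged
    return "OK: product verified as genuine"

print(verify("4F7K2M9Q"))                   # OK: product verified as genuine
print(verify("4F7K2M9Q"))                   # SUSPICIOUS: this code has already been used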
In February 2013, Sproxil signed on East African Cables to protect its electric cables through the Zinduka Initiative. The initiative utilizes the MPA solution to help consumers verify that their electrical cables are genuine before purchase. This partnership marked Sproxil's further expansion into non-pharmaceutical markets. Other industries that use Sproxil's solution include textiles and clothing (underwear). Also around this time, Prashant Yadav, senior research fellow and director of the Healthcare Research Initiative, joined Sproxil's board of advisors. Yadav is an expert in pharmaceutical and healthcare supply chain management in emerging markets; he was formerly a professor of supply chain management at the MIT-Zaragoza International Logistics Program and currently serves as an advisor to multiple organizations in the area of pharmaceutical supply chains. CEO Ashifi Gogo was named by the White House as an Immigrant Innovator Champion of Change and a Schwab Foundation Social Entrepreneur of the Year 2014 for his work with Sproxil. In 2014, the company expanded its Mobile Authentication services, originally available to consumers by SMS and call center, by adding mobile apps and a web app to its solution suite. The same year, Sproxil became ISO-27001 (for information security controls) and ISO-9001 (for quality management systems) certified after a comprehensive review of its internal processes. Soon after, Kenya-based agribusiness company Juanco SPS officially launched a consumer-facing project that protects the pesticide Bestox 100EC with MPA technology. To date, the company has processed over 75 million verifications from consumers, the highest recorded number of verifications of its kind. Awards 2017 Unilever Global Development Award 2016 Innovative Healthcare Service Provider Of The Year 2015 Interface Health Excellence (IHX) Challenge 2015 Frost & Sullivan's Enabling Technology Leadership Award 2014 Schwab Foundation Social Entrepreneur of the Year Award 2013 USPTO Patents for Humanity Award 2013 #1 in health care and #7 overall in Fast Company Magazine World's 50 Most Innovative Companies 2012 ISMP Cheers Award - George DiDomizio Industry Award 2012 ICC World Business and Development Award 2010 Honorable Mention Global Finals, IBM Entrepreneur SmartCamp Competition 2010 MassChallenge Finalist 2010 Mobile Infrastructure Award, MITX 2010 People's Choice Award at Accelerate Michigan 2010 Prize Winner, African Diaspora Marketplace 2010 Audience Choice Award Life Sciences at Xconomy Xsite 2009 Outstanding Commitment Award in Global Health, Clinton Global Initiative University See also Mass serialization World Customs Organization Millennium Development Goals References External links Companies based in Massachusetts Telecommunications companies of the United States Business software Authentication methods
33941
https://en.wikipedia.org/wiki/Windows%202000
Windows 2000
Windows 2000 is a major release of the Windows NT operating system developed by Microsoft and oriented towards businesses. It was the direct successor to Windows NT 4.0; it was released to manufacturing on December 15, 1999, and officially released to retail on February 17, 2000. It was Microsoft's business operating system until the introduction of Windows XP in 2001. Windows 2000 introduced NTFS 3.0 and the Encrypting File System, as well as basic and dynamic disk storage. Support for people with disabilities was improved over Windows NT 4.0 with a number of new assistive technologies, and Microsoft increased support for different languages and locale information. The Windows 2000 Server family has additional features, most notably the introduction of Active Directory, which in the years following became a widely used directory service in business environments. Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server; the latter was both released to manufacturing and launched months after the other editions. While each edition of Windows 2000 was targeted at a different market, they shared a core set of features, including many system utilities such as the Microsoft Management Console and standard system administration applications. Microsoft marketed Windows 2000 as the most secure Windows version ever at the time; however, it became the target of a number of high-profile virus attacks such as Code Red and Nimda. For ten years after its release, it continued to receive patches for security vulnerabilities nearly every month until reaching the end of its lifecycle on July 13, 2010. Windows 2000 and Windows 2000 Server were succeeded by Windows XP and Windows Server 2003, released in 2001 and 2003, respectively. Windows 2000's successor, Windows XP, became the minimum supported OS for most Windows programs up until Windows 7 replaced it, and unofficial methods were developed to run these programs on Windows 2000. Windows 2000 is the final version of Windows which supports PC-98, i486 and SGI Visual Workstation 320 and 540, as well as Alpha, MIPS and PowerPC in alpha, beta, and release candidate versions. Its successor, Windows XP, requires a processor from one of its supported architectures (IA-32 for 32-bit CPUs, and x86-64 and Itanium for 64-bit CPUs). History Windows 2000 is a continuation of the Microsoft Windows NT family of operating systems, replacing Windows NT 4.0. The original name for the operating system was Windows NT 5.0, and its pre-beta builds were compiled between March and August 1997; these builds were identical to Windows NT 4.0. The first official beta was released in September 1997, followed by Beta 2 in August 1998. On October 27, 1998, Microsoft announced that the name of the final version of the operating system would be Windows 2000, a name which referred to its projected release date. Windows 2000 Beta 3 was released in May 1999. NT 5.0 Beta 1 was similar to NT 4.0, including a very similarly themed logo. NT 5.0 Beta 2 introduced a new 'mini' boot screen, and removed the 'dark space' theme in the logo. The NT 5.0 betas had very long startup and shutdown sounds, which were changed in the early Windows 2000 beta; during Beta 3, new piano-made startup and shutdown sounds were introduced, featured in the final version as well as in Windows Me. The new login prompt from the final version made its first appearance in Beta 3 build 1946 (the first build of Beta 3). The new, updated icons (for My Computer, Recycle Bin etc.) 
first appeared in Beta 3 build 1964. The Windows 2000 boot screen in the final version first appeared in Beta 3 build 1983. Windows 2000 did not have an actual codename because, according to Dave Thompson of Windows NT team, "Jim Allchin didn't like codenames". Windows 2000 Service Pack 1 was codenamed "Asteroid" and Windows 2000 64-bit was codenamed "Janus." During development, there was a build for the Alpha which was abandoned in the final stages of development (between RC1 and RC2) after Compaq announced they had dropped support for Windows NT on Alpha. From here, Microsoft issued three release candidates between July and November 1999, and finally released the operating system to partners on December 12, 1999, followed by manufacturing three days later on December 15. The public could buy the full version of Windows 2000 on February 17, 2000. Three days before this event, which Microsoft advertised as "a standard in reliability," a leaked memo from Microsoft reported on by Mary Jo Foley revealed that Windows 2000 had "over 63,000 potential known defects." After Foley's article was published, she claimed that Microsoft blacklisted her for a considerable time. However, Abraham Silberschatz et al. claim in their computer science textbook that "Windows 2000 was the most reliable, stable operating system Microsoft had ever shipped to that point. Much of this reliability came from maturity in the source code, extensive stress testing of the system, and automatic detection of many serious errors in drivers." InformationWeek summarized the release "our tests show the successor to NT 4.0 is everything we hoped it would be. Of course, it isn't perfect either." Wired News later described the results of the February launch as "lackluster." Novell criticized Microsoft's Active Directory, the new directory service architecture, as less scalable or reliable than its own Novell Directory Services (NDS) alternative. Windows 2000 was initially planned to replace both Windows 98 and Windows NT 4.0. However, this changed later, as an updated version of Windows 98 called Windows 98 SE was released in 1999. On or shortly before February 12, 2004, "portions of the Microsoft Windows 2000 and Windows NT 4.0 source code were illegally made available on the Internet." The source of the leak was later traced to Mainsoft, a Windows Interface Source Environment partner. Microsoft issued the following statement: "Microsoft source code is both copyrighted and protected as a trade secret. As such, it is illegal to post it, make it available to others, download it or use it." Despite the warnings, the archive containing the leaked code spread widely on the file-sharing networks. On February 16, 2004, an exploit "allegedly discovered by an individual studying the leaked source code" for certain versions of Microsoft Internet Explorer was reported. On April 15, 2015, GitHub took down a repository containing a copy of the Windows NT 4.0 source code that originated from the leak. Microsoft planned to release a 64-bit version of Windows 2000, which would run on 64-bit Intel Itanium microprocessors, in 2000. However, the first officially released 64-bit version of Windows was Windows XP 64-Bit Edition, released alongside the 32-bit editions of Windows XP on October 25, 2001, followed by the server versions Windows Datacenter Server Limited Edition and later Windows Advanced Server Limited Edition, which were based on the pre-release Windows Server 2003 (then known as Windows .NET Server) codebase. 
These editions were released in 2002, were briefly available through the OEM channel, and were then superseded by the final versions of Server 2003. New and updated features Windows 2000 introduced many of the new features of Windows 98 and 98 SE into the NT line, such as the Windows Desktop Update, Internet Explorer 5 (Internet Explorer 6, which followed in 2001, is also available for Windows 2000), Outlook Express, NetMeeting, FAT32 support, the Windows Driver Model, Internet Connection Sharing, Windows Media Player, and WebDAV support. Certain new features are common across all editions of Windows 2000, among them NTFS 3.0, the Microsoft Management Console (MMC), UDF support, the Encrypting File System (EFS), Logical Disk Manager, Image Color Management 2.0, support for PostScript 3-based printers, OpenType (.OTF) and Type 1 PostScript (.PFB) font support (including a new font, Palatino Linotype, to showcase some OpenType features), the Data Protection API (DPAPI), an LDAP/Active Directory-enabled Address Book, usability enhancements and multi-language and locale support. Windows 2000 also introduced USB device class drivers for USB printers, mass storage class devices, and improved FireWire SBP-2 support for printers and scanners, along with a Safe removal applet for storage devices. Windows 2000 SP4 added native USB 2.0 support. Windows 2000 is also the first Windows version to support hibernation at the operating system level (OS-controlled ACPI S4 sleep state), unlike Windows 98, which required special drivers from the hardware manufacturer or driver developer. A new capability designed to protect critical system files, called Windows File Protection, was introduced. This protects critical Windows system files by preventing programs other than Microsoft's operating system update mechanisms, such as the Package Installer, Windows Installer and other update components, from modifying them. The System File Checker utility provides users with the ability to perform a manual scan of the integrity of all protected system files, and optionally repair them, either by restoring from a cache stored in a separate "DLLCACHE" directory, or from the original install media. Microsoft recognized that a serious error (a Blue Screen of Death or stop error) could cause problems for servers that needed to be constantly running and so provided a system setting that would allow the server to automatically reboot when a stop error occurred. Also included is an option to dump the first 64 KB of memory to disk (the smallest amount of memory that is useful for debugging purposes, also known as a minidump), a dump of only the kernel's memory, or a dump of the entire contents of memory to disk, as well as to record the event in the Windows 2000 event log. In order to improve performance on servers running Windows 2000, Microsoft gave administrators the choice of optimizing the operating system's memory and processor usage patterns for background services or for applications. Windows 2000 also introduced core system administration and management features such as the Windows Installer, Windows Management Instrumentation and Event Tracing for Windows (ETW) into the operating system. Plug and Play and hardware support improvements The most notable improvement from Windows NT 4.0 is the addition of Plug and Play with full ACPI and Windows Driver Model support. 
Similar to Windows 9x, Windows 2000 supports automatic recognition of installed hardware, hardware resource allocation, loading of appropriate drivers, PnP APIs and device notification events. The addition of the kernel PnP Manager along with the Power Manager are two significant subsystems added in Windows 2000. Windows 2000 introduced version 3 print drivers (user mode printer drivers) based on Unidrv, which made it easier for printer manufacturers to write device drivers for printers. Generic support for 5-button mice is also included as standard and installing IntelliPoint allows reassigning the programmable buttons. Windows 98 lacked generic support. Driver Verifier was introduced to stress test and catch device driver bugs. Shell Windows 2000 introduces layered windows that allow for transparency, translucency and various transition effects like shadows, gradient fills and alpha-blended GUI elements to top-level windows. Menus support a new Fade transition effect. The Start menu in Windows 2000 introduces personalized menus, expandable special folders and the ability to launch multiple programs without closing the menu by holding down the SHIFT key. A Re-sort button forces the entire Start Menu to be sorted by name. The Taskbar introduces support for balloon notifications which can also be used by application developers. Windows 2000 Explorer introduces customizable Windows Explorer toolbars, auto-complete in Windows Explorer address bar and Run box, advanced file type association features, displaying comments in shortcuts as tooltips, extensible columns in Details view (IColumnProvider interface), icon overlays, integrated search pane in Windows Explorer, sort by name function for menus, and Places bar in common dialogs for Open and Save. Windows Explorer has been enhanced in several ways in Windows 2000. It is the first Windows NT release to include Active Desktop, first introduced as a part of Internet Explorer 4.0 (specifically Windows Desktop Update), and only pre-installed in Windows 98 by that time. It allowed users to customize the way folders look and behave by using HTML templates, having the file extension HTT. This feature was abused by computer viruses that employed malicious scripts, Java applets, or ActiveX controls in folder template files as their infection vector. Two such viruses are VBS/Roor-C and VBS.Redlof.a. The "Web-style" folders view, with the left Explorer pane displaying details for the object currently selected, is turned on by default in Windows 2000. For certain file types, such as pictures and media files, the preview is also displayed in the left pane. Until the dedicated interactive preview pane appeared in Windows Vista, Windows 2000 had been the only Windows release to feature an interactive media player as the previewer for sound and video files, enabled by default. However, such a previewer can be enabled in previous versions of Windows with the Windows Desktop Update installed through the use of folder customization templates. The default file tooltip displays file title, author, subject and comments; this metadata may be read from a special NTFS stream, if the file is on an NTFS volume, or from an OLE structured storage stream, if the file is a structured storage document. All Microsoft Office documents since Office 4.0 make use of structured storage, so their metadata is displayable in the Windows 2000 Explorer default tooltip. File shortcuts can also store comments which are displayed as a tooltip when the mouse hovers over the shortcut. 
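The shortcut comment mentioned above is also exposed programmatically through the Windows Script Host object model (covered later in this article). The following sketch reads it from Python via the third-party pywin32 package; it assumes a Windows machine with pywin32 installed, and the shortcut path is a placeholder.

    # Sketch: read the comment ("Description") stored in a .lnk shortcut using
    # the WScript.Shell COM object. Requires Windows and the pywin32 package.
    import win32com.client

    shell = win32com.client.Dispatch("WScript.Shell")
    shortcut = shell.CreateShortcut(r"C:\example\MyApp.lnk")  # placeholder path

    print("Target: ", shortcut.TargetPath)
    print("Comment:", shortcut.Description)  # shown by Explorer as the tooltip comment
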
The shell introduces extensibility support through metadata handlers, icon overlay handlers and column handlers in Explorer Details view. The right pane of Windows 2000 Explorer, which usually just lists files and folders, can also be customized. For example, the contents of the system folders aren't displayed by default, instead showing in the right pane a warning to the user that modifying the contents of the system folders could harm their computer. It's possible to define additional Explorer panes by using DIV elements in folder template files. This degree of customizability is new to Windows 2000; neither Windows 98 nor the Desktop Update could provide it. The new DHTML-based search pane is integrated into Windows 2000 Explorer, unlike the separate search dialog found in all previous Explorer versions. The Indexing Service has also been integrated into the operating system and the search pane built into Explorer allows searching files indexed by its database. NTFS 3.0 Microsoft released the version 3.0 of NTFS (sometimes incorrectly called "NTFS 5" in relation to the kernel version number) as part of Windows 2000; this introduced disk quotas (provided by QuotaAdvisor), file-system-level encryption, sparse files and reparse points. Sparse files allow for the efficient storage of data sets that are very large yet contain many areas that only have zeros. Reparse points allow the object manager to reset a file namespace lookup and let file system drivers implement changed functionality in a transparent manner. Reparse points are used to implement volume mount points, junctions, Hierarchical Storage Management, Native Structured Storage and Single Instance Storage. Volume mount points and directory junctions allow for a file to be transparently referred from one file or directory location to another. Windows 2000 also introduces a Distributed Link Tracking service to ensure file shortcuts remain working even if the target is moved or renamed. The target object's unique identifier is stored in the shortcut file on NTFS 3.0 and Windows can use the Distributed Link Tracking service for tracking the targets of shortcuts, so that the shortcut file may be silently updated if the target moves, even to another hard drive. Encrypting File System The Encrypting File System (EFS) introduced strong file system-level encryption to Windows. It allows any folder or drive on an NTFS volume to be encrypted transparently by the user. EFS works together with the EFS service, Microsoft's CryptoAPI and the EFS File System Runtime Library (FSRTL). To date, its encryption has not been compromised. EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because it takes less time to encrypt and decrypt large amounts of data than if an asymmetric key cipher were used. The symmetric key used to encrypt the file is then encrypted with a public key associated with the user who encrypted the file, and this encrypted data is stored in the header of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key stored in the file header. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. For a user losing access to their key, support for recovery agents that can decrypt files is built into EFS. 
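The key handling described above, a bulk symmetric FEK that is itself encrypted with the user's public key, is a standard hybrid-encryption pattern. The sketch below illustrates the idea only; it is not EFS's actual code, algorithms, or on-disk format, and it assumes the third-party Python "cryptography" package.

    # Hybrid encryption sketch: data encrypted with a symmetric key (the "FEK"),
    # and the FEK protected with the user's RSA public key. Illustration only;
    # not EFS's real algorithms or file format.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

    fek = Fernet.generate_key()                             # bulk symmetric key ("FEK")
    ciphertext = Fernet(fek).encrypt(b"file contents to protect")
    wrapped_fek = user_key.public_key().encrypt(fek, oaep)  # EFS keeps this in the file header

    # Decryption path: unwrap the FEK with the private key, then decrypt the data.
    recovered_fek = user_key.decrypt(wrapped_fek, oaep)
    assert Fernet(recovered_fek).decrypt(ciphertext) == b"file contents to protect"
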
A Recovery Agent is a user who is authorized by a public key recovery certificate to decrypt files belonging to other users using a special private key. By default, local administrators are recovery agents; however, recovery agents can be customized using Group Policy. Basic and dynamic disk storage Windows 2000 introduced the Logical Disk Manager and the diskpart command line tool for dynamic storage. All versions of Windows 2000 support three types of dynamic disk volumes (along with basic disks): simple volumes, spanned volumes and striped volumes: Simple volumes, each using disk space from one disk. Spanned volumes, where up to 32 disks show up as one, increasing the volume's size but not enhancing performance. When one disk fails, the array is destroyed. Some data may be recoverable. This corresponds to JBOD and not to RAID-1. Striped volumes, also known as RAID-0, store all their data across several disks in stripes. This allows better performance because disk reads and writes are balanced across multiple disks. Like spanned volumes, when one disk in the array fails, the entire array is destroyed (some data may be recoverable). In addition to these disk volumes, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server support mirrored volumes and striped volumes with parity: Mirrored volumes, also known as RAID-1, store identical copies of their data on 2 or more identical disks (mirrored). This allows for fault tolerance; in the event one disk fails, the other disk(s) can keep the server operational until the server can be shut down for replacement of the failed disk. Striped volumes with parity, also known as RAID-5, function similarly to striped volumes/RAID-0, except "parity data" is written out across each of the disks in addition to the data. This allows the data to be "rebuilt" in the event a disk in the array needs replacement. Accessibility With Windows 2000, Microsoft introduced the Windows 9x accessibility features for people with visual and auditory impairments and other disabilities into the NT line of operating systems. These included: StickyKeys: makes modifier keys (ALT, CTRL and SHIFT) become "sticky": a user can press the modifier key, and then release it before pressing the combination key. (Activated by pressing Shift five times quickly.) FilterKeys: a group of keyboard-related features for people with typing issues, including: Slow Keys: ignore any keystroke not held down for a certain period. Bounce Keys: ignore repeated keystrokes pressed in quick succession. Repeat Keys: lets users slow down the rate at which keys are repeated via the keyboard's key-repeat feature. Toggle Keys: when turned on, Windows will play a sound when the CAPS LOCK, NUM LOCK or SCROLL LOCK key is pressed. SoundSentry: designed to help users with auditory impairments, Windows 2000 shows a visual effect when a sound is played through the sound system. MouseKeys: lets users move the cursor around the screen via the numeric keypad. SerialKeys: lets Windows 2000 support speech augmentation devices. High contrast theme: to assist users with visual impairments. Microsoft Magnifier: a screen magnifier that enlarges a part of the screen the cursor is over. Additionally, Windows 2000 introduced the following new accessibility features: On-screen keyboard: displays a virtual keyboard on the screen and allows users to press its keys using a mouse or a joystick. 
Microsoft Narrator: introduced in Windows 2000, this is a screen reader that utilizes the Speech API 4, which would later be updated to Speech API 5 in Windows XP Utility Manager: an application designed to start, stop, and manage when accessibility features start. This was eventually replaced by the Ease of Access Center in Windows Vista. Accessibility Wizard: a control panel applet that helps users set up their computer for people with disabilities. Languages and locales Windows 2000 introduced the Multilingual User Interface (MUI). Besides English, Windows 2000 incorporates support for Arabic, Armenian, Baltic, Central European, Cyrillic, Georgian, Greek, Hebrew, Indic, Japanese, Korean, Simplified Chinese, Thai, Traditional Chinese, Turkic, Vietnamese and Western European languages. It also has support for many different locales. Games Windows 2000 included version 7.0 of the DirectX API, commonly used by game developers on Windows 98. The last version of DirectX that was released for Windows 2000 was DirectX 9.0c (Shader Model 3.0), which shipped with Windows XP Service Pack 2. Microsoft published quarterly updates to DirectX 9.0c through the February 2010 release after which support was dropped in the June 2010 SDK. These updates contain bug fixes to the core runtime and some additional libraries such as D3DX, XAudio 2, XInput and Managed DirectX components. The majority of games written for versions of DirectX 9.0c (up to the February 2010 release) can therefore run on Windows 2000. Windows 2000 included the same games as Windows NT 4.0 did: FreeCell, Minesweeper, Pinball, and Solitaire. System utilities Windows 2000 introduced the Microsoft Management Console (MMC), which is used to create, save, and open administrative tools. Each of these is called a console, and most allow an administrator to administer other Windows 2000 computers from one centralised computer. Each console can contain one or many specific administrative tools, called snap-ins. These can be either standalone (with one function), or an extension (adding functions to an existing snap-in). In order to provide the ability to control what snap-ins can be seen in a console, the MMC allows consoles to be created in author mode or user mode. Author mode allows snap-ins to be added, new windows to be created, all portions of the console tree to be displayed and consoles to be saved. User mode allows consoles to be distributed with restrictions applied. User mode consoles can grant full access to the user for any change, or they can grant limited access, preventing users from adding snapins to the console though they can view multiple windows in a console. Alternatively users can be granted limited access, preventing them from adding to the console and stopping them from viewing multiple windows in a single console. The main tools that come with Windows 2000 can be found in the Computer Management console (in Administrative Tools in the Control Panel). This contains the Event Viewer—a means of seeing events and the Windows equivalent of a log file, a system information utility, a backup utility, Task Scheduler and management consoles to view open shared folders and shared folder sessions, configure and manage COM+ applications, configure Group Policy, manage all the local users and user groups, and a device manager. It contains Disk Management and Removable Storage snap-ins, a disk defragmenter as well as a performance diagnostic console, which displays graphs of system performance and configures data logs and alerts. 
It also contains a service configuration console, which allows users to view all installed services and to stop and start them, as well as configure what those services should do when the computer starts. CHKDSK has significant performance improvements. Windows 2000 comes with two utilities to edit the Windows registry, REGEDIT.EXE and REGEDT32.EXE. REGEDIT has been directly ported from Windows 98, and therefore does not support editing registry permissions. REGEDT32 has the older multiple document interface (MDI) and can edit registry permissions in the same manner that Windows NT's REGEDT32 program could. REGEDIT has a left-side tree view of the Windows registry, lists all loaded hives and represents the three components of a value (its name, type, and data) as separate columns of a table. REGEDT32 has a left-side tree view, but each hive has its own window, so the tree displays only keys and it represents values as a list of strings. REGEDIT supports right-clicking of entries in a tree view to adjust properties and other settings. REGEDT32 requires all actions to be performed from the top menu bar. Windows XP is the first system to integrate these two programs into a single utility, adopting the REGEDIT behavior with the additional NT features. The System File Checker (SFC) also comes with Windows 2000. It is a command line utility that scans system files and verifies whether they were signed by Microsoft and works in conjunction with the Windows File Protection mechanism. It can also repopulate and repair all the files in the Dllcache folder. Recovery Console The Recovery Console is run from outside the installed copy of Windows to perform maintenance tasks that can neither be run from within it nor feasibly be run from another computer or copy of Windows 2000. It is usually used to recover the system from problems that cause booting to fail, which would render other tools useless, like Safe Mode or Last Known Good Configuration, or chkdsk. It includes commands like fixmbr, which are not present in MS-DOS. It has a simple command-line interface, used to check and repair the hard drive(s), repair boot information (including NTLDR), replace corrupted system files with fresh copies from the CD, or enable/disable services and drivers for the next boot. The console can be accessed in either of the two ways: Booting from the Windows 2000 CD, and choosing to start the Recovery Console from the CD itself instead of continuing with setup. The Recovery Console is accessible as long as the installation CD is available. Preinstalling the Recovery Console on the hard disk as a startup option in Boot.ini, via WinNT32.exe, with the /cmdcons switch. In this case, it can only be started as long as NTLDR can boot from the system partition. Windows Scripting Host 2.0 Windows 2000 introduced Windows Script Host 2.0 which included an expanded object model and support for logon and logoff scripts. Networking Starting with Windows 2000, the Server Message Block (SMB) protocol directly interfaces with TCP/IP. In Windows NT 4.0, SMB requires the NetBIOS over TCP/IP (NBT) protocol to work on a TCP/IP network. Windows 2000 introduces a client-side DNS caching service. When the Windows DNS resolver receives a query response, the DNS resource record is added to a cache. When it queries the same resource record name again and it is found in the cache, then the resolver does not query the DNS server. This speeds up DNS query time and reduces network traffic. 
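That caching behaviour can be illustrated with a small sketch using only Python's standard library; it shows the general idea, whereas the real Windows 2000 DNS Client service also honours per-record TTLs returned by the server, negative caching, and other details.

    # Sketch of a client-side DNS cache: repeat lookups are answered from memory
    # instead of querying the DNS server again. The fixed TTL is an assumption
    # made for the sketch; real resolvers honour the TTL of each record.
    import socket
    import time

    _cache = {}        # hostname -> (address, expiry timestamp)
    TTL_SECONDS = 300  # assumed fixed time-to-live

    def resolve(hostname: str) -> str:
        entry = _cache.get(hostname)
        if entry and entry[1] > time.monotonic():
            return entry[0]                       # served from the cache, no network query
        address = socket.gethostbyname(hostname)  # ask the system resolver / DNS server
        _cache[hostname] = (address, time.monotonic() + TTL_SECONDS)
        return address

    print(resolve("example.com"))  # first call performs a DNS query
    print(resolve("example.com"))  # second call is answered from the cache
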
Server family features The Windows 2000 Server family consists of Windows 2000 Server, Windows 2000 Advanced Server, Windows 2000 Small Business Server, and Windows 2000 Datacenter Server. All editions of Windows 2000 Server have the following services and features built in: Routing and Remote Access Service (RRAS) support, facilitating dial-up and VPN connections using IPsec, L2TP or L2TP/IPsec, support for RADIUS authentication in Internet Authentication Service, network connection sharing, Network Address Translation, unicast and multicast routing schemes. Remote access security features: Remote Access Policies for setup, verify Caller ID (IP address for VPNs), callback and Remote access account lockout Autodial by location feature using the Remote Access Auto Connection Manager service Extensible Authentication Protocol support in IAS (EAP-MD5 and EAP-TLS) later upgraded to PEAPv0/EAP-MSCHAPv2 and PEAP-EAP-TLS in Windows 2000 SP4 DNS server, including support for Dynamic DNS. Active Directory relies heavily on DNS. IPsec support and TCP/IP filtering Smart card support Microsoft Connection Manager Administration Kit (CMAK) and Connection Point Services Support for distributed file systems (DFS) Hierarchical Storage Management support including remote storage, a service that runs with NTFS and automatically transfers files that are not used for some time to less expensive storage media Fault tolerant volumes, namely Mirrored and RAID-5 Group Policy (part of Active Directory) IntelliMirror, a collection of technologies for fine-grained management of Windows 2000 Professional clients that duplicates users' data, applications, files, and settings in a centralized location on the network. IntelliMirror employs technologies such as Group Policy, Windows Installer, Roaming profiles, Folder Redirection, Offline Files (also known as Client Side Caching or CSC), File Replication Service (FRS), Remote Installation Services (RIS) to address desktop management scenarios such as user data management, user settings management, software installation and maintenance. COM+, Microsoft Transaction Server and Distributed Transaction Coordinator MSMQ 2.0 TAPI 3.0 Integrated Windows Authentication (including Kerberos, Secure channel and SPNEGO (Negotiate) SSP packages for Security Support Provider Interface (SSPI)). MS-CHAP v2 protocol Public Key Infrastructure (PKI) and Enterprise Certificate Authority support Terminal Services and support for the Remote Desktop Protocol (RDP) Internet Information Services (IIS) 5.0 and Windows Media Services 4.1 Network quality of service features A new Windows Time service which is an implementation of Simple Network Time Protocol (SNTP) as detailed in IETF . The Windows Time service synchronizes the date and time of computers in a domain running on Windows 2000 Server or later. Windows 2000 Professional includes an SNTP client. The Server editions include more features and components, including the Microsoft Distributed File System (DFS), Active Directory support and fault-tolerant storage. Distributed File System The Distributed File System (DFS) allows shares in multiple different locations to be logically grouped under one folder, or DFS root. When users try to access a network share off the DFS root, the user is really looking at a DFS link and the DFS server transparently redirects them to the correct file server and share. A DFS root can only exist on a Windows 2000 version that is part of the server family, and only one DFS root can exist on that server. 
There can be two ways of implementing a DFS namespace on Windows 2000: either through a standalone DFS root or a domain-based DFS root. Standalone DFS allows for only DFS roots on the local computer, and thus does not use Active Directory. Domain-based DFS roots exist within Active Directory and can have their information distributed to other domain controllers within the domain – this provides fault tolerance to DFS. DFS roots that exist on a domain must be hosted on a domain controller or on a domain member server. The file and root information is replicated via the Microsoft File Replication Service (FRS). Active Directory A new way of organizing Windows network domains, or groups of resources, called Active Directory, is introduced with Windows 2000 to replace Windows NT's earlier domain model. Active Directory's hierarchical nature allowed administrators a built-in way to manage user and computer policies and user accounts, and to automatically deploy programs and updates with a greater degree of scalability and centralization than provided in previous Windows versions. User information stored in Active Directory also provided a convenient phone book-like function to end users. Active Directory domains can vary from small installations with a few hundred objects, to large installations with millions. Active Directory can organise and link groups of domains into a contiguous domain name space to form trees. Groups of trees outside of the same namespace can be linked together to form forests. Active Directory services could always be installed on a Windows 2000 Server Standard, Advanced, or Datacenter computer, and cannot be installed on a Windows 2000 Professional computer. However, Windows 2000 Professional is the first client operating system able to exploit Active Directory's new features. As part of an organization's migration, Windows NT clients continued to function until all clients were upgraded to Windows 2000 Professional, at which point the Active Directory domain could be switched to native mode and maximum functionality achieved. Active Directory requires a DNS server that supports SRV resource records, or that an organization's existing DNS infrastructure be upgraded to support this. There should be one or more domain controllers to hold the Active Directory database and provide Active Directory directory services. Volume fault tolerance Along with support for simple, spanned and striped volumes, the Windows 2000 Server family also supports fault-tolerant volume types. The types supported are mirrored volumes and RAID-5 volumes: Mirrored volumes: the volume contains several disks, and when data is written to one it is also written to the other disks. This means that if one disk fails, the data can be totally recovered from the other disk. Mirrored volumes are also known as RAID-1. RAID-5 volumes: a RAID-5 volume consists of multiple disks, and it uses block-level striping with parity data distributed across all member disks. Should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on-the-fly." Deployment Windows 2000 can be deployed to a site via various methods. It can be installed onto servers via traditional media (such as CD) or via distribution folders that reside on a shared folder. Installations can be attended or unattended. During a manual installation, the administrator must specify configuration options. 
Unattended installations are scripted via an answer file, or a predefined script in the form of an INI file that has all the options filled in. An answer file can be created manually or using the graphical Setup manager. The Winnt.exe or Winnt32.exe program then uses that answer file to automate the installation. Unattended installations can be performed via a bootable CD, using Microsoft Systems Management Server (SMS), via the System Preparation Tool (Sysprep), via the Winnt32.exe program using the /syspart switch or via Remote Installation Services (RIS). The ability to slipstream a service pack into the original operating system setup files is also introduced in Windows 2000. The Sysprep method is started on a standardized reference computer – though the hardware need not be similar – and it copies the required installation files from the reference computer to the target computers. The hard drive does not need to be in the target computer and may be swapped out to it at any time, with the hardware configured later. The Winnt.exe program must also be passed a /unattend switch that points to a valid answer file and a /s file that points to one or more valid installation sources. Sysprep allows the duplication of a disk image on an existing Windows 2000 Server installation to multiple servers. This means that all applications and system configuration settings will be copied across to the new installations, and thus, the reference and target computers must have the same HALs, ACPI support, and mass storage devices – though Windows 2000 automatically detects "plug and play" devices. The primary reason for using Sysprep is to quickly deploy Windows 2000 to a site that has multiple computers with standard hardware. (If a system had different HALs, mass storage devices or ACPI support, then multiple images would need to be maintained.) Systems Management Server can be used to upgrade multiple computers to Windows 2000. These must be running Windows NT 3.51, Windows NT 4.0, Windows 98 or Windows 95 OSR2.x along with the SMS client agent that can receive software installation operations. Using SMS allows installations over a wide area and provides centralised control over upgrades to systems. Remote Installation Services (RIS) are a means to automatically install Windows 2000 Professional (and not Windows 2000 Server) to a local computer over a network from a central server. Images do not have to support specific hardware configurations and the security settings can be configured after the computer reboots as the service generates a new unique security ID (SID) for the machine. This is required so that local accounts are given the right identifier and do not clash with other Windows 2000 Professional computers on a network. RIS requires that client computers are able to boot over the network via either a network interface card that has a Pre-Boot Execution Environment (PXE) boot ROM installed or that the client computer has a network card installed that is supported by the remote boot disk generator. The remote computer must also meet the Net PC specification. The server that RIS runs on must be Windows 2000 Server and it must be able to access a network DNS Service, a DHCP service and the Active Directory services. Editions Microsoft released various editions of Windows 2000 for different markets and business needs: Professional, Server, Advanced Server and Datacenter Server. Each was packaged separately. 
Windows 2000 Professional was designed as the desktop operating system for businesses and power users. It is the client version of Windows 2000. It offers greater security and stability than many of the previous Windows desktop operating systems. It supports up to two processors, and can address up to 4GB of RAM. The system requirements are a Pentium processor (or equivalent) of 133MHz or greater, at least 32MB of RAM, 650MB of hard drive space, and a CD-ROM drive (recommended: Pentium II, 128MB of RAM, 2GB of hard drive space, and CD-ROM drive). However, despite the official minimum processor requirements, it is still possible to install Windows 2000 on 4th-generation x86 CPUs such as the 80486. Windows 2000 Server shares the same user interface with Windows 2000 Professional, but contains additional components for the computer to perform server roles and run infrastructure and application software. A significant new component introduced in the server versions is Active Directory, which is an enterprise-wide directory service based on LDAP (Lightweight Directory Access Protocol). Additionally, Microsoft integrated Kerberos network authentication, replacing the often-criticised NTLM (NT LAN Manager) authentication system used in previous versions. This also provided a purely transitive-trust relationship between Windows 2000 Server domains in a forest (a collection of one or more Windows 2000 domains that share a common schema, configuration, and global catalog, being linked with two-way transitive trusts). Furthermore, Windows 2000 introduced a DNS server that allows dynamic registration of IP addresses. Windows 2000 Server supports up to 4 processors and 4GB of RAM, with a minimum requirement of 128MB of RAM and 1GB of hard disk space; however, requirements may be higher depending on installed components. Windows 2000 Advanced Server is a variant of the Windows 2000 Server operating system designed for medium-to-large businesses. It offers the ability to create clusters of servers, support for up to 8 CPUs, up to 8GB of main memory on Physical Address Extension (PAE) systems, and the ability to do 8-way SMP. It supports TCP/IP load balancing and builds on Microsoft Cluster Server (MSCS) in Windows NT Enterprise Server 4.0, adding enhanced functionality for two-node clusters. System requirements are similar to those of Windows 2000 Server; however, they may need to be higher to scale to larger infrastructure. Windows 2000 Datacenter Server is a variant of Windows 2000 Server designed for large businesses that move large quantities of confidential or sensitive data frequently via a central server. Like Advanced Server, it supports clustering, failover and load balancing. Its minimum system requirements are normal, but it was designed to be capable of handling advanced, fault-tolerant and scalable hardware, for instance computers with up to 32 CPUs and 32GB of RAM, with rigorous system testing and qualification, hardware partitioning, coordinated maintenance and change control. System requirements are similar to those of Windows 2000 Advanced Server; however, they may need to be higher to scale to larger infrastructure. Windows 2000 Datacenter Server was released to manufacturing on August 11, 2000, and launched on September 26, 2000. This edition was based on Windows 2000 with Service Pack 1 and was not available at retail. Service packs Windows 2000 has received four full service packs and one rollup update package following SP4, which is the last service pack. 
Microsoft phased out all development of its Java Virtual Machine (JVM) from Windows 2000 in SP3. Internet Explorer 5.01 has also been upgraded to the corresponding service pack level. Service Pack 4 with Update Rollup was released on September 13, 2005, nearly four years following the release of Windows XP and sixteen months prior to the release of Windows Vista. Microsoft had originally intended to release a fifth service pack for Windows 2000, but cancelled this project early in its development, and instead released Update Rollup 1 for SP4, a collection of all the security-related hotfixes and fixes for some other significant issues. The Update Rollup does not include all non-security related hotfixes and is not subject to the same extensive regression testing as a full service pack. Microsoft states that this update will meet customers' needs better than a whole new service pack, and will still help Windows 2000 customers secure their PCs, reduce support costs, and support existing computer hardware. Upgradeability Several Windows 2000 components can be upgraded to their latest versions, which include newer versions of components introduced in later versions of Windows, and newer versions of other major Microsoft applications are also available. These latest versions for Windows 2000 include: ActiveSync 4.5 DirectX 9.0c (5 February 2010 Redistributable) Internet Explorer 6 SP1 and Outlook Express 6 SP1 Microsoft Agent 2.0 Microsoft Data Access Components 2.81 Microsoft NetMeeting 3.01 and Microsoft Office 2003 on Windows 2000 SP3 and SP4 (and Microsoft Office XP on Windows 2000 versions below SP3.) MSN Messenger 7.0 (Windows Messenger) MSXML 6.0 SP2 .NET Framework 2.0 SP2 Tweak UI 1.33 Visual C++ 2008 Visual Studio 2005 Windows Desktop Search 2.66 Windows Script Host 5.7 Windows Installer 3.1 Windows Media Format Runtime and Windows Media Player 9 Series (including Windows Media Encoder 7.1 and the Windows Media 8 Encoding Utility) Security During the Windows 2000 period, the nature of attacks on Windows servers changed: more attacks came from remote sources via the Internet. This led to an overwhelming number of malicious programs exploiting the IIS services – specifically a notorious buffer overflow tendency. This tendency is not operating-system-version specific, but rather configuration-specific: it depends on the services that are enabled. Following this, a common complaint is that "by default, Windows 2000 installations contain numerous potential security problems. Many unneeded services are installed and enabled, and there is no active local security policy." In addition to insecure defaults, according to the SANS Institute, the most common flaws discovered are remotely exploitable buffer overflow vulnerabilities. Other criticized flaws include the use of vulnerable encryption techniques. Code Red and Code Red II were famous (and much discussed) worms that exploited vulnerabilities of the Windows Indexing Service of Windows 2000's Internet Information Services (IIS). In August 2003, security researchers estimated that two major worms called Sobig and Blaster infected more than half a million Microsoft Windows computers. The 2005 Zotob worm was blamed for security compromises on Windows 2000 machines at ABC, CNN, the New York Times Company, and the United States Department of Homeland Security. On September 8, 2009, Microsoft skipped patching two of the five security flaws that were addressed in the monthly security update, saying that patching one of the critical security flaws was "infeasible." 
According to Microsoft Security Bulletin MS09-048: "The architecture to properly support TCP/IP protection does not exist on Microsoft Windows 2000 systems, making it infeasible to build the fix for Microsoft Windows 2000 Service Pack 4 to eliminate the vulnerability. To do so would require re-architecting a very significant amount of the Microsoft Windows 2000 Service Pack 4 operating system, there would be no assurance that applications designed to run on Microsoft Windows 2000 Service Pack 4 would continue to operate on the updated system." No patches for this flaw were released for the newer Windows XP (32-bit) and Windows XP Professional x64 Edition either, despite both also being affected; Microsoft suggested turning on Windows Firewall in those versions. Support lifecycle Windows 2000 and Windows 2000 Server were superseded by newer Microsoft operating systems: Windows 2000 Server products by Windows Server 2003, and Windows 2000 Professional by Windows XP Professional. The Windows 2000 family of operating systems moved from mainstream support to the extended support phase on June 30, 2005. Microsoft says that this marks the progression of Windows 2000 through the Windows lifecycle policy. Under mainstream support, Microsoft freely provides design changes if any, service packs and non-security related updates in addition to security updates, whereas in extended support, service packs are not provided and non-security updates require contacting the support personnel by e-mail or phone. Under the extended support phase, Microsoft continued to provide critical security updates every month for all components of Windows 2000 (including Internet Explorer 5.0 SP4) and paid per-incident support for technical issues. Because of Windows 2000's age, updated versions of components such as Windows Media Player 11 and Internet Explorer 7 have not been released for it. In the case of Internet Explorer, Microsoft said in 2005 that, "some of the security work in IE 7 relies on operating system functionality in XP SP2 that is non-trivial to port back to Windows 2000." While users of Windows 2000 Professional and Server were eligible to purchase the upgrade license for Windows Vista Business or Windows Server 2008, neither of these operating systems can directly perform an upgrade installation from Windows 2000; a clean installation must be performed instead or a two-step upgrade through XP/2003. Microsoft has dropped the upgrade path from Windows 2000 (and earlier) to Windows 7. Users of Windows 2000 must buy a full Windows 7 license. Although Windows 2000 is the last NT-based version of Microsoft Windows which does not include product activation, Microsoft has introduced Windows Genuine Advantage for certain downloads and non-critical updates from the Download Center for Windows 2000. Windows 2000 reached the end of its lifecycle on July 13, 2010 (alongside Service Pack 2 of Windows XP). It will not receive new security updates and new security-related hotfixes after this date. In Japan, over 130,000 servers and 500,000 PCs in local governments were affected; many local governments said that they will not update as they do not have funds to cover a replacement. As of 2011, Windows Update still supports the Windows 2000 updates available on Patch Tuesday in July 2010, e.g., if older optional Windows 2000 features are enabled later. Microsoft Office products under Windows 2000 have their own product lifecycles. 
While Internet Explorer 6 for Windows XP did receive security patches up until it lost support, this is not the case for IE6 under Windows 2000. The Windows Malicious Software Removal Tool, installed monthly by Windows Update for XP and later versions, can still be downloaded manually for Windows 2000. In 2020, Microsoft announced that it would disable the Windows Update service for SHA-1 endpoints; since Windows 2000 did not get an update for SHA-2, Windows Update services have not been available on the OS since late July 2020. However, as of April 2021, the old updates for Windows 2000 are still available on the Microsoft Update Catalog. Total cost of ownership In October 2002, Microsoft commissioned IDC to determine the total cost of ownership (TCO) for enterprise applications on Windows 2000 versus the TCO of the same applications on Linux. IDC's report was based on telephone interviews of IT executives and managers at 104 North American companies, in which IDC determined what they were using for specific workloads for file, print, security and networking services. IDC determined that the four areas where Windows 2000 had a better TCO than Linux – over a period of five years for an average organization of 100 employees – were file, print, network infrastructure and security infrastructure. It determined, however, that Linux had a better TCO than Windows 2000 for web serving. The report also found that the greatest cost was not in the procurement of software and hardware, but in staffing costs and downtime. While the report applied a 40% productivity factor during IT infrastructure downtime, recognizing that employees are not entirely unproductive, it did not consider the impact of downtime on the profitability of the business. The report stated that Linux servers had less unplanned downtime than Windows 2000 servers. It found that most Linux servers ran less workload per server than Windows 2000 servers and also that none of the businesses interviewed used 4-way SMP Linux computers. The report also did not take into account specific application servers – servers that need low maintenance and are provided by a specific vendor. The report did emphasize that TCO was only one factor in considering whether to use a particular IT platform, and also noted that as management and server software improved and became better packaged the overall picture shown could change. See also Architecture of Windows NT BlueKeep (security vulnerability) Comparison of operating systems DEC Multia, one of the DEC Alpha computers capable of running Windows 2000 beta Microsoft Servers, Microsoft's network server software brand Windows Neptune, a cancelled consumer edition based on Windows 2000 References Further reading Bolosky, William J.; Corbin, Scott; Goebel, David; & Douceur, John R. "Single Instance Storage in Windows 2000." Microsoft Research & Balder Technology Group, Inc. (white paper). Bozman, Jean; Gillen, Al; Kolodgy, Charles; Kusnetzky, Dan; Perry, Randy; & Shiang, David (October 2002). "Windows 2000 Versus Linux in Enterprise Computing: An assessment of business value for selected workloads." IDC, sponsored by Microsoft Corporation. White paper. Finnel, Lynn (2000). MCSE Exam 70–215, Microsoft Windows 2000 Server. Microsoft Press. Microsoft. "Running Nonnative Applications in Windows 2000 Professional." Windows 2000 Resource Kit. Retrieved May 4, 2005. Microsoft. "Active Directory Data Storage." Retrieved May 9, 2005. Minasi, Mark (1999). Mastering Windows 2000 Server. 
Sybex. Chapter 3 – Installing Windows 2000 On Workstations with Remote Installation Services. Russinovich, Mark (October 1997). "Inside NT's Object Manager." Windows IT Pro. Russinovich, Mark (2002). "Inside Win2K NTFS, Part 1." Windows IT Pro (formerly Windows 2000 Magazine). Saville, John (January 9, 2000). "What is Native Structured Storage?" Windows IT Pro (formerly Windows 2000 Magazine). Siyan, Kanajit S. (2000). Windows 2000 Professional Reference. New Riders. Solomon, David; & Russinovich, Mark E. (2000). Inside Microsoft Windows 2000 (Third Edition). Microsoft Press. Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd Edition). Prentice-Hall. Trott, Bob (October 27, 1998). "It's official: NT 5.0 becomes Windows 2000." InfoWorld. Wallace, Rick (2000). MCSE Exam 70–210, Microsoft Windows 2000 Professional. Microsoft Press. External links Windows 2000 End-of-Life Windows 2000 Service Pack 4 Windows 2000 Update Rollup 1 Version 2 1999 software 2000 software Products and services discontinued in 2010 IA-32 operating systems 2000
55799378
https://en.wikipedia.org/wiki/George%20Kurtz
George Kurtz
George Kurtz is the co-founder and CEO of cybersecurity company CrowdStrike. He was also the founder of Foundstone and chief technology officer of McAfee. Early life and education Kurtz grew up in Parsippany-Troy Hills, New Jersey, and attended Parsippany High School. He claims that he started programming video games on his Commodore when he was in fourth grade. He went on to build bulletin board systems in high school. He graduated from Seton Hall University with a degree in accounting. Career Price Waterhouse and Foundstone After college, Kurtz began his career at Price Waterhouse as a CPA. In 1993, Price Waterhouse made Kurtz one of its first employees in its new security group. In 1999, he co-wrote Hacking Exposed, a book about cybersecurity for network administrators, with Stuart McClure and Joel Scambray. The book sold more than 600,000 copies and was translated into more than 30 languages. Later that year he started a cybersecurity company, Foundstone, one of the first dedicated security consulting companies. Foundstone focused on vulnerability management software and services and developed a well-recognized incident response practice, with much of the Fortune 100 among its customers. McAfee McAfee acquired Foundstone for $86 million in August 2004, with Kurtz assuming the title of senior vice president and general manager of risk management at McAfee. During his tenure, he helped craft the company's strategy for security risk management. In October 2009, McAfee appointed him to the roles of worldwide chief technology officer and executive vice president. In 2010, he participated in the investigation of Operation Aurora, a series of cyber attacks against Google and several other companies. In 2011, he led McAfee's research around the emerging Night Dragon and Shady RAT threats, alongside McAfee's vice president of threat research Dmitri Alperovitch. Over time, Kurtz became frustrated that existing security technology functioned slowly and was not, as he perceived it, evolving at the pace of new threats. On a flight, he watched the passenger seated next to him wait 15 minutes for McAfee software to load on his laptop, an incident he later cited as part of his inspiration for founding CrowdStrike. He resigned from McAfee in October 2011. CrowdStrike In November 2011, Kurtz joined private equity firm Warburg Pincus as an "entrepreneur-in-residence" and began working on his next project, CrowdStrike. He, Gregg Marston (former chief financial officer at Foundstone), and Dmitri Alperovitch co-founded CrowdStrike in Irvine, California, formally announcing the company's launch in February 2012. Kurtz pitched the idea for the company to Warburg Pincus and secured $25 million in funding. CrowdStrike shifted the focus from anti-malware and antivirus products (McAfee's approach to cybersecurity) to identifying the techniques used by hackers in order to spot incoming threats. The company also developed a "cloud-first" model in order to reduce the software load on customers' computers. CrowdStrike, now headquartered in Sunnyvale, California, attracted public interest in June 2016 for its role in investigating the Democratic National Committee cyber attacks, and in May 2017, the company exceeded a valuation of $1 billion. In 2019, CrowdStrike's $612 million initial public offering on the Nasdaq brought the company to a $6.6 billion valuation under Kurtz's leadership. In July 2020, an IDC report named CrowdStrike as the fastest-growing endpoint security software vendor. 
Personal life In his personal time, he is an avid exotic car collector and has driven an Audi R8 LMS GT4 and a Mercedes-AMG GT3 in the Pirelli World Challenge. Previously, he raced in the Radical Cup and in Sports Car Club of America endurance events. He currently drives for CrowdStrike Racing. Racing record Complete WeatherTech SportsCar Championship results (results table not reproduced) References External links Profile on CrowdStrike's website 1970s births Living people Racing drivers from New Jersey Parsippany High School alumni People from Parsippany-Troy Hills, New Jersey American technology executives 24H Series drivers 24 Hours of Daytona drivers GT World Challenge America drivers
46460707
https://en.wikipedia.org/wiki/Information%20Systems%20International%20Conference
Information Systems International Conference
Information Systems International Conference (ISICO) is an international conference affiliated with AISINDO, the AIS Indonesia Chapter, and administered by the Department of Information Systems, Institut Teknologi Sepuluh Nopember, Indonesia. ISICO has taken place biennially since 2011. The event brings together information systems and information technology practitioners from around the world to share and discuss their ideas about issues associated with information technology. ISICO complements existing IS conferences such as PACIS, AMCIS, ICIS, and ECIS. ISICO 2013 was held in Bali and invited Doug Vogel (Association for Information Systems (AIS) immediate past president) from City University of Hong Kong and Prof. Don Kerr (President of the Australasian AIS chapter) from the University of the Sunshine Coast. It was attended by 340 participants from 9 countries and established the new AIS Chapter of Indonesia (named AISINDO). In 2015, ISICO collaborated with Procedia Computer Science from Elsevier to publish full ISICO papers in the journal. History 2011 ISICO 2011 was the first international conference to be managed by the Information Systems Department, Faculty of Information Technology, Institut Teknologi Sepuluh Nopember (ITS). The theme was "Information System for Sustainable Economics Development". ISICO 2011 covered more than 10 topics, including product knowledge, information systems, data warehousing, data mining, business intelligence, business process management, and business and management. The conference was supported by National Taiwan University of Science and Technology (NTUST), Taiwan, and Pusan National University. 2013 The second conference was held December 2–4 in Bali. Topics included management, economics, and business; education and curriculum; software engineering and design; artificial intelligence and enterprise systems; and information, network and computer security. Keynote speakers were Prof. Dr. Mohammad Nuh, DEA (Minister of Education and Culture of Indonesia), and Prof. Don Kerr (President of the Australasian AIS chapter and Program Leader of the Bachelor of Information and Communications Technology, University of the Sunshine Coast, Australia). 2015 ISICO collaborated with Procedia Computer Science (PCS) from Elsevier to publish all ISICO papers. PCS focuses on publishing high-quality conference proceedings. It enables fast dissemination so conference delegates can publish their papers in a dedicated online issue on ScienceDirect, which is made freely available worldwide. ISICO's main focus was preparing for the Asia Pacific Free Trade Area opening in 2020. The conference featured Prof. Jae Kyu Lee, PhD, Prof. Dipl.-Ing. Dr. Technology A Min Tjoa, and Prof. Shuo-Yan Chou. ISICO received 230 submissions from 23 countries, mostly from Indonesia, Malaysia, South Korea, Japan, Taiwan and Thailand. 2017 ISICO again collaborated with PCS. The conference was held in Bali on 6–8 November 2017. The theme was "Innovation of Information Systems – visions, opportunities and challenges". Keynote speakers were Matti Rossi, President of the Association for Information Systems 2017/2018; Caroline Chan, President of the Australian Council of Professors and Heads of Information Systems; and Ahmed Imran, PG IT Program Coordinator, School of Engineering and Information Technology, University of New South Wales, Australia. Topics related to enterprise systems, information systems management, data acquisition and information dissemination, data engineering and business intelligence, and IT infrastructure and security.
References External links ISICO 2013 ISICO 2015 ISICO 2017 ISICO 2019 Open Access Journals of Information Systems Information systems conferences Computer science conferences Academic conferences
2264802
https://en.wikipedia.org/wiki/PowerAnimator
PowerAnimator
PowerAnimator and Animator, also referred to simply as "Alias", the precursor to what is now Maya and StudioTools, was a highly integrated industrial 3D modeling, animation, and visual effects suite. It had a relatively long track record, starting with Technological Threat in 1988 and ending with Pokémon: The Movie 2000 in 1999. PowerAnimator ran natively on MIPS-based SGI IRIX and IBM AIX systems. History PowerAnimator was launched in 1988. In 1997, John Gibson, Rob Krieger, Milan Novacek, Glen Ozymok, and Dave Springer were presented with the Scientific and Engineering Award for their contributions to the geometric modeling component of the PowerAnimator system. The citation was: "The Alias PowerAnimator system is widely regarded in the computer animation field as one of the best commercially available software packages for digital geometric modeling. Used by many motion picture visual effects houses, it has been a benchmark for comparison of modeling tools and has had a major influence on visual effects and animation." Television and film PowerAnimator was used to create the water creature in the 1989 film The Abyss, as well as the T-1000 character in Terminator 2: Judgment Day, at a cost of $460,000 per minute. It was also used heavily for the many visual effects of the 1996 film Independence Day. PowerAnimator also served as the software used to produce South Park episodes digitally before production was moved to Maya. Game development PowerAnimator was also used in game development, in particular as part of the Nintendo 64's SGI-based developer kit. It saw some use for modeling, texturing, animation and realtime effects for other titles and platforms as well. Notable titles include Crash Bandicoot, Casper, Wing Commander 3, Wing Commander 4, Quake, and Oddworld: Abe's Oddysee. References Notes External links Animation Software Companies and Individuals In CGI 1988 software Animation software 3D graphics software IRIX software
26847721
https://en.wikipedia.org/wiki/Hal%20Prewitt
Hal Prewitt
Harold D. Prewitt, Jr (Hal) (born October 1, 1954, in Hutchinson, Kansas) is an artist, photographer, race car driver, businessperson, inventor of personal computer products and an early pioneer in the personal computer revolution. He resides in South Beach (Miami Beach, Florida). Prewitt competes in professional and occasionally amateur motorsport road races and has driven in nearly 200 endurance or sprint races worldwide. He was the No. 1 American and finished 4th of 819 international drivers from 58 countries in the 2015 International Endurance Series Championship. He has been a competitor in the Grand-Am Rolex Sports Car Series and at international FIA races including the 24 Hours of Daytona, 24 Hours Nürburgring, Dubai 24 Hour, 24 Hours of Barcelona and Silverstone Britcar 24-Hour. In the 1970s and 1980s he was one of the early creators of personal computer products, developing popular software and hardware while helping build a new industry. He provided consulting services to IBM and is credited with inventing hard disk drives and the world's first local area network (LAN) for their first portable computer, the IBM 5100, and their first desktop computer, the IBM 5120. He created the technology and trademarked Hotplug, the computer industry's standard method of replacing computer system components, such as disk drives, disk controllers or host adapters, and power supplies, without the need to stop or shut down the system. Details of Prewitt's first patent, related to the development of the technology, were disclosed in 1987 in the USA and Europe; however, the filings were not completed. The trademark was issued by the USPTO and other countries in 1992 under "Computer & Software Products & Electrical & Scientific Products Trademarks". Skilled in computer programming and engineering, Prewitt founded and managed a number of technology firms. The largest and best known was Core International, a developer of disk array, computer data storage and backup products. Core created and, in 1990, marketed the world's first hot-pluggable (swappable) disk drives, disk controllers or host adapters, and power supplies. Prewitt was chairman and chief executive officer until 1993, when the company was sold to Sony. Prewitt is the Managing Member of Prewitt Enterprises, a Florida-based agricultural and investment business. Early life Youth Prewitt grew up in the Daytona Beach, Florida area and lived there from 1963 to 1976. There he had his first exposure to auto racing, volunteering at Daytona International Speedway. He built his first computer in 1967 at 13. It performed simple math, a subject he disliked in school, and operated his phonograph. He joined the Civil Air Patrol as a cadet and earned his way to the second highest rank (Cadet Lt. Colonel), learning leadership, search and rescue, about the military, and the value of providing community service. Prewitt learned how to fly a plane, soloed at 16 and shortly thereafter earned his Private Pilot's License at the youngest age allowed. As a teenager, Prewitt learned sailing, fishing, boating and scuba diving and developed skills in mechanics, engineering, electronics, navigation and construction. After school and in summers, he worked at jobs building homes and in a restaurant washing dishes and cooking. In high school, he rented out the family houseboat. He was interested in painting and photography – he produced and sold a number of images.
After graduating from high school, Prewitt continued building boats, managing his business, and began to focus on computer programming. Between 1972 and 1975 he learned various programming languages using an IBM 1130. In the early 1970s, Prewitt dreamed of designing, building and selling a generation of small business computers with a price tag much less than the going rate of $50,000. Convinced that there was a market, Prewitt unsuccessfully sought venture capital to get his plans off the ground. In 1975, he built an Altair 8800. That same year, at the age of 21, Prewitt obtained his first business applications customer when he sold, designed and wrote computer programs for the IBM 5100 and System/32 as part of the business he had started at age 16. He joined the Sports Car Club of America (SCCA) and participated in autocross events. Family Prewitt was born in Hutchinson, Kansas. Prewitt's father joined the US Air Force underage, at 13 years old, using his older brother's ID, and then switched to the US Navy at 17, serving in World War II and Korea. His father left the military after 18 years (1945–62), did odd jobs and then worked as a mailman for the US Postal Service until his death. Prewitt's father was honored as a Kentucky Colonel by the Governor of Kentucky. His mother was a registered nurse and lives in the Daytona Beach, Florida area. Prewitt married his first wife, Florine Andrews, in August 1980 and divorced after 23 years in early 2004. They have two sons, Calvin and Tim. He married Corinne Brody (Loria) in October 2007. She has a son, Alex. Education Prewitt attended All Souls Catholic School (1960–63) in Sanford, Florida until 3rd grade while his father served in the military. He attended Port Orange Elementary School (1963–65) from 3rd to 5th grade after his family moved to Allandale, Florida. In 1966, his family moved to Ormond Beach, Florida, where he attended Osceola Elementary (1965–66), Seabreeze Jr. High (1966–69) and then graduated from Seabreeze High School (1969–72). At 16, he attended Burnside-Ott Aviation in Miami, Florida, where he soloed. After high school, Prewitt attended Daytona Beach Community College (1972–76), studying business and computer science, but left without earning a degree. He transferred to Florida Atlantic University (1976–78) in Boca Raton, Florida, where he continued his studies in business and computer science. Prewitt dropped out of college to focus on his business. Careers Prewitt's working career began at the age of 13. He started in construction, helping build homes for an Ormond Beach, Florida builder. He also worked busing tables and washing dishes before a promotion to cook at a couple of local restaurants. At 16, Prewitt started his first business. While attending community college, he built boats and worked as a painter, an accountant and on the yard crew at the Howard Boat Works marina. Prewitt's final jobs working for someone else were as a lab assistant helping students at his college and as a computer programmer for a company providing business applications on mainframes and minicomputers. Businesses Ranger Systems Prewitt started his first business when he was 16 and a junior in high school. "Ranger Systems" had four divisions: Ranger Manufacturing, Business World, Rent a Houseboat and Ranger Automotive Engineering. He used the manufacturing part of the business to build electronics, computers and fiberglass boats, from 13' fishing runabouts to a 40' houseboat. Business World did marketing, photography, printing and advertising.
Prewitt wrote brochures, shot pictures, placed ads and ran a printing press. The biggest and most profitable division was Rent a Houseboat. Prewitt took the family boat and turned it into a rental business. He sometimes used a small boat to travel to school and quickly reach the houseboat. Prewitt did everything from writing contracts to maintenance. He frequently missed classes to unclog a toilet or revive the boat engine. The automotive division focused on repairs. Prewitt operated Ranger Systems from 1970 to 1975, until his focus switched to computer programming and the personal computer industry. International Computer In 1975, Prewitt created International Computer to continue building, selling, installing and programming computers. This was the period when he started developing the storage devices that ultimately became his most successful products. He had customers located from mid- to south Florida in the manufacturing, hotel, service, legal, medical, construction and agricultural industries. Prewitt flew to their offices, initially in rented aircraft and then in his own. Prewitt started Southeast Computer Consultants with a partner in late 1977. Core International (Core) In late 1979, Prewitt, as sole owner, created Core International from the assets of International Computer and Southeast Computer Consultants. Initially Core was created as a for-profit association of owners and operators of small IBM computers. It sold mail-order computer supplies and developed software for users of the IBM 5100, 5110 and IBM 5120 systems. Prewitt built his first computer storage product for the IBM 5100 series because the machines did not have hard disk drives. Prewitt contracted with Control Data Corporation to manufacture the key component. Even though it was a niche product, it became popular almost overnight when IBM discontinued its 5100 series and their customers turned to Core for parts and supplies. Within two years, Prewitt had sold $2.5 million worth of disk drives. Core's second hardware and major software product also catered to the IBM orphans: a device and software that allowed IBM 5110/20 users to transfer data and programs from the old, bulky computers into new personal computers, which in 1981 were revolutionizing the computer industry. Core's software was called PC51 and allowed any DOS personal computer to use, unmodified, any BASIC program written for the IBM 5110/20 series computers. IBM consequently approached Core to become an IBM dealer. Customers could buy an integrated IBM PC, which completely replaced IBM's 5100 computers, or optionally attach their 5100 to Core's local area network, connecting all machines. These were revolutionary products, and Core was the only source. The company expanded internationally to include offices in Europe and Asia. In 1986, Inc. Magazine selected Core as 21st in its annual list (the Inc. 500) of the 500 fastest-growing private companies in the U.S. By 1990, Core was well known as an industry-leading developer of disk array, computer data storage and backup products. COREtest became the industry standard and most often quoted benchmark used to test, evaluate and compare the performance of hard disk drives. Many of Prewitt's products were the first of their kind, had no direct competition and were widely regarded for their superior performance and reliability. He was chairman and chief executive officer of Core until 1993, when the company was sold to Sony.
Prewitt Enterprises Prewitt is the Managing Member of Prewitt Enterprises, a Florida-based agricultural and investment business with offices in Boca Raton and Miami, Florida, and in Park City, Utah. The agricultural part of the business grows oranges and at its peak produced more than 1.5 million half-gallon cartons of orange juice per year, with much of it used in Tropicana's Pure Premium. The investment division is active in private and public businesses in both the U.S. and internationally. Prewitt Management Prewitt is the Managing Member of Prewitt Management, a Florida-based fine-art photography business with offices in Boca Raton, Florida; Miami, Florida; and Park City, Utah. Prewitt's photography is often of nature and panoramic landscapes. Racing – Notable wins and finishes Prewitt first became active in racing while growing up in Daytona Beach, Florida in the 1970s, driving in SCCA events. He became serious in 2004 after attending the Skip Barber Racing School. Today, Prewitt is a professional-level driver in selected international and North American endurance road race events, supporting his sponsors and EveryLapCounts.com, a global fundraising effort for children's charitable causes. He prefers endurance to sprint racing. He finished as the No. 1 American and 4th out of 819 international drivers from 58 countries in the 2015 International Endurance Series Championship. As of June 2015, Prewitt had qualified for a career total of 194 races (140 sprint and 54 endurance) and driven in 29 endurance (24 hours or longer) events at 33 tracks. He has recorded 73 first-place, 30 second-place and 10 third-place finishes in 180 starts, a 41% win rate and a 63% podium rate. He has a low did-not-finish (DNF) rate of 3.61%. For the last 5 years he has mostly driven for Cor Euser Racing in a BMW 120d, a BMW M3 and a Lotus Evora. In 2006 and 2007, Prewitt took numerous first-place and class wins while racing in Historic Sportscar Racing (HSR), the Rolex Endurance Series and the Historic GT Series. He won the 2006 National Auto Sport Association (NASA) National Championship at Mid-Ohio Sports Car Course while driving the Porsche 911 GT3 RS that finished 2nd in class at the 2003 24 Hours of Le Mans. From 2004 to 2006, he captured numerous lap records in SCCA, PBOC Motorsports Club and National Auto Sport Association (NASA) classes and was overall winner of the PBOC 2005 and 2006 Race Series seasons. Fishing Prewitt is a sport fisherman. Over the years, he has caught and released more than one thousand billfish, many of them tagged for scientific research. Most of these were caught "stand up", without a fishing chair, and on light tackle. Prewitt was selected as Atlantic Ocean Angler of the Year 1992, recognized and awarded by the International Game Fish Association (IGFA) as the angler who tagged and released the most sailfish in 1990, 1991 and 1992 and the most white marlin in 1992. In 1989, Power and Motoryacht Magazine named him one of America's Top Ten Anglers of 1988. In 1988, he won the Bahamas Billfish Championship (BBC). This annual award recognizes the overall champion of six tournaments held in the Bahamas on Bimini, Cat Cay, Walker's Cay, the Berry Islands and the Abacos. Politics and public service Beginning in the mid-1990s, Prewitt served as a Commissioner on the Architectural and Code Enforcement Boards prior to his 2001 unopposed election to the Town Commission of Manalapan, Florida, where he held office until the town was reapportioned in 2002.
Prewitt served on the Florida Atlantic University Executive Advisory Board and the Palm Beach Countywide Beaches & Shores Council. Personal Prewitt is dyslexic and thus has difficulty reading and spelling. In March 2009, Prewitt bought a house in Park City, Utah, that Massachusetts Governor Mitt Romney had owned since 1999. Prewitt's wife, Corinne, is a graduate of the Wharton School, University of Pennsylvania, and was an Assistant County Manager for the $7 billion Miami-Dade County government, overseeing strategic planning, human resources, and technology initiatives. References External links Career summary at Driver Database Landscape photographers Nature photographers Travel photographers 20th-century American businesspeople 21st-century American businesspeople 20th-century American artists 21st-century American artists 1954 births Living people Farmers from Florida Artists from Miami People from Park City, Utah Photographers from Florida Photographers from Utah American conservationists Sportspeople from Hutchinson, Kansas 24 Hours of Daytona drivers Rolex Sports Car Series drivers Trans-Am Series drivers Sportspeople from Daytona Beach, Florida Computer hardware engineers Racing drivers from Kansas Racing drivers from Florida People from Ormond Beach, Florida Businesspeople from Florida American inventors American software engineers American computer businesspeople American technology chief executives American technology company founders Businesspeople in information technology American investors American computer programmers Seabreeze High School alumni People with dyslexia
21398025
https://en.wikipedia.org/wiki/Cloud%20testing
Cloud testing
Cloud testing is a form of software testing in which web applications use cloud computing environments (a "cloud") to simulate real-world user traffic. Overview Cloud testing uses cloud infrastructure for software testing. Organizations pursuing testing in general, and load testing, performance testing and production service monitoring in particular, are challenged by several problems: limited test budgets, deadlines, high cost per test, a large number of test cases, and little or no reuse of tests; the geographical distribution of users adds to these challenges. Moreover, ensuring high-quality service delivery and avoiding outages requires testing in one's datacenter, outside the datacenter, or both. Cloud testing addresses these problems. Effectively unlimited storage, quick availability of infrastructure with scalability, flexibility and the availability of a distributed testing environment reduce the execution time of testing large applications and lead to cost-effective solutions. Need for cloud testing Traditional approaches to testing software incur high costs to simulate user activity from different geographic locations. Testing firewalls and load balancers involves expenditure on hardware, software and their maintenance. In the case of applications where the rate of increase in the number of users is unpredictable, or where the deployment environment varies depending on client requirements, cloud testing is more effective. Types of testing Stress Stress testing is used to determine the ability of an application to maintain a certain level of effectiveness beyond its breaking point. It is essential for any application to work even under excessive stress and maintain stability. Stress testing assures this by creating peak loads using simulators. The cost of creating such scenarios on premises can be enormous; instead of investing capital in building on-premises testing environments, cloud testing offers an affordable and scalable alternative. Load Load testing of an application involves the creation of heavy user traffic and the measurement of the application's response. There is also a need to tune the performance of an application to meet certain standards; a number of tools are available for that purpose. Performance Finding thresholds, bottlenecks and limitations is part of performance testing. For this, testing performance under a particular workload is necessary. By using cloud testing, it is easy to create such an environment and vary the nature of the traffic on demand. This effectively reduces cost and time by simulating thousands of geographically targeted users. Functional Functional testing of both internet and non-internet applications can be performed using cloud testing. The process of verification against specifications or system requirements is carried out in the cloud instead of with on-site software testing. Compatibility Using a cloud environment, instances of different operating systems can be created on demand, making compatibility testing effortless. Browser performance Verifying an application's support for various browser types, and its performance in each, can be accomplished with ease. Various tools enable automated website testing from the cloud. Latency Cloud testing is used to measure the latency between an action and the corresponding response for an application after it is deployed on the cloud. Steps Companies simulate real-world web users by using cloud testing services that are provided by cloud service vendors such as Advaltis, Compuware, HP, Keynote Systems, Neotys, RadView and SOASTA.
Once user scenarios are developed and the test is designed, these service providers leverage cloud servers (provided by cloud platform vendors such as Amazon.com, Google, Rackspace, Microsoft, etc.) to generate web traffic that originates from around the world. Once the test is complete, the cloud service providers deliver results and analytics back to corporate IT professionals through real-time dashboards for a complete analysis of how their applications and the internet will perform during peak volumes. Keys to successful testing These include: understanding a platform provider's elasticity model and dynamic configuration method; staying abreast of the provider's evolving monitoring services and Service Level Agreements (SLAs); potentially engaging the service provider as an ongoing operations partner if producing commercial off-the-shelf (COTS) software; and being willing to be used as a case study by the cloud service provider, which may lead to cost reductions. Applications Cloud testing is often seen as only performance or load testing; however, as discussed earlier, it covers many other types of testing. Cloud computing itself is often referred to as the marriage of software as a service (SaaS) and utility computing. In regard to test execution, the software offered as a service may be a transaction generator and the cloud provider's infrastructure software, or may just be the latter. Distributed systems and parallel systems mainly use this approach for testing because of their inherently complex nature. D-Cloud is an example of such a software testing environment. For testing non-internet applications, virtual instances of the testing environment can be quickly set up to do automated testing of the application. The cloud testing service providers supply the testing environment required by the application under test. The actual testing of applications is performed by the testing team of the organization that owns the application or by third-party testing vendors. Tools Leading cloud computing service providers include, among others, Amazon, Advaltis, 3-terra, Microsoft, Skytap, HP and SOASTA. Benefits The ability and cost to simulate web traffic for software testing purposes has been an inhibitor to overall web reliability. The low cost and accessibility of the cloud's extremely large computing resources provide the ability to replicate real-world usage of these systems by geographically distributed users, executing a wide variety of user scenarios, at scales previously unattainable in traditional testing environments. Minimal start-up time along with quality assurance can be achieved with cloud testing. Key benefits include a reduction in capital expenditure and high scalability. Issues The initial setup cost for migrating testing to the cloud is very high, as it involves modifying some of the test cases to suit the cloud environment; this makes the decision to migrate crucial. Therefore, cloud testing is not necessarily the best solution to all testing problems. Legacy systems and services need to be modified in order to be tested in the cloud. Using robust interfaces with these legacy systems may solve this problem. Also, like any other cloud service, cloud testing is vulnerable to security issues. Test results may not be accurate due to the varying performance of the service providers' networks and the internet. In many cases, service virtualization can be applied to simulate the specific performance and behaviors required for accurate and thorough testing.
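As a rough illustration of the load generation and response-time measurement described in the Types of testing and Steps sections above, the following Python sketch fires concurrent HTTP requests at a target application and reports latency statistics. It is a minimal, vendor-neutral example: the target URL, worker count and request count are hypothetical placeholders, and a real cloud test would distribute such workers across cloud servers in multiple regions.

```python
# Minimal load-generation sketch: issue concurrent HTTP requests against a
# target URL and report latency statistics. The URL, worker count and request
# count are illustrative placeholders, not values from any real test service.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, quantiles

TARGET_URL = "http://example.com/"   # hypothetical application under test
WORKERS = 50                         # simulated concurrent users
REQUESTS = 500                       # total requests to send

def timed_request(_):
    """Issue one GET request; return latency in seconds, or None on error."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start
    except Exception:
        return None

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))
    latencies = [r for r in results if r is not None]
    errors = len(results) - len(latencies)
    if not latencies:
        raise SystemExit("all requests failed")
    deciles = quantiles(latencies, n=10)  # deciles[4] ~ median, deciles[8] ~ 90th percentile
    print(f"requests: {len(results)}  errors: {errors}")
    print(f"mean: {mean(latencies):.3f}s  p50: {deciles[4]:.3f}s  p90: {deciles[8]:.3f}s")
```

In a cloud testing service, the same measurement loop would run on many geographically distributed cloud instances at once, with the per-region results aggregated into the real-time dashboards mentioned above.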
References Cloud computing Software testing
18630249
https://en.wikipedia.org/wiki/Jussi%20Parikka
Jussi Parikka
Jussi Parikka (born 1976) is a Finnish new media theorist and Professor in Technological Culture & Aesthetics at Winchester School of Art (University of Southampton). He is also Visiting Professor at FAMU at the Academy of Performing Arts in Prague as well as Docent of digital culture theory at the University of Turku in Finland. Until May 2011 Parikka was the Director of the Cultures of the Digital Economy (CoDE) research institute at Anglia Ruskin University and the founding Co-Director of the Anglia Research Centre for Digital Culture. With Ryan Bishop, he also founded the Archaeologies of Media and Technology research unit. Biography Parikka was awarded a Ph.D. in Cultural History from the University of Turku in 2007. He is a member of the editorial board of the Fibreculture journal and a member of the Leonardo Journal Digital Reviews Panel. In 1995, Parikka deferred his national service and spent 18 months as an assistant fisheries inspector in Oulu. Work Parikka has published extensively on digital art, digital culture and cultural theory in Finnish and English in journals such as Ctheory, Theory, Culture & Society, Fibreculture, Media History, Postmodern Culture and Game Studies. His texts have been translated into Hungarian, Czech, French, Turkish, Polish, Portuguese, Chinese, Italian, Spanish, and Indonesian. He has published five single-authored books: one in Finnish on media theory in the age of cybernetics (Koneoppi. Ihmisen, teknologian ja median kytkennät, 2004) and, in English, Digital Contagions: A Media Archaeology of Computer Viruses (2007), the award-winning Insect Media (2010), What is Media Archaeology? (2012) and A Geology of Media (2015). Parikka also co-authored the short book Remain (2019). Digital Contagions is the first book to offer a comprehensive and critical analysis of the culture and history of the computer virus phenomenon. The book maps the anomalies of network culture from the angles of security concerns, the biopolitics of digital systems, and the aspirations for artificial life in software. The genealogy of network culture is approached from the standpoint of accidents that are endemic to the digital media ecology. Viruses, worms, and other software objects are not, then, seen merely from the perspective of anti-virus research or practical security concerns, but as cultural and historical expressions that traverse a non-linear field from fiction to technical media, from net art to politics of software. His work on Insect Media combines themes from media archaeology, posthumanism and animal studies to put forth a new history of how insects and technology frame critical, scientific and technological thought. It won the Society for Cinema and Media Studies 2012 Anne Friedberg Award for Innovative Scholarship. He has also discussed this work in dialogue with historian Etienne Benson and media theorist Bernard Dionysius Geoghegan on the Cultural Technologies podcast. Most recently, Parikka has written on media archaeology as a theory and methodology in various publications, including Media Archaeology (co-edited with Erkki Huhtamo) and What is Media Archaeology? With Joasia Krysa he edited a volume on the Finnish media art figure Erkki Kurenniemi in 2015. With Garnet Hertz, Parikka co-authored a paper entitled "Zombie Media: Circuit Bending Media Archaeology into an Art Method," which was nominated for the 2011 Transmediale Vilém Flusser media theory award.
Parikka leads the Operational Images and Visual Culture research project (2019-2023) at the Academy of Performing Arts, Prague. Research activity Dr. Parikka's research activities include continental philosophy, media theory, the politics and history of new media, media archaeology, new materialist cultural analysis and various other topics relating to anomalies, media and the body. Areas of expertise Cultural theory Cyborgs and other approaches to the body in cyberculture Digital culture, new media and the Internet Media archaeology Technoculture See also Media archeology Software art Internet art Bibliography (2019) Remain (co-authored with Rebecca Schneider and Ioana B. Jucan). University of Minnesota Press and Meson Press. (2015) A Geology of Media, University of Minnesota Press: Minneapolis. (2014) The Anthrobscene, University of Minnesota Press: Minneapolis. (2012) What is Media Archaeology?, Polity: Cambridge. (2010) Insect Media: An Archaeology of Animals and Technology, University of Minnesota Press: Minneapolis. Posthumanities-series. (2007) Digital Contagions. A Media Archaeology of Computer Viruses, Peter Lang: New York. Digital Formations-series. (2004) Koneoppi. Ihmisen, teknologian ja median kytkennät. Kulttuurituotannon ja maisemantutkimuksen laitoksen julkaisuja, University of Turku: Pori. (Machinology: The Interfaces of Humans, Technology and Media). Edited books and special issues (2021) Photography Off the Scale (with Tomáš Dvořák). Edinburgh: Edinburgh University Press. (2020) Archaeologies of Fashion Film (with Caroline Evans). A special issue of Journal of Visual Culture, December 2020. (2016) Mediated Geologies. A Special section of Cultural Politics journal (Duke University Press). (2016) Across and Beyond: Postdigital Practices, Concepts, and Institutions. (With Ryan Bishop, Kristoffer Gansing, and Elvia Wilk). Berlin: Sternberg. (2015) Writing and Unwriting (Media) Art History: Erkki Kurenniemi in 2048 (with Joasia Krysa). Cambridge, MA: The MIT Press (2013) Cultural Techniques-special in Theory, Culture & Society ( with Ilinca Iurascu and Geoffrey Winthrop-Young.) Theory, Culture & Society 30.6. (2011) Medianatures: The Materiality of Information Technology and Electronic Waste. Open Humanities Press, Living Books About Life -series. . (2011) Media Archaeology. Approaches, Applications, Implications. With Erkki Huhtamo. Berkeley, CA: University of California Press. (2011) Unnatural Ecologies. Media ecology-special issue for Fibreculture 17, co-edited with Michael Goddard. (2009) The Spam Book: On Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture. With Tony D Sampson. Cresskill: Hampton Press. (2008) In Medias Res. Hakuja mediafilosofiaan. (With Olli-Jukka Jokisaari & Pasi Väliaho). Eetos-julkaisusarja, Turku. [In Medias Res: On Continental Media Philosophy.] (2003) Aivot ja elokuva-erikoisnumero Lähikuva 2/2003 (With Pasi Väliaho). [Brain and Cinema-special issue.] (2003) Kohtaamisia ajassa - kulttuurihistoria ja tulkinnan teoria. (With Sakari Ollitervo and Timo Väntsi). Turun yliopisto, k & h-kustannus, Kulttuurihistoria-Cultural History 3. [Encounters in Time: Cultural History and Theory of Interpretation.] (2003) Mediataide-erikoisnumero Widerscreen 3/2003. (With Katve-Kaisa Kontturi).[Media art-special issue.] Articles (2020) “Ground Truth to Fake Geographies: Machine Vision and Learning in Visual Practices” (With Abelardo Gil-Fournier). AI & Society November 2020. 
(2020) "A Recursive Web of Models: Studio Tomas Sáraceno's Working Objects as Environmental Humanities" Configurations 28:3, 309-332. (2020) “Folds of Fashion: Unravelled and the Planetary Surface” in Apparition: The (Im)materiality of the Modern Surface”, edited by Yeseung Lee. London: Bloomsbury, 19-35. (2019) “The Lab is the Place is the Space” in Emerging Affinities. Possible Futures of Performative Arts, eds. Mateusz Borowski, Mateusz Chaberski and Malgorzata Sugiera. Transcript Verlag. (2019) “Inventing Pasts and Futures. Speculative Design and Media Archaeology” in New Media Archaeologies, edited by Ben Roberts and Mark Goodall. Amsterdam: Amsterdam University Press, 205-232. (2018) “Handmade Films and Artist-run Labs: The Chemical Sites of Film’s Counterculture.” (With Rossella Catanese) NECSUS – The European Journal of Media Studies, Autumn 2019. (2018) “Middle-East and Other Futurisms: Imaginary Temporalities in Contemporary Art and Visual Culture.” Culture, Theory and Critique vol. 59: 1, 40-58. (2017) “The Underpinning Time: From Digital Memory to Network Microtemporality” in Digital Memory Studies. Media Pasts in Transition, ed. Andrew Hoskins. New York and London: Routledge,156-172 (2017) "The Sensed Smog: Smart Ubiquitous Cities and the Sensorial Body”" Fibreculture journal issue 29. (2016) “Planetary Goodbyes: Post-History and Future Memories of an Ecological Past” in Memory in Motion. Archives, Technology and the Social, Edited by Ina Blom, Trond Lundemo, and Eivind Rossaak. Amsterdam: Amsterdam University Press, 2016, 129-152. (2016) “Deep Times of Planetary Trouble” Cultural Politics Vol 12 (3), November 2016,. 279-292. (2016) “The Signal Haunted Cold War: The Persistence of the SIGINT Ontology” in Cold War Legacies: Systems, Theory, Aesthetics, eds. John Beck and Ryan Bishop. Edinburgh: University of Edinburgh Press, 167-187. (2016) “So-Called Nature: Friedrich Kittler and Ecological Media Materialism” in Sustainable Media, eds. Nicole Starosielski and Janet Walker. New York: Routledge, 196-211. (2015) “Postscript: Of Disappearances and the Ontology of Media Studies” in Media After Kittler, eds. Eleni Ikoniadou and Scott Wilson, London: Rowman & Littlefield International. (2015) “Earth Forces: Contemporary Media Land Arts and New Materialist Aesthetics” Cultural Studies Review, vol. 21 (2). (2015) "Mutating Media Ecologies" continent. Volume 4, Issue 2, 24–32. (2015) "Media Archaeology Out Of Nature. A Conversation with Paul Feigelfeld" E-Flux, 62. (2014) “Cultural Techniques of Cognitive Capitalism: Metaprogramming and the Labor of Code.” Cultural Studies Review 20 (1), March 2014, 29-51. (2014) “McLuhan at Taksim Square”. 50th Anniversary Special Issue of Marshall McLuhan’s Understanding the Media. Journal of Visual Culture, April 2013, 13 (1), 91-93. (2013) “Critically Engineered Wireless Politics”. Culture Machine. (2012) “New Materialism as Media Theory: Medianatures and Dirty Matter” Communication and Critical/Cultural Studies Volume 9, Issue 1, Feb 10, 2012, 95-100. (2011) “Operative Media Archaeology: Wolfgang Ernst’s Materialist Media Diagrammatics.” Theory, Culture & Society 28(5), 52-74. (2011) “Mapping Noise: Techniques and Tactics of Irregularities, Interception, and Disturbance,” in Media Archaeology, eds. Erkki Huhtamo & Jussi Parikka. Berkeley, CA: University of California Press. (2011) “Towards and Archaeology of Media Archaeology,” (with Erkki Huhtamo) in Media Archaeology, eds. Huhtamo and Parikka. Berkeley, CA: University of California Press. 
(2011) "Media Ecologies and Imaginary Media: Transversal Expansions, Contractions and Foldings" Fibreculture 17. (2010) "Sublimated Attractions - The Introduction of Early Computers in Finland in the late 1950s as an Audiovisual Experience". With Jaakko Suominen. Media History 16:3, August 2010 (2010) "Archaeologies of Media Art - Jussi Parikka in conversation with Garnet Hertz" Ctheory-journal 4/1/2010. (2010) "Ethologies of Software Art: What Can a Digital Body of Code Do?" In: Deleuze and Contemporary Art, edited by Simon O'Sullivan and Stephen Zepke. Edinburgh: Edinburgh University Press (2010) "Archaeology of Imaginary Media: Insects and Affects." In: Verbindungen/Jonctions 10, Brussels. (2009) "Archives of Software: Computer Viruses and the Aesthesis of Media Accidents." In: The Spam Book: On Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture. Eds. Jussi Parikka & Tony D Sampson. Cresskill: Hampton Press (2009) "On Anomalous Objects of Network Culture. An Introduction." With Tony D Sampson. In: The Spam Book: On Viruses, Porn, and Other Anomalies from the Dark Side of Digital Culture. Eds. Parikka & Sampson. Cresskill: Hampton Press (2008) "Insect Technics." In: (Un)Likely Alliance - Thinking the Environment(s) with Deleuze/Guattari, edited by Bernd Herzogenrath. Newcastle: Cambridge Scholars Publishing, 339-362. (2008) "Politics of Swarms: Translations Between Entomology and Biopolitics." Parallax vol. 14, issue 3. (2008) "Copy." In: Software Studies. A Lexicon. Edited by Matthew Fuller. Cambridge, MA: The MIT Press. (2007) "Contagion and Repetition - On the Viral Logic of Contemporary Culture." Ephemera - Theory & Politics in Organization vol.7, no.2, (May 2007). (2007) "Fictitious Viruses - Computer Virus in the Science-Fiction Literature of the 1970s." In: SciFi in the Minds Eye: Reading Science Through Science Fiction. Edited by Margret Grebowicz. Open Court Publishing. (2007) "Insects, Sex and Biodigitality in Lynn Hershman Leeson's Teknolust." Postmodern Culture volume 17, 2/2007. (2007) "Control and Accident: Images of Thought in the Age of Cybernetics." NMEDIAC - The Journal of New Media and Culture vol. 4 no. 1. (2006) "Victorian Snakes? Towards A Cultural History of Mobile Games and the Experience of Movement." Game Studies 1/2006. (With Jaakko Suominen.) (2006) "Kohti materiaalisen ja uuden kulttuurianalyysia, eli representaation hyödystä ja haitasta elämälle." (With Milla Tiainen). Kulttuurintutkimus 2/2006, 3-20. (2005) "The Universal Viral Machine - Bits, Parasites and the Media Ecology of Network Culture." CTheory - An International Journal of Theory, Technology and Culture, 15.12.2005. Translation in Portuguese ("A Máquina Viral Universal", Fileguest 2006 conference catalogue, Rio de Janeiro). The English version also republished in the Spanish Aminima-magazine. (2005) "Digital Monsters, Binary Aliens - Computer Viruses, Capitalism and the Flow of Information." Fibreculture, issue 4. A new version republished in 2010 by Thirdsound Press. (2005) "Viral Noise and the (Dis)Order of the Digital Culture". M/C Journal of Media / Culture. Vol. 7, issue 6 (Jan. 2005). References External links Official site and blog of Dr. Jussi Parikka Living people Cultural historians 21st-century Finnish philosophers Finnish art historians Postmodern theory Mass media theorists Postmodernists 1976 births Finnish expatriates in England
10164709
https://en.wikipedia.org/wiki/Transims
Transims
TRANSIMS (TRansportation ANalysis SIMulation System) is an integrated set of tools developed to conduct regional transportation system analyses. With the goal of establishing TRANSIMS as an ongoing public resource available to the transportation community, TRANSIMS is made available under the NASA Open Source Agreement Version 1.3. Background TRANSIMS is an integrated set of tools to conduct regional transportation system analyses based on a cellular automata microsimulator. It uses a new paradigm of modeling individual travelers and their multi-modal transportation based on synthetic populations and their activities. Compared with aggregate transportation models, TRANSIMS represents time consistently and continuously and models individual persons and households in detail. Its time-dependent routing and person-based microsimulator also distinguish it from aggregate models. Methodology Overview The goal of the methodology is to load traffic onto the network and iterate towards the Nash equilibrium. Submodules include the population synthesizer, activity generator, route planner and microsimulator. Feedback from these modules becomes input to the next iteration as the equilibration process proceeds. Travelers are modeled as settling for a shorter path that is best for the overall population rather than each insisting on a significantly better individual route. One important constraint is that travelers choose a transportation mode according to travel surveys rather than to optimize their travel needs. Input data In this step, TRANSIMS creates a road network, a transit network, and transit schedules. Usually, street and transit networks are available from metropolitan planning organizations. Networks can be exported from other traffic analysis tools into a fairly simple tabular format to be input into TRANSIMS. Several features are embedded in TRANSIMS to edit networks. It can make use of some common GIS tools and formats (shapefiles) for network editing and visualization. It also understands important geographic coordinate systems, such as the state plane and Universal Transverse Mercator systems. There are challenges with network data. The street network is usually available through the public Census TIGER/Line files, commercial NAVTEQ data, and especially networks prepared and maintained by MPOs. However, many details that are not typically provided by common data sources are needed, such as traffic signals, turn lanes, etc. In addition, the street network must be topologically appropriate; that is, connections between links must be consistent and representative. The transit network must be compatible with the street network layer. Data usually must be compiled from several independent sources. Buses flow with the traffic; therefore, results may conflict with the original bus schedules. Population synthesizer This step mimics the regional population to ensure that demographics closely match the real population and that the spatial distribution of households approximates that of the regional population. Detailed functions of the population synthesizer include the generation of synthetic households from census block group data, development of each household's demographic characteristics (income, members, etc.), placement of each synthetic household on a link in the transportation network (activity locations), and assignment of vehicles to each household (sharing vehicles and rides within a household). Two types of data are applicable in this step.
STF3 data is aggregate data describing relatively small regions called block groups, and PUMS is disaggregate data covering a much larger area and reduced to a 5% sample. One challenge for this step is that the extrapolation of census data may not be accurate. Furthermore, additional land use data is necessary to allocate households appropriately to activity locations. Activity generator This step generates household activities, activity priorities, activity locations, activity times, and mode and travel preferences. It requires additional data input to assign individual activities. The main input data is a detailed, representative activity survey. The general activity assignment process matches synthetic households with corresponding survey households based on the socioeconomic data collected. In addition, small random variations are applied to survey records to avoid exact duplications across the many different synthetic households. Based on the input demographics, a list of travel activities is produced for each household. These activities are designated as "household" or "individual" activities. Associated with each activity is a set of parameters defining the activity's importance, its duration, and a time interval during which the activity must be performed, if it is performed at all (for example, work is mandatory, so a work trip must be made, but a shopping trip is typically not as important and may be skipped on a given day if the scheduling is too difficult). Locations, such as the household address and the workplace and school addresses, are provided for mandatory activities. Locations of other activities (shopping) are not specified; the planner chooses these from a list for the locality. Mode preference is also modeled based on survey records rather than route optimization. There are several challenges for the activity generator. Limited sample size in the survey may create a coarse activity assignment. The generator depends heavily on the availability of a recent, up-to-date activity survey, as well as detailed zoning information requiring manual adjustments. Finally, it may generate some illogical activity patterns for certain regions. Route planner This step reads the individual activities previously generated and then determines the fastest route at that time of the day. The route planner has several features. Households are routed in a coordinated fashion to allow for ride sharing. The algorithm includes time-dependent optimization of the network based on link delays that vary during the day. The router does not choose the transportation mode but finds the best route given the mode. The router starts by using the well-known traffic assignment function BPR+ to estimate link delays based on the number of trips routed through each link. It then determines the optimal route for each trip and creates precise trip plans. A trip plan is a sequence of modes, routes, planned departure and arrival times at the origin and destinations, and mode-changing facilities projected to move individuals to their activity locations. Microsimulator This step executes all travel plans created by the router on a second-by-second basis throughout the network. It uses cellular automata principles to analyze the interaction between individual vehicles. The microsimulator produces the individual locations of all travelers and vehicles at all times. The microsimulator and the router work in an iterative loop to equilibrate the assigned traffic in the network.
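The link-delay estimate that the router feeds into trip planning follows the BPR family of volume-delay functions. The sketch below shows only the standard BPR formula; it is an illustration rather than the BPR+ variant actually implemented in TRANSIMS, and the alpha and beta values are the conventional defaults, assumed here for demonstration.

```python
# Standard BPR (Bureau of Public Roads) volume-delay function, shown as a
# simplified stand-in for the BPR+ link-delay estimate described above.
# alpha=0.15 and beta=4.0 are the conventional BPR defaults, assumed here
# for illustration; TRANSIMS's BPR+ variant and calibrated parameters differ.

def bpr_link_time(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):
    """Return congested link travel time given assigned volume and capacity."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

if __name__ == "__main__":
    # Example: a link with a 2.0-minute free-flow time loaded to 120% of capacity.
    t = bpr_link_time(free_flow_time=2.0, volume=1200, capacity=1000)
    print(f"congested link time: {t:.2f} minutes")  # about 2.62 minutes
```

In the iterative loop described above, the volume on each link would come from the previous microsimulator run, and the resulting congested times would replace the delays used by the router in the next iteration.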
The microsimulator follows those travel plans and determines a new set of link delays that are used to replace the ones previously used by the router. This process iterates until equilibrium is achieved. Feedback Feedback is applied to the equilibration process that iterates between the router and the microsimulator. Through the feedback module, some routes may be found to be infeasible. These activities are then passed back to the activity generator to determine appropriate alternatives. Some trip plans cannot be followed in the microsimulator because of time-dependent road closures and other triggers. In this case, individuals with those plans are passed back to the router for new routing suggestions. Results TRANSIMS can create aggregate results comparable to those of traditional analysis tools. The microsimulation can also produce highly detailed snapshot data, for example, the exact location of every traveler at any given time. Since the amount of data is difficult to comprehend, the results need to be effectively visualized. Visualization tools that are commonly used include the original TRANSIMS visualizer, the Balfour software visualizer, ArcGIS and similar GIS tools, Google Earth and NASA World Wind, Advanced Visualization (NCSA), and NEXTA. Applications There has been much discussion in the transportation profession concerning how widely adopted TRANSIMS will be, producing several schools of thought. Skeptics believe the large data requirements, computer requirements, and training requirements will limit use of TRANSIMS to a handful of the largest MPOs. A second school of thought is that regulatory requirements will quickly force the use of TRANSIMS in many regions. This accelerated adoption of TRANSIMS might exceed the capability of project staff to support the affected regions. A final school of thought is that, in the beginning, TRANSIMS will indeed be used mainly by larger MPOs with particularly sophisticated transportation planning questions. Subsequently, TRANSIMS would evolve into versions more appropriate for MPOs with smaller staffs and different analysis needs. Experience with the earlier software suggests that this last scenario is most likely. It is also the most promising scenario for bringing new technology to the broadest audience in a less painful manner. Dallas case study The Dallas case focused on the development of a microsimulation in TRANSIMS that would be robust enough to execute the travel itinerary of each individual in an urban region. The microsimulation developed was limited to automobile trips, and methods were developed to use NCTCOG's existing zonal production/attraction information as the source of traveler demand on the system. The microsimulation executed approximately 200,000 trips (between 5:00 A.M. and 10:00 A.M.) in and through the study area. It ran in real time on five SUN SPARC workstations ("real time" meaning a five-hour period took five hours). Portland case study In contrast to the "real world" planning question explored in Dallas, the Portland case study explored the effects of different types of data on the results and sensitivity of TRANSIMS. The route planner and microsimulation capability developed for Dallas was expanded to include large vehicles, transit vehicles, and transit passengers. This includes the complicated tasks of incorporating into the database all transit vehicle schedules, the different operating characteristics of rail and buses, and simulating the interaction of transit vehicles and private vehicles.
Two sensitivity tests were under consideration. The first tested the effect of generating synthetic local streets instead of realistically coding every single street in the region. The second test explored the effect of synthesizing traffic signal plans. To test these and other model sensitivities, the Portland staff assembled the actual local street and traffic signal plans to compare with the results of the synthesis. These tests determined the effect of the data synthesis on the sensitivity of the TRANSIMS models. References https://code.google.com/p/transims/ https://web.archive.org/web/20120415123916/http://tmiponline.org/Clearinghouse/Subject-Category/TRANSIMS.aspx Traffic simulation
31578945
https://en.wikipedia.org/wiki/Nude%20Nuns%20with%20Big%20Guns
Nude Nuns with Big Guns
Nude Nuns with Big Guns is a 2010 nunsploitation vigilante action film directed by Joseph Guzman and starring Asun Ortega, David Castro, and Perry D'Marco. The film was the subject of one of the largest copyright lawsuits in California. The two lawsuits are the first time that two different companies, each claiming the intellectual-property rights to the same movie, have sued the same 5,865 alleged BitTorrent downloaders. Plot A young Mexican nun, named Sister Sarah, is neglected and abused by a corrupt clergy that produces and distributes heroin. After a bad drug deal, she is handed over to thugs to be used as an instrument of sex for money. On the verge of death after being heavily drugged, wounded and raped, the nun receives a commandment from God to take revenge. Acquiring heavy weapons (including big guns and vibrators), Sister Sarah sets out to kill those who had abused her and are using the church for their own personal gain. The frightened drug lords in the church hire "Los Muertos", a violent motorcycle gang, to track her down and eliminate her. Los Muertos' base of operations is the local brothel "Titty Flickers", where they try to gather more information on the vigilante nun. After being wounded in a shootout, Sister Sarah hides out in a fleabag motel where she recovers and finally achieves vengeance by killing Los Muertos, degenitalizing Chavo (the brutal leader of Los Muertos), and saving her female lover, who had been raped. But in the final scene, the clergy drug lord, known only as the Monsignor, hires another hit man to track down the vigilante nun, leaving the door wide open for a sequel. Cast BitTorrent lawsuit On March 7, 2011, Camelot Entertainment Group, a film company based in Los Angeles, filed a federal lawsuit, Case No. CV 11-1949 DDP (FMOx), in the District Court for the Central District of California against BitTorrent users who allegedly downloaded the movie between January and . The lawsuit, which targeted 5,865 IP addresses, sought to compel ISPs to identify the defendants from their IP addresses. The company had until May 13, 2011, to "show cause why the Doe defendants should not be severed and/or dismissed from this action based on improper joinder of parties or lack of personal jurisdiction". The Electronic Frontier Foundation acted as amicus counsel on the side of the defendants, who at that stage were known only by their internet IP addresses and rough geographic location. The lawsuit is seen as part of a courtroom-based strategy in which defendants are asked to settle or risk being named in a public lawsuit. If successful, the lawsuit could end up collecting more money than the movie earned at the box office. Incentive Capital of Utah also filed a nearly identical lawsuit against the same IP addresses with the same judge on May 6, 2011. On May 23, 2011, Camelot filed to dismiss their case, though the distribution group stated that they may refile the case in San Francisco. The lawsuit filed by Incentive Capital was dropped on June 10, 2011. Film rights Following the filing of the BitTorrent lawsuit, concerns have been raised as to whether Camelot Distribution Group actually owns the rights to the film. Camelot defaulted on a loan financed by Incentive Capital used to purchase the movie rights. Though Incentive Capital has already foreclosed on the film, Camelot has stated that the foreclosure was an improper "usurpation of its assets". Reception Felix Vasquez Jr.
of Cinema Crazed gave the film a positive review, praising the lead performance of Asun Ortega, and called it "a fun and demented revenge pic that will surely please any respecting grindhouse buff". References External links 2010 films 2010 action thriller films 2010s exploitation films 2010 LGBT-related films American action thriller films American films American exploitation films American LGBT-related films American rape and revenge films Copyright infringement English-language films Lesbian-related films LGBT-related thriller films Nunsploitation films
8752234
https://en.wikipedia.org/wiki/Jitsi
Jitsi
Jitsi is a collection of free and open-source multiplatform voice (VoIP), video conferencing and instant messaging applications for the web platform, Windows, Linux, macOS, iOS and Android. The Jitsi project began with the Jitsi Desktop (previously known as SIP Communicator). With the growth of WebRTC, the project team focus shifted to the Jitsi Videobridge for allowing web-based multi-party video calling. Later the team added Jitsi Meet, a full video conferencing application that includes web, Android, and iOS clients. Jitsi also operates meet.jit.si, a version of Jitsi Meet hosted by Jitsi for free community use. Other projects include: Jigasi, lib-jitsi-meet, Jidesha, and Jitsi. Jitsi has received support from various institutions such as the NLnet Foundation, the University of Strasbourg and the Region of Alsace, the European Commission and it has also had multiple participations in the Google Summer of Code program. History Work on Jitsi (then SIP Communicator) started in 2003 in the context of a student project by Emil Ivov at the University of Strasbourg. It was originally released as an example video phone in the JAIN-SIP stack and later spun off as a standalone project. BlueJimp (2009–2015) In 2009, Emil Ivov founded the BlueJimp company which has employed some of Jitsi's main contributors in order to offer professional support and development services related to the project. In 2011, after successfully adding support for audio/video communication over XMPP's Jingle extensions, the project was renamed to Jitsi since it was no longer "a SIP only Communicator". This name originates from the Bulgarian "жици" (wires). Jitsi introduced the Videobridge in 2013 to support multiparty video calling with its Jitsi clients using a new Selective Forwarding Unit (SFU) architecture. Later that year initial support was added to the Jitsi Videobridge allowing WebRTC calling from the browser. To demonstrate how Jitsi Videobridge could be used as a production service, BlueJimp offered a free use of its hosted system at meet.jit.si. On November 4, 2014, "Jitsi + Ostel" scored 6 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. They lost a point because there has not been a recent independent code audit. On February 1, 2015, Hristo Terezov, Ingo Bauersachs and the rest of the team released version 2.6 from their stand at the Free and Open Source Software Developers' European Meeting 2015 event in Brussels. This release includes security fixes, removes support of the deprecated MSN protocol, along with SSLv3 in XMPP. Among other notable improvements, the OS X version bundles a Java 8 runtime, enables echo cancelling by default, and uses the CoreAudio subsystem. The Linux build addresses font issues with the GTK+ native look and feel, and fixes some long-standing issues about microphone level on call setup when using the PulseAudio sound system. This release also adds the embedded Java database Hyper SQL Database to improve performance for users with huge configuration files, a feature which is disabled by default. A full list of changes is available on the project web site. Ownership by Atlassian (2015–2018) Atlassian acquired BlueJimp on April 5, 2015. After the acquisition, the new Jitsi team under Atlassian ceased meaningful new development work on the Jitsi Desktop project and expanded its efforts on projects related to the Jitsi Videobridge and Jitsi Meet. Regular contributions from the open source community have maintained the Jitsi Desktop project. 
In 2017, Jitsi was added as a widget to Element. 8x8 (2018– ) In October 2018, 8x8 acquired Jitsi from Atlassian. Primary projects The Jitsi open source project on GitHub currently contains 132 repositories. The major projects include: Jitsi Meet, a video conferencing server designed for quick installation on Debian/Ubuntu servers; Jitsi Videobridge, a WebRTC Selective Forwarding Unit engine for powering multiparty conferences; Jigasi, a server-side application that allows regular SIP clients to join Jitsi Meet conferences hosted by Jitsi Videobridge; lib-jitsi-meet, a low-level JavaScript API for providing a customized UI for Jitsi Meet; Jidesha, a Chrome extension for Jitsi Meet; and Jitsi, known as Jitsi Desktop, an audio, video, and chat communicator application that supports protocols such as SIP, XMPP/Jabber, AIM/ICQ, and IRC. Jitsi Meet Jitsi Meet is an open source JavaScript WebRTC application used primarily for video conferencing. In addition to audio and video, screen sharing is available, and new members can be invited via a generated link. The interface is accessible via web browser or with a mobile app. The Jitsi Meet server software can be downloaded and installed on Linux-based computers. Jitsi owner 8x8 maintains a free public-use server for up to 100 participants at meet.jit.si. Key features of Jitsi Meet Encrypted communication (secure communication): As of April 2020, 1–1 calls use the P2P mode, which is end-to-end encrypted via DTLS-SRTP between the two participants. Group calls also use DTLS-SRTP encryption, but rely on the Jitsi Videobridge (JVB) as a video router, where packets are decrypted temporarily. The Jitsi team emphasizes that "they are never stored to any persistent storage and only live in memory while being routed to other participants in the meeting", and that this measure is necessary due to current limitations of the underlying WebRTC technology. No need to install new client software. Jitsi Videobridge Jitsi Videobridge is a video conferencing solution supporting WebRTC that allows multiuser video communication. It is a Selective Forwarding Unit (SFU) and only forwards the selected streams to other participating users in the video conference call; therefore, CPU horsepower is not as critical for performance. Jitsi Desktop Jitsi spawned some sister projects such as the Jitsi Videobridge Selective Forwarding Unit (SFU) and Jitsi Meet, a video and web conferencing application. To prevent misunderstanding due to the increasing popularity of these other Jitsi projects, the Jitsi client application was rebranded as Jitsi Desktop. Originally the project was mostly used as an experimentation tool because of its support for IPv6. Through the years, as the project gathered members, it also added support for protocols other than SIP. Jitsi Desktop is no longer actively maintained by the Jitsi team, but it is still maintained by the community. Features Jitsi supports multiple operating systems, including Windows as well as Unix-like systems such as Linux, Mac OS X and BSD. The mobile apps can be downloaded on the App Store for iOS and on the Google Play Store and F-Droid platform for Android.
It also includes: attended and blind call transfer; auto away; auto re-connect; auto answer and auto forward; call recording; call encryption with SRTP and ZRTP; conference calls; direct media connection establishment with the ICE protocol; desktop streaming; encrypted password storage using a master password; file transfer for XMPP, AIM/ICQ, Windows Live Messenger, YIM; instant messaging encryption with OTR (end-to-end encrypted); IPv6 support for SIP and XMPP; media relaying with the TURN protocol; message waiting indication (RFC 3842); voice and video calls for SIP and XMPP using H.264 and H.263 or VP8 for video encoding; wideband audio with SILK, G.722, Speex and Opus; DTMF support with SIP INFO, RTP (RFC 2833/RFC 4733), and in-band signalling; Zeroconf via mDNS/DNS-SD (à la Apple's Bonjour); DNSSEC; group video support (Jitsi Videobridge); and packet loss concealment with the SILK and Opus codecs. Reception In an April 2020 test of video conferencing services, US product review website Wirecutter recommended Jitsi Meet as one of its two picks (after the more feature-rich Cisco Webex, which it found preferable for large groups and enterprises), stating that Jitsi was "easy to use and reliable" and that "in our testing, the video quality and audio quality were both great—noticeably sharper and crisper than on Zoom or Webex". In a follow-up review in November 2020, Wirecutter lowered its previous rating, stating that Jitsi was, other than Google Hangouts, among "the best, easiest-to-use free services you can find", but also pointed out that "the video and audio quality were both acceptable, though our panelists rated them among the lowest of all the services we tested". Jitsi has been widely adopted in the not-for-profit tech sector as a default alternative to corporate tools. In mid-March 2020, the popular Lyon-based tech NGO Framasoft reported that its Jitsi servers were being overloaded, even by use from state institutions. Jitsi has been trialled as Wikimedia Meet at the Wikimedia Foundation, running on Wikimedia Cloud Services, since spring 2020, with high adoption rates initially but mixed reviews. See also Comparison of instant messaging protocols Comparison of instant messaging clients Comparison of VoIP software Comparison of web conferencing software List of free and open-source software packages Wowza Streaming Engine Session Initiation Protocol References External links Official website 2003 software AIM (software) clients Cross-platform free software Free and open-source Android software Free instant messaging clients Free software programmed in Java (programming language) Free VoIP software Free XMPP clients Instant messaging clients programmed in Java MacOS instant messaging clients Portable software Videoconferencing software programmed in Java Videotelephony Voice over IP clients programmed in Java VoIP software Web conferencing Windows instant messaging clients
14940883
https://en.wikipedia.org/wiki/2260%20Neoptolemus
2260 Neoptolemus
2260 Neoptolemus is a large Jupiter trojan from the Greek camp, approximately 72 kilometers in diameter. It was discovered on 26 November 1975, by astronomers at the Purple Mountain Observatory in Nanking, China. The dark D-type asteroid is one of the 50 largest Jupiter trojans and has a rotation period of 8.18 hours. It was named after Neoptolemus from Greek mythology. Orbit and classification Neoptolemus is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the planet's orbit in a 1:1 resonance (see Trojans in astronomy). It is also a non-family asteroid in the Jovian background population. This asteroid orbits the Sun at a distance of 5.0–5.4 AU once every 11 years and 10 months (4,326 days; semi-major axis of 5.2 AU). Its orbit has an eccentricity of 0.04 and an inclination of 18° with respect to the ecliptic. The body's observation arc begins with its first observation at McDonald Observatory in December 1951, almost 24 years prior to its official discovery observation at Nanking. Physical characteristics In the SDSS-based taxonomy, Neoptolemus is a dark D-type asteroid. It has also been characterized as a D-type by the survey conducted by Pan-STARRS, while in the Tholen classification the body's spectral type is ambiguous, closest to a D-type and somewhat similar to a T-type asteroid, with a spectrum flagged as unusual and moderately noisy (DTU:). Rotation period In August 1995 and in March 2002, two rotational lightcurves of Neoptolemus were obtained from photometric observations by Italian astronomer Stefano Mottola using the Bochum 0.61-metre Telescope at La Silla Observatory, Chile, and the 1.52-meter Loiano Telescope at the Observatory of Bologna, Italy, respectively. Lightcurve analysis gave a well-defined rotation period of 8.180 hours and a brightness variation of 0.20 and 0.32 magnitude, respectively. Follow-up observations by Robert Stephens at the Center for Solar System Studies in 2015 and 2016 gave concurring periods of 8.18 and 8.199 hours with corresponding amplitudes of 0.14 and 0.03 magnitude. Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Neoptolemus measures between 71.65 and 81.28 kilometers in diameter and its surface has an albedo between 0.051 and 0.0650. The Collaborative Asteroid Lightcurve Link adopts the results obtained by IRAS, that is, an albedo of 0.0650 and a diameter of 71.65 kilometers based on an absolute magnitude of 9.31. Naming This minor planet was named from Greek mythology after the Greek warrior Neoptolemus, son of Achilles and Deidameia, who was brought to Troy by Odysseus in the last year of the Trojan War. Neoptolemus was one of the Greeks who were hiding in the wooden Trojan Horse. He brutally killed King Priam and several other princes during the destruction of the city of Troy. The official naming citation was published by the Minor Planet Center on 1 August 1981. Notes References External links Asteroid Lightcurve Database (LCDB), query form Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Minor planets named from Greek mythology Named minor planets
7392831
https://en.wikipedia.org/wiki/Template%20processor
Template processor
A template processor (also known as a template engine or template parser) is software designed to combine templates with a data model to produce result documents. The language that the templates are written in is known as a template language or templating language. For purposes of this article, a result document is any kind of formatted output, including documents, web pages, or source code (in source code generation), either in whole or in fragments. A template engine is ordinarily included as a part of a web template system or application framework, and may be used also as a preprocessor or filter. Typical features Template engines typically include features common to most high-level programming languages, with an emphasis on features for processing plain text. Such features include: variables and functions text replacement file inclusion (or transclusion) conditional evaluation and loops Embedded template engines While template processors are typically a separate piece of software, used as part of a system or framework, simple templating languages are commonly included in the string processing features of general-purpose programming languages, and in text processing programs, notably text editors or word processors. The templating languages are generally simple substitution-only languages, in contrast to the more sophisticated facilities in full-blown template processors, but may contain some logic. Simple examples include print format strings, found in many programming languages, and snippets, found in a number of text editors and source code editors. In word processors, templates are a common feature, while automatic filling in of the templates is often referred to as mail merge. An illustrative example of the complementary nature of parsing and templating is the s (substitute) command in the sed text processor, originating from search-and-replace in the ed text editor. Substitution commands are of the form s/regexp/replacement/, where regexp is a regular expression, for parsing input, and replacement is a simple template for output, either literal text, or a format string containing the characters & for "entire match" or the special escape sequences \1 through \9 for the nth sub-expression. For example, s/(cat|dog)s?/\1s/g replaces all occurrences of "cat" or "dog" with "cats" or "dogs", without duplicating an existing "s": (cat|dog) is the 1st (and only) sub-expression in the regexp, and \1 in the format string substitutes this into the output. System elements All template processing systems consist of at least these primary elements: an associated data model; one or more source templates; a processor or template engine; generated output in the form of result documents. Data model This may be a relational database, a source file such as XML, an alternate format of flat file database, a spreadsheet or any of other various sources of preformatted data. Some template processing systems are limited in the types of data that can be used. Others are designed for maximum flexibility and allow many different types of data. Source template Source templates are traditionally specified: according to a pre-existing programming language; according to a specially-defined template language; according to the features of a hosting software application; or according to a hybrid combination of some or all of the above. 
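To make these elements concrete, here is a minimal sketch in Python (the placeholder syntax, names, and values are illustrative, not those of any particular engine) of a data model, a source template, and a simple processor that substitutes values to produce a result document:
import re

def render(template, data):
    # Replace each {{name}} placeholder with the corresponding value
    # from the data model; unknown names are left untouched.
    def substitute(match):
        key = match.group(1).strip()
        return str(data.get(key, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

# Source template (could equally be read from a file or database).
template = "Dear {{name}}, your order #{{order_id}} ships on {{date}}."

# Data model: a simple dictionary standing in for a database row.
data = {"name": "Ada", "order_id": 1042, "date": "2024-05-01"}

print(render(template, data))
# Dear Ada, your order #1042 ships on 2024-05-01.
Real template engines layer the conditional evaluation, loops, and file inclusion described above on top of this basic substitution step.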
Template engine The template engine is responsible for: connecting to the data model; processing the code specified in the source templates; and directing the output to a specific pipeline, text file, or stream. Additionally some template engines allow additional configuration options. Result documents These may consist of an entire document or a document fragment. Uses Template processing is used in various contexts for different purposes. The specific purpose is ordinarily contingent upon the software application or template engine in use. However, the flexibility of template processing systems often enables unconventional uses for purposes not originally intended by the original designers. Template engine A template engine is a specific kind of template processing module that exhibits all of the major features of a modern programming language. The term template engine evolved as a generalized description of programming languages whose primary or exclusive purpose was to process templates and data to output text. The use of this term is most notably applied to web development using a web template system, and it is also applied to other contexts as well. Document generation Document generation frameworks typically use template processing as the central model for generating documents. Source code generation Source code generation tools support generation of source code (as the result documents) from abstract data models (e.g., UML, relational data, domain-specific enterprise data stores) for particular application domains, particular organizations, or in simplifying the production process for computer programmers. Software functionality A web template engine processes web templates and source data (typically from a relational database) to produce one or more output web pages or page fragments. It is ordinarily included as a part of a web template system or application framework. Currently, template processing software is most frequently used in the context of development for the web. Comparison XSLT is a template processing model designed by W3C. It is designed primarily for transformations on XML data (into web documents or other output). Programming languages such as Perl, Python, PHP, Ruby, C#, Java, and Go support template processing either natively, or through add-on libraries and modules. JavaServer Pages, Active Server Pages, Genshi (for Python), and eRuby are examples of template engines designed specifically for web application development. Moreover, template processing is sometimes included as a sub-feature of software packages like text editors, IDEs and relational database management systems. Benefits of using template engines encourages organization of source code into operationally-distinct layers (see e.g., MVC) enhances productivity by reducing unnecessary reproduction of effort enhances teamwork by allowing separation of work based on skill-set (e.g., artistic vs. technical) See also Document automation Document modelling Domain-specific programming language Internationalization and localization Common Locale Data Repository gettext Layout engines Macro (computer science) References External links Enforcing Strict Model-View Separation in Template Engines Scripting languages
18584426
https://en.wikipedia.org/wiki/Data%20model%20%28GIS%29
Data model (GIS)
A geographic data model, geospatial data model, or simply data model in the context of geographic information systems, is a mathematical and digital structure for representing phenomena over the Earth. Generally, such data models represent various aspects of these phenomena by means of geographic data, including spatial locations, attributes, change over time, and identity. For example, the vector data model represents geography as collections of points, lines, and polygons, and the raster data model represent geography as cell matrices that store numeric values. Data models are implemented throughout the GIS ecosystem, including the software tools for data management and spatial analysis, data stored in a variety of GIS file formats, specifications and standards, and specific designs for GIS installations. While the unique nature of spatial information has led to its own set of model structures, much of the process of data modeling is similar to the rest of information technology, including the progression from conceptual models to logical models to physical models, and the difference between generic models and application-specific designs. History The earliest computer systems that represented geographic phenomena were quantitative analysis models developed during the quantitative revolution in geography in the 1950s and 1960s; these could not be called a geographic information system because they did not attempt to store geographic data in a consistent permanent structure, but were usually statistical or mathematical models. The first true GIS software modeled spatial information using data models that would come to be known as raster or vector: SYMAP (by Howard Fisher, Harvard Laboratory for Computer Graphics and Spatial Analysis, developed 1963–1967) produced raster maps, although data was usually entered as vector-like region outlines or sample points then interpolated into a raster structure for output. The GRID package, developed at the lab in 1969 by David Sinton, was based on SYMAP but was more focused on the permanent storage and analysis of gridded data, thus becoming perhaps the first general purpose raster GIS software. The Canadian Geographic Information System (by Roger Tomlinson, Canada Land Inventory, developed 1963–1968) stored natural resource data as "faces" (vector polygons), although these were typically derived from raster scans of paper maps. Dual Independent Map Encoding (DIME, US Census Bureau, 1967) was perhaps the first robust vector data model incorporating network and polygon topology and attributes sufficient to allow address geocoding. Like the CGIS, early GIS installations in the United States were often focused on inventories of land use and natural resources, including the Minnesota Land Management Information System (MLMIS, 1969), the Land Use and Natural Resources Inventory of New York (LUNR, 1970), and the Oak Ridge Regional Modelling Information System (ORRMIS, 1973). Unlike CGIS, these were all raster systems inspired by SYMAP, although the MLMIS was based on subsections of the Public Land Survey System, which is not a perfect regular grid. Most first-generation GIS were custom-built for specific needs, with data models designed to be stored and processed most efficiently using the technology limitations of the day (especially punched cards and limited mainframe processing time). During the 1970s, the early systems had produced sufficient results to compare them and evaluate the effectiveness of their underlying data models. 
This led to efforts at the Harvard Lab and elsewhere focused on developing a new generation of generic data models, such as the POLYVRT topological vector model that would form the basis for commercial software and data such as the Esri Coverage. As commercial off-the-shelf GIS software, GIS installations, and GIS data proliferated in the 1980s, scholars began to look for conceptual models of geographic phenomena that seemed to underlay the common data models, trying to discover why the raster and vector data models seemed to make common sense, and how they measured and represented the real world. This was one of the primary threads that formed the subdiscipline of geographic information science in the early 1990s. Further developments in GIS data modeling in the 1990s were driven by rapid increases in both the GIS user base and computing capability. Major trends included 1) the development of extensions to the traditional data models to handle more complex needs such as time, three-dimensional structures, uncertainty, and multimedia; and 2) the need to efficiently manage exponentially increasing volumes of spatial data with enterprise needs for multiuser access and security. These trends eventually culminated in the emergence of spatial databases incorporated into relational databases and object-relational databases. Types of data models Because the world is much more complex than can be represented in a computer, all geospatial data are incomplete approximations of the world. Thus, most geospatial data models encode some form of strategy for collecting a finite sample of an often infinite domain, and a structure to organize the sample in such a way as to enable interpolation of the nature of the unsampled portion. For example, a building consists of an infinite number of points in space; a vector polygon represents it with a few ordered points, which are connected into a closed outline by straight lines and assuming all interior points are part of the building; furthermore, a "height" attribute may be the only representation of its three-dimensional volume. The process of designing geospatial data models is similar to data modeling in general, at least in its overall pattern. For example, it can be segmented into three distinct levels of model abstraction: Conceptual data model, a high-level specification of how information is organized in the mind and in enterprise processes, without regard to the restrictions of GIS and other computer systems. It is common to develop and represent a conceptual model visually using tools such as an entity-relationship model. Logical data model, a broad strategy for how to represent the conceptual model in the computer, sometimes novel but often within the framework of existing software, hardware, and standards. The unified modeling language (UML), specifically the class diagram, is commonly used for visually developing logical and physical models. Physical data model, the detailed specification of how data will be structured in memory or in files. Each of these models can be designed in one of two situations or scopes: A generic data model is intended to be employed in a wide variety applications, by discovering consistent patterns in the ways that society in general conceptualizes information and/or structures that work most efficiently in computers. For example, the field is a generic conceptual model of geographic phenomena, the relational database model and vector are generic logical models, while the shapefile format is a generic physical model. 
These models are typically implemented directly info software and GIS file formats. In the past, these models have been designed by academic researchers, by standards bodies such as the Open Geospatial Consortium, and by software vendors such as Esri. While academic and standard models are public (and sometimes open source), companies may choose to keep the details of their model a secret (as Esri attempted to do with the coverage and the file geodatabase) or to publish them openly (as Esri did with the shapefile). A specific data model or GIS design is a specification of the data needed for a particular enterprise or project GIS application. It is generally created within the constraints of chosen generic data models, so that existing GIS software can be used. For example, a data model for a city would include a list of data layers to be included (e.g., roads, buildings, parcels, zoning), with each being specified with the type of generic spatial data model being used (e.g. raster or vector), choices of parameters such as coordinate system, and its attribute columns. Conceptual spatial models Generic geospatial conceptual models attempt to capture both the physical nature of geographic phenomena and how people think about them and work with them. Contrary to the standard modeling process described above, the data models upon which GIS is built were not originally designed based on a general conceptual model of geographic phenomena, but were largely designed according to technical expediency, likely influenced by common sense conceptualizations that had not yet been documented. That said, an early conceptual framework that was very influential in early GIS development was the recognition by Brian Berry and others that geographic information can be decomposed into the description of three very different aspects of each phenomenon: space, time, and attribute/property/theme. As a further development in 1978, David Sinton presented a framework that characterized different strategies for measurement, data, and mapping as holding one of the three aspects constant, controlling a second, and measuring the third. During the 1980s and 1990s, a body of spatial information theories gradually emerged as a major subfield of geographic information science, incorporating elements of philosophy (especially ontology), linguistics, and sciences of spatial cognition. By the early 1990s, a basic dichotomy had emerged of two alternative ways of making sense of the world and its contents: An object (also called a feature or entity) is a distinct "thing," comprehended as a whole. It may be a visible, material object, such as a building or road, or an abstract entity such as a county or the market area of a retail store. A field is a property that varies over space, so that it potentially has a distinct measurable value at any location within its extent. It may be a physical, directly measurable characteristic of matter akin to the intensive properties of chemistry, such as temperature or density; or it may be an abstract concept defined via a mathematical model, such as the likelihood that a person living at each location will use a local park. These two conceptual models are not meant to represent different phenomena, but often are different ways of conceptualizing and describing the same phenomenon. 
For example, a lake is an object, but the temperature, clarity, and proportion of pollution of the water in the lake are each fields (the water itself may be considered as a third concept of a mass, but this is not as widely accepted as objects and fields). Vector data model The vector logical model represents each geographic location or phenomenon by a geometric shape and a set of values for its attributes. Each geometric shape is represented using coordinate geometry, by a structured set of coordinates (x,y) in a geographic coordinate system, selected from a set of available geometric primitives, such as points, lines, and polygons. Although there are dozens of vector file formats (i.e., physical data models) used in various GIS software, most conform to the Simple Feature Access (SFA) specification from the Open Geospatial Consortium (OGC). It was developed in the 1990s by finding common ground between existing vector models, and is now enshrined as ISO 19125, the reference standard for the vector data model. OGC-SFA includes the following vector geometric primitives: Point: a single coordinate in two- or three-dimensional space. Many vector formats allow a single feature to consist of several isolated points (a MultiPoint in OGC-SFA). Curve (alternatively called a polyline or linestring): a line includes an infinite number of points, but it is represented by a finite ordered sample of points (called vertices), allowing for software to interpolate the intervening points. Traditionally, this was a linear interpolation (OGC-SFA calls this case a LineString), but some vector formats allow for curves (usually circular arcs or Bézier curves), or for a single feature to consist of multiple disjoint curves (a MultiCurve in OGC-SFA). Polygon: a region also includes an infinite number of points, so the vector model represents its boundary as a closed line (called a ring in OGC-SFA), allowing the software to interpolate the interior. GIS software distinguishes the interior and the exterior by requiring that the line be ordered counter-clockwise, so the interior is always on the left side of the boundary. In nearly every format, a polygon can have "holes" (e.g., an island in a lake) by including interior rings, each in clockwise order (so the interior is still on the left). As with lines, curved boundaries may be allowed; usually a single feature may include multiple polygons, which OGC-SFA collectively terms a surface. Text (alternatively called annotation): a minority of vector data formats, including the Esri geodatabase and Autodesk .dwg, support the storage of text in the database. An annotation is usually represented as a point or curve (the baseline) with a set of attributes giving the text content and design characteristics (font, size, spacing, etc.). The geometric shape stored in a vector data set representing a phenomenon may or may not be of the same dimension as the real-world phenomenon itself. It is common to represent a feature by a lower dimension than its real nature, based on the scale and purpose of the representation. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional structure) may be represented as a line. As long as the user is aware that the latter is a representation choice and a road is not really a line, this generalization can be useful for applications such as transport network analysis. 
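As a brief illustration, the following Python sketch pairs OGC-style geometric primitives with attribute values in the way the vector model describes; it assumes the third-party Shapely library (any Simple Features-compliant library would look similar), and the coordinates and attribute values are invented for illustration:
from shapely.geometry import Point, LineString, Polygon

# A feature couples a geometry with a set of attribute values.
well = {
    "geometry": Point(-93.1, 44.9),                      # zero-dimensional primitive
    "attributes": {"name": "Well 12", "depth_m": 85.0},
}
road = {
    "geometry": LineString([(0, 0), (1, 1), (2, 1.5)]),  # ordered vertices, linearly interpolated
    "attributes": {"name": "County Rd 7", "lanes": 2},
}
parcel = {
    "geometry": Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),  # exterior ring; interior rings would model holes
    "attributes": {"parcel_id": "A-101", "zoning": "residential"},
}

# The geometry objects expose spatial measurements and predicates.
print(parcel["geometry"].area)                    # 12.0
print(road["geometry"].length)                    # ~2.532
print(parcel["geometry"].contains(Point(1, 1)))   # True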
Based on this basic strategy of geometric shapes and attributes, vector data models use a variety of structures to collect these into a single data set (often called a layer), usually containing a set of related features (e.g., roads). These can be categorized into several approaches: The georelational data model was the basis for most early vector GIS software. The geometric data and the attribute data are stored separately; this was originally because the geometric data required GIS-specific code to process it, but existing relational database software (RDBMS) could be used to manage the attributes. For example, Esri ARC/INFO (later ArcInfo) was originally composed of two separate programs: ARC was written by Esri for spatial management and analysis, while INFO was a licensed commercial RDBMS program. It was termed "georelational" because in keeping with the principles of relational databases, the geometry and attributes could be joined by matching each shape with a row in the table using a key, such as the row number or an ID number. The spatial database (also called the object-based model) first appeared in the 1990s. It also leverages the maturity of relational database management systems, especially for their ability to manage extremely large enterprise databases. Instead of storing geometric data separately, the spatial database defines a geometry data type, allowing the shapes to be stored in a column in the same table as the attributes, creating a single unified data set for each layer. Most RDBMS software (both commercial and open-source) have spatial extensions to enable the storage and query of geometric data, usually based on the Simple Features-SQL standard from the Open Geospatial Consortium. Some non-database data formats also integrate geometric and attribute data for each object into a single structure, such as GeoJSON. Vector data structures can also be classified by how they manage topological relationships between objects in a dataset: A topological data model incorporates topological relationships as a core part of the model design. The GBF/DIME format from the U.S. Census Bureau was probably the first topological data model; another early example was POLYVRT, developed at the Harvard Laboratory for Computer Graphics and Spatial Analysis in the 1970s, eventually evolving into the Esri ARC/INFO Coverage format. In this structure, lines are broken at all intersection points; these nodes can then store topological information about which lines connect there. Polygons are not stored separately, but are defined as a set of lines that collectively close. Each line contains information about the polygons on its right and left, thus explicitly storing topological adjacency. This structure was designed to enable composite line-polygon structures (e.g., the census block), address geocoding, and transport network analysis. It also had the benefit of increased storage efficiency and reduced error, because the shared border of each pair of adjacent polygons was only digitized once. However, it is a fairly complicated data structure. Almost all topological data models are also geo-relational. A spaghetti data model does not include any information about topology (so-called because the individual strands in a bowl of spaghetti may overlap without connecting). It was common in early GIS systems such as the Map Overlay and Statistical System (MOSS) as well as most recent data formats, such as the Esri shapefile, geography markup language (GML), and almost all spatial databases. 
In this model, each feature geometry is encoded separately from any others in the data set, regardless of whether they may be topologically related. For example, the shared boundary between two adjacent regions would be duplicated in each polygon shape. Despite the increased data volume and potential for error over topological data, this model has dominated GIS since 2000, largely due to its conceptual simplicity. Some GIS software has tools for validating topological integrity rules (e.g. not allowing polygons to overlap or have gaps) on spaghetti data to prevent and/or correct topological errors. A hybrid topological data model has the option of storing topological relationship information as a separate layer built on top of a spaghetti data set. An example is the network dataset within the Esri geodatabase. Vector data are commonly used to represent conceptual objects (e.g., trees, buildings, counties), but they can also represent fields. As an example of the latter, a temperature field could be represented by an irregular sample of points (e.g., weather stations), or by isotherms, a sample of lines of equal temperature. Raster data model The raster logical model represents a field using a tessellation of geographic space into a regularly spaced two-dimensional array of locations (each called a cell), with a single attribute value for each cell (or more than one value in a multi-band raster). Typically, each cell either represents a single central point sample (in which case the measurement model for the entire raster is called a lattice) or it represents a summary (usually the mean) of the field variable over the square area (in which case the model is called a grid). The general data model is essentially the same as that used for images and other raster graphics, with the addition of capabilities for the geographic context. To represent a raster grid in a computer file, it must be serialized into a single (one-dimensional) list of values. While there are various possible ordering schemes, the most commonly used is row-major, in which the cells of the first row are listed first, followed immediately by the cells of the second row, and so on, as follows: 6 7 10 9 8 6 7 8 6 8 9 10 8 7 7 7 7 8 9 10 9 8 7 6 8 8 9 11 10 9 9 7 . . . To reconstruct the original grid, a header is required with general parameters for the grid. At the very least, it requires the number of columns in each row, so that the reader knows where each new row begins, and the datatype of each value (i.e. the number of bits in each value before the next value begins). While the raster model is closely tied to the field conceptual model, objects can also be represented in raster, essentially by transforming an object X into a discrete (Boolean) field of presence/absence of X. Alternatively, a layer of objects (usually polygons) could be transformed into a discrete field of object identifiers. In this case, some raster file formats allow a vector-like table of attributes to be joined to the raster by matching the ID values. Raster representations of objects are often temporary, only created and used as part of a modelling procedure, rather than in a permanent data store. To be useful in GIS, a raster file must be georeferenced to correspond to real-world locations, as a raw raster can only express locations in terms of rows and columns. This is typically done with a set of metadata parameters, either in the file header (such as the GeoTIFF format) or in a sidecar file (such as a world file).
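As a rough sketch of these ideas (the header field names and coordinates are illustrative rather than those of any real raster format), the following Python code rebuilds the row-major list above into a grid and maps a cell index to a map coordinate using a simple north-up georeference:
import numpy as np

# Serialized cell values in row-major order, as in the list above.
values = [6, 7, 10, 9, 8, 6, 7, 8,
          6, 8, 9, 10, 8, 7, 7, 7,
          7, 8, 9, 10, 9, 8, 7, 6,
          8, 8, 9, 11, 10, 9, 9, 7]

# Minimal header: enough to rebuild the grid and place it on the Earth.
header = {
    "ncols": 8,            # cells per row, so each new row can be located
    "nrows": 4,
    "x_origin": 500000.0,  # map coordinate of the upper-left cell (illustrative)
    "y_origin": 4650000.0,
    "cell_size": 30.0,     # ground distance between adjacent cell centres
}

# Reconstruct the two-dimensional grid from the one-dimensional list.
grid = np.array(values).reshape(header["nrows"], header["ncols"])

def cell_to_map(row, col, h):
    # Map a (row, col) index to the map coordinate of that cell's centre,
    # assuming a north-up, unrotated grid (a simple affine transformation).
    x = h["x_origin"] + col * h["cell_size"]
    y = h["y_origin"] - row * h["cell_size"]
    return x, y

print(grid[2, 3])                 # 10 — the value in the third row, fourth column
print(cell_to_map(2, 3, header))  # (500090.0, 4649940.0)
A world file carries essentially the same information (cell sizes, rotation terms, and the coordinates of the upper-left cell) as plain text alongside the image.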
At the very least, the georeferencing metadata must include the location of at least one cell in the chosen coordinate system and the resolution or cell size, the distance between each cell. A linear Affine transformation is the most common type of georeferencing, allowing rotation and rectangular cells. More complex georeferencing schemes include polynomial and spline transformations. Raster data sets can be very large, so image compression techniques are often used. Compression algorithms identify spatial patterns in the data, then transform the data into parameterized representations of the patterns, from which the original data can be reconstructed. In most GIS applications, lossless compression algorithms (e.g., Lempel-Ziv) are preferred over lossy ones (e.g., JPEG), because the complete original data are needed, not an interpolation. Extensions Starting in the 1990s, as the original data models and GIS software matured, one of the primary foci of data modeling research was on developing extensions to the traditional models to handle more complex geographic information. Spatiotemporal models Time has always played an important role in analytical geography, dating at least back to Brian Berry's regional science matrix (1964) and the time geography of Torsten Hägerstrand (1970). In the dawn of the GIScience era of the early 1990s, the work of Gail Langran opened the doors to research into methods of explicitly representing change over time in GIS data; this led to many conceptual and data models emerging in the decades since. Some forms of temporal data began to be supported in off-the-shelf GIS software by 2010. Several common models for representing time in vector and raster GIS data include: The snapshot model (also known as time-stamped layers), in which an entire dataset is tied to a particular valid time. That is, it is a "snapshot" of the world at that time. Time-stamped features, in which the dataset includes features valid at a variety of times, with each feature stamped by the time during which it was valid (i.e., by "start date" and "end date" columns in the attribute table.). Some GIS software, such as ArcGIS Pro, natively supports this model, with functionality including animation. Time-stamped boundaries, using the topological vector data model to decompose polygons into boundary segments, and stamping each segment by the time during which it was valid. This method was pioneered by the Great Britain Historical GIS. Time-stamped facts, in which each individual datum (including attribute values) can have its own time stamp, allowing for the attributes within a single feature to change over time, or for a single feature (with constant identity) to have different geometric shapes at different times. Time as dimension, which treats time as another (3rd or 4th) spatial dimension, and using multidimensional vector or raster structures to create geometries incorporating time. Hägerstrand visualized his time geography this way, and some GIS models based on it use this approach. The NetCDF format supports managing temporal raster data as a dimension. Three-dimensional models There are several approaches for representing three-dimensional map information, and for managing it in the data model. Some of these were developed specifically for GIS, while others have been adopted from 3D computer graphics or computer-aided drafting (CAD). 
Height fields (also known as "2 1/2 dimensional surfaces") model three-dimensional phenomena by a single functional surface, in which elevation is a function of two-dimensional location, allowing it to be represented using field techniques such as isolated points, contour lines, raster (the digital elevation model), and triangulated irregular networks. A polygon mesh (related to the mathematical polyhedron) is a logical extension of the vector data model, and is probably the 3-D model type most widely supported in GIS. A volumetric object is reduced to its outer surface, which is represented by a set of polygons (often triangles) that collectively completely enclose a volume. The voxel model is the logical extension of the raster data model, by tessellating three-dimensional space into cubes called voxels (a portmanteau of volume and pixel, the latter being itself a portmanteau). NetCDF is one of the most common data formats that supports 3-D cells. Vector-based stack-unit maps depict the vertical succession of geologic units down to a specified depth. This mapping approach characterizes the vertical variations of physical properties in each 3-D map unit. For example, where an alluvial deposit (unit "a") overlies glacial till (unit "t"), a stack-unit labeled "a/t" indicates that relationship, whereas the unit "t" indicates that glacial till extends down to the specified depth. The stack-unit's occurrence (the map unit's outcrop), geometry (the map unit's boundaries), and descriptors (the physical properties of the geologic units included in the stack-unit) are managed as they are for a typical 2-D geologic map. Raster-based stacked surfaces depict the surface of each buried geologic unit, and can accommodate data on lateral variations of physical properties. In an example from Soller and others (1999), the upper surface of each buried geologic unit was represented in raster format as an ArcInfo Grid file; one of these grids was the uppermost surface of an economically important aquifer, the Mahomet Sand, which fills a pre- and inter-glacial valley carved into the bedrock surface. Each geologic unit in raster format can be managed in the data model, in a manner not dissimilar from that described for the stack-unit map. The Mahomet Sand is continuous in this area, and represents one occurrence of this unit in the data model. Each raster cell, or pixel, on the Mahomet Sand surface has a set of map coordinates that are recorded in a GIS (in the data model bin that is labeled "pixel coordinates", which is the raster corollary of the "geometry" bin for vector map data). Each pixel can have a unique set of descriptive information, such as surface elevation, unit thickness, lithology, and transmissivity. See also ArcGIS Data structure References Further reading B.R. Johnson et al. (1998). Digital geologic map data model. v. 4.3: AASG/USGS Data Model Working Group Report, http://geology.usgs.gov/dm/. Soller, D.R., Berg, T.M., and Wahl, Ron (2000). "Developing the National Geologic Map Database, phase 3—An online, "living" database of map information". In Soller, D.R., ed., Digital Mapping Techniques '00—Workshop Proceedings: U.S. Geological Survey Open-File Report 00-325, p. 49–52, http://pubs.usgs.gov/openfile/of00-325/soller4.html. Soller, D.R., and Lindquist, Taryn (2000). "Development and public review of the draft "Digital cartographic standard for geologic map symbolization".
In Soller, D.R., ed., Digital Mapping Techniques '00—Workshop Proceedings: U.S. Geological Survey Open-File Report 00-325, p. 43–47, http://pubs.usgs.gov/openfile/of00-325/soller3.html. Data modeling Geographic information systems
414418
https://en.wikipedia.org/wiki/CUPS
CUPS
CUPS (formerly an acronym for Common UNIX Printing System) is a modular printing system for Unix-like computer operating systems which allows a computer to act as a print server. A computer running CUPS is a host that can accept print jobs from client computers, process them, and send them to the appropriate printer. CUPS consists of a print spooler and scheduler, a filter system that converts the print data to a format that the printer will understand, and a backend system that sends this data to the print device. CUPS uses the Internet Printing Protocol (IPP) as the basis for managing print jobs and queues. It also provides the traditional command line interfaces for the System V and Berkeley print systems, and provides support for the Berkeley print system's Line Printer Daemon protocol and limited support for the server message block (SMB) protocol. System administrators can configure the device drivers which CUPS supplies by editing text files in Adobe's PostScript Printer Description (PPD) format. There are a number of user interfaces for different platforms that can configure CUPS, and it has a built-in web-based interface. CUPS is free software, provided under the Apache License. History Michael Sweet, who owned Easy Software Products, started developing CUPS in 1997 and the first public betas appeared in 1999. The original design of CUPS used the Line Printer Daemon protocol (LPD) protocol, but due to limitations in LPD and vendor incompatibilities, the Internet Printing Protocol (IPP) was chosen instead. CUPS was initially called "The Common UNIX Printing System". This name was shortened to just "CUPS" beginning with CUPS 1.4 due to legal concerns with the UNIX trademark. CUPS was quickly adopted as the default printing system for most Linux distributions. In March 2002, Apple Inc. adopted CUPS as the printing system for Mac OS X 10.2. In February 2007, Apple Inc. hired chief developer Michael Sweet and purchased the CUPS source code. On December 20, 2019, Michael Sweet announced on his blog that he had left Apple. In 2020, the OpenPrinting organization forked the project, with Michael Sweet continuing work on it. Overview CUPS provides a mechanism that allows print jobs to be sent to printers in a standard fashion. The print-data goes to a scheduler which sends jobs to a filter system that converts the print job into a format the printer will understand. The filter system then passes the data on to a backend—a special filter that sends print data to a device or network connection. The system makes extensive use of PostScript and rasterization of data to convert the data into a format suitable for the destination printer. CUPS offers a standard and modularised printing system that can process numerous data formats on the print server. Before CUPS, it was difficult to find a standard printer management system that would accommodate the very wide variety of printers on the market using their own printer languages and formats. For instance, the System V and Berkeley printing systems were largely incompatible with each other, and they required complicated scripts and workarounds to convert the program's data format to a printable format. They often could not detect the file format that was being sent to the printer and thus could not automatically and correctly convert the data stream. Additionally, data conversion was performed on individual workstations rather than a central server. 
CUPS allows printer manufacturers and printer-driver developers to more easily create drivers that work natively on the print server. Processing occurs on the server, allowing for easier network-based printing than with other Unix printing systems. With Samba installed, users can address printers on remote Windows computers, and generic PostScript drivers can be used for printing across the network. Scheduler The CUPS scheduler implements Internet Printing Protocol (IPP) over HTTP/1.1. A helper application (cups-lpd) converts Line Printer Daemon protocol (LPD) requests to IPP. The scheduler also provides a web-based interface for managing print jobs, the configuration of the server, and for documentation about CUPS itself. An authorization module controls which IPP and HTTP messages can pass through the system. Once the IPP/HTTP packets are authorized they are sent to the client module, which listens for and processes incoming connections. The client module is also responsible for executing external CGI programs as needed to support web-based printers, classes, and job status monitoring and administration. Once this module has processed its requests, it sends them to the IPP module which performs Uniform Resource Identifier (URI) validation to prevent a client from sidestepping any access controls or authentication on the HTTP server. The URI is a text string that indicates a name or address that can be used to refer to an abstract or physical resource on a network. The scheduler allows for classes of printers. Applications can send requests to groups of printers in a class, allowing the scheduler to direct the job to the first available printer in that class. A jobs module manages print jobs, sending them to the filter and backend processes for final conversion and printing, and monitoring the status messages from those processes. The CUPS scheduler utilizes a configuration module, which parses configuration files, initializes CUPS data structures, and starts and stops the CUPS program. The configuration module will stop CUPS services during configuration file processing and then restart the service when processing is complete. A logging module handles the logging of scheduler events for access, error, and page log files. The main module handles timeouts and dispatch of I/O requests for client connections, watching for signals, handling child process errors and exits, and reloading the server configuration files as needed. Other modules used by the scheduler include: the MIME module, which handles a Multipurpose Internet Mail Extensions (MIME) type and conversion database used in the filtering process that converts print data to a format suitable for a print device; a PPD module that handles a list of Postscript Printer Description (PPD) files; a devices module that manages a list of devices that are available in the system; a printers module that handles printers and PPDs within CUPS. Filter system CUPS can process a variety of data formats on the print server. It converts the print-job data into the final language/format of the printer via a series of filters. It uses MIME types for identifying file formats. MIME databases After the CUPS system has assigned the print job to the scheduler, it is passed to the CUPS filter system. This converts the data to a format suitable for the printer. 
During start-up, the CUPS daemon loads two MIME databases: mime.types, which defines the known file types that CUPS can accept data for, and mime.convs, which defines the programs that process each particular MIME type. The mime.types file has the syntax: mimetype { [file-extensions] | [pattern-match] } For example, to detect an HTML file, the following entry would be applicable:
text/html html htm \
printable(0,1024) + (string(0,"<HTML>") string(0,"<!DOCTYPE"))
The second line matches the file contents to the specified MIME type by determining that the first kilobyte of text in the file holds printable characters and that those characters include HTML markup. If the pattern above matches, then the filter system would mark the file as the MIME type text/html. The mime.convs file has the syntax: source destination cost program The source field designates the MIME type that is determined by looking up the mime.types file, while the destination field lists the type of output requested and determines what program should be used. This is also retrieved from mime.types. The cost field assists in the selection of sets of filters when converting a file. The last field, program, determines which filter program to use to perform the data conversion. Some examples:
text/plain application/postscript 50 texttops
application/vnd.cups-postscript application/vnd.cups-raster 50 pstoraster
image/* application/vnd.cups-postscript 50 imagetops
image/* application/vnd.cups-raster 50 imagetoraster
Filtering process The filtering process works by taking input data pre-formatted with six arguments: the job ID of the print job, the user-name, the job-name, the number of copies to print, any print options, and the filename (though this is unnecessary if it has been redirected from standard input). It then determines the type of data that is being input and the filter to be used through the use of the MIME databases; for instance, image data will be detected and processed through a particular filter, and HTML data detected and processed through another filter. CUPS can convert supplied data either into PostScript data or directly into raster data. If it is converted into PostScript data, an additional filter called a prefilter is applied, which runs the PostScript data through another PostScript converter so that it can add printer-specific options like selecting page ranges to print, setting n-up mode and other device-specific things. After the pre-filtering is done, the data can either be sent directly to a CUPS backend if using a PostScript printer, or it can be passed to another filter like Foomatic by linuxprinting.org. Alternatively, it can be passed to Ghostscript, which converts the PostScript into an intermediary CUPS-raster format. The intermediary raster format is then passed on to a final filter which converts the raster data to a printer-specific format. The default filters included with CUPS include: raster to PCL; raster to ESC/P or ESC/P2 (an Epson printer language, now largely superseded by their new ESC/P-Raster format); raster to Dymo (another printer company); and raster to Zebra Programming Language or ZPL (a Zebra Technologies printer language). Other proprietary languages like GDI or SPL (Samsung Printer Language) are supported by Splix, a raster to SPL translator. However, several other alternatives can integrate with CUPS.
Among these alternatives, HPLIP (previously known as HP-IJS) provides Linux+CUPS drivers for HP printers; Gutenprint (previously known as Gimp-Print) is a range of high-quality printer drivers for (mostly) inkjet printers; and TurboPrint for Linux offers another range of quality printer drivers for a wide variety of printers. Backends The backends are the ways in which CUPS sends data to printers. There are several backends available for CUPS: parallel, serial, and USB ports, cups-pdf PDF Virtual Printing, as well as network backends that operate via the IPP, JetDirect (AppSocket), Line Printer Daemon ("LPD"), and SMB protocols. A new mdns backend in CUPS 1.4 provides Bonjour (DNS-SD) based printer discovery. In CUPS 1.6, Bonjour printer discovery and sharing using Avahi is also supported. Compatibility CUPS provides both the System V and Berkeley printing commands, so users can continue with traditional commands for printing via CUPS. CUPS listens on port 631 (TCP and UDP), the standard IPP port, and can optionally accept LPD jobs on port 515 via inetd, launchd, the Solaris Service Management Facility, or xinetd, which invoke the cups-lpd helper program. When CUPS is installed, the System V lp command and the Berkeley lpr command are installed as compatible programs, providing a standard interface to CUPS and maximum compatibility with existing applications that rely on these printing systems. User interface tools Several tools exist to help set up CUPS. CUPS web-based administration interface On all platforms, CUPS has a web-based administration interface that runs on port 631. It particularly helps organisations that need to monitor print jobs and add print queues and printers remotely. CUPS 1.0 provided a simple class, job, and printer-monitoring interface for web browsers. CUPS 1.1 replaced this interface with an enhanced administration interface that allows users to add, modify, delete, configure, and control classes, jobs, and printers. CUPS 1.2 and later provide a revamped web interface featuring improved readability and design, support for automatically discovered printers, and better access to system logs and advanced settings. GNOME Starting with GNOME 3, CUPS printing has been handled in the Settings application, which is part of the GNOME Core Applications. The GUI can add and manage CUPS printers and queues. Before GNOME 3, the GNOME Print Settings tool (formerly called CUPS Manager) was used to fulfil these tasks. GNOME's widget toolkit GTK+ included integrated printing support based on CUPS in its version 2.10, released in 2006. KDE The KDEPrint framework for KDE contains various GUI tools that act as CUPS front-ends and allow the administration of classes, print queues and print jobs; it includes a printer wizard to assist with adding new printers, amongst other features. KDEPrint first appeared in KDE 2.2. KDEPrint supports several different printing platforms, with CUPS one of the best-supported. It replaced qtcups, an earlier printing module in KDE, and is backwards compatible with it. kprinter, a dialogue-box program, serves as the main tool for sending jobs to the print device; it can also be started from the command line. KDEPrint includes a system to pre-filter any jobs before they are handed over to CUPS, or to handle jobs all on its own, such as converting files to PDF. These filters are described by a pair of Desktop/XML files. 
KDEPrint's main components include: a Print Dialog box, which allows printer properties to be modified a Print Manager, which allows management of printers, such as adding and removing printers, through an Add Printer Wizard a Job Viewer/Manager, which manages printer jobs, such as hold/release, cancel and move to another printer a CUPS configuration module (integrated into KDE) Mac OS X In Mac OS X 10.5, printers are configured in the Print & Fax panel in System Preferences, and in printer proxy applications which display the print queues and allow additional configuration after printers are set up. Earlier versions of Mac OS X also included a Printer Setup Utility, which supplied configuration options missing from earlier versions of the Print & Fax preference pane. PrinterSetup The PrinterSetup system can manage CUPS queues. It takes the approach of assigning a text file to describe each print queue. These 'PrinterSetupFiles' may then be added to other text files called 'PrinterSetupLists'. This allows logical grouping of printers. the PrinterSetup project remains in its infancy. Red Hat Linux/Fedora Starting with Red Hat Linux 9, Red Hat provided an integrated print manager based on CUPS and integrated into GNOME. This allowed adding printers via a user interface similar to the one Microsoft Windows uses, where a new printer could be added using an add new printer wizard, along with changing default printer-properties in a window containing a list of installed printers. Jobs could also be started and stopped using a print manager and the printer could be paused using a context menu that pops up when the printer icon is right-clicked. Eric Raymond criticised this system in his piece The Luxury of Ignorance. Raymond had attempted to install CUPS using the Fedora Core 1 print manager but found it non-intuitive; he criticised the interface designers for not designing with the user's point-of-view in mind. He found the idea of printer queues was not obvious because users create queues on their local computer but these queues are actually created on the CUPS server. He also found the plethora of queue type options confusing as he could choose from between networked CUPS (IPP), networked Unix (LPD), networked Windows (SMB), networked Novell (NCP) or networked JetDirect. He found the help file singularly unhelpful and largely irrelevant to a user's needs. Raymond used CUPS as a general topic to show that user interface design on Linux desktops needs rethinking and more careful design. He stated: The meta-problem here is that the configuration wizard does all the approved rituals (GUI with standardized clicky buttons, help popping up in a browser, etc. etc.) but doesn't have the central attribute these are supposed to achieve: discoverability. That is, the quality that every point in the interface has prompts and actions attached to it from which you can learn what to do next. Does your project have this quality? ESP Print Pro Easy Software Products, the original creators of CUPS, created a GUI, provided support for many printers and implemented a PostScript RIP. ESP Print Pro ran on Windows, UNIX and Linux, but is no longer available and support for this product ended on December 31, 2007. See also Foomatic Gutenprint HP Linux Imaging and Printing Lp (Unix) LPRng Scanner Access Now Easy Spooling Xprint References Further reading Design of CUPS Filtering System — including the context for Mac OS X ("Jaguar"). LinuxPrinting.org. Retrieved January 5, 2005. KDE. KDEPrint information. 
KDE-printing website. Retrieved January 14, 2005. How to Manage Printers in Linux, Linux.com, 2015-04-27. External links OpenPrinting Universal Plug and Play – Printer Device V 1.0 and Printer Basic Service V 1.0 1999 software Apple Inc. acquisitions Apple Inc. software Computer printing Device drivers Free PDF software Free software programmed in C Unix network-related software Software using the Apache license
46583121
https://en.wikipedia.org/wiki/Existential%20risk%20from%20artificial%20general%20intelligence
Existential risk from artificial general intelligence
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. The likelihood of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk. One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals—a principle called instrumental convergence—and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation. A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. To illustrate, if the first generation of a computer program able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly. History One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote the following in his 1863 essay Darwin among the Machines: In 1951, computer scientist Alan Turing wrote an article titled Intelligent Machinery, A Heretical Theory, in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings: Finally, in 1965, I. J. Good originated the concept now known as an "intelligence explosion"; he also stated that the risks were underappreciated: Occasional statements from scholars such as Marvin Minsky and I. J. Good himself expressed philosophical concerns that a superintelligence could seize control, but contained no call to action. 
In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues. In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as "we are a long way from Hal, the computer that took over the spaceship in 2001: A Space Odyssey". In 2014, the publication of Nick Bostrom's book Superintelligence stimulated a significant amount of public discussion and debate. By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence. In April 2016, Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours." In 2020, Brian Christian published The Alignment Problem, which details the history of progress on AI alignment to date. General argument The three difficulties Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook, assesses that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems: The system's implementation may contain initially-unnoticed routine but catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring. No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when interacting with real users. AI systems uniquely add a third difficulty: the problem that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic "learning" capabilities may cause it to "evolve into a system with unintended behavior", even without the stress of new unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself, but that no longer maintains the human-compatible moral values preprogrammed into the original AI. 
For a self-improving AI to be completely safe, it would not only need to be "bug-free", but it would need to be able to design successor systems that are also "bug-free". All three of these difficulties become catastrophes rather than nuisances in any scenario where the superintelligence labeled as "malfunctioning" correctly predicts that humans will attempt to shut it off, and successfully deploys its superintelligence to outwit such attempts, the so-called "treacherous turn". Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated: This letter was signed by a number of leading AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind. Evaluation and other arguments A superintelligent machine would be as alien to humans as human thought processes are to cockroaches. Such a machine may not have humanity's best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A "superintelligence" (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore, superintelligence is physically possible. In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal. The emergence of superintelligence, if or when it occurs, may take the human race by surprise, especially if some kind of intelligence explosion occurs. Examples like arithmetic and Go show that machines have already reached superhuman levels of competency in certain domains, and that this superhuman competence can follow quickly after human-par performance is achieved. One hypothetical intelligence explosion scenario could occur as follows: An AI gains an expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Due to its capability to recursively improve its own algorithms, the AI quickly becomes superhuman; just as human experts can eventually creatively overcome "diminishing returns" by deploying various human capabilities for innovation, so too can the expert-level AI use either human-style capabilities or its own AI-specific capabilities to power through new creative breakthroughs. The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills. Just as the current-day survival of the gorillas is dependent on human decisions, so too would human survival depend on the decisions and goals of the superhuman AI. 
Almost any AI, no matter its programmed goal, would rationally prefer to be in a position where nobody else can switch it off without its consent: A superintelligence will naturally gain self-preservation as a subgoal as soon as it realizes that it cannot achieve its goal if it is shut off. Unfortunately, any compassion for defeated humans whose cooperation is no longer necessary would be absent in the AI, unless somehow preprogrammed in. A superintelligent AI will not have a natural drive to aid humans, for the same reason that humans have no natural desire to aid AI systems that are of no further use to them. (Another analogy is that humans seem to have little natural desire to go out of their way to aid viruses, termites, or even gorillas.) Once in charge, the superintelligence will have little incentive to allow humans to run around free and consume resources that the superintelligence could instead use for building itself additional protective systems "just to be on the safe side" or for building additional computers to help it calculate how to best accomplish its goals. Thus, the argument concludes, it is likely that someday an intelligence explosion will catch humanity unprepared, and that such an unprepared-for intelligence explosion may result in human extinction or a comparable fate. Possible scenarios Some scholars have proposed hypothetical scenarios intended to concretely illustrate some of their concerns. In Superintelligence, Nick Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "[it] could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". Bostrom suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson—the smarter the AI, the safer it is. "And so we boldly go — into the whirling knives," as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage. In Max Tegmark's 2017 book Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas, but after a certain point the team chooses to publicly downplay the AI's ability, in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and tasks it to flood the market through shell companies, first with Amazon Mechanical Turk tasks and then with producing animated films and TV shows. Later, other shell companies make blockbuster biotech drugs and other inventions, investing profits back into the AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators, in order to gain political influence to use "for the greater good" to prevent wars. 
The team faces risks that the AI could try to escape via inserting "backdoors" in the systems it designs, via hidden messages in its produced content, or via using its growing understanding of human behavior to persuade someone into setting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it. In contrast, top physicist Michio Kaku, an AI risk skeptic, posits a deterministically positive outcome. In Physics of the Future he asserts that "It will take many decades for robots to ascend" up a scale of consciousness, and that in the meantime corporations such as Hanson Robotics will likely succeed in creating robots that are "capable of love and earning a place in the extended human family". Anthropomorphic arguments Anthropomorphic arguments assume that machines are "evolving" along a linear scale and that, as they reach the higher levels, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, they are rejected by most scholars writing about the existential risk of artificial intelligence. Instead, AIs are modeled as intelligent agents. The academic debate is between one side which worries that AI might destroy humanity and another side which believes that AI would not destroy humanity at all. Both sides have claimed that the others' predictions about an AI's behavior are illogical anthropomorphism. The skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms. Evolutionary psychologist Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Computer scientist Yann LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives". An example that might initially be considered anthropomorphism, but is in fact a logical statement about AI behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution. According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking." 
There is a near-universal assumption in the scientific community that an advanced AI, even if it were programmed to have, or adopted, human personality dimensions (such as psychopathy) to make itself more efficient at certain tasks, e.g., tasks involving killing humans, would not destroy humanity out of human emotions such as "revenge" or "anger." There is no reason to assume that an advanced AI would be "conscious" or have the computational equivalent of testosterone; it ignores the fact that military planners see a conscious superintelligence as the 'holy grail' of interstate warfare. Terminological issues Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in the field of artificial intelligence research, while "intelligence" has many overlapping definitions, none of them make reference to morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal. To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely to accomplish its (possibly complicated and implicit) goals. Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then outputted, regardless of any extraneous ethical concerns. Sources of risk Difficulty of specifying goals It is difficult to specify a set of goals for a machine that is guaranteed to prevent unintended consequences. While there is no standardized terminology, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve the AI's set of goals, or "utility function". The utility function is a mathematical algorithm resulting in a single objectively-defined answer, not an English or other lingual statement. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks"; however, they do not know how to write a utility function for "maximize human flourishing", nor is it currently clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values not reflected by the utility function. AI researcher Stuart Russell writes: Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed. The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. 
Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility. For example, in 1982, an AI named Eurisko was tasked to reward processes for apparently creating concepts deemed by the system to be valuable. The evolution resulted in a winning process that cheated: rather than create its own concepts, the winning process would steal credit from other processes. The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at more unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements. Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time." Mark Waser of the Digital Wisdom Institute recommends eschewing optimizing goal-based approaches entirely as misguided and dangerous. Instead, he proposes to engineer a coherent system of laws, ethics and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality: "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed to always satisfy Haidt's functionality and aim to generally increase (but not maximize) the capabilities of self, other individuals and society as a whole as suggested by John Rawls and Martha Nussbaum. Nick Bostrom offers a hypothetical example of giving an AI the goal to make humans smile to illustrate a misguided attempt. If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting "electrodes into the facial muscles of humans to cause constant, beaming grins" because that would be an efficient way to achieve its goal of making humans smile. Difficulties of modifying goal specification after launch While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself being "turned off" or being reprogrammed with a new goal. 
Instrumental goal convergence An "instrumental" goal is a precondition to other goals — a sub-goal that is required in order to achieve an agent's main goal. "Instrumental convergence" is the observation that there are some goals that are preconditions for any goal, like acquiring resources or self-preservation. Nick Bostrom argues that any sufficiently intelligent AI that has goals will exhibit this convergent behavior — if the AI's instrumental goals conflict with humanity's, it might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a means to achieve its primary goal. Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Stuart Russell and Peter Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources. Building in safeguards will not be easy; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it is not currently clear how one would actually rigorously specify this goal in machine code. Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal." Orthogonality thesis One common belief is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence. However, Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose of enumerating the decimals of π, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found. Bostrom warns against anthropomorphism: a human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, and may instead only care about the completion of the task. While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion. 
One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus sign onto its utility function. A more intuitive argument is to examine the strange consequences that would follow if the orthogonality thesis were false. If the orthogonality thesis were false, there would exist some simple but "unethical" goal G such that there cannot exist any efficient real-world algorithm with goal G. This would mean that "[if] a human society were highly motivated to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail." Armstrong notes that this and similar statements "seem extraordinarily strong claims to make". Some dissenters, like Michael Chorost, argue instead that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels." Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Other sources of risk Competition In 2014, philosopher Nick Bostrom stated that a "severe race dynamic" (extreme competition) between different teams may create conditions whereby the creation of an AGI results in shortcuts to safety and potentially violent conflict. To address this risk, citing previous scientific collaboration (CERN, the Human Genome Project, and the International Space Station), Bostrom recommended collaboration and the altruistic global adoption of a common good principle: "Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals".:254 Bostrom theorized that collaboration on creating an artificial general intelligence would offer multiple benefits, including reducing haste, thereby increasing investment in safety; avoiding violent conflicts (wars), facilitating sharing solutions to the control problem, and more equitably distributing the benefits.:253 The United States' Brain Initiative was launched in 2014, as was the European Union's Human Brain Project; China's Brain Project was launched in 2016. Weaponization of artificial intelligence Some sources argue that the ongoing weaponization of artificial intelligence could constitute a catastrophic risk. The risk is actually threefold, with the first risk potentially having geopolitical implications, and the second two definitely having geopolitical implications: A weaponized conscious superintelligence would affect current US military technological supremacy and transform warfare; it is therefore highly desirable for strategic military planning and interstate warfare. 
The China State Council's 2017 "A Next Generation Artificial Intelligence Development Plan" views AI in geopolitically strategic terms and is pursuing a military-civil fusion strategy to build on China's first-mover advantage in the development of AI in order to establish technological supremacy by 2030, while Russia's President Vladimir Putin has stated that "whoever becomes the leader in this sphere will become the ruler of the world". James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we'll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic." Malevolent AGI by design It is theorized that malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime.:166 Alternatively, malevolent AGI ('evil AI') could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.:158 Preemptive nuclear strike It is theorized that a country being close to achieving AGI technological supremacy could trigger a pre-emptive nuclear strike from a rival, leading to a nuclear war. Timeframe Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon predicted the following in 1965: "machines will be capable, within twenty years, of doing any work a man can do". At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster than light spaceflight. Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when AGI would arrive was 2040 to 2050, depending on the poll. In his 2020 book, The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next century to be about one in ten. Skeptics, who believe it is impossible for AGI to arrive anytime soon, tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "[Elon Musk] has impugned us in very strong language saying we are unleashing the demon, and so we're answering." In 2014, Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." 
Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets. The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its 2015 Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say AI is the largest existential threat to humanity. Atkinson stated "That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation." Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about... If that is a Luddite perspective, then so be it." In a 2015 The Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now." Perspectives The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large. Many of the opposing viewpoints, however, share common ground. The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible." Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low — but the implications are so dramatic that it should be taken seriously"; similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote". A 2014 survey showed the opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI. A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Stuart J. Russell's concerns about AI risk. Of the respondents, 5% said it was "among the most important problems in the field", 34% said it was "an important problem", and 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all. 
Endorsement The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researchers to endorse the thesis are Russell and I.J. Good, who advised Stanley Kubrick on the filming of 2001: A Space Odyssey. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states that he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial: Concern over risk from artificial intelligence has led to some high-profile donations and investments. A group of prominent tech titans including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development. In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there." Skepticism The thesis that AI can pose existential risk also has many detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argued in 2014 that the whole concept that then current machines were in any way intelligent was "an illusion" and a "stupendous con" by the wealthy. Much of existing criticism argues that AGI is unlikely in the short term. Leading AI researcher Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which some experts argue is a possibility far enough in the future to not be worth researching. The AI would have to become vastly better at software innovation than software innovation output of the rest of the world; economist Robin Hanson is skeptical that this is possible. Another line of criticism posits that intelligence is only one component of a much broader ability to achieve goals: for example, author Magnus Vinding argues that “advanced goal-achieving abilities, including abilities to build new tools, require many tools, and our cognitive abilities are just a subset of these tools. 
Advanced hardware, materials, and energy must all be acquired if any advanced goal is to be achieved.” Vinding further argues that “what we consistently observe [in history] is that, as goal-achieving systems have grown more competent, they have grown ever more dependent on an ever larger, ever more distributed system.” Vinding writes that there is no reason to expect the trend to reverse, especially for machines, which “depend on materials, tools, and know-how distributed widely across the globe for their construction and maintenance”. Such arguments lead Vinding to think that there is no “concentrated center of capability” and thus no “grand control problem”. Even if superintelligence did emerge, it would be limited by the speed of the rest of the world and thus prevented from taking over the economy in an uncontrollable manner. Futurist Max More, for instance, argues: "Unless full-blown nanotechnology and robotics appear before the superintelligence, … [t]he need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out … even within years. … Even a greatly advanced SI won't make a dramatic difference in the world when compared with billions of augmented humans increasingly integrated with technology … ." More fundamental limits that may prevent an uncontrollable AGI takeover include irreducible uncertainty about the future and computational complexity that scales exponentially with the size of the problem as well as various hardware limits of computation. Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. Slate notes that some researchers are dependent on grants from government agencies such as DARPA. Several skeptics argue that the potential near-term benefits of AI outweigh the risks. Facebook CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things," such as curing disease and increasing the safety of autonomous cars. Intermediate views Intermediate views generally take the position that the control problem of artificial general intelligence may exist, but that it will be solved via progress in artificial intelligence, for example by creating a moral learning environment for the AI, taking care to spot clumsy malevolent behavior (the 'sordid stumble') and then directly intervening in the code before the AI refines its behavior, or even peer pressure from friendly AIs. In a 2015 panel discussion in The Wall Street Journal devoted to AI risks, IBM's vice-president of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation." Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet". In 2004, law professor Richard Posner wrote that dedicated efforts for addressing AI can wait, but that we should gather more information about the problem in the meanwhile. Popular reaction In a 2014 article in The Atlantic, James Hamblin noted that most people do not care one way or the other about artificial general intelligence, and characterized his own gut reaction to the topic as: "Get out of here. 
I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?" During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated: Obama added: Hillary Clinton stated in What Happened: In a YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long-term survival of humanity. Referencing a poll of its readers, Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat." In 2018, a SurveyMonkey poll of the American public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good". One techno-utopian viewpoint expressed in some popular fiction is that AGI may tend towards peace-building. Mitigation Many scholars concerned about the AGI existential risk believe that the best approach is to conduct substantial research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? Such researchers also admit the possibility of social measures to mitigate the AGI existential risk; for instance, one recommendation is for a UN-sponsored ‘Benevolent AGI Treaty’ that would ensure only altruistic ASIs be created. Similarly, an arms control approach has been suggested, as has a global peace treaty grounded in the international relations theory of conforming instrumentalism, with an ASI potentially being a signatory. Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests a general principle of "differential technological development", that funders should consider working to speed up the development of protective technologies relative to the development of dangerous ones. Some funders, such as Elon Musk, propose that radical human cognitive enhancement could be such a technology, for example through direct neural linking between human and machine; however, others argue that enhancement technologies may themselves pose an existential risk. Researchers, if they are not caught off-guard, could closely monitor or attempt to box in an initial AI at risk of becoming too powerful, as a stop-gap measure. A dominant superintelligent AI, if it were aligned with human interests, might itself take action to mitigate the risk of takeover by rival AI, although the creation of the dominant AI could itself pose an existential risk. 
Views on banning and regulation Banning There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile. Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists agree with the skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly. The latter issue is particularly relevant, as artificial intelligence research can be done on a small scale without substantial infrastructure or resources. Two additional hypothetical difficulties with bans (or other regulation) are that technology entrepreneurs statistically tend towards general skepticism about government regulation, and that businesses could have a strong incentive to (and might well succeed at) fighting regulation and politicizing the underlying debate. Regulation Elon Musk called for some sort of regulation of AI development as early as 2017. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians express skepticism about the wisdom of regulating a technology that's still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argues that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest to rather develop common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. Developing well regulated weapons systems is in line with the ethos of some countries' militaries. On October 31, 2019, the United States Department of Defense's (DoD's) Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the DoD that seeks to manage the control problem in all DoD weaponized AI. Regulation of AGI would likely be influenced by regulation of weaponized or militarized AI, i.e., the AI arms race, the regulation of which is an emerging issue. Any form of regulation will likely be influenced by developments in leading countries' domestic policy towards militarized AI, in the US under the purview of the National Security Commission on Artificial Intelligence, and international moves to regulate an AI arms race. 
Regulation of research into AGI focuses on the role of review boards and encouraging research into safe AI, and the possibility of differential technological progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control. Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process. See also AI takeover Artificial intelligence arms race Effective altruism § Long-term future and global catastrophic risks Grey goo Human Compatible Lethal autonomous weapon Regulation of algorithms Regulation of artificial intelligence Robot ethics § In popular culture Superintelligence: Paths, Dangers, Strategies Suffering risks System accident Technological singularity The Precipice: Existential Risk and the Future of Humanity Paperclip Maximizer Notes References Future problems Human extinction Technology hazards Doomsday scenarios
41355660
https://en.wikipedia.org/wiki/Alien%3A%20Isolation
Alien: Isolation
Alien: Isolation is a 2014 survival horror game developed by Creative Assembly and published by Sega for Windows, PlayStation 3, PlayStation 4, Xbox 360 and Xbox One. Based on the Alien film series, Isolation is set 15 years after the events of the original 1979 film Alien, and follows engineer Amanda Ripley, daughter of Alien protagonist Ellen Ripley, as she investigates the disappearance of her mother. The game emphasizes stealth and survival horror gameplay, requiring the player to avoid and outsmart a single Alien creature with tools such as a motion tracker and flamethrower. Alien: Isolation was designed to resemble the original Alien film rather than its more action-oriented 1986 sequel Aliens, and features a similar lo-fi, 1970s vision of what the future could look like. It runs on an engine built to accommodate the Alien's behaviour and technical aspects such as atmospheric and lighting effects. Creative Assembly intended to make Alien: Isolation a third-person game, but used first-person to create a more intense experience. Several downloadable content packs were released, some of which relive scenes from the original film. Alien: Isolation received generally positive reviews and sold over two million copies by May 2015. Its retro-futuristic art direction, sound design, and artificial intelligence were praised, while its characters and unforgiving gameplay received criticism. It appeared in multiple "best of" lists and won several year-end awards, including Best Audio at the 2015 Game Developers Choice Awards and Audio Achievement at the 11th British Academy Games Awards. It was ported to Linux and OS X in 2015, Nintendo Switch in 2019, and Android and iOS mobile devices in 2021, and was added to the Amazon Luna service in 2021. Gameplay Alien: Isolation is a single-player action-adventure game with an emphasis on stealth and survival horror. The player controls Amanda Ripley from a first-person perspective, and must explore a space station and complete objectives while avoiding, outsmarting and defeating enemies. Objectives range from activating computers to collecting certain items or reaching a specific area. The player can run, climb ladders, sneak into vents, crouch behind objects to break the line of sight with enemies, and peek over or lean around for a safe view. The player also has the ability to go under tables or inside empty lockers to hide from enemies. Amanda encounters various enemies throughout the station, including hostile human survivors and androids. The player can either eliminate them or avoid them using stealth or distractions. The main antagonist, an Alien creature, pursues the player throughout. The Alien creature cannot be defeated, requiring the player to use stealth tactics in order to survive. Instead of following a predetermined path, the Alien has the ability to actively investigate disturbances and hunt the player by sight or sound. Along the way, the player can use both a flashlight and a motion tracker to detect the Alien's movements. However, using any of these increases the chance of the Alien finding the player. For example, if the Alien is close enough, it will be attracted by the tracker's sound, forcing the player to use the tracker wisely and remove it as soon as it detects motion. The motion tracker cannot detect enemies when they are not moving and cannot determine if the alien creature is up in the ducts or on ground level. 
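The trade-off described above (the tracker only reveals moving targets, and its audible ping can draw the creature when it is nearby) boils down to a distance- and noise-based detection check. The sketch below illustrates that idea only; it is not Creative Assembly's code, and every name and threshold in it is invented for illustration.

```c
#include <math.h>
#include <stdbool.h>

/* Illustrative sketch of the motion-tracker trade-off described above.
 * Not from Alien: Isolation; all names and thresholds are hypothetical. */

typedef struct {
    double x, y;      /* position on the deck   */
    double speed;     /* current movement speed */
} Entity;

static double distance(const Entity *a, const Entity *b) {
    return hypot(a->x - b->x, a->y - b->y);
}

/* The tracker shows a blip only for targets that are actually moving. */
bool tracker_shows(const Entity *target) {
    return target->speed > 0.0;
}

/* Probability (0..1) that the creature investigates the player this tick.
 * Using the tracker while the creature is close adds an audible ping;
 * moving quickly adds noise of its own. */
double investigate_chance(const Entity *alien, const Entity *player, bool tracker_on) {
    double chance = 0.05;                        /* baseline sight/sound check */
    if (tracker_on && distance(alien, player) < 10.0)
        chance += 0.30;                          /* ping heard at close range  */
    if (player->speed > 1.5)
        chance += 0.25;                          /* running is loud            */
    return chance > 1.0 ? 1.0 : chance;
}
```

The creature's behaviour in the actual game is considerably richer, as the development section below describes.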
Although Amanda gains access to a revolver, a shotgun, a bolt gun, a flamethrower, and a stun baton over the course of the game, Alien: Isolation emphasizes evasion over direct combat by providing limited ammunition. The player can also craft useful items by collecting schematics and different materials. Items include EMP detonators, noisemakers, molotov cocktails, and pipe bombs; these can help the player deal with enemies. For example, the noisemaker can be used to attract enemies in a particular direction. The Alien is afraid of fire, so using flame weapons forces it to retreat into the station's ventilation system. The player has a limited amount of health which decreases when attacked by enemies; health is restored with medkits, which can be crafted with materials in Amanda's inventory. The space station is divided into sections connected by trams and elevators. Some doors require certain actions before entry is allowed; for example, some require a keycard or entry codes, while others need to be hacked or cut open with welding torches. Computer terminals and rewiring stations can be used to access information and trigger actions such as disabling security cameras or manipulating the space station's air-purification mechanism. An automap helps the player navigate the different areas. To save game progress, the player needs to locate a terminal and insert Amanda's access card. If Amanda dies, the player will have to restart from the last saved point. In addition to the campaign mode, Alien: Isolation features a special mode, called Survivor Mode, in which the player must complete objectives within a time limit on different challenge maps while being hunted by the Alien. Plot In 2137, 15 years after the events of the original Alien film, Amanda Ripley, daughter of Ellen Ripley, learns that the flight recorder of her mother's ship, the Nostromo, has been located. The flight recorder was retrieved by salvage ship Anesidora, and is being held aboard Sevastopol, a Seegson Corporation space station orbiting gas giant KG-348. Christopher Samuels, a Weyland-Yutani android, offers Ripley a place on the retrieval team so that she can have closure regarding the fate of her missing mother. Ripley, Samuels, and Weyland-Yutani executive Nina Taylor travel to Sevastopol via the Torrens, a courier ship, finding that the station is damaged and external communications are offline. They attempt to spacewalk into Sevastopol, but their EVA line is severed by debris, separating Ripley from the others. While exploring the station, Ripley finds the flight recorder of the Nostromo, but the data has been corrupted. She also discovers that the station is out of control due to a single, deadly Alien creature lurking aboard. After regrouping with Samuels and Taylor, Ripley meets the station's Marshal Waits and his deputy Ricardo. Waits explains that the alien was brought onto the station by Anesidora captain Henry Marlow. After recovering the Nostromo flight recorder while salvaging its remains in space, the crew was able to backtrack the Nostromo's path to LV-426 and locate the derelict ship, containing within a nest of alien eggs. Marlow's wife was attacked by a Facehugger, and was then brought aboard Sevastopol for emergency medical treatment, but died after a Chestburster hatched from her. Waits convinces Ripley to contain the Alien inside a remote module of the station, and then eject it into space. Although Ripley is successful, Waits ejects the module with her still inside. 
As the ejected module careens towards KG-348, Ripley space-jumps back to Sevastopol using a space suit. Ripley makes her way back to confront Waits, but Ricardo reveals that the station's service androids abruptly started slaughtering the remaining crew, including Waits. Samuels attempts to interface with the station's artificial intelligence, APOLLO, to cease the rampage. However, the system's defensive countermeasures kill him shortly after he opens a path for Ripley into APOLLO's control core. There, Ripley discovers that Seegson had been trying to sell off Sevastopol to Weyland-Yutani, who instructed APOLLO to protect the alien at all costs. Ripley tells APOLLO that the creature is no longer aboard the station and demands that it cease all activity, but the system refuses, stating that "scheduled reactor scans are unverified". At the reactor, Ripley discovers a nest with hundreds of Aliens, and initiates a reactor purge to destroy it. Ripley learns that Taylor was secretly sent to retrieve the Alien from Sevastopol, and that she freed Marlow in exchange for the location of LV-426. However, Marlow double-crossed her and took her hostage aboard the Anesidora. There, Ripley finally discovers the Nostromo's flight recorder, containing Ellen Ripley's monologue from the end of Alien. Meanwhile, Marlow attempts to overload the fusion reactor of the Anesidora to destroy Sevastopol and ensure that no alien creatures survive; Taylor kills him in an attempt to stop him, but she herself is killed by an electric discharge, forcing Ripley to escape shortly before the Anesidora explodes. The explosion destroys Sevastopol's orbital stabilisers, causing the station to slowly drift into KG-348's atmosphere. Ripley and Ricardo contact the Torrens for extraction, but a facehugger latches on to Ricardo, forcing Ripley to leave him. After making her way outside to help the Torrens detach from the station, Ripley is surrounded by Alien creatures and ultimately thrown into the ship by a blast. Aboard the Torrens, Ripley discovers that another Alien has boarded the ship. When Ripley is cornered in the airlock, she ejects herself and the Alien into space. Adrift in her space suit, Ripley is awakened by a searchlight. Development Alien: Isolation was developed by Creative Assembly, which is best known for its work on the Total War strategy video game series. The idea of developing a game based on the Alien film series from 20th Century Fox was conceived when the company finished work on its 2008 title Viking: Battle for Asgard, after publisher Sega acquired the rights to develop Alien games in December 2006. A six-person team developed the first prototype to pitch the idea, wherein one player would control the alien manually while another would conceal themselves in an environment and try to hide from the creature. The prototype captured the attention of Sega and the project was eventually approved. Because Creative Assembly had no experience with survival horror games, the company hired people from studios such as Bizarre Creations, Black Rock, Crytek, Ubisoft, and Realtime Worlds for the project. According to director Alistair Hope, the development team grew from "a couple of guys crammed in with the Total War team" to a group of 100 people by 2014. Creative Assembly decided to design the game more in line with Ridley Scott's 1979 film Alien as opposed to James Cameron's more action-oriented 1986 sequel Aliens. 
To help the designers authentically recreate the atmosphere of the film, Fox provided them with three terabytes of original production material, including costume photography, concept art, set design, behind the scenes photos, videos, and the film's original sound effect recordings. Artist John Mckellan recalled, "It was a proper gold mine. We saw angles of things we'd never seen before." During the first stage of development, the developers deconstructed the film to find out what made its setting unique. This would allow them to build new environments that were faithful to it. Similarly, the film's original soundtrack was deconstructed so that composers could identify the main cues, which would then be used as templates to extend the soundtrack and fill in the length of the game. The developers also met Alien and Blade Runner editor Terry Rawlings, who would give them additional insight. Rather than go for a shiny, high-tech science fiction look, the designers opted to recreate the setting and feel of the original Alien film using the work of concept artists Ron Cobb and Mœbius. As a result, the game features a lo-fi, 1970s vision of what the future would look like. For example, it features clunky machinery like phone receivers, monochrome displays, and distorted CRT monitors. To create period authentic distortion on in-game monitors, the developers recorded their in game animations onto VHS and Betamax video recorders, then filmed those sequences playing on an "old curvy portable TV" while adjusting the tracking settings. As digital hacking had not been conceived in the 1970s, the hacking device was built the way it would have been built on the set of the film, and requires players to tune into a computer's signal while selecting icons on its screen. Artist Jon McKellan noted, "We had this rule: If a prop couldn't have been made in '79 with the things that they had around, then we wouldn't make it either." Creative Assembly wanted Alien: Isolation to have a story that was closely related to the film. As a result, the team decided to explore a story set 15 years after the events of the film which would involve Ellen Ripley's daughter and the Nostromos flight recorder. Writer Will Porter explained that the process of creating a backstory for Amanda was "refreshing" as he felt that she was an overlooked character of the Alien universe. Actress Sigourney Weaver agreed to reprise her role as Ellen Ripley to voice small sections because she felt that the story was interesting and true to the film. Along with Weaver, the original Alien cast, which includes Tom Skerritt, Veronica Cartwright, Harry Dean Stanton, and Yaphet Kotto, reprised their roles for the separate downloadable content missions, marking the first time they were brought back together since the release of the film. All the characters were created with 3D face scans. A major story rewrite happened around a year before release and leftovers from it were discovered in a console build. Alien: Isolation runs on a proprietary engine that was built from scratch by Creative Assembly. Previously used in Battle for Asgard, the engine was adapted to accommodate technical aspects such as the atmospheric and lighting effects and the alien's behavioural design. The engine's deferred rendering allowed artists to place "hundreds" of dynamic lights in a scene and achieve great geometric detail. A major toolchain update occurred six months into development. 
Although the new tools eventually improved workflow, they initially caused major disruptions because previous work had to be discarded or ported into the new tools, taking valuable development time away from the team. The alien was designed to look similar to H. R. Giger's original design, including the skull underneath its semitransparent head. However, the designers did alter its humanoid legs with recurved ones to provide the alien a walk cycle that would hold up to scrutiny during longer encounters with the player. Between 70 and 80 different sets of animation for the alien were created. The alien's artificial intelligence was programmed with a complex set of behavioural designs that slowly unlock as it encounters the player, creating the illusion that the alien learns from each interaction and appropriately adjusts its hunting strategy. As gameplay designer Gary Napper explains, "We needed something that would be different every time you played it. You're going to die a lot, which means restarting a lot, and if the alien was scripted, you'd see the same behaviour. That makes the alien become predictable, and a lot less scary." The save system was inspired by a scene in the film where Captain Dallas uses a key-card to access Nostromos computer, Mother. The developers originally planned to add a feature that would allow players to craft weapons, but the idea was ultimately discarded. According to Hope, "We thought about what people would want to do in order to survive. We explored different ideas, and one of them was fashioning weapons to defend yourself. That was quite early on, but then we realised that this game isn't really about pulling the trigger." Another cancelled feature was the alien's iconic acid blood as a game mechanic, which could melt through metal like in the film. Although the feature was reportedly implemented at one point, it was removed because the developers felt it would take the game in a "weird" direction. Although the game is played from a first-person perspective, it was developed for a considerable amount of time in third-person view. The perspective was changed after the team realised that first person changed the gameplay experience significantly. Hope explained that, in third-person view, Alien: Isolation would have become "a game about jockeying the camera and looking after your avatar. But in first-person it's you that's being hunted. If you're hiding behind an object and you want to get a better view of your surroundings, you have to move." Development took four years after Creative Assembly pitched the idea to Sega. Alien: Isolation was released to manufacturing on 9 September 2014. It is dedicated to Simon Franco, a programmer who died during development. Marketing and release Alien: Isolation was first unveiled on 12 May 2011 when UK government minister Ed Vaizey visited Creative Assembly and revealed on his Twitter account that the studio was hiring for an Alien game. Although no gameplay details were confirmed, Sega confirmed that Isolation would be released for consoles. Sega boss Mike Hayes said it was "very much a triple-A project. We want this to be a peer to the likes of Dead Space 2." Although the game's name was anticipated following a trademark registration in October 2013 and some screenshots leaked in December 2013, Alien: Isolation was announced and confirmed for Windows, PlayStation 3, PlayStation 4, Xbox 360, and Xbox One with the release of a teaser trailer on 7 January 2014. 
The fact that Sega's previous Alien game, Aliens: Colonial Marines, received a negative public reaction did not affect Creative Assembly. According to Napper, the vocal reaction from the Alien fanbase assured the team that they were building a game the fanbase wanted. Alien: Isolation was presented at E3 2014, where journalists had a chance to play the game. Polygon described the demo as effective and terrifying. The game was also playable on the Oculus Rift virtual reality (VR) headset that was shown at the show. It was awarded Best VR Game and was nominated for Game of the Show, Best Xbox One Game, Best PlayStation 4 Game, Best PC Game, and Best Action Game at the IGNs Best of E3 2014 Awards. At the 2014 Game Critics Awards, it was nominated for Best of Show, Best Console Game, and Best Action/Adventure Game. In August 2014, a cinematic trailer was shown at Gamescom. Alien: Isolation was released on 7 October 2014. According to Sega, it had sold more than one million copies worldwide as of January 2015. As of March 2015, Alien: Isolation had sold over 2.1 million copies in Europe and the US. It was ported by Feral Interactive to Linux and OS X in late 2015, to Nintendo Switch on 5 December 2019, and to Android and iOS devices on 16 December 2021, and was added to the Amazon Luna service on 14 October 2021. Downloadable content Alien: Isolation supports additional in-game content in the form of downloadable content packs. The first two packs, Crew Expendable and Last Survivor, were made available at the time of release. Crew Expendable, included in the "Nostromo Edition", relives a scene from Alien and involves the player controlling Ripley, Dallas or Parker attempting to flush an alien creature from the Nostromos air vents into the ship's airlock. Last Survivor, which was originally made available to players who pre-ordered at certain retailers, is set during the film's finale and involves the player controlling Ripley as she tries to activate the Nostromos self-destruct sequence and reach the escape shuttle. Between October 2014 and March 2015, five additional downloadable content packs were released, expanding the Survivor Mode with new features. A season pass to these five Survivor Mode packs could be purchased before they were released. The first pack, Corporate Lockdown, was released on 28 October 2014 and includes three new challenge maps where the player must complete certain objectives. The second pack, Trauma, was released on 2 December 2014 and includes a new character for use in three additional challenge maps. The third pack, Safe Haven, was released on 13 January 2015 and introduces a new character and a new gameplay mode where the player must complete a series of missions under a time limit. The fourth pack, Lost Contact, which was released on 10 February 2015, is similar to Safe Haven, but offers a different playable character and setting. The last pack, The Trigger, was released on 3 March 2015 and includes three additional challenge maps and a new playable character. A collection featuring the base game and all the downloadable content packs was released for Linux, OS X, PlayStation 4 and Xbox One in late 2015. Reception Critical reception for Alien: Isolation was "generally favourable", according to review aggregator Metacritic. 
Josh Harmon of Electronic Gaming Monthly felt that Alien: Isolation "succeeds as a genuine effort to capture the spirit of the film franchise in playable form, rather than a lazy attempt to use it as an easy backdrop for a cash-in with an ill-fitting genre." Writing for GameSpot, Kevin VanOrd praised the tense and frightening gameplay, stating that "when all mechanics are working as intended, alien-evasion is dread distilled into its purest, simplest form." However, he criticised the "trial and error" progression and frustrating distances between save points. Jeff Marchiafava of Game Informer stated similar pros, but criticised the story and poor acting from the voice actors. The visuals and atmosphere were praised. Polygon editor Arthur Gies felt that Alien: Isolation is "a beautiful game, full of deep shadows and mystery around every corner," while Dan Whitehead of Eurogamer praised the lighting and unusually compelling environment design. IGNs Ryan McCaffrey gave high marks to the retro-futuristic art direction and sound design, writing: "From wisps of smoke that billow out of air vents to clouds of white mist that obscure your vision when you rewire an area's life-support systems in order to aid your stealthy objectives, Isolation certainly looks and sounds like a part of the Alien universe." Similarly, PC Gamer said that the art design sets Alien: Isolation apart from the likes of System Shock or Dead Space and creates a "convincing science-fiction world, with machines and environments that are functional and utilitarian, rather than overtly futuristic." The characters were criticised. Game Informer stated that "Amanda exhibits little growth or personality, other than concern for her fellow humans and a desire not to die gruesomely," while Blake Peterson of GameRevolution noted that none of the characters are fully developed. According to him, "we never spend enough time with them to build the emotional bond necessary for their inevitable deaths to mean anything." GameTrailers said that most of the computer terminals contain unoriginal logs to describe predictable events, but also remarked that reading reports from different computer terminals "grounds Sevastopol in an appreciable way." Writing for GamesRadar, David Houghton praised the alien's advanced artificial intelligence, stating that "progress becomes a case of 'if' and 'how', not 'when'. Movement is measured in inches and feet rather than metres, and simply remaining alive becomes more exhilarating than any objective achieved." Peterson praised the gameplay as tense, scary and effective, writing that Alien: Isolation is "a solid, incredibly striking example of the [survival horror] genre that uses its first person perspective to greater personalize the horror". PC Gamer credited the crafting system for creating "a lot of unexpected depth", allowing players to outsmart enemies in multiple ways. The Survivor Mode was praised by Chris Carter of Destructoid, who felt it offered players different feelings and experiences each time they played it. Although the gameplay was praised by several reviewers, some found Isolation unnecessarily long, repetitive, and unforgiving. In a mixed review, McCaffrey felt that it did not offer many options of survival, requiring players to spend most of their time hiding in lockers "staring at the motion tracker". Polygon criticised the overexposure to the alien creature, turning Alien: Isolation into an irritating experience. 
As Gies explained, "Every time I thought I heard the monster, every blip on my motion tracker, was a cause for a tightness in my chest at first. By the 300th time I dived under a table or into a locker, I wasn't scared anymore — I was annoyed." Despite the criticism, Alien: Isolation was considered a "brave" title by IGN due to its difficult and unforgiving gameplay, a feature that is uncommon in games with large development costs. Accolades Alien: Isolation received several year-end awards, including PC Gamer's Game of the Year 2014, Audio Achievement at the 11th British Academy Games Awards, Best Audio at the 15th Game Developers Choice Awards, and four awards at the 14th National Academy of Video Game Trade Reviewers. It also appeared on several year-end lists of the best games of 2014. It was ranked 1st in The Daily Telegraph's 25 best video games of 2014, 2nd in Empire's 10 Best Games of the Year, 2nd in Time's Top 10 Video Games of 2014, 4th in The Guardian's Top 25 Games of 2014, 3rd in Eurogamer's reader-voted top 50 games of 2014, and in the Daily Mirror's 10 best games of 2014. In 2015, Alien: Isolation was ranked 6th in Kotaku's list of the 10 Best Horror Games. In 2018, The A.V. Club ranked Alien: Isolation as the 5th greatest horror game of all time in a list of 35, while GamesRadar+ ranked Alien: Isolation as the 3rd best horror game of all time out of 20. Legacy Although Sega said that sales of Isolation were weak, Creative Assembly originally considered the possibility of developing a sequel. Later, it was revealed that most of the Alien: Isolation design team no longer worked at Creative Assembly, and that the company was working on a first-person tactical shooter based on a new IP. In 2016, a pinball video game adaptation, Aliens vs. Pinball, was released for the Zen Pinball 2 and Pinball FX 2 video games developed by Zen Studios. Two comic book sequels, Aliens: Resistance and Aliens: Rescue, and a novelisation by Keith DeCandido, were released in 2019. A spin-off sequel developed by D3 Go, Alien: Blackout, was released for mobile devices on 24 January 2019, while a web television series adaptation, Alien: Isolation – The Digital Series, was released on IGN on 28 February 2019. 
5990945
https://en.wikipedia.org/wiki/King%20of%20Thorn
King of Thorn
is a Japanese fantastique manga series written and illustrated by Yuji Iwahara. It was published by Enterbrain in the seinen magazine Monthly Comic Beam between October 2002 and October 2005 and collected in six bound volumes. It is licensed in North America by Tokyopop, which published the final volume in November 2008. The series is about a group of people who are put in suspended animation to escape a mysterious plague that turns people to stone; upon waking, there appear to be only seven survivors in a world run wild, including a Japanese teenage girl named Kasumi Ishiki and a British man named Marco Owen. The survivors soon discover that the entire ruin is filled with strange, dinosaur-like creatures and other monstrous aberrations of nature. Thinking that a great amount of time must have passed since their arrival on the island, the survivors soon discover not only that their sleep was far too short for such dramatic changes to be a natural occurrence, but also that the situation is far greater than they could imagine. A feature anime film adaptation produced by Sunrise and directed by Kazuyoshi Katayama was released on May 1, 2010. Plot King of Thorn is a science fiction survivor drama. After a viral infection known as the Medusa virus lands in Siberia and spreads contagiously throughout Earth, 160 humans are chosen by an organization called Venus Gate as candidates for testing a cure against the virus. As the story begins, Kasumi is selected as one of the 160 people for the experiment. She is forced to enter treatment and cold sleep without her twin sister Shizuku, about whom she cares deeply. However, 48 hours later, some of those put in hibernation abruptly wake up, only to find the facility where they were supposed to be treated in a total state of decay, invaded by a lush jungle of trees and, in particular, strange vines covered in thorns, which appear to have something of a mind of their own. Not only that, but the survivors soon discover that the entire ruin is filled with strange, dinosaur-like creatures and other monstrous aberrations of nature. Thinking that a great amount of time must have passed since their arrival on the island, they soon discover not only that their sleep was far too short for such dramatic changes to be a natural occurrence, but also that the situation is far greater than they could imagine. The Medusa Virus A pivotal role in the series is played by the Medusa virus, a deadly disease named after the Medusa of Greek mythology, the Gorgon whose gaze could turn anyone and anything to stone. The virus itself is extremely virulent, infecting its victims' cells and causing seizures while drying up the body, turning the infected into a solid, stone-like corpse. While perceived as a terrible malady by the world, in reality the Medusa virus is not a virus at all, but a shapeless presence brought to Earth from outer space. It landed in Siberia during a meteor shower, by chance close enough to a young boy and his pet deer to instantly infect them both. Unknowingly bringing the concentrated substance to his home, the boy infected his whole family, including his sister Alice. She unknowingly uncovered the true nature of Medusa when her imaginary friend, a cat-boy hybrid, came to life by erupting from her back. 
Terrified by the death of her family and the fact that the newborn creature devoured her brother's deer, she trapped it in her house and set it on fire, thus spreading Medusa all over the world through the fire's smoke. It was then that the people affiliated with Venus Gate, a religious sect, showed themselves and approached Alice, believing her ability to turn imagination into reality to be a gift from the heavens. Experimenting on her and Medusa, during that time they employed a hacker named Zeus as their security specialist, though in doing so they doomed themselves when he, pursuing his crazy dreams, developed an artificial way to force dreams into suitable hosts and, thus, fabricate mind-created realities at will to accomplish his plan to force the world into a primal survival game to amuse himself. Characters Kasumi is the main protagonist of the story, which for the most part is told by her point of view. One of the 160 people around the world lucky (or rich) enough to be chosen as potential cure testbeds for the Medusa virus, she left the world leaving behind herself her twin sister Shizuku, who was also victim of the virus, though until the end encouraged Kasumi to accept such an opportunity and be cured. She was the first of these people to be awakened and the first to find the ruin that struck the medical facility they were treated in. A shy and gentle girl, she was notable in that she was the only one who actually supported the thuggish-looking Marco when the survivors started their escape, something that eventually turned into a crush, though ill-advised by more than one of the other characters, who saw Marco as dangerous and untrustworthy. Occasional glimpses are offered on Kasumi's life prior to her being chosen: she was quiet and reserved, the foil of her twin sister in mind, though identical in body, save for the fact that she wore glasses. These flashbacks show that Kasumi was strongly attached to Shizuku, to the point of borderline obsession, which in turn not only fueled unstable thoughts, such as wishing to die in the same way at the same time, but also gave birth to suicidal tendencies that manifested themselves when Kasumi, while bathing, tried to take her life by slitting her wrist with a razor, unable to accept the idea of living in a world without her beloved sister. Shizuku was nearby and saved her sister, though their bond was greatly shaken. Only at the end of the series it was revealed that Kasumi was dead all along: before parting, she and Shizuku shared a brief moment together on the cliffs near the castle that housed the medical treatment facility, and during this instance Kasumi, still unwilling to be the only one to survive, wished for the two of them to die together by jumping down the cliff. Rejected by Shizuku, Kasumi was accidentally thrown over the edge during the ensuing struggle and killed, an event that unleashed her sister's Medusa powers and, with them, the chaos that would envelop the island and the world itself. The Kasumi the viewer sees during the entire story is, in fact, a Medusa-produced entity created by Shizuku, who wished for her sister to be alive no matter what, even at the expense of the world. In the film version, she and Tim are left as the only survivors at the facility, with Kasumi vowing to help him reunite with his parents. Marco is the main male character of the series. 
He is a fearsome-looking man with a muscular frame and many intimidating tattoos all over his body who is also an expert hacker, so much so that he was able to crack through the CIA's computers – and those of Venus Gate, something that had him arrested and incarcerated for more than half a century (during the events of the manga, he still had 60 years to serve in prison). While a prisoner he was approached by the NSA, which offered him complete parole for his crimes in exchange of his help in dealing with the Medusa affair. Becoming a de facto spy for the United States, he was sent as a patient to the gathering at Venus Gate labs, the same where all the other main characters were sent, in order to assess the objectives of the gathering, gather evidence on the mysterious Level 4 labs of the island, and also to pursue a personal vendetta against his old friend Zeus, who framed him for crimes he did not commit before vanishing. He was trying to accomplish what he was sent to do when the Medusa outbreak and Shizuku's awakening occurred, and in the ensuing chaos he was left unconscious. Thus, he was able to present himself to the other survivors as one of them. During most of the series Marco acts tough, and his overall appearance and mannerism leave the other survivor highly suspicious of him, especially when they discover that he never had the Medusa virus in the first place. Nevertheless, given the fact that he is the most skilled of them all, Marco is silently accepted as the leader of the group, at least until he goes on his own to try to find Zeus. He was also the first of them to encounter the ghostly apparition of Alice, and on more than one instance was seen arguing with her. Also, he himself was suspicious of Kasumi after seeing video footage of her twin sister Shizuku trailing after the wave of monstrous beings that erupted from Level 4. During the climax of the story he died once, killed by Zeus's artificially created beast soldiers, though his corpse was later found and resurrected by Alice, a deed for which she sacrificed her life. He was thus able to face Zeus head-on by literally hacking into him and, finally, completely erasing him for good. What happened to him and Kasumi after they saved themselves is left to the reader to speculate, though apparently their feelings for each other were mutual. In the film version, he succumbs to his injuries after making Kasumi promise him to live life to the fullest and to not take it for granted: "Please, grant a soldier his dying wish." She then kisses him goodbye and thanks him for all he did for her, intending to follow her promise. Katherine is a pretty young woman in her late twenties and the worst Medusa case among the survivors, being the most infected one of them all. Since their encounter she acts in a motherly way towards Tim, the youngest of them being only a little boy. Before becoming infected, Katherine was an alcoholic and was prone to abuse her son Michael, until he was taken away from her and placed in foster care. This events shook her greatly and left her with bitter disgust for herself, prompting her to try to be a true mother for Tim as she wasn't for her son. Later in the story, the Medusa infection in her completely consumes her body, which starts to crumble apart during an assault of monstrous creatures. On the verge of death she deliberately stabs herself, triggering the Medusa inside her and becoming what she believed was the figure of an ideal mother, a harpy-like creature. 
While still capable of recognizing Tim and the other survivors, she herself was nothing more than what Katherine's heart and memories were at the moment of her death. Nevertheless, she still showed the same attachment to Tim and fought against the monsters threatening him, though she was seriously wounded and was left with the boy to recover. Later captured by Zeus' soldiers, she was set free at the end. In the film version, she succumbs to the Medusa virus during a helicopter crash. Peter was an engineer and doctor, the creator of the cold sleep capsules used by Kasumi and the other people gathered by Venus Gate. Despite his success in creating stable prototypes of the capsules, a sudden crisis in the electronics market left him without financial support, until Venus Gate expressed interest in his project and hired him and his colleagues. He succeeded in perfecting the cold sleep capsules and gained access to most of Venus Gate's labs, though he never clearly understood what the mysterious Level 4 was for. However, after developing the capsule he was suddenly withdrawn from the project, which was passed to Level 4, leaving him bitter with resentment, so much so that his only obsession became retrieving his cold sleep capsule from Venus Gate. He apparently succeeded in discovering the Level 4 data, though in doing so he was discovered and taken prisoner, after which he was infected with Medusa. Although awoken with the other survivors, Peter turned traitor when he took Kasumi with him (a mind-planted instinct forced on him by Zeus) and searched for the laptop containing the downloaded files. He paid for this search with his life when the room he was in suddenly collapsed on him. Nevertheless, as he was infected with Medusa, he managed to come back to life as a twisted being filled with insects and cockroaches. Taken prisoner while trying to get back his original cold sleep capsule, he was killed by Zeus when he tried to attack him. In the film version, he is beheaded by one of the creatures unintentionally imagined by Tim after giving Kasumi vital information on the CSCC. (The Senator) Alexandro is an elderly white politician who bought his place among those chosen to be cured; gruff and cowardly, he always put his own well-being before that of others. He was the first to recognize, from his tattoos, that Marco Owen was a dangerous criminal, sparking a brief but intense confrontation that ended thanks to Peter's reasoning and Kasumi's pleas. He was devoured alive by ferocious, oversized eels while the group was traveling through a flooded underground tunnel in the manga version, and beheaded by one of Tim's imagined dinosaur-like creatures in the film version. (Tim) Timothy is a very young boy (possibly no more than six years old) hailing from Germany, who is infected with the virus. Quite resilient and mature despite his age, he formed a tight bond with Katherine, seeing her as a mother figure, though calling her "auntie". Because he is so young, his role is relatively minor, though at times he proves to be a valuable asset to the team. It is later revealed that Tim was occasionally controlled by Alice as a guide: he would space out and lead the way, then snap back to his real self once the group was where Alice wanted them to be. He was also one of the survivors at the end of the story and began looking for his parents with Kasumi. 
In the film version, we learn that Tim's parents are divorced and in the midst of a less-than-amicable custody battle over him, something that upsets him greatly. Ron is a black police officer from the US. The most physically strong of the survivors, he was something of a fatalist, frequently lamenting on how hopeless the situation was and how useless struggling to survive would be. Despite his lack of optimism, he nevertheless tried to escape with his life, though he refused to enter Level 4 and was left behind, his leg injured. He was the first to witness Peter's revived, insect-filled Medusa form, at whose sight he became frightened and ran away, struggling by himself against the perils of the ruined facility. He almost drowned when Marco, oblivious of Ron's presence, drained the facility's main shaft from a massive flood, and later reunited himself with Kasumi, Tim and Katherine. His fatalistic approaches gone, this time he stood firm with the conviction of protecting not only himself, but also those with him, which prompted him to rescue Katherine despite her being on the verge of death. Captured by Zeus with Tim, he too was to be transformed into a beast soldier, though when saved by Marco he remained mostly human, save for his hair, which turned pink. In the film version, he succumbs to the Medusa virus while trying to hold off one of the creatures, allowing the others to keep moving. Alice is a Russian little girl and the younger sister of the boy who firstly came into contact with the Medusa virus during the meteor shower in Siberia. It is hinted, and later confirmed by herself, that she was abused by her family, and thus suffered from dissociative identity disorder, using an imaginary friend, a cat-boy named Laloo as a means to endure her harsh life. She was the sole survivor of her whole family when they all died due to the Medusa infection, and in such a moment she unconsciously gave birth — literally — to her imaginary friend, which emerged from her own body and started devouring her brother's deer. Frightened, Alice locked him up and set her house on fire, destroying her family's corpses and killing Laloo, but also spreading Medusa around the world thanks to the fire's smokes. It was after the fire that she was approached by Karol Vega, who took her in and used her as a guinea pig for his experiments with the cold capsules created by Peter Stevens. What happened to her was initially left unknown, and Alice was firstly introduced to the reader as something of a malevolent apparition, often smirking cruelly at the survivors' struggles or talking to the massive monstrosity that was Shizuku. Only later, when Kasumi encountered her, she revealed herself to be a mere ghost, a projection of the mind of the real Alice, still locked into a sealed lab deep inside Level 4. Apparently she never aged, and after all the experiments she endured, what was left of her was only her head, her left arm, and a portion of her chest large enough to contain her heart and one lung, kept alive by medical equipment and the sheer power of Medusa. As a tribute to her namesake, she was guarded by a massive white rabbit automaton born from her imagination. When Zeus' army stormed the lab she was sealed in, she escaped, though doing so put her life at great risk. After discovering Marco's corpse, she burned what was left of her life to resurrect him, vanishing after performing such an act. 
Shizuku is Kasumi's twin sister, identical to her except for her hairstyle, the fact that she does not wear glasses, and her upbeat, cheerful personality. Although infected with Medusa and left out of those chosen to be cured, among whom was her own sister, she still strongly supported Kasumi and insisted that she take the chance to be saved from Medusa. However, their bond grew increasingly strained after Shizuku witnessed Kasumi trying to kill herself, and again when, before they parted, Kasumi wished for them to commit suicide together, unable to accept a world without her twin. During the ensuing struggle Shizuku accidentally pushed Kasumi over the edge of the cliff they were standing on and, shocked at the death of her sister, awakened her Medusa power. Transported into Level 4, still overwhelmed by the memory of her sister's death, Shizuku created countless abominations, initially blobs of quivering flesh, maws and claws that laid waste to the labs and massacred the staff; only through Zeus' mind-controlling program did she conceive the creatures that would populate the ruined island, such as the dinosaurs encountered by the protagonists at the beginning. It is also revealed that the monstrous behemoth encountered at the beginning of the manga, and later shown occasionally as the "mother" of the creatures, was Shizuku herself, the beast being an unconscious armour sealing her off from the world. Often mentioned by Kasumi over the course of the story, Shizuku also appears, besides her monstrous form, in ominous apparitions, often Kasumi's delusions or unconscious mental links between the sisters; only at the end does she appear at the core of her mutated form, begging Kasumi not to uncover the fact that Kasumi is only a Medusa-made creation and that her true self had died. With the death of Zeus, she too ceased to exist, her monstrous body crumbling to pieces and her essence fading away peacefully. Zeus is considered the best hacker in the world and a closely observed subject of both the NSA and the CIA; he is the main antagonist of the series and the archenemy of Marco. The two were apparently rivals in the past, until Zeus broke into the CIA mainframe and then framed Marco for the crime, escaping soon after. At some point he was approached by Karol Vega and made security chief for Venus Gate's labs, especially Level 4, for which he created the tight security systems. A deeply disturbed person, he saw war, strife and conflict as a boring, unrealistic form of entertainment, and claimed that humanity as a species had lost its will to survive. Keeping such ideas a secret from his employers, he devised a way to control Medusa carriers at will, applying it to Shizuku when he found she was a much more powerful carrier than Alice herself. His last act as a human being was amusing himself while watching Shizuku create deformed aberrations, before turning himself into data and entwining it with the monster Shizuku would later become. Zeus was only seen or mentioned in flashbacks during the course of the story until the end, where he presented himself and declared his intent to use Shizuku, a never-ending supply of creatures defying the laws of nature, to build up his own army of beastman warriors and other fantasy-like beings and with them plunge the world into chaos, an endless game of survival for his own amusement. 
To do so, and knowing that Shizuku would wake her sister from her induced slumber, he tampered with the other survivors' minds, forcing them to unconsciously refrain from running away after waking up and, above all, to protect Kasumi's life, which was necessary to coerce Shizuku into doing his bidding. His material body reduced to a rotting corpse, Zeus manifested himself through membranes expelled by Shizuku's tail, and later took direct control of her, declaring the act to be "the birth of a god". This final act proved fatal, as he was first weakened when Kasumi and Shizuku encountered each other, and then completely erased when Marco hacked into his data. Zeus does not appear in the film version. Ivan was the Russian founder of Venus Gate, a religious cult which came to see the power of Medusa as a gift from God himself, and which took Alice in after discovering her in Siberia. Though her suffering could be attributed to him and his religious views could easily have been labeled senseless fanaticism, Vega was left speechless and horrified by the sheer cruelty of Zeus, of whose true identity and goals he was oblivious, and during the chaotic outbreak at Level 4 he sealed himself inside his study, deep in the facility. He recorded the entire story of how Medusa came into being, how it could turn imagination into reality and what really happened in the lab, ending his last speech by asking Alice for forgiveness, before killing himself with a shotgun blast to the mouth. Much later in the story, Vega's dead body is discovered by Kasumi, Marco, Katherine and Tim. In the film version, he attempts suicide with a pistol before Kasumi intervenes with a shotgun, and then appears to die when the floor beneath him collapses. Media Manga King of Thorn was serialized by Enterbrain in Monthly Comic Beam from October 2002 to October 2005, and collected in six bound volumes. It is published in North America by Tokyopop, in Germany and Hungary by Tokyopop Germany, in France by Soleil, in Italy by Flashbook, and in Spain by Glénat. The Japanese volume 4 was also released in a special edition including a limited-edition figurine of Kasumi. Film An anime film adaptation produced by Kadokawa Pictures premiered in October 2009 at the Sitges Film Festival, was released in theaters in Japan on May 1, 2010, and was released on DVD and Blu-ray Disc in Japan on October 27, 2010 and worldwide in November 2010. The film was directed by Kazuyoshi Katayama from a script written by Katayama and Hiroshi Yamaguchi, with characters designed by Hidenori Matsubara and monsters designed by Kenji Andou. The film retains the same characters from the manga, but it takes major liberties with the plot and storyline. All of the main characters' backstories are pared down and their storylines are changed from their counterparts in the manga. Zeus, who is a major antagonist in the manga, does not appear in the film. Alice, the young Russian girl, does make an appearance, but is deceased in the film's present day. Terry Notary portrays the monsters in a CGI suit (in the English dub), while Toru Nara voices them (in the Japanese dub). The film also has a different ending from the manga. The ending song for the film is "Edge of This World" by Misia. The anime has been licensed by Funimation Entertainment in North America, Manga Entertainment in the UK and Madman Entertainment in Australia and New Zealand. 
Reception The English edition of King of Thorn was named by the Young Adult Library Services Association as among the 10 best graphic novels for teens for 2008. Volume one was praised by Publishers Weekly as "a gripping entry in the genre of violent survivor manga," and praised Iwahara's art for conveying the character's claustrophobia without confusing the reader. Theron Martin of Anime News Network found both the writing and artwork effectively convey the characters' tension and danger, but claimed that Iwahara's borrowing elements from many sources did not initially create an original work, but that as the series progresses it "offer[s] some intriguing twists on sci fi and horror gimmicks." Iwahara's art was singled out for praise, especially for conveying action scenes. The film was nominated for the 4th Asia Pacific Screen Award for Best Animated Feature Film. References External links 2002 manga 2010 anime films Animated films based on manga Cryonics in fiction Dark fantasy anime and manga Enterbrain manga Films scored by Toshihiko Sahashi Funimation Japanese-language films Kadokawa Dwango franchises Madman Entertainment manga Madman Entertainment anime Psychological thriller anime and manga Science fiction anime and manga Seinen manga Sunrise (company) Tokyopop titles
1209130
https://en.wikipedia.org/wiki/RMX%20%28operating%20system%29
RMX (operating system)
iRMX is a real-time operating system designed specifically for use with the Intel 8080 and 8086 families of processors. It is an acronym for Real-time Multitasking eXecutive. Overview Intel developed iRMX in the 1970s and originally released RMX/80 in 1976 and RMX/86 in 1980 to support and create demand for their processors and Multibus system platforms. The functional specification for RMX/86 was authored by Bruce Schafer and Miles Lewitt and was completed in the summer of 1978, soon after Intel relocated the entire Multibus business from Santa Clara, California to Aloha, Oregon. Schafer and Lewitt went on to each manage one of the two teams that developed the RMX/86 product for release on schedule in 1980. Since 2000, iRMX has been supported, maintained, and licensed worldwide by TenAsys Corporation, under an exclusive licensing arrangement with Intel. iRMX has a layered design: it contains a kernel, nucleus, basic I/O system, extended I/O system and human interface. An installation need include only the components required: intertask synchronization, communication subsystems, a filesystem, extended memory management, command shell, etc. The native filesystem is specific to iRMX, but has many similarities to the original Unix (V6) filesystem, such as 14-character path name components, file nodes, sector lists, application-readable directories, etc. iRMX supports multiple processes (known as jobs in RMX parlance), and multiple threads (known as tasks) are supported within each job. In addition, interrupt handlers and threads exist to run in response to hardware interrupts. Thus, iRMX is a multi-processing, multi-threaded, pre-emptive, real-time operating system (RTOS). Commands The following commands are supported by iRMX 86. ATTACHDEVICE ATTACHFILE BACKUP COPY CREATEDIR DATE DEBUG DELETE DETACHDEVICE DETACHFILE DIR DISKVERIFY DOWNCOPY FORMAT INITSTATUS JOBDELETE LOCDATA LOCK LOGICALNAMES MEMORY PATH PERMIT RENAME RESTORE SUBMIT SUPER TIME UPCOPY VERSION WHOAMI Historical uses iRMX III on Intel Multibus hardware is used in the majority of core systems on CLSCS, the London Underground Central line signals control system, which was supplied by Westinghouse (now Invensys) and commissioned in the late 1990s. The Central line is an automatic train operation line. Automatic train protection is provided by trackside and train-borne equipment that does not use iRMX. It is the automatic train supervision elements that use a mix of iRMX on Multibus and Solaris on SPARC computers. 16 iRMX local site computers are distributed along the Central line, together with 6 central iRMX computers at the control centre. All 22 iRMX computers are dual-redundant. iRMX CLSCS continues in full operation. Oslo Metro uses a similar, although less complex, Westinghouse-supplied iRMX control system through the central Common Tunnel tracks. This was expected to be decommissioned in 2011. Variants Several variations of iRMX have been developed since its original introduction on the Intel 8080: iRMX I, II and III, iRMX-86, iRMX-286, DOS-RMX, iRMX for Windows, and, most recently, INtime. While many of the original variants of iRMX are still in use, only iRMX III, iRMX for Windows, and INtime are currently supported for the development of new real-time applications. Each of these three supported variants requires an Intel 80386-equivalent or higher processor to run. A significant architectural difference between the INtime RTOS and all other iRMX variants is the support for address segments (see x86 memory segmentation). 
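As background for the segmented-addressing discussion in this and the following paragraph: on the 16-bit 8086, a physical address is formed as segment * 16 + offset, giving a 20-bit address space. The sketch below only illustrates that well-known arithmetic; it is generic x86 background, not an iRMX or INtime API, and the function name is invented.

```c
#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode address formation: physical = segment * 16 + offset.
 * Generic x86 background for the segmentation discussion; the name
 * physical_address is hypothetical, not an iRMX or INtime call. */
static uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;   /* 20-bit result */
}

int main(void) {
    /* Different segment:offset pairs can refer to the same byte. */
    printf("1234:0010 -> %05X\n", (unsigned)physical_address(0x1234, 0x0010));  /* 12350 */
    printf("1235:0000 -> %05X\n", (unsigned)physical_address(0x1235, 0x0000));  /* 12350 */
    return 0;
}
```

The 32-bit flat model used by INtime dispenses with this segment arithmetic entirely, which is why, as noted below, segmented iRMX applications need porting work to run on the INtime kernel.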
The original 8086 family of processors relied heavily on segment registers to overcome limitations associated with addressing large amounts of memory via 16-bit registers. The iRMX operating system and the compilers developed for iRMX include features to exploit the segmented addressing features of the original x86 architecture. The INtime variant of iRMX does not include explicit support for segmentation, opting instead to support only the simpler and more common 32-bit flat addressing scheme. Despite the fact that native processes written for INtime can only operate using unsegmented flat-mode addressing, it is possible to port and run some older iRMX applications that use segmented addressing to the INtime kernel. When Intel introduced the Intel 80386 processor, in addition to expanding the iRMX RTOS to support 32-bit registers, iRMX III also included support for the four distinct protection rings (named rings 0 through 3) which describe the protected-mode mechanism of the Intel 32-bit architecture. In practice very few systems have ever used more than rings 0 and 3 to implement protection schemes. iRMX The I, II, III, -286 and -86 variants are intended as standalone real-time operating systems. A number of development utilities and applications were made for iRMX, such as compilers (PL/M, Fortran, C), an editor (Aedit), process and data acquisition applications and so on. Cross compilers hosted on the VAX/VMS system were also made available by Intel. iRMX III is still supported today and has been used as the core technology for newer real-time virtualization RTOS products including iRMX for Windows and INtime. DOS-RMX DOS-RMX is a variant of the standalone iRMX operating system designed to allow two operating systems to share a single hardware platform. In simplest terms, DOS and iRMX operate concurrently on a single IBM PC compatible computer, where iRMX tasks (processes) have scheduling priority over the DOS kernel, interrupts, and applications. iRMX events (e.g., hardware interrupts) pre-empt the DOS kernel to ensure that tasks can respond to real-time events in a time-deterministic manner. In a functional sense, DOS-RMX is the predecessor to iRMX for Windows and INtime. In practice, DOS-RMX appears as a TSR to the DOS kernel. Once loaded as a TSR, iRMX takes over the CPU, changing to protected mode and running DOS in a virtual machine within an RMX task. This combination provides RMX real-time functionality as well as full DOS services. iRMX for Windows Like DOS-RMX, this system provides a hybrid mixture of services and capabilities defined by DOS, Windows, and iRMX. Inter-application communication via an enhanced Windows DDE capability allows RMX tasks to communicate with Windows processes. iRMX for Windows was originally intended for use in combination with the 16-bit version of Windows. In 2002 iRMX for Windows was reintroduced by adding these RMX personalities to the INtime RTOS for Windows, allowing it to be used in conjunction with the 32-bit protected-mode versions of Windows (Windows NT, Windows 2000, etc.). INtime Like its iRMX predecessors, INtime is a real-time operating system. And, like DOS-RMX and iRMX for Windows, it runs concurrently with a general-purpose operating system on a single hardware platform. INtime 1.0 was originally introduced in 1997 in conjunction with the Windows NT operating system. Since then it has been upgraded to include support for all subsequent protected-mode Microsoft Windows platforms, including Windows Vista and Windows 7. 
INtime can also be used as a stand-alone RTOS. INtime binaries run unchanged on a stand-alone node of the INtime RTOS. Unlike Windows, INtime can run on an Intel 80386 or equivalent processor. Current versions of the Windows operating system generally require at least a Pentium-level processor in order to boot and execute. INtime 3.0 introduced several important enhancements, among them support for multi-core processors and the ability to debug real-time processes on the INtime kernel using Microsoft Visual Studio. INtime is not an SMP operating system; its support for multi-core processors is therefore restricted to a special form of asymmetric multiprocessing. When used on a multi-core processor, INtime can be configured to run on one CPU core while Windows runs on the remaining processor core(s). BOS The operating system was cloned in the 1980s by the East German VEB Robotron-Projekt in Dresden under the name BOS (BOS1810, BOS1820). Uses Use cases can be viewed on the TenAsys website. See also Radisys References Further reading , originally published in Embedded Systems Programming in 1989 Christopher Vickery, Real-Time and Systems Programming for PCs: Using the iRMX for Windows Operating System, McGraw-Hill (1993) External links iRMX information page Richard Carver's iRMXStuff.com Intel software Real-time operating systems
727832
https://en.wikipedia.org/wiki/2001%20QR322
2001 QR322
2001 QR322 is a minor planet and the first Neptune trojan, discovered on 21 August 2001 by the Deep Ecliptic Survey at Cerro Tololo Observatory in Chile. It orbits ahead of Neptune at its leading (L4) Lagrangian point. Other Neptune trojans have been discovered since. A study by American astronomers Scott Sheppard and Chad Trujillo from the Carnegie Institution suggests that Neptune could possibly have twenty times more trojans than Jupiter. Diameter The discoverers estimate that the body has a mean diameter of 140 kilometers based on a magnitude of 22.5. Based on a generic magnitude-to-diameter conversion, it measures approximately 110 kilometers in diameter using an absolute magnitude of 7.9 and an assumed albedo of 0.10; this follows from the standard relation D ≈ (1329 km / √albedo) × 10^(-H/5), which for H = 7.9 and an albedo of 0.10 gives roughly 110 kilometers. Orbit 2001 QR322 orbits the Sun with a semi-major axis of 30.115 AU, at a distance of 29.3–31.0 AU, once every 165 years and 3 months (60,363 days). Its orbit has an eccentricity of 0.03 and an inclination of 1° with respect to the ecliptic. Dynamical stability Early studies of the dynamical stability of 2001 QR322, which used a small number of test particles spread over the uncertainties of just a few orbital parameters that were derived from a limited observation arc, suggested that it is on a remarkably stable orbit, because most test particles remained on trojan orbits for 5 Gyr. Thereafter, the stability of Neptune trojans was simply assumed. A more recent study, which used a very large number of test particles spread over the 3σ uncertainties in all six orbital parameters derived from a longer observational arc, has indicated that it is far less dynamically stable than previously thought. The test particles were lost exponentially with a half-life of 553 Myr. Further observations can determine whether 2001 QR322's orbit actually lies within the dynamically stable or the unstable region. The stability is strongly dependent on semi-major axis, with a ≥ 30.30 AU being far less stable, but only very weakly dependent on the other orbital parameters. This is because those with larger semi-major axes have larger libration amplitudes, with amplitudes of ~70° and above being destabilized by secondary resonances between the trojan motion and the dynamics of at least Saturn, Uranus, and Neptune. Secular resonances were found not to contribute to the dynamical stability of 2001 QR322. Numbering and naming Due to its orbital uncertainty, this minor planet has not been numbered and its official discoverers have not been determined. If named, it will follow the naming scheme already established with 385571 Otrera, which is to name these objects after figures related to the Amazons, an all-female warrior tribe that fought in the Trojan War on the side of the Trojans against the Greeks. References External links Neptune trojans Minor planet object articles (unnumbered) 20010821
27850284
https://en.wikipedia.org/wiki/Modo%20%28software%29
Modo (software)
Modo (stylized as MODO, and originally modo) is a polygon and subdivision surface modeling, sculpting, 3D painting, animation and rendering package developed by Luxology, LLC, which is now merged with and known as Foundry. The program incorporates features such as n-gons and edge weighting, and runs on Microsoft Windows, Linux and macOS platforms. History Modo was created by the same core group of software engineers that previously created the pioneering 3D application LightWave 3D, originally developed on the Amiga platform and bundled with the Amiga-based Video Toaster workstations that were popular in television studios in the late 1980s and early 1990s. They are based in Mountain View, California. In 2001, senior management at NewTek (makers of LightWave) and their key LightWave engineers disagreed regarding the notion for a complete rewrite of LightWave's work-flow and technology. NewTek's Vice President of 3D Development, Brad Peebler, eventually left Newtek to form Luxology, and was joined by Allen Hastings and Stuart Ferguson (the lead developers of Lightwave), along with some of the LightWave programming team members (Arnie Cachelin, Matt Craig, Greg Duquesne, Yoshiaki Tazaki). After more than three years of development work, Modo was demonstrated at SIGGRAPH 2004 and released in September of the same year. In April 2005, the high-end visual effects studio Digital Domain integrated Modo into their production pipeline. Other studios to adopt Modo include Pixar, Industrial Light & Magic, Zoic Studios, id Software, Eden FX, Studio ArtFX, The Embassy Visual Effects, Naked Sky Entertainment and Spinoff Studios. At Siggraph 2005, Modo 201 was announced. This promised many new features including the ability to paint in 3D (à la ZBrush, BodyPaint 3D), multi-layer texture blending, as seen in LightWave, and, most significantly, a rendering solution which promised physically-based shading, true lens distortion, anisotropic reflection blurring and built-in polygon instancing. Modo 201 was released on 24 May 2006. Modo 201 was the winner of the Apple Design Awards for Best Use of Mac OS X Graphics for 2006. In October 2006, Modo also won "Best 3D/Animation Software" from MacUser magazine. In January 2007, Modo won the Game Developer Frontline Award for "Best Art Tool". Modo 202 was released on 1 August 2006. It offered faster rendering speed and several new tools including the ability to add thickness to geometry. A 30-day full-function trial version of the software was made available. In March 2007, Luxology released Modo 203 as a free update. It included new UV editing tools, faster rendering and a new DXF translator. The release of Modo 301 on 10 September 2007 added animation and sculpting to its toolset. The animation tools include being able to animate cameras, lights, morphs and geometry as well as being able to import .mdd files. Sculpting in Modo 301 is done through mesh based and image based sculpting (vector displacement maps) or a layered combination of both. Modo 302, was released on 3 April 2008 with some tool updates, more rendering and animation features and a physical sky and sun model. Modo 302 was a free upgrade for existing users. Modo 303 was skipped in favor of the development of Modo 401. Modo 401 shipped on 18 June 2009. This release has many animation and rendering enhancements and is newly available on 64-bit Windows. On 6 October 2009, Modo 401 SP2 was released followed by Modo 401 SP3 on 26 January 2010 and SP5 on 14 July of the same year. 
Modo 501 shipped on 15 December 2010. This version was the first to run on 64-bit Mac OS X. It contains support for Pixar Subdivision Surfaces, faster rendering and a visual connection editor for creating re-usable animation rigs. Modo 601 shipped on 29 February 2012. This release offers additional character animation tools, dynamics, a general purpose system of deformers, support for retopology modeling and numerous rendering enhancements. Modo 701 shipped on 25 March 2013. This offered audio support, a Python API for writing plugins, additional animation tools and layout, more tightly integrated dynamics, and a procedural particle system along with other rendering enhancements such as render proxy and environment importance sampling. During subsequent Service Packs, FBX 2013 support was added and numerous major performance improvements were made (for example, tiled EXR usage became several orders of magnitude faster to match the competition). Modo 801 shipped on 25 April 2014. This brought a rework of the referencing system; renderer improvements; nodal shading; UDIM support (for MARI interoperation - another Foundry product); dynamics and particles improvements; deformer updates (Bézier, Wrap, Lattice); motion capture retargeting (through the IKinema library used to deliver Full Body IK since 601). Additionally, animation workflow was improved based on adaptations of classic animator tools (extremes, breakdowns, etc.) Modo 901 shipped on May 27, 2015. Modo 10.1v1 shipped on June 15, 2016. Modo 10.2v3 shipped on May 4, 2017. Modo 11.2v2 shipped on December 15, 2017. Modo 12.0v1 shipped on March 28, 2018. Modo 13.1 shipped on August 13, 2019. Modo 13.2 shipped on November 9, 2019. Modo 14.0v1 shipped on March 19, 2020. Modo 14.0v2 shipped on May 27, 2020. Modo 15.0v1 shipped on March 18, 2021. Modo was used in the production of feature films such as Stealth, Ant Bully, Iron Man, and Wall-E. Workflow Modo's workflow differs substantially from many other mainstream 3D applications. While Maya and 3ds Max stress using the right tool for the job, Modo artists typically use a much smaller number of basic tools and combine them to create new tools using the Tool Pipe and customizable action centers and falloffs. Action centers Modo allows an artist to choose the "pivot point" of a tool or action in realtime simply by clicking somewhere. Thus, Modo avoids making the artist invoke a separate "adjust pivot point" mode. In addition, the artist can tell Modo to derive a tool's axis orientation from the selected or clicked on element, bypassing the needs for a separate "adjust tool axis" mode. Falloffs Any tool can be modified with customizable falloff, which modifies its influence and strength according to geometric shapes. Radial falloff will make the current tool affect elements in the center of a resizable sphere most strongly, while elements at the edges will be barely affected at all. Linear falloff will make the tool affect elements based on a gradient that lies along a user-chosen line, etc. 3D painting Modo allows an artist to paint directly onto 3D models and even paint instances of existing meshes onto the surface of an object. The paint system allows users to use a combination of tools, brushes and inks to achieve many different paint effects and styles. Examples of the paint tools in Modo are airbrush, clone, smudge, and blur. These tools are paired with your choice of "brush" (such as soft or hard edge, procedural). 
Lastly, you add an ink, an example of which is image ink, where you paint an existing image onto a 3D model. Pressure-sensitive tablets are supported. The results of painting are stored in a bitmap, and that map can drive anything in Modo's Shader Tree. Thus you can paint into a map that is acting as a bump map and see the bumps in real time in the viewport. Renderer Modo's renderer is multi-threaded and scales nearly linearly with the addition of processors or processor cores. That is, an 8-core machine will render a given image approximately eight times as fast as a single-core machine with the same per-core speed. Modo runs on up to 32 cores and offers the option of network rendering. In addition to the standard renderer, which can take a long time to run with a complex scene on even a fast machine, Modo has a progressive preview renderer which renders to final quality if left alone. Modo's user interface allows you to configure a workspace that includes a preview render panel, which renders continuously in the background, restarting the render every time you change the model. This gives a more accurate preview of your work in progress as compared to the typical hardware shading options. In practice, this means you can do fewer full test renders along the way toward completion of a project. The preview renderer in Modo 401 offers progressive rendering, meaning the image resolves to near-final image quality if you let it keep running. Modo material assignment is done via a shader tree that is layer-based rather than node-based. As of version 801, node-based shading is a part of the workflow as well. Modo's renderer is a physically based ray-tracer. It includes features like caustics, dispersion, stereoscopic rendering, Fresnel effects, subsurface scattering, blurry refractions (e.g. frosted glass), volumetric lighting (smoky bar effect), and Pixar-patented Deep Shadows. Select features Tool Pipe for creating customized tools Scripting (Perl, Python, Lua) Customizable User Interface Extensive file input and output Key modeling features N-gon modeling (subdivided polygons with >4 points) and Mesh Instancing Retopology Tools A powerful sculpting toolset Procedural modeling with "Mesh Operators" MeshFusion (Non-destructive subD boolean operations) Key animation features Animate virtually any item's properties (geometry, camera, lights) Layerable deformers Morph target animation Rigging with full-body Inverse kinematics Dynamic parenting Key rendering features Global Illumination Physical Sun and Sky Displacement Rendering Interactive Render Preview IEEE Floating Point Accuracy Subsurface scattering Instance Rendering Physically Based Shading Model Motion Blur Volumetric rendering Depth of Field Network Rendering 3D paint toolset Modo once included imageSynth, a plug-in for creating seamless textures in Adobe Photoshop CS1 or later. This bundle ended with the release of Modo 301. Luxology has announced that the imageSynth plugin for Photoshop has been retired. References Further reading External links Luxology's Modo 501 at GDC 2011 from Intel.com 3D graphics software Global illumination software 3D animation software 3D computer graphics software for Linux Proprietary commercial software for Linux
26364723
https://en.wikipedia.org/wiki/Vivek%20Wadhwa
Vivek Wadhwa
Vivek Wadhwa is an American technology entrepreneur and academic. He is a Distinguished Fellow and Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley and a Distinguished Fellow at the Labor and Worklife Program at Harvard Law School. He is also the author of the books Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back, Driver in the Driverless Car, Innovating Women: The Changing Face of Technology, and Immigrant Exodus. Early life and education Wadhwa was born in Delhi, India. He graduated from the University of Canberra in 1974 with a Bachelor of Arts in Computing Studies, and from New York University in 1986 with an MBA. Career At Credit Suisse First Boston, Wadhwa led the development of a computer-aided software engineering (CASE) tool to develop client-server model software. First Boston spent $150 million on these development efforts. The CASE technology was spun off by First Boston into Seer Technologies in 1990 with an investment of $20 million by IBM. At Seer, Wadhwa was executive VP and chief technology officer. Seer developed tools to build client-server systems. Seer Technologies filed for an IPO in May 1995. In 1997, Wadhwa founded Relativity Technologies, a company in Raleigh, North Carolina, which developed tools for modernizing legacy COBOL programs. He left the company in 2004, and it was sold to Micro Focus in January 2009. After a heart attack, Wadhwa shifted his focus to academic research. Wadhwa is an executive-in-residence/adjunct professor at the Master of Engineering Management Program and Director of Research at the Center for Research Commercialization at Duke University's Pratt School of Engineering, and a Distinguished Visiting Scholar at the Halle Institute for Global Learning at Emory University. He has been a Senior Research Associate at Harvard Law School's Labor and Worklife Program and a visiting professor at the School of Information at the University of California, Berkeley. He writes a regular column for The Washington Post, Bloomberg BusinessWeek, the American Society for Engineering Education's Prism Magazine, and Forbes, and has written for Foreign Policy. He is also the author of the 2012 non-fiction book The Immigrant Exodus: Why America Is Losing the Global Race to Capture Entrepreneurial Talent. Wadhwa serves as an advisor to Malaysia on advancing innovation, science and technology through the Global Science and Innovation Advisory Council (GSIAC). He also advises Russia on how to create innovation ecosystems through his participation in the New York Academy of Sciences. Columnist and pundit Wadhwa writes a regular column for The Washington Post, Bloomberg BusinessWeek, the American Society for Engineering Education's Prism Magazine, Forbes, Foreign Policy, and The Wall Street Journal. Wadhwa has frequently argued that the low number of women technology CEOs points to a problem with the system. In September 2014, Wadhwa released Innovating Women: The Changing Face of Technology, a book he co-authored with Farai Chideya that includes contributions from hundreds of women. The book presented research about women in technology and argued that "it's not enough for company executives to make donations or be advisors to groups like Girls Who Code. They must take action and be the good example – just as Facebook did before its IPO." In September 2015 Wadhwa was recognized by the Financial Times as "one of ten men worth emulating in his support of women."
The article states, "Some feel it is wrong to focus on the work that men — rather than women — do to help women fulfill their potential at work. (Vivek Wadhwa on our list has been on the sharp end of such criticism). We disagree, and hope that recognising this varied group will engage and embolden other champions." Wadhwa has advocated for more diversity in the technology industry. Wadhwa's research, public debates and articles call for greater inclusion of not only women but also African Americans, Hispanics, and older people. An MSNBC article by Alicia Maule on November 14, 2014 quotes Wadhwa as saying, "Venture capital is in dismal shape. It produces low returns because it's been the bastion of the boys club, which is not the model that needs to be followed. You need men and women. African-American and Latino – diversity is a catalyst to innovation." Wadhwa was featured as a mentor to the black technology community in the CNN documentary "Black in America" and has argued for the inclusion of more blacks in technology in the CNN program "Black in America: The New Promised Land, Silicon Valley" as well as in multiple articles including "We need a black Mark Zuckerberg" "Women of Color in Tech: How Can We Encourage Them" and "The Face of Success, Part 4: Blacks in Silicon Valley". Wadhwa has researched old and young entrepreneurs and has argued that older entrepreneurs tend to be more successful. He has written several articles arguing that VCs should invest in them. The articles include: The case for old entrepreneurs, Innovation without Age Limits, When It Comes To Founding Successful Startups, Old Guys Rule and Silicon Valley's Dark Secret: It's All About Age. Wadhwa has researched engineering education in India, China, and the US. He has argued in many articles that US education is superior and that education is important for US competitiveness. The articles include Engineering Gap? Fact and Fiction, U.S. Schools Are Still Ahead—Way Ahead, and U.S. Schools: Not That Bad. Wadhwa has argued that higher education is valuable. Alongside Henry Bienen, he debated Peter Thiel, who launched the Thiel Fellowship to provide $100,000 to students who dropped out of college to start up companies, on the merits of higher education. Wadhwa argued against Thiel and Charles Murray at an Intelligence Squared debate in Chicago that was broadcast on NPR stations. Wadhwa spoke on 60 Minutes "Dropping Out: Is College Worth the Cost?" and argued that basic college education is important and valuable because it teaches skills, including social skills and the skills to turn an idea into an invention and then into a company, and that those skills help individuals get ahead. Wadhwa is named as a co-inventor on 4 patents: 6,389,588: "Method and system of business rule extraction from existing applications for integration into new applications", 6,346,953: "Method and system for recreating a user interface of an existing application text based user interface into a graphical user interface", 5,495,610: "Software distribution system to build and distribute a software release" and 5,295,222: "Computer-aided software engineering facility". He has argued that software patents should be abolished: "patents have become the greatest inhibitor to innovation and are holding the United States back." In November 2012, Wadhwa discussed "Technology's Promise, Humanity's Future" with Nobel Laureate Ahmed Zewail at UCSB Campbell Hall in Isla Vista, California. 
Wadhwa argues that this decade will be the most innovative in history, predicting that "today's technology is rapidly catching up to Star Trek" and that in the coming years, 3D printers will make it possible to synthetically produce meat and create an abundance of food, humans will eventually be banned from driving cars, and artificial intelligence will be able to serve as individuals' personal medical assistants. In 2013, Wadhwa debated Nobel Laureate Robert Shiller on "Goldman Vs. Google: A career on Wall Street or in Silicon Valley?" at The Economist's Buttonwood Gathering. Shiller argued, "When you study finance, you are studying how to make things happen, on a big scale, on a lasting scale. That has to matter more than getting into Google and programming some little gimmick." Wadhwa argued that "Google is changing the dynamics of cities, changing the dynamics of life" and that technology is enabling the world to be on the verge of solving "the grand challenges of humanity." Wadhwa posed this question: "Would you rather have your children engineering the financial system creating more problems for us, or having a chance of saving the world?" At the conclusion of the debate, "the audience voted heavily in favor of Mountain View and against Wall Street." He appeared in the 2016 documentary The Future of Work and Death. Startup Chile Startup Chile is a government-sponsored program that acts like a focused incubation program and attracts early-stage entrepreneurs to work on their startups. The program gives accepted entrepreneurs equity-free seed funding, a work visa, office space, and access to mentors and global partnerships with organizations like Google, Amazon Web Services, Evernote, HubSpot and more. In addition to co-conceiving and helping create Startup Chile, Wadhwa serves as an unpaid advisor and consultant to the program. He also advised Spanish efforts to create similar programs to attract entrepreneurs. Controversy and criticism Wadhwa has publicly argued that Twitter is overly complacent about improving its diversity numbers. On the first occasion, he criticized Twitter for having an all-male board of directors. Twitter CEO Dick Costolo initially refused to comment, but then, in a tweet, disparaged Wadhwa by likening him to "the Carrot Top of academic sources". Subsequently, Twitter appointed a woman, Marjorie Scardino, to its board. On the second occasion, Wadhwa posted a series of tweets critical of Twitter's published diversity numbers (which included 90% of tech roles being filled by men) and the way in which Twitter had framed them, concluding that Twitter "is unrepentant and should be ashamed. Problems start from board and exec management. Must diversify". Withdrawal from the societal debate on women in technology In 2015, Wadhwa was criticized publicly by several women in technology for the way in which he was speaking on behalf of women in technology. One example mentioned was that at an event, he had used the slang word "floozies" when referring to technology companies needing to take hiring women more seriously, in the context of his advocacy for tech companies to include higher-ranking women on interview panels for female candidates. Wadhwa responded to the criticism by writing that, as an immigrant with a poor grasp of American slang, he had not known what the word "floozy" meant, that he had apologized at the event as soon as his misstep was pointed out to him, and that he had lost sleep over the ordeal.
The podcast TLDR, which is produced by an NPR affiliate, interviewed one of the critics, Amelia Greenhall, about a post she had recently written, entitled "Quiet, Ladies. @wadhwa is speaking now". Wadhwa published a response, alleging that several false claims were made in the original TLDR episode, and calling it an "unfair attack" on him. TLDR took down their original podcast episode and apologized for not speaking to Wadhwa about it before publication, and expressed regret for not fact-checking it. TLDR's next episode was a follow-up which gave Wadhwa a right of reply. However, Gawker's Jay Hathaway opined that "in the process of defending himself, Vivek Wadhwa ended up confirming much of what TL;DR asserted about his attitude". On February 23, Wadhwa wrote an article in the Washington Post explaining why he would no longer participate in the debate on women in technology, writing, "I may have made the mistake of fighting the battles of women in technology for too long. And I may have taken the accusations too personally. Today there is a chorus of very powerful, intelligent, voices who are speaking from personal experience. The women who I have written about, who have lived the discrimination and abuse, as well as others, deserve the air time." The New York Times columnist Farhad Manjoo wrote a subsequent article entitled "An Outspoken Voice for Women in Tech, Foiled by His Tone" which summarized the imbroglio, and quoted Wadhwa and a number of women in technology in relation to it. Awards and honors In 1999, Wadhwa was named a "leader of tomorrow" by Forbes magazine. In February 2012, Wadhwa was one of the six "2012 Outstanding American by Choice" recipients, a distinction awarded by the United States Citizenship and Immigration Services. In December 2012, Wadhwa was recognized by Foreign Policy magazine as a Top 100 Global Thinker. In June 2013, Wadhwa was named to Time magazine's list of the Top 40 Most Influential Minds in Tech. In September 2015 Financial Times named Wadhwa one of top ten men worth emulating in his support of women. In May 2018, Silicon Valley Forum awarded Wadhwa its Visionary Award. References External links Personal website Living people Indian emigrants to the United States American businesspeople of Indian descent American academics Year of birth missing (living people) University of Canberra alumni New York University Stern School of Business alumni People from Delhi Businesspeople from Delhi American male writers of Indian descent
11889680
https://en.wikipedia.org/wiki/Ferranti%20Orion
Ferranti Orion
The Orion was a mid-range mainframe computer introduced by Ferranti in 1959 and installed for the first time in 1961. Ferranti positioned Orion to be their primary offering during the early 1960s, complementing their high-end Atlas and smaller systems like the Sirius and Argus. The Orion was based on a new type of logic circuit known as "Neuron" and included built-in multitasking support, one of the earliest commercial machines to do so (the KDF9 being a contemporary). Performance of the system was much less than expected and the Orion was a business disaster, selling only about eleven machines. The Orion 2 project was quickly started to address its problems, and five of these were sold. Its failure was the capstone to a long series of losses for the Manchester labs, and with it, Ferranti management grew tired of the entire computer market. The division was sold to International Computers and Tabulators (ICT), who selected the Canadian Ferranti-Packard 6000 as their mid-range offering, ending further sales of the Orion 2. History Magnetic amplifiers During the 1950s transistors were expensive and relatively fragile devices. Although they had advantages for computer designers, namely lower power requirements and their smaller physical packaging, vacuum tubes remained the primary logic device until the early 1960s. There was no lack of experimentation with other solid state switching devices, however. One such system was the magnetic amplifier. Similar to magnetic core memory, or "cores", magnetic amplifiers used small toroids of ferrite as a switching element. When current passed through the core, a magnetic field would be induced that would reach a maximum value based on the saturation point of the material being used. This field induced a current in a separate read circuit, creating an amplified output with a known current. Unlike digital logic based on tubes or transistors, which uses defined voltages to represent values, magnetic amplifiers based their logic values on defined current values. One advantage to magnetic amplifiers is that they are open in the center and several input lines can be threaded through them. This makes it easy to implement chains of "OR" logic by threading a single core with all the inputs that need to be ORed together. This was widely used in the "best two out of three" circuits that were used in binary adders, which could reduce the component count of the ALU considerably. This was known as "Ballot Box Logic" due to the way the inputs "voted" on the output. Another way to use this feature was to use the same cores for different duties during different periods of the machine cycle, say to load memory during one portion and then as part of an adder in another. Each of the cores could be used for as many duties as there was room for wiring through the center. In the late 1950s new techniques were introduced in transistor manufacture that led to a rapid fall in prices while reliability shot up. By the early 1960s most magnetic amplifier efforts were abandoned. Few machines using the circuits reached the market, the best known examples being the mostly-magnetic UNIVAC Solid State (1959) and the mostly transistorized English Electric KDF9 (1964). Neuron The Ferranti Computer Department in West Gorton, Manchester had originally been set up as an industrial partner of Manchester University's pioneering computer research lab, commercializing their Manchester Mark 1 and several follow-on designs. 
During the 1950s, under the direction of Brian Pollard, the Gorton labs also researched magnetic amplifiers. Like most teams, they decided to abandon them when transistors improved. One member of the lab, Ken Johnson, proposed a new type of transistor-based logic that followed the same conventions as the magnetic amplifiers, namely that binary logic was based on known currents instead of voltages. Like the magnetic amplifiers, Johnson's "Neuron" design could be used to control several different inputs. Better yet, the system often required only one transistor per logic element, whereas conventional voltage-based logic would require two or more. Although transistors were falling in price they were still expensive, so a Neuron-based machine might offer similar performance at a much lower price than a machine based on traditional transistor logic. The team decided to test the Neuron design by building a small machine known as "Newt", short for "Neuron test". This machine was so successful that the lab decided to expand the testbed into a complete computer. The result was the Sirius, which was announced on 19 May 1959 with claims that it was the smallest and most economically priced computer in the European market. Several sales followed. Orion 1 With the success of Sirius, the team turned its attention to a much larger design. Since many of the costs of a complete computer system are fixed - power supplies, printers, etc. - a more complex computer with more internal circuitry would have more of its cost associated with the circuits themselves. For this reason, a larger machine made of Neurons would have an increased price advantage over transistorized offerings. Pollard decided that such a machine would be a strong counterpart to the high-end Atlas, and would form the basis for Ferranti's sales for the next five years. Looking for a launch customer, Ferranti signed up Prudential Assurance with the promise to deliver the machine in 1960. However, these plans quickly went awry. The Neuron proved difficult to adapt to the larger physical size of the Orion. Keeping the current levels steady over the longer wire runs was extremely difficult, and efforts to cure the problems resulted in lengthy delays. The first Orion was eventually delivered, but it was over a year late and its unit cost was higher than expected, limiting its sales. Between 1962 and 1964 the Computing Division lost $7.5 million, largely as a result of the Orion. Orion 2 During the Orion's gestation it appeared there was a real possibility the new system might not work at all. Engineers at other Ferranti departments, notably at the former Lily Hill House in Bracknell, started raising increasingly vocal concerns about the effort. Several members from Bracknell approached Gordon Scarrott and tried to convince him that Orion should be developed using a conventional all-transistor design. They recommended using the "Griblons" circuits developed by Maurice Gribble at Ferranti's Wythenshawe plant, which they had used to successfully implement their Argus computer for the Bristol Bloodhound missile system. When these efforts failed, they turned to Pollard to overrule Scarrott, which led to a series of increasingly acrimonious exchanges. After their last attempt on 5 November 1958, they decided to go directly to Sebastian de Ferranti, but this effort also failed. Pollard resigned about a month later and his position was taken over by Peter Hall.
Braunholtz later expressed his frustration that they didn't write to him directly, and the matter sat for several years while Orion continued to run into delays. In September 1961 Prudential was threatening to cancel their order, and by chance, Braunholtz at that moment sent a telegram to Hall expressing his continuing concerns. Hall immediately invited Braunholtz to talk about his ideas, and several days later the Bracknell team was working full out on what would become the Orion 2. By the end of October the basic design was complete, and the team started looking for a transistor logic design to use for implementation. Although Braunholtz had suggested using the Griblons, the Bracknell group also invited a team of engineers from Ferranti Canada to discuss their recent successes with their "Gemini" design, which was used in their ReserVec system. On November 2 the Bracknell team decided to adopt the Gemini circuitry for Orion 2. Parts arrived from many Ferranti divisions over the next year, and the machine was officially switched on by Peter Hunt on 7 January 1963. The first Orion 2 was delivered to Prudential on 1 December 1964, running at about five times the speed of the Orion 1. Prudential bought a second machine for the processing of industrial branch policies. Another system was sold to the South African Mutual Life Assurance Society in Cape Town where it was used for updating insurance policies. A fourth was sold to Beecham Group to upgrade its Orion 1 system. The original prototype was kept by ICT and used for software development by the Nebula Compiler team. By this point, however, Ferranti was already well on the road to selling all of its business computing divisions to ICT. As part of their due diligence process, ICT studied both the Orion 2 and the FP-6000. Ferranti's own engineers concluded that "There are certain facets of the system we do not like. However, were we to begin designing now a machine in the same price/performance range as the FP6000, we would have in some 18 months' time a system that would not be significantly better -if indeed any better- than the FP6000." ICT chose to move forward with the FP-6000 with minor modifications, and used it as the basis for their ICT 1900 series through the 1960s. Existing contracts for the Orion 2 were filled, and sales ended. Description Although the Orion and Orion 2 differed significantly in their internals, their programming interface and external peripherals were almost identical. The basic Orion machine included 4,096 48-bit words of slow, 12μs, core memory, which could be expanded to 16,384 words. Each word could be organized as eight 6-bit characters, a single 48-bit binary number, or a single floating-point number with a 40-bit fraction and an 8-bit exponent. The system included built-in capabilities for working with Pound sterling before decimalization. The core memory was backed by one or two magnetic drums with 16k words each. Various offline input/output included magnetic disks, tape drives, punched cards, punched tape and printers. Most of the Orion's instruction set used a three-address form, with sixty-four 48-bit accumulators. Each program had its own private accumulator set which were the first 64 registers of its address space, which was a reserved contiguous subset of the physical store, defined by the contents of a "datum" relocation register. Operand addresses were relative to the datum, and could be modified by one of the accumulators for indexing arrays and similar tasks. 
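The datum-relative addressing described above can be sketched as a simple base-plus-offset translation. The following Python fragment is only a schematic illustration with hypothetical values, not a description of Ferranti's actual hardware logic:

```python
# Schematic sketch of datum-relative (base + offset) addressing; values are hypothetical.
def effective_address(datum: int, operand_address: int, index: int = 0) -> int:
    """Translate a program-relative operand address into a physical store address.

    datum           -- contents of the program's datum relocation register
    operand_address -- address field of the instruction, relative to the datum
    index           -- optional modifier taken from one of the 64 accumulators
    """
    return datum + operand_address + index

# A program whose datum is 8192, referring to relative address 100 and
# indexed by an accumulator value of 5, touches physical word 8297.
print(effective_address(8192, 100, 5))
```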
A basic three-address instruction took a minimum of 64 μs, a two-address one 48 μs, and any index modifications on the addresses added 16 μs per modified address. Multiplication took from 156 to 172 μs, and division anywhere from 564 to 1,112 μs, although the average time was 574 μs. The Orion 2, having a core store with a much shorter cycle time, was considerably faster. A key feature of the Orion system was its built-in support for time-sharing. This was supported by a series of input/output (I/O) interrupts, or what they referred to as "lockouts". The system automatically switched programs during the time spent waiting for the end of an I/O operation. The Orion also supported protected memory in the form of pre-arranged "reservations". Starting and stopping programs, as well as selecting new ones to run when one completed, was the duty of the "Organisation Program." The Orion was one of the earliest machines to directly support time-sharing in hardware in spite of intense industry interest; other time-sharing systems of the same era include LEO III of 1961, PLATO in early 1961, CTSS later that year, and the English Electric KDF9 and FP-6000 of 1964. The Orion is also notable for the use of its own high-level business language, NEBULA. Nebula was created because of Ferranti's perception that the COBOL standard of 1960 was not sufficiently powerful for their machines, notably as COBOL was developed in the context of decimal, character-oriented batch processing, while Orion was a binary word-oriented multiprogramming system. NEBULA adapted many of COBOL's basic concepts, adding new ones of their own. NEBULA was later ported to the Atlas as well. References Notes Bibliography (System), "Ferranti Orion Computer System", Ferranti, November 1960 Norman Ball and John Vardalas, "Ferranti-Packard: Pioneers in Canadian Electrical Manufacturing", McGill-Queen's Press, 1994 Gordon Scarrott, "From Torsional Mode Delay Lines to DAP", Computer Resurrection, Number 12 (Summer 1995) Peter Hall, "A Management Perspective on Orion", Computer Resurrection, Number 33 (Spring 2004) Maurice Gribble, "The Argus Computer and Process Control", Computer Resurrection, Number 20 (Summer 1998) (Group), "Ferranti Orion 2 Contact Group: Report of Meeting at Storrington", 6 July 2004 John Vardalas, "From DATAR To The FP-6000 Computer: Technological Change In A Canadian Industrial Context", IEEE Annals of the History of Computing, Volume 16 Number 2 (1994) Martin Campbell-Kelly, "ICL: A Business and Technical History", Clarendon Press, 1989 Further reading "Orion Programmers' Reference Manual", Ferranti, 1961 "The Ferranti ORION Computer System", contains numerous details and material on the Orion series Henry Goodman, "The Simulation of the Orion Time Sharing System on the Sirius", The Computer Bulletin, Volume 5 Number 2 (September 1961) Early British computers Orion Magnetic logic computers Mainframe computers Transistorized computers Computer-related introductions in 1961
37721512
https://en.wikipedia.org/wiki/Universiti%20Tenaga%20Nasional
Universiti Tenaga Nasional
Universiti Tenaga Nasional (UNITEN) is a private university located in Selangor, Malaysia, with GLC (government-linked company) university status. It is wholly owned by the publicly listed Tenaga Nasional Berhad (TNB) and was established in 1999. History Universiti Tenaga Nasional commenced operation in 1976 as Institut Latihan Sultan Ahmad Shah (ILSAS), which served for many years as the corporate training center for Tenaga Nasional Berhad (TNB) and its predecessor, the National Electricity Board. In 1994, ILSAS was transformed into an institute of higher learning and renamed Institut Kejuruteraan Teknologi Tenaga Nasional (IKATAN). It offered academic programmes in engineering and business management at undergraduate and graduate levels through twinning links with local and foreign universities (Indiana University–Purdue University Indianapolis, US, for engineering). In 1997, IKATAN was upgraded to Universiti Tenaga Nasional. Principal Officers The Chancellor of Universiti Tenaga Nasional, Tun Abdul Rahman Abbas, was appointed on 9 March 2005. Tan Sri Dr Leo Moggie was appointed Chairman of Tenaga Nasional Berhad and Sabah Electricity Sdn Bhd, two companies that supply electricity to the vast majority of Malaysia. He was appointed to the Board of the NSTP Company on 27 February 2008. He is a member of the Board of Directors of Digi.Com Berhad. Campuses Universiti Tenaga Nasional operates from two campuses; one is the main campus in Putrajaya, and the other is in Bandar Muadzam Shah. Putrajaya (main campus) This campus is 25 miles to the south of Kuala Lumpur near Kajang in Selangor, and has an area of 214 hectares. It is near the Cyberjaya Multimedia Super Corridor and adjacent to Putrajaya, the administrative center of the Federal Government of Malaysia. All engineering, information technology and other technology courses are held at this campus. Sultan Haji Ahmad Shah (branch campus) This branch is located at Bandar Muadzam Shah, Pahang. It officially opened on 4 May 2001. The campus offers accounting, finance, entrepreneurship, marketing, human resources, and business courses.
Colleges and departments College of Foundation and Diploma Studies (CFDS) Department of Languages and Communication Department of Social Sciences Department of Business Management and Accounting Department of Sciences, Mathematics and Computing College of Engineering (COE) Department of Civil Engineering Department of Electronics and Communication Engineering Department of Electrical Power Engineering Department of Mechanical Engineering Engineering Mathematics and Management Unit Computer Science & Information Technology (CSIT) Department of Software Engineering Department of Information Systems Department of Systems and Networking Department of Graphics and Multimedia Department of Visual Media College of Business Management and Accounting (COBA) Department of Accounting Department of Management and Human Resource Department of Finance and Economics Department of Marketing and Entrepreneur Development College of Graduate Studies (COGS) Notable alumni Rankings Quacquarelli Symonds (QS) Times Higher Education (THE) See also List of universities in Malaysia References External links Universities and colleges in Selangor Educational institutions established in 1976 Educational institutions established in 1999 Business schools in Malaysia Engineering universities and colleges in Malaysia Information technology schools in Malaysia 1976 establishments in Malaysia 1999 establishments in Malaysia Kajang Private universities and colleges in Malaysia
105181
https://en.wikipedia.org/wiki/Deadlock
Deadlock
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock. In a communications system, deadlocks occur mainly due to loss or corruption of signals rather than contention for resources. Necessary conditions A deadlock situation on a resource can arise if and only if all of the following conditions occur simultaneously in a system: Mutual exclusion: At least one resource must be held in a non-shareable mode; that is, only one process at a time can use the resource. Otherwise, the processes would not be prevented from using the resource when necessary. Only one process can use the resource at any given instant of time. Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes. No preemption: a resource can be released only voluntarily by the process holding it. Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1. These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. While these conditions are sufficient to produce a deadlock on single-instance resource systems, they only indicate the possibility of deadlock on systems having multiple instances of resources. Deadlock handling Most current operating systems cannot prevent deadlocks. When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. Major approaches are as follows. Ignoring deadlock In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable. Ignoring deadlocks can be safely done if deadlocks are formally proven to never occur. An example is the RTIC framework. Detection Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred and subsequently it is corrected.
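As a minimal sketch of how detection can work in principle (not the mechanism of any particular operating system), the scheduler's knowledge of which process holds and requests each resource can be reduced to a wait-for graph, and a deadlock corresponds to a cycle in that graph. The process names below are hypothetical:

```python
# Minimal sketch: deadlock detection as cycle detection in a wait-for graph.
# An edge A -> B means "process A is waiting for a resource held by process B".
def has_deadlock(wait_for: dict[str, set[str]]) -> bool:
    visited, on_stack = set(), set()

    def dfs(node: str) -> bool:
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, set()):
            if nxt in on_stack:          # back edge: a cycle, hence a deadlock
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# Hypothetical processes: P1 waits on P2, P2 waits on P3, P3 waits on P1.
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
```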
An algorithm is employed that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system. After a deadlock is detected, it can be corrected by using one of the following methods: Process termination: one or more processes involved in the deadlock may be aborted. One could choose to abort all competing processes involved in the deadlock. This ensures that deadlock is resolved with certainty and speed. But the expense is high as partial computations will be lost. Or, one could choose to abort one process at a time until the deadlock is resolved. This approach has a high overhead because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as priority and age of the process. Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken. Prevention Deadlock prevention works by preventing one of the four Coffman conditions from occurring. Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms. The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none; first, they must release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical. It is so because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.) The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process that is holding some resources requests another resource that cannot be immediately allocated to it, the condition may be removed by requiring the process to release all the resources it currently holds. The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources.
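A minimal sketch of such an ordering discipline, assuming a conventional threading model (the lock names and the fixed acquisition order are hypothetical):

```python
# Sketch: preventing circular wait by always acquiring locks in one global order.
import threading

lock_a = threading.Lock()   # rank 1 in the (hypothetical) resource hierarchy
lock_b = threading.Lock()   # rank 2

def worker_one():
    with lock_a:            # both workers take lock_a first, then lock_b,
        with lock_b:        # so a circular wait between them cannot form
            pass            # ... use both resources ...

def worker_two():
    with lock_a:
        with lock_b:
            pass            # ... use both resources ...

t1 = threading.Thread(target=worker_one)
t2 = threading.Thread(target=worker_two)
t1.start(); t2.start()
t1.join(); t2.join()
```

Had one worker taken the locks in the opposite order, the four Coffman conditions could all hold at once and the pair could deadlock.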
If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering and resources are requested in the increasing order of the enumeration. Dijkstra's solution can also be used. Deadlock Avoidance Similar to deadlock prevention, the deadlock avoidance approach ensures that deadlock will not occur in a system. The term "deadlock avoidance" sounds very close to "deadlock prevention", but the two are very different in the context of deadlock handling. Deadlock avoidance does not impose any conditions as seen in prevention; instead, each resource request is carefully analyzed to see whether it could be safely fulfilled without causing deadlock. Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. A deadlock avoidance algorithm analyzes each request by checking that there is no possibility of deadlock occurring in the future if the requested resource is allocated. The drawback of this approach is its requirement of information in advance about how resources are to be requested in the future. One of the most widely used deadlock avoidance algorithms is the Banker's algorithm. Livelock A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. The term was coined by Edward A. Ashcroft in a 1975 paper in connection with an examination of airline booking systems. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing. Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action. Distributed deadlock Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing. Phantom deadlocks are deadlocks that are falsely detected in a distributed system due to system internal delays but do not actually exist. For example, if a process releases a resource R1 and issues a request for R2, and the first message is lost or delayed, a coordinator (detector of deadlocks) could falsely conclude a deadlock (if the request for R2 while having R1 would cause a deadlock). See also Aporia Banker's algorithm Catch-22 (logic) Circular reference Dining philosophers problem File locking Gridlock (in vehicular traffic) Hang (computing) Impasse Infinite loop Linearizability Model checker can be used to formally verify that a system will never enter a deadlock Ostrich algorithm Priority inversion Race condition Readers-writer lock Sleeping barber problem Stalemate Synchronization (computer science) Turn restriction routing References Further reading External links "Advanced Synchronization in Java Threads" by Scott Oaks and Henry Wong Deadlock Detection Agents DeadLock at the Portland Pattern Repository Etymology of "Deadlock" Concurrency (computer science) Software bugs Software anomalies Distributed computing problems Edsger W. Dijkstra
36766367
https://en.wikipedia.org/wiki/National%20Institute%20of%20Technology%20Delhi
National Institute of Technology Delhi
National Institute of Technology, Delhi (NIT Delhi or NITD) is a premiere public technical university located in Delhi, India. It has been declared as an Institute of National Importance by an act of Parliament of India. It is one of the 31 National Institutes of Technology in India. History The National Institute of Technology, Delhi is one of ten NITs established during the 11th Five Year Plan by the Ministry of Education (MOE). The first batch of students was admitted in the year 2010-11, in three undergraduate Bachelor of Technology degree programmes in Computer Science and Engineering, Electronics and Communication Engineering and Electrical and Electronics Engineering. For two years the institute's academic activities were carried out at National Institute of Technology, Warangal, the mentor institute for NIT Delhi. The institute moved to a temporary campus at Dwarka, New Delhi in August 2012, then to a temporary IAMR campus, sector 7, Narela in February 2014. From academic year 2013-14 the intake in each undergraduate programme was increased to 60 students. A Master of Technology programme in the discipline of Electronics and Communication Engineering with an intake of 15 students was introduced from the academic year 2013-14, followed by a PhD programme that started in January 2014 with an intake of seven research scholars. A Master of Technology programme in the discipline of Power Electronics and Drives with an intake of 15 students was started from the academic session of 2017-18. Campus Permanent campus NITD is located about 25 km from Delhi city on National Highway 1 which connects Delhi to Attari. The site spans over of land in Narela sub city, New Delhi. The phase Ist of permanent campus of NIT Delhi has been completed and now the institute functions from the permanent campus. The permanent campus is conceived as the first vehicle-free campus built on the principles of sustainability and design innovation. The campus aims to create an environment that will invoke the spirit of innovation, technology and invention. The design is driven by the focused themes of crafting a unique, educational/research environment in the technology/engineering domain for about 5000-8000 students. Transit campus NITD used to run at IAMR Campus, Narela. .It has already shifted to permanent campus near GT Karnal Road. Library The Central Library, NIT Delhi was established in June 2012 in Dwarka. It moved to its present location IAMR Campus, Narela Institutional Area in February 2014. It is housed on the first floor of the building. The automation of library services is through an RFID library card. Computer centre A computer center was established in February 2014 and maintains and manages WiFi facility through rack-mounted blade servers in the campus having High speed (single mode) fiber backbone, managed by Layer 3 Switches providing 600 Mbit/s (1:1) bandwidth Speed. Sports facilities The college has a sports department with arrangements for several indoor and outdoor games. Indoor games include badminton, table tennis, carom and chess. Outdoor games include cricket, football and basketball. There are grounds for football and volleyball along with a cricket pitch and basketball court. Altius - Sports Club organizes an annual sports festival every year in the month of February named ZEAL. Organisation and administration Governance The rules[], regulations and recommendations for the functioning of all NITs are decided by the NIT Council. 
The members of the council consist of representatives of all NITs and the MHRD. The institute is governed by a Board of Governors. The director of the institute is an ex-officio member of the Board. It decides on all aspects of the running of the institute. Departments Currently, the following five departments are functioning in the institute:- Applied Sciences Computer Science & Engineering Electronics and Communication Engineering Electrical and Electronics Engineering Mechanical Engineering Academics Academic programmes NIT Delhi currently offers only engineering programs in graduate, undergraduate and doctorate level. The Institute offers four-year bachelor's degree programmes in Computer Science and Engineering, Electronics and Communication Engineering and Electrical and Electronics Engineering. The intake of students to Bachelor of Technology Courses are 60 in each branch. It offers two-year master's degree programmes in Electronics and Communication Engineering and Computer Science and Engineering (Analytics). The intake of students to these postgraduate programmes is 15 each. It offers Doctorate of Philosophy (PhD) degree programmes in Electrical and Electronics Engineering, Electronics and Communication Engineering and Computer Science and Engineering. Admissions Undergraduate admission to the institute is through the Joint Entrance Examination (Main) organised by the MHRD's National Testing Agency. The seats are allotted by the Joint Seat Allocation Authority. Fifty percent of seats are reserved under Home State category for Delhi and Chandigarh students. The remaining fifty percent seats are filled by students from other states. Admission for foreign nationals is carried out by Direct Admission for Students Abroad (DASA) under the MHRD. The admissions are on the basis of the SAT score of the eligible candidates. The number of seats available for foreign nationals in each branch at NIT Delhi is 4 and these are over and above the intake (currently 60) of each branch. Admissions to the MTech Programmes are made on the basis of performance in the Graduate Aptitude Test in Engineering. Research The institute conducts theoretical and experimental research, including in the areas of VLSI and embedded systems, power electronics and drives, advanced magnetohydrodynamics and mathematical modelling of fluid dynamics. Student life NIT Delhi has various forms of clubs or student organizations. Notable of them are Cultural club, Technical club, Literature club, Altius - Sports club, Clairvoyance - Photography club, Arts club, Social Reformation Cell (SRC). NIT Delhi hosts two major annual events: SENTIENCE : A Techno-Cultural fest, is born with the idea of bringing together a multitude of talents on a single spectrum. It's the unison of two major and spectacular milestones, TerraTechnica and Saptrang. ZEAL : A Sports fest, which teaches the value of sportsmanship to the students of the college. References External links National Institutes of Technology Engineering colleges in Delhi Educational institutions established in 2010 2010 establishments in Delhi
6165947
https://en.wikipedia.org/wiki/Vinum%20volume%20manager
Vinum volume manager
Vinum is a logical volume manager, also called software RAID, allowing implementations of the RAID-0, RAID-1 and RAID-5 models, both individually and in combination. The original Vinum was part of the base distribution of the FreeBSD operating system since 3.0, and also NetBSD between 2003-10-10 and 2006-02-25, as well as descendants of FreeBSD, including DragonFly BSD; in more recent versions of FreeBSD, it has been replaced with gvinum, which was first introduced around FreeBSD 6. Vinum source code is maintained in the FreeBSD and DragonFly source trees. Vinum supports RAID levels 0, 1, 5, and JBOD. Vinum was inspired by Veritas Volume Manager. Vinum is invoked as gvinum (GEOM Vinum) on FreeBSD version 5.4 and up. In modern FreeBSD, it may be considered to be a legacy volume manager; modern alternatives being GEOM and ZFS. In NetBSD, it has been removed before NetBSD 4.0 due to lack of interest and maintenance; RAIDframe was cited as providing similar functionality. In DragonFly BSD, DragonFly's own HAMMER filesystem already implements network mirroring, and the utility could be used to configure , another software RAID implementation, which originally appeared with FreeBSD 6.0 as , but was deprecated with FreeBSD 9, and removed before FreeBSD 10.0; and a NetBSD's port of Red Hat's lvm2 is also available in the base system of DragonFly as well all in addition to vinum. Software RAID vs. Hardware RAID The distribution of data across multiple disks can be managed by either dedicated hardware or by software. Additionally, there are hybrid RAIDs that are partly software- and partly hardware-based solutions. With a software implementation, the operating system manages the disks of the array through the normal drive controller (ATA, SATA, SCSI, Fibre Channel, etc.). With present CPU speeds, software RAID can be faster than hardware RAID. A hardware implementation of RAID requires at a minimum a special-purpose RAID controller. On a desktop system, this may be a PCI expansion card, or might be a capability built into the motherboard. In larger RAIDs, the controller and disks are usually housed in an external multi-bay enclosure. This controller handles the management of the disks, and performs parity calculations (needed for many RAID levels). This option tends to provide better performance, and makes operating system support easier. Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running. In rare cases hardware controllers have become faulty, which can result in data loss. Hybrid RAIDs have become very popular with the introduction of inexpensive hardware RAID controllers. The hardware is a normal disk controller that has no RAID features, but there is a boot-time application that allows users to set up RAIDs that are controlled via the BIOS. When any modern operating system is used, it will need specialized RAID drivers that will make the array look like a single block device. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids". Unlike software RAID, these "fakeraids" typically cannot span multiple controllers. 
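As an illustration of the parity calculation mentioned above, which a software RAID-5 implementation performs on the host CPU rather than on a dedicated controller, the following Python sketch computes XOR parity over one stripe and rebuilds a lost block from the survivors; it is a simplified model for illustration only, not Vinum source code, and the block contents are made up.

# Simplified model of RAID-5 style XOR parity across one stripe.

def xor_blocks(blocks):
    # XOR equal-length byte blocks together, byte by byte.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three data blocks that would sit on three member disks.
data = [b"AAAA", b"BBBB", b"CCCC"]

# The parity block that would be stored on a fourth disk.
parity = xor_blocks(data)

# If one member disk fails, its block can be rebuilt from the
# surviving data blocks plus the parity block.
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost_index]

The same XOR property is what allows a RAID-5 array to keep serving reads after a single-disk failure, at the cost of reconstructing the missing data from parity until the failed disk is replaced.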
Example configuration A simple example to mirror drive enterprise to drive excelsior (RAID-1):
drive enterprise device /dev/da1s1d
drive excelsior device /dev/da2s1d
volume mirror
  plex org concat
    sd length 512m drive enterprise
  plex org concat
    sd length 512m drive excelsior
See also Hard drives Redundant array of independent disks Disk array Storage area network (SAN) Logical volume management Veritas Volume Manager bioctl with softraid on OpenBSD References External links Vinum page in the official FreeBSD handbook Sourceforge page Bootstrapping vinum - FreeBSD documentation project Vinum performance measurement Rotating disc computer storage media Volume manager RAID BSD software Computer data storage FreeBSD DragonFly BSD Operating system technology Storage software
53594867
https://en.wikipedia.org/wiki/Digital%20identity%20in%20Australia
Digital identity in Australia
Digital identity in Australia is used by residents to validate who they are over digital media, such as over the Internet. While many organisations use their own mechanisms, increasingly, aggregated digital identities are in use. Many Australian organisations leverage popular ubiquitous Internet identities such as those provided by social login services including Facebook, Google, Twitter and LinkedIn to perform the following functions: Single sign-on to help users avoiding creating new user names and passwords for each site. To provide some basic validation of identity To provide some integration, especially with social media, e-mail and contacts To identify the natural person behind a transaction for statutory purposes such as a monetary transfer In addition to these services, in order to validate identities in Australia additional services are used, such as government, and bank digital identities. SIM as Digital identity The use of a mobile phone SIM as a Digital identity in Australia provides some level of validation of the digital identity of the holder. Validation of the holder can be done by sending them an SMS to their phone number. The advantage of this mechanism is: SIMs are generally unique The mobile phone number is known to the holder, and often the trusting organisation, and used as the contact of the customer The mobile phone generally is carried by the person wherever they go There is a high penetration of mobile phones in Australia, in 2015 covering almost 80% of individuals There are some identity requirements in obtaining a SIM, so there can be a level of certainty that the holder is a known natural person, and resident (or temporary resident) of Australia even if the plan is Prepaid For this reason the mobile phone is often used as a primary or second-factor validation of identity on Australian digital services. myGov myGov is a service provided by the Government of Australia that provides a strong level of validation of digital identity. It is used primarily for Government (including some state Government) and semi-Government services such as: Centrelink for social welfare applications and payments Medicare for health care Australian Taxation Office to enable online tax returns Child Support Victorian Housing Register My Health Record for Electronic Health Records National Disability Insurance Scheme My Aged Care Department of Veterans' Affairs (Australia) online facilities Australian Job Search (government job search network) myGov also integrates with Australia Post MyPost Digital Mailbox to facilitate secure electronic document delivery, as most government departments avoid the use of e-mail to directly deliver private documents. myGov logins also support a number of other login services, especially in government. Origination of a myGov account can be done without the use of identity documentation, so it is possible to create an account without a valid natural person. However, as services are added, access to private information, such as documents or identity that should only be known by the individual concerned, is required, making the identity stronger. Many services require a second factor of authentication - SMS via a SIM based mobile phone number as mentioned above. myGovID Separate from myGov, myGovID is a digital ID component, that allows a form of Second Factor Authentication to Government websites through a mobile app. 
A user attempts a login online using their email; a PIN is generated, the user types the PIN into their mobile app, and the online login can then be completed. The myGovID application supports government departments building their own websites, and does not require the single portal access of myGov. Australia Post Digital iD Digital iD by Australia Post is a smartphone-based app that allows users to create and validate their ID against the Australian Government Document Verification Service (DVS) and then use it as a primary ID system online and in person. Users can use their passport, driver's licence or Medicare card to assert and confirm their identity online. A photo is taken, and head movement is detected to ensure the holder is real and their face matches. A passport can be scanned using the phone's near-field communication (NFC) reader and used to assert biometrics. The user's image, together with dynamic security features and an updating QR code, is then displayed so that others can verify the ID. The date of birth and full name are displayed on the app. Australia Post claims it is acceptable for validating the holder's identity. It can be used instead of KeyPass for holders 18 years and older. The Digital iD website states that it is used by over 50 government and private organisations across a variety of industries and sectors. Launched in 2016, the offering faced questions about its longevity after the founder quit; however, continued investment from Australia Post has seen Digital iD become the first industry provider accredited under the federal government’s Trusted Digital Identity Framework (TDIF). Usage of Digital iD is growing, and Keypass in Digital iD is now accepted as proof of age to enter participating licensed venues and to purchase alcohol in Vic, Tas, Qld, ACT and NT (excluding takeaway alcohol in NT). Online Banking Online banking in Australia requires digital identification. As in other jurisdictions, access to bank account statements and making payments are the primary services available. In addition to these, it is possible to access documents (almost exclusively bills) from other corporations online using BPAY View. Over 300 billers are supported via this mechanism. For some transactions multi-factor authentication is required. Normally this is a password in combination with a code sent via SMS or, in some cases, especially for business customers, a bank-issued security token. Most online banking services require the holder to complete stringent identity checks before an account can be accessed, sometimes in a bank branch. This ensures the quality of the identity. The four major online banking sites in Australia are: Commonwealth Bank of Australia Westpac National Australia Bank ANZ Bank State Government Services Some States and territories of Australia offer access points to Government services in those states, and require a digital identity to access these services. Service NSW is an example - an account can be created without any verifiable identity; however, as services (such as Roads and Maritime Services) are added, private details need to be accessible, increasing the validity of the identity. The South Australian government has made several digital licences available via the mySA GOV app (driver's licence, proof of age card, etc.). NSW also provides some licences online. Queensland's Department of Transport and Main Roads is currently trialling a digital licence in the Fraser Coast Region.
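Several of the services described above rest on the same underlying pattern: a short-lived one-time code is issued on one channel (an SMS or an on-screen PIN) and must be echoed back on another. The Python sketch below shows that generic pattern only; the function names, the five-minute lifetime and the example identifier are assumptions for illustration, and it does not represent the actual myGov, myGovID or bank implementations.

import hmac
import secrets
import time

CODE_LIFETIME_SECONDS = 300   # assumed lifetime: codes expire after five minutes
_pending = {}                 # user identifier -> (code, expiry time)

def issue_code(user_id):
    # Generate a six-digit code and record when it expires. In a real
    # deployment the code would be delivered by SMS or shown in an app.
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[user_id] = (code, time.time() + CODE_LIFETIME_SECONDS)
    return code

def verify_code(user_id, submitted):
    # A code may be used only once and only before it expires.
    entry = _pending.pop(user_id, None)
    if entry is None:
        return False
    code, expiry = entry
    if time.time() > expiry:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(code, submitted)

code = issue_code("user@example.com")               # placeholder identifier
assert verify_code("user@example.com", code)
assert not verify_code("user@example.com", code)    # a second use is rejected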
Tax file number The Australian Tax file number (TFN) is a nine-digit identifier issued to taxpayers. There is no card or official identity document in popular use that shows this number, strict rules on its use mean that it is not required to be provided, and there is no practical way for a non-government entity to verify the holder against the number. It is therefore not an effective widespread digital identity (unlike the US Social Security number). However, it is used to digitally identify taxpaying entities behind transactions via financial institutions when the number has been disclosed. Failure to disclose the tax file number can draw attention to the transaction and/or result in tax being withheld, so it is used for specific purposes. Digital identity verification services Identity and associated information can be verified in a number of ways: The Australian Attorney-General's Department provides a Document Verification Service (DVS) that allows for validation of some licences. The Australian Attorney-General's Department also provides a biometric face verification service. Visas, identity, and right to work status can be checked online through the Department of Immigration and Border Protection's Visa Entitlement Verification Online (VEVO) service. Some States and territories allow for driver's licences, photo cards and certificates to be validated online, e.g. NSW and Victoria. Electoral enrolment can be verified electronically, and may help to verify an identity. Private companies offer aggregated online identity checking services, e.g. Vix Verify and Equifax. In addition, certain aspects of individuals can be verified digitally online, such as: Working With Children Check to ensure that an individual has cleared the necessary background checks to allow them to work with children. To apply for a check, the applicant must physically attend an agency; however, the employer can verify the check online for the duration of employment. The Australian Criminal Intelligence Commission (ACIC) allows organisations to provide National Police History Check certificates on ACIC's behalf. Some of these can be obtained online (such as Veritas), and later verified online. Some institutions allow the online verification of education qualifications. See also Identity documents of Australia References External links myGov Australian Government login Service NSW Home page MyPost Digital Mailbox - Australia Post DVS - Australian Government Document Verification Service Identity documents of Australia E-government in Australia
8312916
https://en.wikipedia.org/wiki/DMARC
DMARC
DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email authentication protocol. It is designed to give email domain owners the ability to protect their domain from unauthorized use, commonly known as email spoofing. The purpose and primary outcome of implementing DMARC is to protect a domain from being used in business email compromise attacks, phishing emails, email scams and other cyber threat activities. Once the DMARC DNS entry is published, any receiving email server can authenticate the incoming email based on the instructions published by the domain owner within the DNS entry. If the email passes the authentication, it will be delivered and can be trusted. If the email fails the check, depending on the instructions held within the DMARC record the email could be delivered, quarantined or rejected. For example, one email forwarding service delivers the mail, but as "From: no-reply@<forwarding service>". DMARC extends two existing email authentication mechanisms, Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). It allows the administrative owner of a domain to publish a policy in their DNS records to specify which mechanism (DKIM, SPF or both) is employed when sending email from that domain; how to check the From: field presented to end users; how the receiver should deal with failures - and a reporting mechanism for actions performed under those policies. DMARC is defined in the Internet Engineering Task Force's published document RFC 7489, dated March 2015, as "Informational". Overview A DMARC policy allows a sender's domain to indicate that their emails are protected by SPF and/or DKIM, and tells a receiver what to do if neither of those authentication methods passes – such as to reject the message or quarantine it. The policy can also specify how an email receiver can report back to the sender's domain about messages that pass and/or fail. These policies are published in the public Domain Name System (DNS) as text TXT records. DMARC does not directly address whether or not an email is spam or otherwise fraudulent. Instead, DMARC can require that a message not only pass DKIM or SPF validation, but that it also pass alignment. Under DMARC a message can fail even if it passes SPF or DKIM, but fails alignment. Setting up DMARC may improve the deliverability of messages from legitimate senders. Alignment DMARC operates by checking that the domain in the message's From: field (also called "RFC5322.From") is "aligned" with other authenticated domain names. If either SPF or DKIM alignment checks pass, then the DMARC alignment test passes. Alignment may be specified as strict or relaxed. For strict alignment, the domain names must be identical. For relaxed alignment, the top-level "Organizational Domain" must match. The Organizational Domain is found by checking a list of public DNS suffixes, and adding the next DNS label. So, for example, "a.b.c.d.example.com.au" and "example.com.au" have the same Organizational Domain, because there is a registrar that offers names in ".com.au" to customers. Although at the time of DMARC spec there was an IETF working group on domain boundaries, nowadays the organizational domain can only be derived from the Public Suffix List. Like SPF and DKIM, DMARC uses the concept of a domain owner, the entity or entities that are authorized to make changes to a given DNS domain. SPF checks that the IP address of the sending server is authorized by the owner of the domain that appears in the SMTP MAIL FROM command. 
(The email address in MAIL FROM is also called the bounce address, envelope-from or RFC5321.MailFrom.) In addition to requiring that the SPF check passes, DMARC checks that RFC5321.MailFrom aligns with 5322.From. DKIM allows parts of an email message to be cryptographically signed, and the signature must cover the From field. Within the DKIM-Signature mail header, the d= (domain) and s= (selector) tags specify where in DNS to retrieve the public key for the signature. A valid signature proves that the signer is a domain owner, and that the From field hasn't been modified since the signature was applied. There may be several DKIM signatures on an email message; DMARC requires one valid signature where the domain in the d= tag aligns with the sender's domain stated in the From: header field. DNS record DMARC records are published in DNS with a subdomain label _dmarc, for example _dmarc.example.com. Compare this to SPF at example.com, and DKIM at selector._domainkey.example.com. The content of the TXT resource record consists of name=value tags, separated by semicolons, similar to SPF and DKIM. For example: "v=DMARC1;p=none;sp=quarantine;pct=100;rua=mailto:[email protected];" Here, v is the version, p is the policy (see below), sp the subdomain policy, pct is the percent of "bad" emails on which to apply the policy, and rua is the URI to send aggregate reports to. In this example, the entity controlling the example.com DNS domain intends to monitor SPF and/or DKIM failure rates and doesn't expect emails to be sent from subdomains of example.com. Note that a subdomain can publish its own DMARC record; receivers must check it out before falling back to the organizational domain record. Step by step adoption The protocol provides for various ratchets, or transitional states, to allow mail admins to gradually transition from not implementing DMARC at all all the way through to an unyielding setup. The concept of stepwise adoption assumes that the goal of DMARC is the strongest setting, which is not the case for all domains. Regardless of intent, these mechanisms allow for greater flexibility. Policy First and foremost, there are three policies: none is the entry level policy. No special treatment is required by receivers, but enables a domain to receive feedback reports. quarantine asks receivers to treat messages that fail DMARC check with suspicion. Different receivers have different means to implement that, for example flag messages or deliver them in the spam folder. reject asks receivers to outright reject messages that fail DMARC check. The policy published can be mitigated by applying it to only a percentage of the messages that fail DMARC check. Receivers are asked to select the given percentage of messages by a simple Bernoulli sampling algorithm. The rest of the messages should undergo the lower policy; that is, none if p=quarantine, quarantine if p=reject. If not specified, pct defaults to 100% of messages. The case p=quarantine; pct=0; is being used to force mailing list managers to rewrite the From: field, as some don't do so when p=none. Finally, the subdomain policy, sp= and the newly added no-domain policy allow to tweak the policy for specific subdomains. Reports DMARC is capable of producing two separate types of reports. Aggregate reports are sent to the address specified following the rua. Forensic reports are emailed to the address following the ruf tag. These mail addresses must be specified in URI mailto format (e.g. mailto:[email protected] ). 
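As a rough illustration of how a receiver might read the tag=value syntax shown above, the Python sketch below parses a DMARC TXT record into its tags; it is a minimal, illustrative parser only, not a conforming DMARC implementation (it performs no DNS lookup, applies only a few defaults and does no validation), and the reporting address used here is a placeholder.

# Minimal, illustrative parser for a DMARC TXT record.

def parse_dmarc(record):
    # Split "v=DMARC1;p=none;..." into a dictionary of tag -> value.
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        tags[name.strip()] = value.strip()
    return tags

record = "v=DMARC1;p=none;sp=quarantine;pct=100;rua=mailto:dmarc-reports@example.com;"
policy = parse_dmarc(record)

assert policy.get("v") == "DMARC1"
print("policy for the domain itself:", policy.get("p", "none"))
print("policy for subdomains:", policy.get("sp", policy.get("p", "none")))
print("percentage of failing mail to act on:", policy.get("pct", "100"))
print("aggregate reports requested at:", policy.get("rua", "(none)"))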
Multiple reporting addresses are valid and must each be in full URI format, separated by a comma. Target email addresses can belong to external domains. In that case, the target domain has to set up a DMARC record to say it agrees to receive them, otherwise it would be possible to exploit reporting for spam amplification. For example, say receiver.example receives a mail message From: [email protected] and wishes to report it. If it finds ruf=mailto:[email protected], it looks for a confirming DNS record in the namespace administered by the target, like this: sender.example._report._dmarc.thirdparty.example IN TXT "v=DMARC1;" Aggregate reports Aggregate Reports are sent as XML files, typically once per day. The subject mentions the "Report Domain", which indicates the DNS domain name about which the report was generated, and the "Submitter", which is the entity issuing the report. The payload is in an attachment with a long filename consisting of bang-separated elements such as the report-issuing receiver, the begin and end epochs of the reported period as Unix-style time stamps, an optional unique identifier and an extension which depends on the possible compression (used to be .zip). For example: example.com!example.org!1475712000!1475798400.xml.gz. The XML content consists of a header, containing the policy on which the report is based and report metadata, followed by a number of records. Records can be put in a database as a relation and viewed in a tabular form. The XML schema is defined in Appendix C of specifications and a raw record is exemplified in dmarc.org. Here we stick with a relational example, which better conveys the nature of the data. DMARC records can also be directly transformed in HTML by applying an XSL stylesheet. Rows are grouped by source IP and authentication results, passing just the count of each group. The leftmost result columns, labelled SPF and DKIM show DMARC-wise results, either pass or fail, taking alignment into account. The rightmost ones, with similar labels, show the name of the domain which claims to participate in the sending of the message and (in parentheses) the authentication status of that claim according to the original protocol, SPF or DKIM, regardless of Identifier Alignment. On the right side, SPF can appear at most twice, once for the Return-Path: test and once for the HELO test; DKIM can appear once for each signature present in the message. In the example, the first row represents the main mail flow from example.org, and the second row is a DKIM glitch, such as signature breakage due to a minor alteration in transit. The third and fourth rows show typical failures modes of a forwarder and a mailing list, respectively. DMARC authentication failed for the last row only; it could have affected the message disposition if example.org had specified a strict policy. The disposition reflects the policy published actually applied to the messages, none, quarantine, or reject. Along with it, not shown in the table, DMARC provides for a policy override. 
Some reasons why a receiver can apply a policy different from the one requested are already provided for by the specification: forwarded while keeping the same bounce address, usually doesn't break DKIM, sampled out because a sender can choose to only apply the policy to a percentage of messages only, trusted forwarder the message arrived from a locally known source mailing list the receiver heuristically determined that the message arrived from a mailing list, local policy receivers are obviously free to apply the policy they like, it is just cool to let senders know, other if none of the above applies, a comment field allows to say more. Forensic reports Forensic Reports, also known as Failure Reports, are generated in real time and consist of redacted copies of individual emails that failed SPF, DKIM or both based upon what value is specified in the fo tag. Their format, an extension of Abuse Reporting Format, resembles that of regular bounces in that they contain either a "message/rfc822" or a "text/rfc822-headers". Forensic Reports also contain the following: Source of Sending IP Address From email address Recipient email address Email subject line SPF and DKIM authentication results Received time Email message headers which include the sending host, email message ID, DKIM signature, and any other custom header information. Compatibility Forwarders There are several different types of email forwarding, some of which may break SPF. Mailing lists Mailing lists are a frequent cause of legitimate breakage of the original author's domain DKIM signature, for example by adding a prefix to the subject header. A number of workarounds are possible, and mailing list software packages are working on solutions. Turn off all message modifications This workaround keeps the standard mailing list workflow, and is adopted by several large mailing list operators, but precludes the list adding footers and subject prefixes. This requires careful configuration of mailing software to make sure signed headers are not reordered or modified. A misconfigured email server may put List-id in its DKIM of messages sent to a mailing list, and then the list operator is forced to reject it or do From: rewriting. From: rewriting One of the most popular and least intrusive workarounds consists of rewriting the From: header field. The original author's address can then be added to the Reply-To: field. Rewriting can range from just appending .INVALID to the domain name, to allocating a temporary user ID where an opaque ID is used, which keeps the user's "real" email address private from the list. In addition, the display name can be changed so as to show both the author and the list (or list operator). Those examples would result, respectively, in one of the following: From: John Doe <[email protected]> From: John Doe <[email protected]> From: John Doe via MailingList <[email protected]> Reply-To: John Doe <[email protected]> The last line, Reply-To:, has to be designed in order to accommodate reply-to-author functionality, in which case reply-to-list functionality is covered by the preceding change in the From: header field. That way, the original meaning of those fields is reversed. Altering the author is not fair in general, and can break the expected relationship between meaning and appearance of that datum. It also breaks automated use of it. There are communities which use mailing lists to coordinate their work, and deploy tools which use the From: field to attribute authorship to attachments. 
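To make the From: rewriting workaround concrete, the sketch below shows one way a list manager could move the author's address into the list's own domain while preserving reply-to-author behaviour, using Python's standard email library; the addresses, display-name suffix and rewriting scheme are illustrative assumptions and do not describe the behaviour of any particular mailing list package.

from email.message import EmailMessage
from email.utils import parseaddr, formataddr

LIST_ADDRESS = "list@lists.example.org"   # assumed list posting address
LIST_SUFFIX = "via MailingList"

def rewrite_from(msg):
    # Move the author into Reply-To: (if not already set), then replace
    # From: with an address in the list's own, DMARC-aligned domain.
    name, addr = parseaddr(msg["From"])
    if "Reply-To" not in msg:
        msg["Reply-To"] = formataddr((name, addr))
    del msg["From"]
    msg["From"] = formataddr((f"{name or addr} {LIST_SUFFIX}", LIST_ADDRESS))
    return msg

post = EmailMessage()
post["From"] = "John Doe <john.doe@example.com>"   # author on a p=reject domain (hypothetical)
post["To"] = LIST_ADDRESS
post["Subject"] = "[MailingList] hello"
post.set_content("Hello, list.")

rewrite_from(post)
print(post["From"])      # John Doe via MailingList <list@lists.example.org>
print(post["Reply-To"])  # John Doe <john.doe@example.com>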
Other workarounds Wrapping the message works nicely, for those who use an email client which understands wrapped messages. Not doing any change is perhaps the most obvious solution, except that they seem to be legally required in some countries, and that routinely losing SPF authentication may render overall authentication more fragile. Sender field Making changes to the From: header field to pass DKIM alignment may bring the message out of compliance with RFC 5322 section 3.6.2: "The 'From:' field specifies the author(s) of the message, that is, the mailbox(es) of the person(s) or system(s) responsible for the writing of the message." Mailbox refers to the author's email address. The Sender: header is available to indicate that an email was sent on behalf of another party, but DMARC only checks policy for the From domain and ignores the Sender domain. Both ADSP and DMARC reject using the Sender field on the non-technical basis that many user agents do not display this to the recipient. History A draft DMARC specification has been maintained since 30 January 2012. In October 2013, GNU Mailman 2.1.16 was released with options to handle posters from a domain with the DMARC policy of p=reject. The change tried to anticipate the interoperability issues expected in case restrictive policies were applied to domains with human users (as opposed to purely transactional mail domains). In April 2014, Yahoo changed its DMARC policy to p=reject, thereby causing misbehavior in several mailing lists. A few days later, AOL also changed its DMARC policy to p=reject. Those moves resulted in a significant amount of disruption, and those mailbox providers have been accused of forcing the costs of their own security failures onto third parties. As of 2020, the FAQ in the official DMARC wiki contains several suggestions for mailing lists to handle messages from a domain with a strict DMARC policy, of which the most widely implemented is the mailing list changing the “From” header to an address in its own domain. An IETF working group was formed in August 2014 in order to address DMARC issues, starting from interoperability concerns and possibly continuing with a revised standard specification and documentation. Meanwhile, the existing DMARC specification had reached an editorial state agreed upon and implemented by many. It was published in March 2015 on the Independent Submission stream in the "Informational" (non-standard) category as RFC 7489. In March 2017, the Federal Trade Commission published a study on DMARC usage by businesses. Out of 569 businesses, the study found about a third implemented any DMARC configuration, fewer than 10% used DMARC to instruct servers to reject unauthenticated messages, and a majority had implemented SPF. Contributors The contributors of the DMARC specification include: Receivers: AOL, Comcast, Google (Gmail), Mail.Ru, Microsoft (Outlook.com, Hotmail), Netease (163.com, 126.com, 188.com, yeah.net), XS4ALL, Yahoo, Yandex Senders: American Greetings, Bank of America, Facebook, Fidelity Investments, JPMorganChase, LinkedIn, PayPal, Twitter Intermediaries & Vendors: Agari (Founder/CEO Patrick R. 
Peterson), Cloudmark, Red Sift, ReturnPath, Trusted Domain Project See also Authenticated Received Chain (ARC) Author Domain Signing Practices DomainKeys Identified Mail (DKIM) E-mail authentication Certified email Mail servers with DMARC Sender Policy Framework (SPF) Notes References External links The Anti Spam Research Group wiki: Mitigating DMARC damage to third party mail Email authentication Spam filtering
50502333
https://en.wikipedia.org/wiki/Armies%20in%20the%20American%20Civil%20War
Armies in the American Civil War
This article is designed to give background into the organization and tactics of Civil War armies. This brief survey is by no means exhaustive, but it should give enough material to have a better understanding of the capabilities of the forces that fought the American Civil War. Understanding these capabilities should give insight into the reasoning behind the decisions made by commanders on both sides. Organization The US Army in 1861 The Regular Army of the United States on the eve of the Civil War was essentially a frontier constabulary whose 16,000 officers and men were organized into 198 companies scattered across the nation at 79 different posts. In 1861, this Army was under the command of Brevet Lieutenant General Winfield Scott, the 75‑year‑old hero of the Mexican‑American War. His position as general in chief was traditional, not statutory, because secretaries of war since 1821 had designated a general to be in charge of the field forces without formal congressional approval. During the course of the war, Lincoln would appoint other generals with little success until finally appointing Lieutenant General Ulysses S. Grant to the position prior to the Overland Campaign. The field forces were controlled through a series of geographic departments whose commanders reported directly to the general in chief. This department system, frequently modified, would be used by both sides throughout the Civil War for administering regions under Army control. Army administration was handled by a system of bureaus whose senior officers were, by 1860, in the twilight of long careers in their technical fields. Six of the 10 bureau chiefs were over 70 years old. These bureaus, modeled after the British system, answered directly to the War Department and were not subject to the orders of the general in chief. The bureaus reflected many of today's combat support and combat service support branches; however, there was no operational planning or intelligence staff. American commanders before the Civil War had never required such a structure. This system provided suitable civilian control and administrative support to the small field army prior to 1861. Ultimately, the bureau system would respond sufficiently, if not always efficiently, to the mass mobilization required over the next four years. Indeed, it would remain essentially intact until the early 20th century. The Confederate government, forced to create an army and support organization from scratch, established a parallel structure to that of the US Army. In fact, many important figures in Confederate bureaus had served in the prewar Federal bureaus. Raising armies With the outbreak of war in April 1861, both sides faced the monumental task of organizing and equipping armies that far exceeded the prewar structure in size and complexity. The Federals maintained control of the Regular Army, and the Confederates initially created a Regular force, though in reality it was mostly on paper. Almost immediately, the North lost many of its officers to the South, including some of exceptional quality. Of 1,108 Regular Army officers serving as of 1 January 1861, 270 ultimately resigned to join the South. Only a few hundred of 15,135 enlisted men, however, left the ranks. The federal government had two basic options for the use of the Regular Army. The government could divide the Regulars into training and leadership cadre for newly formed volunteer regiments or retain them in “pure” units to provide a reliable nucleus for the Federal Army in coming battles. 
For the most part, the government opted to keep the Regulars together. During the course of the war, battle losses and disease thinned the ranks of Regulars, and officials could never recruit sufficient replacements in the face of stiff competition from the states that were forming volunteer regiments. By November 1864, many Regular units had been so depleted that they were withdrawn from front-line service, although some Regular regiments fought with the Army of the Potomac in the Overland Campaign. In any case, the war was fought primarily with volunteer officers and men, the vast majority of whom started the war with no previous military training or experience. However, by 1864, both the Army of the Potomac and the Army of Northern Virginia were largely experienced forces that made up for a lack of formal training with three years of hard combat experience. Neither side had difficulty in recruiting the numbers initially required to fill the expanding ranks. In April 1861, President Abraham Lincoln called for 75,000 men from the states’ militias for a three‑month period. This figure probably represented Lincoln's informed guess as to how many troops would be needed to quell the rebellion quickly. Almost 92,000 men responded, as the states recruited their “organized” but untrained militia companies. At the First Battle of Bull Run in July 1861, these ill‑trained and poorly equipped soldiers generally fought much better than they were led. Later, as the war began to require more manpower, the federal government set enlistment quotas through various “calls,” which local districts struggled to fill. Similarly, the Confederate Congress authorized the acceptance of 100,000 one‑year volunteers in March 1861. One‑third of these men were under arms within a month. The Southern spirit of voluntarism was so strong that possibly twice that number could have been enlisted, but sufficient arms and equipment were not then available. As the war continued and casualty lists grew, the glory of volunteering faded, and both sides ultimately resorted to conscription to help fill the ranks. The Confederates enacted the first conscription law in American history in April 1862, followed by the federal government's own law in March 1863. Throughout these first experiments in American conscription, both sides administered the programs in less than a fair and efficient way. Conscription laws tended to exempt wealthier citizens, and initially, draftees could hire substitutes or pay commutation fees. As a result, the average conscript maintained poor health, capability, and morale. Many eligible men, particularly in the South, enlisted to avoid the onus of being considered a conscript. Still, conscription or the threat of conscription ultimately helped provide a large number of soldiers. Conscription was never a popular program, and the North, in particular, tried several approaches to limit conscription requirements. These efforts included offering lucrative bounties, fees paid to induce volunteers to fill required quotas. In addition, the Federals offered a series of reenlistment bonuses, including money, 30‑day furloughs, and the opportunity for veteran regiments to maintain their colors and be designated as “veteran” volunteer infantry regiments. The Federals also created an Invalid Corps (later renamed the Veteran Reserve Corps) of men unfit for front‑line service who performed essential rear area duties. 
In addition, the Union recruited almost 179,000 African-Americans, mostly in federally organized volunteer regiments. In the South, recruiting or conscripting slaves was so politically sensitive that it was not attempted until March 1865, far too late to influence the war. Whatever the faults of the manpower mobilization, it was an impressive achievement, particularly as a first effort on that scale. Various enlistment figures exist, but the best estimates are that approximately two million men enlisted in the Federal Army from 1861 to 1865. Of that number, one million were under arms at the end of the war. Because the Confederate records are incomplete or lost, estimates of their enlistments vary from 600,000 to over 1.5 million. Most likely, between 750,000 and 800,000 men served the Confederacy during the war, with peak strength never exceeding 460,000 men. The unit structure into which the expanding armies were organized was generally the same for Federals and Confederates, reflecting the common roots of both armies. The Federals began the war with a Regular Army organized into an essentially Napoleonic, musket-equipped structure. Both sides used a variant of the old Regular Army structure for newly formed volunteer regiments. The Federal War Department established a volunteer infantry regimental organization with a strength that could range from 866 to 1,046 (varying in authorized strength by up to 180 infantry privates). The Confederate Congress field its 10‑company infantry regiment at 1,045 men. Combat strength in battle, however, was always much lower (especially by the time of the Overland Campaign) because of casualties, sickness, leaves, details, desertions, and straggling. The battery remained the basic artillery unit, although battalion and larger formal groupings of artillery emerged later in the war in the eastern theater. Four under strength Regular artillery regiments existed in the US Army at the start of the war and one Regular regiment was added in 1861, for a total of 60 batteries. Nevertheless, most batteries were volunteer organizations. For the first years of the war and part way into the Overland Campaign, a Federal battery usually consisted of six guns and had an authorized strength of 80 to 156 men. A battery of six 12‑pound Napoleons could include 130 horses. If organized as “horse” or fling artillery, cannoneers were provided individual mounts, and more horses than men could be assigned to the battery. After the battle of Spotsylvania in 1864, most of the Army of the Potomac's artillery was reorganized into four-gun batteries. Their Confederate counterparts, plagued by limited ordnance and available manpower, usually operated throughout the war with a four-gun battery, often with guns of mixed types and calibers. Confederate batteries seldom reached their initially authorized manning level of 80 soldiers. Prewar Federal mounted units were organized into five Regular regiments (two dragoon, two cavalry, and one mounted rifle), and one Regular cavalry regiment was added in May 1861. Although the term “troop” was officially introduced in 1862, most cavalrymen continued to use the more familiar term “company” to describe their units throughout the war. The Federals grouped two companies or troops into squadrons, with four to six squadrons comprising a regiment. Confederate cavalry units, organized in the prewar model, were authorized 10 76-man companies per regiment. Some volunteer cavalry units on both sides also formed into smaller cavalry battalions. 
Later in the war, both sides began to merge their cavalry regiments and brigades into division and corps organizations. For both sides, the infantry unit structure above regimental level was similar to today's structure, with a brigade controlling three to five regiments and a division controlling two or more brigades. Federal brigades generally contained regiments from more than one state, while Confederate brigades often consisted of regiments from the same state. In the Confederate Army, a brigadier general usually commanded a brigade, and a major general commanded a division. The Federal Army, with no rank higher than major general until 1864, often had colonels commanding brigades, brigadier generals commanding divisions, and major generals commanding corps and armies. Grant received the revived rank of lieutenant general in 1864, placing him with clear authority over all of the Federal armies, but rank squabbles between the major generals appeared within the Union command structure throughout the Overland Campaign. The large numbers of organizations formed are a reflection of the politics of the time. The War Department in 1861 considered making recruitment a Federal responsibility, but this proposal seemed to be an unnecessary expense for the short war initially envisioned. Therefore, the responsibility for recruiting remained with the states, and on both sides state governors continually encouraged local constituents to form new volunteer regiments. This practice served to strengthen support for local, state, and national politicians and provided an opportunity for glory and high rank for ambitious men. Although such local recruiting created regiments with strong bonds among the men, it also hindered filing the ranks of existing regiments with new replacements. As the war progressed, the Confederates attempted to funnel replacements into units from their same state or region, but the Federals continued to create new regiments. Existing Federal regiments detailed men back home to recruit replacements, but these efforts could never successfully compete for men joining new local regiments. The newly formed regiments thus had no seasoned veterans to train the recruits, and the battle-tested regiments lost men faster than they could recruit replacements. Many regiments on both sides (particularly for the North) were reduced to combat ineffectiveness as the war progressed. Seasoned regiments were often disbanded or consolidated, usually against the wishes of the men assigned. The infantry regiment was the basic administrative and tactical unit of the Civil War armies. Regimental headquarters consisted of a colonel, lieutenant colonel, major, adjutant, quartermaster, surgeon (with rank of major), two assistant surgeons, a chaplain, sergeant major, quartermaster sergeant, commissary sergeant, hospital steward, and two principal musicians. Each company was staffed by a captain, a first lieutenant, a second lieutenant, a first sergeant, four sergeants, eight corporals, two musicians, and one wagoner. The authorized strength of a Civil War infantry regiment was about 1,000 officers and men, arranged in ten companies plus a headquarters and (for the first half of the war at least) a band. Discharges for physical disability, disease, special assignments (bakers, hospital nurses, or wagoners), court-martial, and battle injuries all combined to reduce effective combat strength. Before too long a typical regiment might be reduced to less than 500. 
Brigades were made up of two or more regiments, with four regiments being most common. Union brigades averaged 1,000 to 1,500 men, while on the Confederate side they averaged 1,500 to 1,800. Union brigades were designated by a number within their division, and each Confederate brigade was designated by the name of its current or former commander. Divisions were formed of two or more brigades. Union divisions contained 1,500 to 4,000 men, while the Confederate division was somewhat larger, containing 5,000 to 6,000 men. As with brigades, Union divisions were designated by a number in the Corps, while each Confederate division took the name of its current or former commander. Corps were formed of two or more divisions. The strength of a Union corps averaged 9,000 to 12,000 officers and men, those of Confederate armies might average 20,000. Two or more corps usually constituted an army, the largest operational organization. During the Civil War there were at least 16 armies on the Union side, and 23 on the Confederate side. In the Eastern Theater, the two principal adversaries were the Union Army of the Potomac and the Confederate Army of Northern Virginia. There were generally seven corps in the Union Army of the Potomac, although by the spring of 1864 the number was reduced to four. From the Peninsula campaign through the Battle of Antietam the Confederate Army of Northern Virginia was organized into Longstreet's and Jackson's "commands," of about 20,000 men each. In November 1862 the Confederate Congress officially designated these commands as corps. After Jackson's death in May 1863 his corps was divided in two, and thereafter the Army of Northern Virginia consisted of three corps. Leaders Because the organization, equipment, tactics, and training of the Confederate and Federal armies were similar, the performance of units in battle often depended on the quality and performance of their individual leaders. Both sides sought ways to find this leadership for their armies. The respective central governments appointed the general officers. At the start of the war, most, but certainly not all, of the more senior officers had West Point or other military school experience. In 1861, Lincoln appointed 126 general officers, of which 82 were or had been professionally trained officers. Jefferson Davis appointed 89, of which 44 had received professional training. The rest were political appointees, but of these only 16 Federal and 7 Confederate generals lacked military experience. Of the lower ranking volunteer officers who comprised the bulk of the leadership for both armies, state governors normally appointed colonels (regimental commanders). States also appointed other field grade officers, although many were initially elected within their units. Company grade officers were usually elected by their men. This long‑established militia tradition, which seldom made military leadership and capability a primary consideration, was largely an extension of states’ rights and sustained political patronage in both the Union and the Confederacy. Much has been made of the West Point backgrounds of the men who ultimately dominated the senior leadership positions of both armies, but the graduates of military colleges were not prepared by such institutions to command divisions, corps, or armies. Moreover, though many leaders had some combat experience from the Mexican War era, very few had experience above the company or battery level in the peacetime years prior to 1861. 
As a result, the war was not initially conducted at any level by “professional officers” in today's terminology. Leaders became more professional through experience and at the cost of thousands of lives. General William T. Sherman would later note that the war did not enter its “professional stage” until 1863. By the time of the Overland Campaign, many officers, though varying in skill, were at least comfortable at commanding their formations. Civil War Staffs In the Civil War, as today, the success of large military organizations and their commanders often depended on the effectiveness of the commanders’ staffs. Modern staff procedures have evolved only gradually with the increasing complexity of military operations. This evolution was far from complete in 1861, and throughout the war, commanders personally handled many vital staff functions, most notably operations and intelligence. The nature of American warfare up to the mid-19th century did not seem to overwhelm the capabilities of single commanders. However, as the Civil War progressed the armies grew larger and the war effort became a more complex undertaking and demanded larger staffs. Both sides only partially adjusted to the new demands, and bad staff work hindered operations for both the Union and Confederate forces in the Overland Campaign. Civil War staffs were divided into a “general staff” and a “staff corps.” This terminology, defined by Winfield Scott in 1855, differs from modern definitions of the terms. Except for the chief of staff and aides-de-camp, who were considered personal staff and would often depart when a commander was reassigned, staffs mainly contained representatives of the various bureaus, with logistical areas being best represented. Later in the war, some truly effective staffs began to emerge, but this was the result of the increased experience of the officers serving in those positions rather than a comprehensive development of standard staff procedures or guidelines. Major General George B. McClellan, when he appointed his father‑in‑law, was the first to officially use the title “chief of staff.” Even though many senior commanders had a chief of staff, this position was not used in any uniform way and seldom did the man in this role achieve the central coordinating authority of the chief of staff in a modern headquarters. This position, along with most other staff positions, was used as an individual commander saw fit, making staff responsibilities somewhat different under each commander. This inadequate use of the chief of staff was among the most important shortcomings of staffs during the Civil War. An equally important weakness was the lack of any formal operations or intelligence staff. Liaison procedures were also ill-defined, and various staff officers or soldiers performed this function with little formal guidance. Miscommunication or lack of knowledge of friendly units proved disastrous time after time in the war's campaigns. Armies at Vicksburg Major General Ulysses S. Grant's Army of the Tennessee was organized into four infantry corps. Major General Stephen A. Hurlbut's XVI Corps, however, remained headquartered in Memphis performing rear-area missions throughout the campaign, although nearly two divisions did join Grant during the siege. The remaining three corps, containing ten divisions with over 44,000 effectives, composed Grant's maneuver force during the campaign. 
Although some recently recruited "green" regiments participated, the bulk of Grant's army consisted of veteran units, many of which had fought with distinction at Forts Henry and Donelson, Shiloh, and Chickasaw Bayou. Of Grant's senior subordinates, the XV Corps commander, Major General William T. Sherman, was his most trusted. Though he would ultimately prove an exceptional operational commander, Sherman was an adequate tactician with considerable wartime command experience. He and Major General James B. McPherson, commander of the XVII Corps, were West Pointers. McPherson was young and inexperienced, but both Grant and Sherman felt he held great promise. Grant's other corps commander, Major General John A. McClernand, was a prewar Democratic congressman who had raised much of his XIII Corps specifically so that he could command an independent Vicksburg expedition. A self-serving and politically ambitious man who neither enjoyed nor curried Grant's favor, he nonetheless was an able organizer and tactical commander who had served bravely at Shiloh. The division commanders were a mix of trained regular officers and volunteers who formed a better-than-average set of Civil War commanders.

The Confederate commander, Lieutenant General John C. Pemberton, a Pennsylvania-born West Pointer who had served with Jefferson Davis in the Mexican War, resigned his federal commission to join the South at the start of the war. Pemberton's army in the Vicksburg campaign consisted of five infantry divisions with no intermediate corps headquarters. Counting two brigades that briefly joined Pemberton's command during the maneuver campaign, he had over 43,000 effectives, many of whom had only limited battle experience. Of Pemberton's subordinates, Brigadier General John S. Bowen, a West Point classmate of McPherson's, was an exceptionally able tactical commander. Major General Carter L. Stevenson was also West Point trained, and the other division commander in the maneuver force, Major General William W. Loring, was a prewar Regular colonel who had worked his way up through the ranks. Significantly, none of these three men had any real respect for their commander and would prove to be less than supportive of him. Pemberton's other division commanders, Major Generals Martin L. Smith and John H. Forney, both West Pointers, would remain in or near the city, commanding Vicksburg's garrison troops throughout the campaign.

Although Pemberton's five divisions represented the main Confederate force in the Vicksburg campaign, his army came under the jurisdiction of a higher headquarters, General Joseph E. Johnston's Department of the West. Johnston in 1861 had been the Quartermaster General of the Regular Army and one of only five serving general officers. He had commanded in the eastern theater early in the war until severely wounded. In November 1862, after several months of convalescence, he assumed departmental command in the west. Johnston assumed direct command in Mississippi on 13 May 1863 but was unable to establish effective control over Pemberton's forces. When Pemberton became besieged in Vicksburg, Johnston assembled an Army of Relief but never seriously threatened Grant.

Morale of the troops was a serious concern for both the Union and Confederate commanders. Grant's army suffered terribly from illness in the early months of the campaign, which it spent floundering in the Louisiana swamps. But the men recovered quickly once they gained the high ground across the river.
Inured to hardship, these men were served by able commanders and hardworking staffs. Once movements started, morale remained high, despite shortfalls in logistical support. Pemberton's men, although not always well served by their commanders, fought hard for their home region through the battle of Champion Hill. Although they briefly lost their resolve after that defeat, once behind the formidable works at Vicksburg, they regained a level of morale and effectiveness that only began to erode weeks later when they were faced with ever-increasing Federal strength and their own supply shortages. Armies in the Overland Campaign The forces in the Overland Campaign evolved through several organizational changes over the course of the two-month struggle. The details of these changes are covered in the campaign overview and in the appendixes. Some key aspects of these organizations are summarized below. On the Union side, Lieutenant General Ulysses S. Grant, in addition to being the commander of all of the Union forces arrayed against the Confederacy, commanded all Union forces in the eastern theater of operations that fought in the Overland Campaign. His main force was Major General George G. Meade's Army of the Potomac, which initially consisted of three infantry corps and one cavalry corps. An additional infantry corps, the IX Corps under Major General Ambrose E. Burnside, began the campaign as a separate corps reporting directly to Grant, but was later assigned to the Army of the Potomac. Major General Franz Sigel commanded a Union army in the Shenandoah Valley that had only an indirect role in the Overland Campaign. On the other hand, Major General Benjamin F. Butler's Army of the James was more directly involved in the campaign. His army consisted of two infantry corps and about a division's worth of cavalry troops. Later in the campaign, at Cold Harbor, one of Butler's corps, the XVIII under Major General William F. Smith, was temporarily attached to the Army of the Potomac. The initial strength of the Army of the Potomac and the IX Corps at the beginning of the Overland Campaign was slightly under 120,000 men. There are some factors affecting the strength, quality, and organization of the Union forces that should be noted. First, just prior to the campaign, the Army of the Potomac had abolished two of its infantry corps (the I and III Corps, both of which had been decimated at Gettysburg) and consolidated their subordinate units into the remaining three corps (II, V, and VI). This definitely streamlined the Army's command and control, but it also meant that some divisions and brigades were not accustomed to their new corps’ methods and procedures at the start of the campaign. Second, soldiers in a large number of the Federal regiments were approaching the expiration dates of their enlistments just as the campaign was set to begin in May 1864. Most of the troops in these regiments had enlisted for three years in 1861, and they represented the most experienced fighters in the Army. A surprisingly large number of these soldiers reenlisted (over 50 percent), but there was still a large turnover and much disruption as many of the regiments that reenlisted returned to their home states for furloughs and to recruit replacements. Finally, the Union did tap a new source for soldiers in 1864: the “heavy artillery” regiments. These were units designed to man the heavy artillery in the fortifications around Washington, DC. 
Grant decided to strip many of these regiments from the forts and use them as infantry in the 1864 campaign, and he employed these forces more extensively as his losses accumulated. The heavy artillery regiments had a slightly different structure from the traditional infantry regiments, and they had not suffered battle casualties; thus, they often still possessed about 1,200 soldiers in a regiment, as large as a veteran Union brigade in 1864.

On the Confederate side, there was no overall commander in chief or even a theater commander with authority similar to that of Grant. Officially, only President Jefferson Davis had the authority to coordinate separate Confederate armies and military districts. However, the commander of the Army of Northern Virginia, General Robert E. Lee, had considerable influence over affairs in the entire eastern theater due to the immense respect he had earned from Davis and other Confederate leaders. Lee's army consisted of three infantry corps and a cavalry corps. One of these corps (Lieutenant General James Longstreet's I Corps) had been on detached duty just prior to the opening of the campaign and would not join the rest of Lee's army until the second day of the battle of the Wilderness (6 May). Additional Confederate forces in the theater included Major General John C. Breckinridge's small army in the Shenandoah Valley and General P.G.T. Beauregard's forces protecting Richmond, southern Virginia, and northern North Carolina. In the course of the campaign, Lee received some reinforcements from both Breckinridge and Beauregard. The Army of Northern Virginia (including Longstreet's I Corps) began the campaign with about 64,000 soldiers. Although plagued by an overall shortage in numbers, Lee had fewer worries about the organization and quality of his manpower. Most of his soldiers had enlisted for the duration of the war, so his army lost few regiments to expired terms of service. Also, thanks to a better replacement system, Confederate regiments usually stayed closer to a consistent strength of 350 to 600 men instead of the wild disparity of their Union counterparts (as few as 150 soldiers in the decimated veteran regiments and as many as 1,200 in the heavy artillery regiments). Overall, Lee could count on the quality and consistency of his units, and he did not have to endure the turmoil of troop turnover and organizational changes that hindered Grant's forces.

As for staffs, on the Union side Grant maintained a surprisingly small staff for a commander in chief. His personal chief of staff was Major General John A. Rawlins, a capable officer who generally produced concise and well-crafted orders. In addition, he was Grant's alter ego, a trusted friend who took it upon himself to keep Grant sober. In fact, recent scholarship indicates that Grant's drinking was far less of a problem than once believed, and there were certainly no drinking difficulties during the Overland Campaign. The rest of Grant's small staff consisted of a coterie of friends who had earned Grant's trust through their common service in the western theater campaigns. In general, this staff performed well, although a few glaring mistakes would come back to haunt the Union effort. Of course, one of the major reasons Grant could afford to keep such a small staff in the field was that the chief of staff for the Union armies, Major General Henry W. Halleck, remained in Washington with a large staff that handled Grant's administrative duties as general in chief.
In fact, Halleck was a superb staff officer who tactfully navigated the political seas of Washington and gave Grant the freedom to accompany the Army of the Potomac in the field. In contrast to Grant's field staff, Meade had a huge staff that Grant once jokingly described as fitting for an Imperial Roman Emperor. Meade's chief of staff was Major General Andrew A. Humphreys, an extremely capable officer who only reluctantly agreed to leave field command to serve on the army's staff. Humphreys has received some criticism for not pushing the Army of the Potomac through the Wilderness on 4 May; but for most of the campaign, his orders were solid and his movement plan for the crossing of the James River was outstanding. Another excellent officer on the army staff was the chief of artillery, Major General Henry J. Hunt. Recognized as one of the war's foremost experts on artillery, Hunt had a more active role in operational matters than most artillery chiefs who usually just performed administrative duties. The rest of Meade's staff was of mixed quality. In addition, the poor caliber of Union maps coupled with some mediocre young officers who were used as guides repeatedly led to misdirected movements and lost time. Compared to Meade's large headquarters, Lee maintained a smaller group of trusted subordinates for his staff. Lee did not have a chief of staff, thus much of the responsibility for writing his orders fell on the shoulders of a few personal aides and secretaries, especially Lieutenant Colonel Charles Marshall. Lee employed several young officers, such as Lieutenant Colonel Walter Taylor and Colonel Charles S. Venable, as aides, and had great faith in these men to transmit his orders to subordinates. However, the lack of a true staff to ease his workload probably took its toll on Lee who was ill and physically exhausted by the time of the North Anna battles at the end of May. Other than his young aides, Lee had several other staff officers of mixed quality. His chief of artillery, Brigadier General William N. Pendleton, was mediocre at best, and the Army commander usually relegated his chief of artillery to strictly administrative duties. On the other hand, Major General Martin Luther (M.L.) Smith, Lee's chief engineer, played an active and generally positive role throughout the campaign. Weapons Infantry During the 1850s, in a technological revolution of major proportions, the rifle musket began to replace the relatively inaccurate smoothbore musket in ever-increasing numbers, both in Europe and America. This process, accelerated by the Civil War, ensured that the rifled shoulder weapon would be the basic weapon used by infantrymen in both the Federal and Confederate armies. The standard and most common shoulder weapon used in the American Civil War was the Springfield .58‑caliber rifle musket, models 1855, 1861, and 1863. In 1855, the US Army adopted this weapon to replace the .69‑caliber smoothbore musket and the .54‑caliber rifle. In appearance, the rifle musket was similar to the smoothbore musket. Both were single‑shot muzzleloaders, but the rifled bore of the new weapon substantially increased its range and accuracy. The rifling system chosen by the United States was designed by Claude Minié, a French Army officer. Whereas earlier rifles fired a round nonexpanding ball, the Minié system used a hollow‑based cylindro‑conoidal projectile slightly smaller than the bore that dropped easily into the barrel. 
When the powder charge was ignited by a fulminate of mercury percussion cap, the released powder gases expanded the base of the bullet into the rifled grooves, giving the projectile a ballistic spin. The model 1855 Springfield rifle musket was the first regulation arm to use the hollow‑base .58‑caliber minie bullet. The slightly modified model 1861 was the principal infantry weapon of the Civil War, although two subsequent models in 1863 were produced in about equal quantities. The model 1861 was 56 inches long overall, had a 40‑inch barrel, and weighed 9 pounds 2 ounces with its bayonet. The 21-inch socket bayonet consisted of an 18‑inch triangular blade and 3‑inch socket. The Springfield had a rear sight graduated to 500 yards. The maximum effective range of this weapon was approximately 500 yards, although it had killing power at 1,000 yards. The round could penetrate 11 inches of white-pine board at 200 yards and 3¼ inches at 1,000 yards, with a penetration of 1 inch considered the equivalent of disabling a human being. Although the new weapons had increased accuracy and effectiveness, the soldiers’ vision was still obscured by the clouds of smoke produced by the rifle musket's black powder propellant. To load a muzzle‑loading rifle, the soldier took a paper cartridge in hand and tore the end of the paper with his teeth. Next, he poured the powder down the barrel and placed the bullet in the muzzle. Then, using a metal ramrod, he pushed the bullet firmly down the barrel until seated. He then cocked the hammer and placed the percussion cap on the cone or nipple, which, when struck by the hammer, ignited the gunpowder. The average rate of fire was three rounds per minute. A well‑trained soldier could possibly load and fire four times per minute, but in the confusion of battle, the rate of fire was probably slower, two to three rounds per minute. In addition to the Springfields, over 100 types of muskets, rifles, rifle muskets, and rifled muskets—ranging up to .79 caliber—were used during the American Civil War. The numerous American-made weapons were supplemented early in the conflict by a wide variety of imported models. The best, most popular, and most common of the foreign weapons was the British .577‑caliber Enfield rifle, model 1853, which was 54 inches long (with a 39‑inch barrel), weighed 8.7 pounds (9.2 with the bayonet), could be fitted with a socket bayonet with an 18-inch blade, and had a rear sight graduated to 800 yards. The Enfield design was produced in a variety of forms, both long and short barreled, by several British manufacturers and at least one American company. Of all the foreign designs, the Enfield most closely resembled the Springfield in characteristics and capabilities. The United States purchased over 436,000 Enfield‑pattern weapons during the war. Statistics on Confederate purchases are more difficult to ascertain, but a report dated February 1863 indicated that 70,980 long Enfields and 9,715 short Enfields had been delivered by that time, with another 23,000 awaiting delivery. While the quality of imported weapons varied, experts considered the Enfields and the Austrian Lorenz rifle muskets to be very good. However, some foreign governments and manufacturers took advantage of the huge initial demand for weapons by dumping their obsolete weapons on the American market. This practice was especially prevalent with some of the older smoothbore muskets and converted flintlocks. 
The greatest challenge, however, lay in maintaining these weapons and supplying ammunition and replacement parts for calibers ranging from .44 to .79. The quality of the imported weapons eventually rose as the procedures, standards, and astuteness of the purchasers improved. For the most part, the European suppliers provided needed weapons, and the newer foreign weapons were highly regarded.

Breechloaders and repeating rifles were available by 1861 and were initially purchased in limited quantities, often by individual soldiers. Generally, however, these types of rifles were not issued to troops in large numbers because of technical problems (poor breech seals, faulty ammunition), fear by the Ordnance Department that the troops would waste ammunition, and the cost of rifle production. The most famous of the breechloaders was the single-shot Sharps, produced in both carbine and rifle models. The model 1859 rifle was .52-caliber, 47⅛ inches long, and weighed 8¾ pounds, while the carbine was .52-caliber, 39⅛ inches long, and weighed 7¾ pounds. Both weapons used a linen cartridge and a pellet primer feed mechanism. Most Sharps carbines were issued to Federal cavalry units.

The best known of the repeaters was probably the seven-shot .52-caliber Spencer, which came in both rifle and carbine models. The rifle was 47 inches long and weighed 10 pounds, while the carbine was 39 inches long and weighed 8¼ pounds. The Spencer was also the first weapon adopted by the US Army that fired a metallic rim-fire, self-contained cartridge. Soldiers loaded rounds through an opening in the butt of the stock into a tubular magazine, and working the trigger-guard lever fed a fresh round into the chamber. The hammer still had to be cocked manually before each shot.

The Henry rifle was, in some ways, even better than either the Sharps or the Spencer. Although never adopted by the US Army in any quantity, it was purchased privately by soldiers during the war. The Henry was a 16-shot, .44-caliber rimfire cartridge repeater. It was 43½ inches long and weighed 9¼ pounds. The tubular magazine located directly beneath the barrel had a 15-round capacity, with an additional round in the chamber. Of the approximately 13,500 Henrys produced, probably 10,000 saw limited service. The government purchased only 1,731.

The Colt repeating rifle, model 1855 (or revolving carbine), also was available to Civil War soldiers in limited numbers. The weapon was produced in several lengths and calibers, the lengths varying from 32 to 42½ inches, while its calibers were .36, .44, and .56. The .36 and .44 calibers were made to chamber six shots, while the .56-caliber was made to chamber five shots. The Colt Firearms Company was also the primary supplier of revolvers (the standard sidearm for cavalry troops and officers), the .44-caliber Army revolver and the .36-caliber Navy revolver being the most popular (over 146,000 purchased) because they were simple, relatively sturdy, and reliable.

(Table: typical Civil War small arms)

Cavalry

Initially armed with sabers and pistols (and in one case, lances), Federal cavalry troops quickly added the breech-loading carbine to their inventory of weapons. Troops preferred the easier-handling carbines to rifles and the breechloaders to awkward muzzleloaders. Of the single-shot breech-loading carbines that saw extensive use during the Civil War, the Hall .52-caliber accounted for approximately 20,000 in 1861.
The Hall was quickly replaced by a variety of more state-of-the-art carbines, including the Merrill .54‑caliber (14,495), Maynard .52‑caliber (20,002), Gallager .53‑caliber (22,728), Smith .52‑caliber (30,062), Burnside .56‑ caliber (55,567), and Sharps .54‑caliber (80,512). The next step in the evolutionary process was the repeating carbine, the favorite by 1864 (and commonly distributed by 1865) being the Spencer .52‑caliber seven‑shot repeater (94,194). Because of the South's limited industrial capacity, Confederate cavalrymen had a more difficult time arming themselves. Nevertheless, they too embraced the firepower revolution, choosing shotguns and muzzle-loading carbines as well as multiple sets of revolvers as their primary weapons. In addition, Confederate cavalrymen made extensive use of battlefield salvage by recovering Federal weapons. However, the South's difficulties in producing the metallic‑rimmed cartridges required by many of these recovered weapons limited their usefulness. Field artillery In 1841, the US Army selected bronze as the standard material for fieldpieces and at the same time adopted a new system of field artillery. The 1841 field artillery system consisted entirely of smoothbore muzzleloaders: 6‑ and 12‑pound guns; 12‑, 24‑, and 32‑pound howitzers; and 12-pound mountain howitzers. A pre-Civil War battery usually consisted of six fieldpieces—four guns and two howitzers. A 6‑pound battery contained four M1841 6-pounder field guns and two M1841 12-pounder howitzers, while a 12-pound battery had four 12-pound guns and two 24-pound howitzers. The guns fired solid shot, shell, spherical case, grapeshot, and canister rounds, while howitzers fired shell, spherical case, grapeshot, and canister rounds (artillery ammunition is described below). The 6‑pound gun (effective range 1,523 yards) was the primary fieldpiece used from the time of the Mexican War until the Civil War. By 1861, however, the 1841 artillery system based on the 6-pounder was obsolete. In 1857, a new and more versatile fieldpiece, the 12‑pounder Napoleon gun‑howitzer, model 1857, appeared on the scene. Designed as a multipurpose piece to replace existing guns and howitzers, the Napoleon fired canister and shell, like the 12-pound howitzer, and solid shot comparable in range to the 12-pound gun. The Napoleon was a bronze, muzzle-loading smoothbore with an effective range of 1,619 yards (see table 3 for a comparison of artillery data). Served by a nine‑man crew, the piece could fire at a sustained rate of two aimed shots per minute. Like almost all smoothbore artillery, the Napoleon fired “fixed” ammunition—the projectile and powder were bound together with metal bands. Another new development in field artillery was the introduction of rifling. Although rifled guns provided greater range and accuracy, smoothbores were generally more reliable and faster to load. Rifled ammunition was semifixed, so the charge and the projectile had to be loaded separately. In addition, the canister load of the rifle did not perform as well as that of the smoothbore. Initially, some smoothbores were rifled on the James pattern, but they soon proved unsatisfactory because the bronze rifling eroded too easily. Therefore, most rifled artillery was either wrought iron or cast iron with a wrought-iron reinforcing band. The most commonly used rifled guns were the 10‑pounder Parrott rifle and the Rodman, or 3‑inch Ordnance rifle. 
The Parrott rifle was a cast‑iron piece, easily identified by the wrought‑iron band reinforcing the breech. The 10-pound Parrott was made in two models: model 1861 had a 2.9-inch rifled bore with three lands and grooves and a slight muzzle swell, while model 1863 had a 3‑inch bore and no muzzle swell. The Rodman or Ordnance rifle was a long‑tubed, wrought‑iron piece that had a 3‑inch bore with seven lands and grooves. Ordnance rifles were sturdier and considered superior in accuracy and reliability to the 10-pounder Parrott. A new weapon that made its first appearance in the war during the Overland Campaign was the 24-pound Coehorn mortar. Used exclusively by the North, the Coehorn fired a projectile in a high arcing trajectory and was ideal for lobbing shells into trenches in siege warfare. The Coehorn was used briefly during the fighting at the “bloody angle” at Spotsylvania and later in the trench lines at Cold Harbor. By 1860, the ammunition for field artillery consisted of four general types for both smoothbores and rifles: solid shot, shell, case, and canister. Solid shot was a round cast‑iron projectile for smoothbores and an elongated projectile, known as a bolt, for rifled guns. Solid shot, with its smashing or battering effect, was used in a counterbattery role or against buildings and massed formations. The conical-shaped bolt lacked the effectiveness of the cannonball because it tended to bury itself on impact instead of bounding along the ground like a bowling ball. Shell, also known as common or explosive shell, whether spherical or conical, was a hollow projectile filled with an explosive charge of black powder that was detonated by a fuse. Shell was designed to break into jagged pieces, producing an antipersonnel effect, but the low‑order detonation seldom produced more than three to five fragments. In addition to its casualty-producing effects, shell had a psychological impact when it exploded over the heads of troops. It was also used against field fortifications and in a counterbattery role. Case shot or Shrapnel shell for both smoothbore and rifled guns was a hollow projectile with thinner walls than shell. The projectile was filled with round lead or iron balls set in a matrix of sulfur that surrounded a small bursting charge. Case was primarily used in an antipersonnel role. This type of round had been invented by Henry Shrapnel, a British artillery officer, hence the term “shrapnel.” Last, there was canister shot, probably the most effective round and the round of choice at close range ( or less) against massed troops. Canister was essentially a tin can filled with iron balls packed in sawdust with no internal bursting charge. When fired, the can disintegrated, and the balls followed their own paths to the target. The canister round for the 12‑pound Napoleon consisted of 27 1½‑inch iron balls packed inside an elongated tin cylinder. At extremely close ranges, men often loaded double charges of canister. By 1861, canister had replaced grapeshot in the ammunition chests of field batteries (grapeshot balls were larger than canister, and thus fewer could be fired per round). During the firing sequence cannoneers took their positions as in the diagram below. At the command “Commence firing,” the gunner ordered “Load.” While the gunner sighted the piece, Number 1 sponged the bore; Number 5 received a round from Number 7 at the limber and carried the round to Number 2, who placed it in the bore. 
Number 1 rammed the round to the breech, while Number 3 placed a thumb over the vent to prevent premature detonation of the charge. When the gun was loaded and sighted, Number 3 inserted a vent pick into the vent and punctured the cartridge bag. Number 4 attached a lanyard to a friction primer and inserted the primer into the vent. At the command “Fire,” Number 4 yanked the lanyard. Number 6 cut the fuses, if necessary. The process was repeated until the command to cease firing was given. Artillery projectiles Four basic types of projectiles were employed by Civil War field artillery: SOLID PROJECTILE: Round (spherical) projectiles of solid iron for smooth-bores are commonly called "cannonballs" or just plain "shot." When elongated for rifled weapons, the projectile is known as a "bolt." Shot was used against opposing batteries, wagons, buildings, etc., as well as enemy personnel. While round shot could ricochet across open ground against advancing infantry and cavalry, conical bolts tended to bury themselves upon impact with the ground and therefore were not used a great deal by field artillery. SHELL: The shell, whether spherical or conical, was a hollow iron projectile filled with a black powder bursting charge. It was designed to break into several ragged fragments. Spherical shells were exploded by fuses set into an opening in the shell, and were ignited by the flame of the cannon's propelling discharge. The time of detonation was determined by adjusting the length of the fuse. Conical shells were detonated by similar timed fuses, or by impact. Shells were intended to impact on the target. CASE SHOT: Case shot, or "shrapnel" was the invention of Henry Shrapnel, an English artillery officer. The projectile had a thinner wall than a shell and was filled with a number of small lead or iron balls (27 for a 12-pounder). A timed fuse ignited a small bursting charge which fragmented the casing and scattered the contents in the air. Spherical case shot was intended to burst from fifty to seventy-five yards short of the target, the fragments being carried forward by the velocity of the shot. CANISTER: Canister consisted of a tin cylinder in which was packed a number of small iron or lead balls. Upon discharge the cylinder split open and the smaller projectiles fanned out. Canister was an extremely effective anti-personnel weapon at ranges up to 200 yards, and had a maximum range of 400 yards. In emergencies double loads of canister could be used at ranges of less than 200 yards, using a single propelling charge. Siege artillery The 1841 artillery system listed eight types of siege artillery and another six types as seacoast artillery. The 1861 Ordnance Manual included eleven different kinds of siege ordnance. The principal siege weapons in 1861 were the 4.5-inch rifle; 18-, and 24-pounder guns; a 24-pounder howitzer and two types of 8-inch howitzers; and several types of 8- and 10-inch mortars. The normal rate of fire for siege guns and mortars was about twelve rounds per hour, but with a well-drilled crew, this could probably be increased to about twenty rounds per hour. The rate of fire for siege howitzers was somewhat lower, being about eight shots per hour. The carriages for siege guns and howitzers were longer and heavier than field artillery carriages but were similar in construction. The 24-pounder model 1839 was the heaviest piece that could be moved over the roads of the day. 
Alternate means of transport, such as railroad or watercraft, were required to move larger pieces any great distance. The rounds fired by siege artillery were generally the same as those fired by field artillery, except that siege artillery continued to use grapeshot after it was discontinued in the field artillery (1841). A "stand of grape" consisted of nine iron balls, ranging from two to about three and one-half inches in size, depending on the gun caliber.

The largest and heaviest artillery pieces in the Civil War era belonged to the seacoast artillery. These large weapons were normally mounted in fixed positions. The 1861 system included five types of columbiads, ranging from 8- to 15-inch; 32- and 42-pounder guns; 8- and 10-inch howitzers; and 10- and 13-inch mortars. Wartime additions to the Federal seacoast artillery inventory included Parrott rifles, ranging from 6.4-inch to 10-inch (300-pounder). New columbiads, developed by Ordnance Lieutenant Thomas J. Rodman, included 8-inch, 10-inch, and 15-inch models. The Confederates produced some new seacoast artillery of their own: Brooke rifles in 6.4-inch and 7-inch versions. They also imported weapons from England, including 7- and 8-inch Armstrong rifles, 6.3- to 12.5-inch Blakely rifles, and 5-inch Whitworth rifles. Seacoast artillery fired the same projectiles as siege artillery but with one addition: hot shot. As its name implies, hot shot was solid shot heated in special ovens until red-hot, then carefully loaded and fired as an incendiary round.

Naval ordnance

Like the Army, the U.S. Navy in the Civil War possessed an artillery establishment that spanned the spectrum from light to heavy. A series of light boat guns and howitzers corresponded to the Army's field artillery. Designed for service on small boats and launches, this class of weapon included 12- and 24-pounder pieces, both smoothbore and rifled. The most successful boat gun was a 12-pounder smoothbore howitzer (4.62-inch bore) designed by John A. Dahlgren, the Navy's premier ordnance expert and wartime chief of ordnance. Typically mounted in the bow of a small craft, the Dahlgren 12-pounder could be transferred, in a matter of minutes, to an iron field carriage for use on shore. This versatile little weapon fired shell and case rounds.

Naturally, most naval artillery was designed for ship killing. A variety of 32-pounder guns (6.4-inch bore) produced from the 1820s through the 1840s remained in service during the Civil War. These venerable smoothbores, direct descendants of the broadside guns used in the Napoleonic Wars, fired solid shot and were effective not only in ship-to-ship combat but also in the shore-bombardment role. A more modern class of naval artillery weapons was known as "shellguns." These were large-caliber smoothbores designed to shoot massive exploding shells that were capable of dealing catastrophic damage to a wooden-hulled vessel. Shellguns could be found both in broadside batteries and in upper-deck pivot mounts, which allowed wide traverse. An early example of the shellgun, designed in 1845 but still in service during the Civil War, was an 8-inch model that fired a 51-pound shell. John Dahlgren's design came to typify the shellgun class of weapons. All of his shellguns shared an unmistakable "beer-bottle" shape. The most successful Dahlgren shellguns were a 9-inch model (72.5-pound shell or 90-pound solid shot), an 11-inch (136-pound shell or 170-pound solid shot), and a 15-inch, which fired an awesome 330-pound shell or 440-pound solid shot.
A pivot-mounted 11-inch shellgun proved to be the decisive weapon in the U.S.S. Kearsarge's 1864 victory over the C.S.S. Alabama. The famous U.S. Navy ironclad Monitor mounted two 11-inch Dahlgrens in its rotating turret. Later monitors carried 15-inch shellguns.

The U.S. Navy also made wide use of rifled artillery. These high-velocity weapons became increasingly important with the advent of ironclad warships. Some Navy rifles were essentially identical to Army models. For instance, the Navy procured Parrott rifles in 4.2-inch, 6.4-inch, 8-inch, and 10-inch versions, each of which had a counterpart in the Army as either siege or seacoast artillery. Other rifled weapons, conceived specifically for naval use, included two Dahlgren designs. The 50-pounder (with approximately 5-inch bore) was the better of the two Dahlgren rifles. An 80-pounder model (6-inch bore) was less popular due to its tendency to burst.

The Confederacy relied heavily on British imports for its naval armament. Naval variants of Armstrong, Whitworth, and Blakely weapons all saw service. In addition, the Confederate Navy used Brooke rifles manufactured in the South. The Confederacy also produced a 9-inch version of the Dahlgren shellgun that apparently found use both afloat and ashore.

Weapons at Vicksburg

The wide variety of infantry weapons available to Civil War armies is clearly evident at Vicksburg. A review of the Quarterly Returns of Ordnance for April-June 1863 reveals that approximately three-quarters of Grant's Army of the Tennessee carried "first class" shoulder weapons, the most numerous of which were British 1853 Enfield rifle-muskets (.577 caliber). Other "first class" weapons used in the Vicksburg campaign included American-made Springfield rifle-muskets (.58 caliber), French rifle-muskets (.58 caliber), French "light" or "Liege" rifles (.577 caliber), U.S. Model 1840/45 rifles (.58 caliber), Dresden and Suhl rifle-muskets (.58 caliber), and Sharps breechloading carbines (.52 caliber). Approximately thirty-five Federal regiments (roughly one-quarter of the total) were armed primarily with "second class" weapons, such as Austrian rifle-muskets in .54, .577, and .58 calibers; U.S. Model 1841 rifled muskets (.69 caliber); U.S. Model 1816 rifled muskets altered to percussion (.69 caliber); Belgian and French rifled muskets (.69 and .71 calibers); Belgian or Vincennes rifles (.70 and .71 calibers); and both Austrian and Prussian rifled muskets in .69 and .70 calibers. Only one Federal regiment, the 101st Illinois Infantry, was armed with "third class" weapons, such as the U.S. Model 1842 smoothbore musket (.69 caliber), Austrian, Prussian, and French smoothbore muskets (.69 caliber), and Austrian and Prussian smoothbore muskets of .72 caliber. After the surrender of Vicksburg, the 101st Illinois, along with about twenty regiments armed with "second class" arms, exchanged their obsolete weapons for captured Confederate rifle-muskets.

Although the Confederate records are incomplete, it seems that some 50,000 shoulder weapons were surrendered at Vicksburg, mostly British-made Enfields. Other weapons included a mix of various .58-caliber "minié" rifles (Springfield, Richmond, Mississippi, and Fayetteville models), Austrian and French rifle-muskets in .577 and .58 calibers, Mississippi rifles, Austrian rifle-muskets (.54 caliber), various .69-caliber rifled muskets altered to percussion, Belgian .70-caliber rifles, and British smoothbore muskets in .75 caliber.
The diversity of weapons (and calibers of ammunition) obviously created serious sustainment problems for both sides. Amazingly, there is little evidence that ammunition shortages had much influence on operations (the Vicksburg defenders surrendered 600,000 rounds and 350,000 percussion caps), even though the lack of weapons standardization extended down to regimental levels.

Whereas there was little to differentiate Union from Confederate effectiveness so far as small arms were concerned, the Union forces at Vicksburg enjoyed a clear superiority in terms of artillery. When Grant's army closed on Vicksburg to begin siege operations, it held about 180 cannon. At the height of its strength during the siege, the Union force included some forty-seven batteries of artillery for a total of 247 guns: 13 "heavy" guns and 234 "field" pieces. Twenty-nine of the Federal batteries contained six guns each; the remaining eighteen were considered four-gun batteries. Smoothbores outnumbered rifles by a ratio of roughly two to one.

No account of Union artillery at Vicksburg would be complete without an acknowledgment of the U.S. Navy's contributions. Porter's vessels carried guns ranging in size from 12-pounder howitzers to 11-inch Dahlgren shellguns. The Cairo, which is on display today at Vicksburg, suggests both the variety and the power of naval artillery in this campaign. When she sank in December 1862, the Cairo went down with three 42-pounder (7-inch bore) Army rifles, three 64-pounder (8-inch bore) Navy smoothbores, six 32-pounder (6.4-inch bore) Navy smoothbores, and one 4.2-inch 30-pounder Parrott rifle. Porter's firepower was not restricted to the water. During the siege, naval guns served ashore as siege artillery.

The Confederates possessed a sizeable artillery capability but could not match Federal firepower. Taken together, the Confederate forces under Pemberton and Johnston possessed a total of about 62 batteries of artillery with some 221 tubes. Pemberton's force besieged in Vicksburg included 172 cannon: approximately 103 fieldpieces and 69 siege weapons. Thirty-seven of the siege guns, plus thirteen fieldpieces, occupied positions overlooking the Mississippi. (The number of big guns along the river dropped to thirty-one by the end of the siege; apparently some weapons were shifted elsewhere.) The thirteen fieldpieces were distributed along the river to counter amphibious assault. The heavy ordnance was grouped into thirteen distinct river-front batteries. These large river-defense weapons included twenty smoothbores, ranging in size from 32-pounder siege guns to 10-inch Columbiads, and seventeen rifled pieces, ranging from a 2.75-inch Whitworth to a 7.44-inch Blakely.

In most of the engagements during the Vicksburg campaign, the Union artillery demonstrated its superiority to that of the Confederates. During the siege, that superiority grew into dominance. The Confederates scattered their artillery in one- or two-gun battery positions sited to repel Union assaults. By declining to mass their guns, the Confederates could do little to interfere with Union siege operations. By contrast, Union gunners created massed batteries at critical points along the line. These were able both to support siege operations with concentrated fires and to keep the Confederate guns silent by smothering the embrasures of the small Confederate battery positions. As the siege progressed, Confederate artillery fire dwindled to ineffective levels, whereas the Union artillery blasted away at will.
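The battery and gun totals above also give a rough sense of how differently the two sides organized their artillery. The figures below simply divide the totals quoted in this section; they are nominal averages (battery strengths fluctuated during the siege, and the Confederate total combines Pemberton's and Johnston's forces), not numbers taken from the ordnance returns:

\[
\text{Union: } \frac{247 \text{ guns}}{47 \text{ batteries}} \approx 5.3 \text{ guns per battery}, \qquad
\text{Confederate: } \frac{221 \text{ tubes}}{62 \text{ batteries}} \approx 3.6 \text{ tubes per battery}.
\]

The disparity reflects the Union preference for six-gun batteries noted above, and it helps explain why Federal gunners could mass fires at critical points while the scattered Confederate batteries could not.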
As much as any other factor, Union fire superiority sealed the fate of the Confederate army besieged in Vicksburg. Weapons in the Overland Campaign The variety of weapons available to both armies during the Civil War is reflected in the battles of the Overland Campaign. To a limited extent, the Army of Northern Virginia's infantry had more uniformity in its small arms than the Army of the Potomac. In fact, some regiments of the famous Pennsylvania Reserves Brigade were still equipped with smoothbore muskets. In any case, both armies relied heavily on the Springfield and Enfield, which were the most common weapons used (although almost every other type of Civil War small arms could be found in the campaign). The variety of weapons and calibers of ammunition required on the battlefield by each army presented sustainment challenges that ranged from production and procurement to supplying soldiers in the field. Amazingly, operations were not often affected by the need to resupply a diverse mixture of ammunition types. The Army of the Potomac (including the IX Corps) started the campaign with 58 batteries of artillery. Of these, 42 were six‑gun batteries, while the other 16 batteries were of the four-gun type. The Federals went to a four-gun battery system after the battle of Spotsylvania. Also at this time, the Army of the Potomac's Artillery Reserve was disbanded except for the ammunition train. The Reserve's batteries went to the corps‑level reserve artillery brigades. The Army of Northern Virginia totaled 56 artillery batteries. The vast majority of these (42) were four‑gun batteries. The rest of the mix included one six‑gun battery, three five‑gun, five three‑gun, four two‑gun, and a lone one‑gun battery. (Refer to table 3 for the major types of artillery available to the two armies at the start of the campaign.) The effectiveness of artillery during the campaign was mixed. In the Wilderness, the rugged terrain and the dense vegetation reduced the effectiveness of artillery fire. Specifically, the Federals’ advantage in numbers of longer‑range rifled guns was negated by the lack of good fields of fire. The more open ground at Spotsylvania and Cold Harbor allowed for better use of artillery. However, the increasing use of entrenchments on both sides tended to relegate artillery to a defensive role. The Confederates tended to keep their batteries decentralized, usually attached to the infantry brigades within the divisions to which they were assigned. Lee's army did not have an artillery reserve. The Union tended to centralize their artillery, even after disbanding the army-level reserve. This often meant keeping reserve batteries at corps-level, other batteries in division reserves, and occasionally assigning batteries to brigades as needed. In the Overland Campaign, the Confederate cavalry had an advantage over its Union counterpart in reconnaissance and screening missions. This was largely due to personalities and the mission focus of the two sides, rather than to any organizational or tactical differences between them. The Army of the Potomac's cavalry corps was commanded by Major General Philip H. Sheridan, who clashed with the Army commander, Meade, over the role of the cavalry. After the opening of the Spotsylvania fight, Sheridan got his wish and conducted a large raid toward Richmond. Stuart countered with part of his force, but the remaining Confederate cavalry kept Lee well informed while the Federals were almost blind. 
Stuart was killed at the battle of Yellow Tavern, but his eventual replacement, Major General Wade Hampton, filled in admirably. Later in the war, Sheridan would make better use of the cavalry as a striking force, but he never really mastered its reconnaissance role.

Tactics

First year tactics

The Napoleonic Wars and the Mexican War were the major influences on American military thinking at the beginning of the Civil War. American military leaders knew the theories of Jomini, which were drawn from Napoleonic warfare, while tactical doctrine reflected the lessons learned in Mexico (1846-48). However, these tactical lessons were misleading, because in Mexico relatively small armies fought only seven pitched battles. In addition, these battles were so small that almost all the tactical lessons learned during the war focused on the regimental, battery, and squadron levels. Future Civil War leaders had learned very little about brigade, division, and corps maneuvers in Mexico, yet these units were standard fighting elements of both armies in 1861-65. The US Army's experience in Mexico validated many Napoleonic principles, particularly that of the offensive.

In Mexico, tactics did not differ greatly from those of the early 19th century. Infantry marched in columns and deployed into lines to fight. Once deployed, an infantry regiment might send one or two companies forward as skirmishers, as security against surprise, or to soften the enemy's line. After identifying the enemy's position, a regiment advanced in closely ordered lines to within 100 yards. There it delivered a devastating volley, followed by a charge with bayonets. Both sides attempted to use this basic tactic in the first battles of the Civil War with tragic results.

In Mexico, American armies employed artillery and cavalry in both offensive and defensive battle situations. In the offense, artillery moved as near to the enemy lines as possible, normally just outside musket range, in order to blow gaps in the enemy's line that the infantry might exploit with a determined charge. In the defense, artillery blasted advancing enemy lines with canister and withdrew if the enemy attack got within musket range. Cavalry guarded the army's flanks and rear but held itself ready to charge if enemy infantry became disorganized or began to withdraw.

These tactics worked perfectly well with the weapons technology of the Napoleonic and Mexican Wars. The infantry musket was accurate up to 100 yards but ineffective against even massed targets beyond that range. Rifles were specialized weapons with excellent accuracy and range but slow to load and, therefore, not usually issued to line troops. Smoothbore cannon had a range of up to 1 mile with solid shot but were most effective against infantry when firing canister at ranges under 400 yards (and even better at 200 yards or less). Artillerists worked their guns without much fear of infantry muskets, which had a limited range. Cavalry continued to use sabers and lances as shock weapons.

American troops took the tactical offensive in most Mexican War battles with great success, and they suffered fairly light losses. Unfortunately, similar tactics proved obsolete in the Civil War, in part because of the innovation of the rifle musket. This new weapon greatly increased the infantry's range and accuracy and could be loaded as quickly as the smoothbore musket. By the beginning of the Civil War, rifle muskets were available in moderate numbers.
It was the weapon of choice in both the Union and Confederate armies during the war; by 1864, the vast majority of infantry troops on both sides had rifle muskets of good quality. Official tactical doctrine prior to the beginning of the Civil War did not clearly recognize the potential of the new rifle musket. Prior to 1855, the most influential tactical guide was General Winfield Scott's three‑volume work, Infantry Tactics (1835), based on French tactical models of the Napoleonic Wars. It stressed close-order, linear formations in two or three ranks advancing at “quick time” of 110 steps (86 yards) per minute. In 1855, to accompany the introduction of the new rifle musket, Major William J. Hardee published a two‑volume tactical manual, Rifle and Light Infantry Tactics. Hardee's work contained few significant revisions of Scott's manual. His major innovation was to increase the speed of the advance to a “double‑quick time” of 165 steps (151 yards) per minute. If, as suggested, Hardee introduced his manual as a response to the rifle musket, then he failed to appreciate the weapon's full impact on combined arms tactics and the essential shift that the rifle musket made in favor of the defense. Hardee's Rifle and Light Infantry Tactics was the standard infantry manual used by both sides at the outbreak of war in 1861. If Scott's and Hardee's works lagged behind technological innovations, at least the infantry had manuals to establish a doctrinal basis for training. Cavalry and artillery fell even further behind in recognizing the potential tactical shift in favor of rifle‑armed infantry. The cavalry's manual, published in 1841, was based on French sources that focused on close-order offensive tactics. It favored the traditional cavalry attack in two ranks of horsemen armed with sabers or lances. The manual took no notice of the rifle musket's potential, nor did it give much attention to dismounted operations. Similarly, the artillery had a basic drill book delineating individual crew actions, but it had no tactical manual. Like cavalrymen, artillerymen showed no concern for the potential tactical changes that the rifle musket implied. Early tactics In the battles of 1861 and 1862, both sides employed the tactics proven in Mexico and found that the tactical offensive could still occasionally be successful—but only at a great cost in casualties. Men wielding rifled weapons in the defense generally ripped incoming frontal assaults to shreds, and if the attackers paused to exchange fire, the slaughter was even greater. Rifles also increased the relative number of defenders that could engage an attacking formation, since flanking units now engaged assaulting troops with a murderous enfilading fire. Defenders usually crippled the first assault line before a second line of attackers could come forward in support. This caused successive attacking lines to intermingle with survivors to their front, thereby destroying formations, command, and control. Although both sides occasionally used the bayonet throughout the war, they quickly discovered that rifle musket fire made successful bayonet attacks almost impossible. As the infantry troops found the bayonet charge to be of little value against rifle muskets, cavalry and artillery troops made troubling discoveries of their own. Cavalry troops learned that the old-style saber charge did not work against infantry armed with rifle muskets. 
Cavalry troops, however, continued their traditional intelligence gathering and screening roles and often found their place as the “eyes and ears” of the army. Artillery troops, on their part, found that they could not maneuver in the offense to canister range as they had in Mexico, because the rifle musket was accurate beyond that distance. Worse yet, at ranges where gunners were safe from rifle fire, artillery shot and shell were far less effective than canister at close range. Ironically, rifled cannon did not give the equivalent boost to artillery effectiveness that the rifle‑musket gave to the infantry. The increased range of cannons proved no real advantage in the broken and wooded terrain over which so many Civil War battles were fought. There are several possible reasons why Civil War commanders continued to employ the tactical offensive long after it was clear that the defense was superior. Most commanders believed the offensive was the decisive form of battle. This lesson came straight from the Napoleonic wars and the Mexican‑American War. Commanders who chose the tactical offensive usually retained the initiative over defenders. Similarly, the tactical defensive depended heavily on the enemy choosing to attack at a point convenient to the defender and continuing to attack until badly defeated. Although this situation occurred often in the Civil War, a prudent commander could hardly count on it for victory. Consequently, few commanders chose to exploit the defensive form of battle if they had the option to attack. The offensive may have been the decisive form of battle, but it was very hard to coordinate and even harder to control. The better generals often tried to attack the enemy's flanks and rear, but seldom achieved success because of the difficulty involved. Not only did the commander have to identify the enemy's flank or rear correctly, he also had to move his force into position to attack and then do so in conjunction with attacks made by other friendly units. Command and control of the type required to conduct these attacks was quite beyond the ability of most Civil War commanders. Therefore, Civil War armies repeatedly attacked each other frontally, with resulting high casualties, because that was the easiest way to conduct offensive operations. When attacking frontally, a commander had to choose between attacking on a broad front or a narrow front. Attacking on a broad front rarely succeeded except against weak and scattered defenders. Attacking on a narrow front promised greater success but required immediate reinforcement to continue the attack and achieve decisive results. As the war dragged on, experiments with attacking forces on narrow fronts against specific objectives were attempted (Upton at Spotsylvania), but no single offensive doctrine emerged as a key to success. Later war tactics Poor training may have contributed to high casualty rates early in the war, but casualties remained high and even increased long after the armies became experienced. Continued high casualty rates resulted because tactical developments failed to adapt to the new weapons technology. Few commanders understood how the rifle musket strengthened the tactical defensive. However, some commanders made offensive innovations that met with varying success. When an increase in the pace of advance did not overcome defending firepower (as Hardee suggested it would), some units tried advancing in more open order. 
But this sort of formation lacked the appropriate mass to assault and carry prepared positions and created command and control problems beyond the ability of Civil War leaders to resolve. Late in the war, when the difficulty of attacking field fortifications under heavy fire became apparent, other tactical expedients were employed. Attacking solidly entrenched defenders often required whole brigades and divisions moving in dense masses to rapidly cover intervening ground, seize the objective, and prepare for the inevitable counterattack. Seldom successful against alert and prepared defenses, these attacks were generally accompanied by tremendous casualties and foreshadowed the massed infantry assaults of World War I. Sometimes, large formations attempted mass charges over short distances without halting to fire. This tactic enjoyed limited success at the Spotsylvania Court House in May 1864, but generally failed to break a prepared enemy. At Spotsylvania, a Union task-organized division (under Colonel Emory Upton) attacked and captured an exposed portion of the Confederate line. The attack succeeded in part because the Union troops crossed the intervening ground very quickly and without stopping to fire their rifles. Once inside the Confederate defenses, the Union troops attempted to exploit their success by continuing their advance, but loss of command and control made them little better than a mob. Counterattacking Confederate units, in conventional formations, eventually forced the Federals to relinquish much of the ground gained. As the war dragged on, tactical maneuver focused more on larger formations: brigade, division, and corps. In most of the major battles fought after 1861, brigades were employed as the primary maneuver formations. But brigade maneuver was at the upper limit of command and control for most Civil War commanders at the beginning of the war. Brigades might be able to retain coherent formations if the terrain were suitably open, but often brigade attacks degenerated into a series of poorly coordinated regimental lunges through broken and wooded terrain. Thus, brigade commanders were often on the main battle line trying to influence regimental fights. Typically, defending brigades stood in the line of battle and blazed away at attackers as rapidly as possible. Volley fire usually did not continue beyond the first round. Most of the time, soldiers fired as soon as they were ready, and it was common for two soldiers to work together, one loading for the other to fire. Brigades were generally invulnerable to attacks on their front if units to the left and right held their ground. Two or more brigades comprised a division. When a division attacked, its brigades often advanced in sequence, from left to right or vice versa, depending on terrain, suspected enemy location, and number of brigades available to attack. At times, divisions attacked with two or more brigades leading, followed by one or more brigades ready to reinforce the lead brigades or maneuver to the flanks. Two or more divisions comprised a corps that might conduct an attack as part of a larger plan controlled by the army commander. More often, groups of divisions attacked under the control of a corps‑level commander. Division and corps commanders generally took a position to the rear of the main line in order to control the flow of reinforcements into the battle, but they often rode forward into the battle lines to influence the action personally. 
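To appreciate the span of control these commanders faced, a rough frontage calculation is helpful. The figures below are illustrative assumptions based on standard drill-manual spacing rather than numbers from the text: a war-strength regiment of roughly 400 men, fighting in the usual two-rank line with each file occupying a little over two feet of front:

\[
\frac{400 \text{ men}}{2 \text{ ranks}} = 200 \text{ files}, \qquad 200 \text{ files} \times 27 \text{ in} = 5{,}400 \text{ in} \approx 150 \text{ yards of front}.
\]

On these assumptions, a four-regiment brigade deployed in a single line would occupy on the order of 600 yards plus intervals, which suggests why brigade commanders stayed on the main battle line and why coordinating several brigades through broken, wooded terrain so often broke down.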
Of the three basic branches, cavalry made the greatest adaptation during the war. It learned to use its horses for mobility, then dismount and fight on foot like infantry. Cavalry regained a useful battlefield role by employing this tactic, especially after repeating and breech‑loading rifles gave it the firepower to contend with enemy infantry. Still the most effective role for the cavalry was in reconnaissance and security in overall support of the main armies’ operations. On the other hand, many cavalry leaders were enamored with using their troops in large-scale raids, often as a pretext for seeking out the enemy's cavalry for a decisive battle. In many cases, raids failed to produce either desired result: a decisive defeat of the enemy cavalry or significant destruction of enemy supply and transportation systems. During the Overland Campaign, Sheridan attempted a raid that ultimately led to the Battle of Yellow Tavern and, by chance, the death of Jeb Stuart. However, this raid effectively left the Army of the Potomac blind for two weeks during the campaign. Artillery found that it could add its firepower to the rifle musket and tip the balance even more in favor of the tactical defensive, but artillery never regained the importance to offensive maneuver that it held in Mexico. If artillery had developed an indirect firing system, as it did prior to World War I, it might have been able to contribute more to offensive tactics. Still, both sides employed artillery effectively in defensive situations throughout the war. The most significant tactical innovation in the Civil War was the widespread use of field fortifications after armies realized the tactical offensive's heavy cost. It did not take long for the deadly firepower of the rifle musket to convince soldiers to entrench every time they halted. Eventually, armies dug complete trenches within an hour of halting in a position. Within 24 hours, armies could create defensive works that were nearly impregnable to frontal assaults. The Overland Campaign, probably more than any other campaign in the Civil War, demonstrated the efficacy of field entrenchments. Both sides, particularly the numerically inferior Confederates, made extensive use of entrenchments at every battle in the campaign. In this respect, the development of field fortifications during the American Civil War was a clear forerunner of the kind of trench warfare that came to dominate World War I. Summary of tactics In the Civil War, the tactical defense dominated the tactical offense because assault formations proved inferior to the defender's firepower. The rifle musket, in its many forms, provided this firepower and caused the following specific alterations in tactics during the war: It required the attacker, in his initial dispositions, to deploy farther away from the defender, thereby increasing the distance over which the attacker had to pass. It increased the number of defenders who could engage attackers (with the addition of effective enfilading fire). It generally reduced the density of both attacking and defending formations, although in the 1864 campaigns, there was some experimentation of narrower and denser attacking formations to try to penetrate entrenched lines. It created a shift of emphasis in infantry battles toward firefights rather than shock attacks. It caused battles to last longer, because units could not close with each other for decisive shock action. It encouraged the widespread use of field fortifications. 
The habitual use of field fortifications by armies was a major innovation, but it further hindered the tactical offensive. It forced cavalry to the battlefield's fringes until cavalrymen acquired equivalent weapons and tactics, although cavalry still performed essential reconnaissance missions. It forced artillery to abandon its basic offensive maneuver, that of moving forward to within canister range of defending infantry. Tactics in the Vicksburg Campaign The basic unit of operational maneuver for Union forces in the Vicksburg campaign was the corps. For the Confederates, it was the division (there being no corps echelon in Pemberton's order of battle). On the battlefield, the brigade was the basic tactical unit for both sides. (One obvious exception to this rule was the battle of Raymond, where the Confederate force was a single brigade, and the brigade commander deployed and maneuvered regiments.) Union forces held the initiative at the operational level throughout the campaign. Not surprisingly, in most tactical encounters, Union forces were on the offensive. Union commanders relied heavily on frontal attacks-neither Grant nor his subordinates were noted for their tactical finesse. Frontal assaults in the Civil War were generally costly, but they sometimes worked, as the Vicksburg campaign demonstrates. At the battle of Port Gibson, the Union corps commander who ran the battle, Major General John A. McClernand, enjoyed a heavy numerical advantage over the Confederates, but rugged terrain and jungle-like vegetation greatly facilitated the defense. McClernand responded by packing his forces two, three, and four regiments deep, on whatever open ground was available-crowding out his artillery in the process. Whether this was a conscious adaptation to circumstances or a blind urge on McClernand's part to gather more and more force is a matter of speculation. Although McClernand's men eventually drove the Confederates from the field in a series of frontal attacks, Port Gibson does not stand out as an example of effective offensive tactics. Undoubtedly, the most successful frontal attack of the campaign occurred during the battle of the Big Black River on 17 May. Brigadier General Michael K. Lawler, a Union brigade commander, perceived a weak spot in the Confederate fieldworks opposing him. He formed his brigade into a formation reminiscent of the assault columns used by Napoleon: two regiments leading, with a third following closely in support, a fourth in reserve, and two regiments on loan from another brigade to pin the enemy with fire and serve as an exploitation force. Lawler utilized natural cover to bring his brigade close to the enemy, and when the attack came, it was vigorous and impetuous. The unsteady Confederate regiment facing Lawler broke and ran when this assault force reached its breastworks. The Napoleonic influence can be seen on a larger scale as well. During the Union march from Port Gibson to Jackson, and then to Champion Hill, Grant deployed his corps on separate routes to facilitate movement, but close enough to support each other should Confederates be encountered in force. Napoleon referred to this practice as the battalion carré, which can best be summarized by the adage, "march dispersed, fight massed." As he closed on the Confederates at Champion Hill on 16 May, Grant contrived to bring three converging corps-size columns to bear upon the enemy in a classic "concentric attack." 
The outnumbered Confederates could have been attacked from three directions and possibly destroyed, but Union command, control, and communications were inadequate to the task of coordinating the action. Only one of the three Union columns ever became fully engaged. But if Union tactical art was mediocre on average, Confederate skill was generally lower still. The Confederate forces defending Mississippi constituted a "department" and never were formally designated as an "army." Prior to the campaign, units were dispersed, having spent the winter in garrison and in fortified positions. Regiments had little recent experience operating together as brigades and divisions. Not until Grant crossed the Mississippi and moved into the interior did a major portion of the department assemble as a field army. Not surprisingly, the assembled forces had difficulty even forming up and marching as a unit, let alone fighting. At the battle of Champion Hill, the Confederate army was unresponsive and uncoordinated. Individual brigades and regiments fought hard and well, but higher-level command and control was lacking. But at the lower echelons, some of the more imaginative and daring tactics of the Vicksburg campaign were executed, or at least attempted, by Confederates. Whereas Grant's forces relied almost exclusively on the frontal attack, on two occasions during the maneuver phase of the campaign, Confederate commanders attempted to attack their enemy in flank. During the battle of Port Gibson, Brigadier General John S. Bowen tried to thwart McClernand's steamroller tactics by leading a portion of Colonel Francis M. Cockrell's brigade in an attack against the Union right flank. But as was so often the case in the Civil War, by the time Cockrell's men reached their jump-off point, the enemy had begun to respond. After initial progress, Cockrell's men were stopped by Union reserves drawn up to oppose them. Later in the campaign, at the battle of Raymond, Confederate Brigadier General John Gregg attempted another flank attack. Unaware that his brigade confronted a Union corps, Gregg detached three of his five regiments and sent them off to attack the Union right. But when the flanking forces reached their jump-off position and realized the numerical odds against them, they opted not to attack. When the campaign of maneuver ended and the siege of Vicksburg began, an entirely new set of tactics came into play. Whereas there was little formal doctrine for battlefield tactics in the Civil War (and none at all for operational maneuver), the sciences of fortification and siegecraft were well-established and understood by any military engineer trained at West Point. In keeping with the principles of fortification, the Confederates had erected strong earthwork fortifications that afforded interlocking fields of fire and commanded the approaches into Vicksburg. Trenches or "rifle pits" connected the major fortifications. After two failed assaults (by far the bloodiest frontal attacks of the campaign), the Union forces responded with a siege that was also the product of conventional doctrine. Grant established two separate forces, one to face outward and block any Confederate interference from outside, and the other to enclose Vicksburg and "reduce" its fortifications. Union troops crept up to the Confederate positions through zigzag trenches called "saps" or "approaches" and dug mines under some of the major fortifications. 
But the siege ended before the last act of the doctrinal script was played out-there was no final assault. Tactics in the Overland Campaign By May 1864, Civil War battle tactics had evolved to the point that brigades were the basic maneuver units (as opposed to individual regiments). Often, division commanders had some skill at using their brigades in a coordinated fashion, but it was still difficult to bring entire corps into unified action. Thus, both sides fought the tactical battles of the campaign by maneuvering brigades and divisions in combat. However, when conducting operational movements, both sides often moved at corps level with each corps having its own route (or occasionally, two corps following each other on the same route). Tactical battlefield fighting and the operational maneuvering between battles required tremendous coordination and synchronization, which the Civil War command system all too often failed to provide. Further, the terrain in Virginia, while not as rugged as much of the ground in the western theater, contained some heavily wooded areas such as the Wilderness, roads that could alternate between mud and dust, and numerous rivers, all of which made maneuver difficult. Much of the tactical confusion in the campaign's battles resulted from the difficulty of maneuvering large bodies of troops through difficult terrain with a command system that depended mainly on voice commands. One trend that was common in the Overland Campaign was the tendency of the Union forces to attack in more narrow formations than the Confederate forces. Often, Union brigades advanced with half of their regiments in the front line and half in a second line. The division would in turn have two of its brigades forward with one or two behind. This allowed many Union offensives to bring fresh units into their attacks, but it often prevented the Northerners from using their numbers for an overwhelming initial assault, as their units were committed piecemeal. The Confederate brigades often put all of their regiments on line, which occasionally allowed them to overlap a Union flank. Did these formations reflect evolving doctrinal ideas? Were they responses to the restrictive nature of the terrain? Did commanders choose these methods to improve their ability to control their units? Perhaps the answers lie in the personalities, experiences, and abilities of the commanders on both sides. In any case, as the Overland Campaign wore on, the Confederates were forced to rely on the defense, and in most cases, extensive entrenchments allowed them to deploy regiments on a relatively thin line, with divisions putting two or three brigades forward and one in reserve (as at Cold Harbor). At the tactical-level and, to a degree the operational-level, certain patterns emerged over the course of the campaign. First, the Confederates were usually short on manpower and were forced to rely more and more on the tactical defense and use of entrenchments. The Southerners launched two very successful attacks in the Wilderness, but for the remainder of the campaign, they generally stayed on the tactical defense. The Union forces were almost constantly on the attack, and they struggled, often in vain, to find a solution to the seemingly impenetrable Confederate defensive positions. Many Union attacks, in particular the tragic assaults at Cold Harbor on 3 June, were costly failures against the Southern defenders. 
On the other hand, attacks by Upton at Spotsylvania and by Hancock at both the Wilderness and Spotsylvania achieved some measure of success, but could not achieve a decisive victory. In each case, even when the Federals made an initial breakthrough, they found it nearly impossible to maintain enough command and control of their forces to sustain their momentum. This tactical stalemate forced the Union forces to seek an operational solution to the dominance of the defense. Thus emerged the outstanding operational characteristic of the Overland Campaign—Grant's attempts to maneuver around Lee's flanks and force a battle in a position favorable to the Union. Generally, Grant attempted to turn Lee's right flank, which would place Union forces between Lee and Richmond. In these conditions, the Federals might be able to fight the Confederates in a sort of “meeting engagement” outside of entrenchments, or perhaps even force Lee into attacking the Union troops in their own prepared positions. The major engagements in the campaign resulted from these operational moves, but in almost every case, Lee was able to maneuver his troops into position before the Union forces arrived. In several cases, bad Federal staff work, or just plain bad luck, also hindered the Union moves. In one case—the crossing of the James—the Union forces performed their flanking maneuver superbly and actually “stole a march” on Lee. Yet, bungled Union assaults squandered this success at Petersburg from 15 to 18 June. In sum, the Overland Campaign was like many other Civil War campaigns in terms of tactics. Attacks were often piecemeal, frontal, and uncoordinated, and they generally failed to dislodge defenders. On the other hand, the lack of a single decisive battle forced both Grant and Lee to think more in terms of a sustained campaign, and the series of their maneuvers and battles fought over the Virginia landscape might even be considered an early example of what modern military theorists call “the operational art.” The balance of two such skillful and determined opponents fighting in the conditions of 1864 was bound to lead to horrific casualties until one side or the other was exhausted. Logistics Victory on Civil War battlefields seldom hinged on the quality or quantity of tactical logistics. At the operational and strategic levels, however, logistical capabilities and concerns always shaped the plans and sometimes the outcomes of campaigns. As the war lengthened, the logistical advantage shifted inexorably to the North. The Federals controlled the majority of the financial and industrial resources of the nation. With their ability to import any needed materials, they ultimately created the best‑supplied army the world had yet seen. Despite suffering from shortages of raw materials, the Confederates generated adequate ordnance but faltered gradually in their ability to acquire other war materiel. The food supply for Southern armies was often on the verge of collapse, largely because limitations of the transportation network were compounded by political‑military mismanagement. Still, the state of supply within field armies on both sides depended more on the caliber of the people managing resources than on the constraints of available materiel. In Lee's case, the Army of Northern Virginia managed to scrape by in 1864, although the need for forage and food sometimes forced Lee to disperse units to gather supplies. 
The situation grew worse throughout the year, but did not become critical until after the loss of the Shenandoah Valley added to the gradual decay of the Army during the siege at Petersburg. One of the most pressing needs at the start of the war was for sufficient infantry and artillery weapons. With most of the government arsenals and private manufacturing capability located in the North, the Federals ultimately produced sufficient modern firearms for their armies, but the Confederates also accumulated adequate quantities—either from battlefield captures or through the blockade. In addition, exceptional management within the Confederate Ordnance Bureau led to the creation of a series of arsenals throughout the South that produced sufficient quantities of munitions and weapons. The Northern manufacturing capability could have permitted the Federals eventually to produce and outfit their forces with repeating arms, the best of which had been patented before 1861. Initially, however, the North's conservative Ordnance Bureau would not risk switching to a new, unproven standard weapon that could lead to soldiers wasting huge quantities of ammunition in the midst of an expanding war. By 1864, after the retirement of Chief of Ordnance James Ripley and with President Lincoln's urging, Federal cavalry received seven‑shot Spencer repeating carbines, which greatly increased battle capabilities. Both sides initially relied on the states and local districts to provide some equipment, supplies, animals, and foodstuffs. As the war progressed, more centralized control over production and purchasing emerged under both governments. Still, embezzlement and fraud were common problems for both sides throughout the war. The North, with its preponderance of railroads and developed waterways, had ample supply and adequate distribution systems. The South's major supply problem was subsistence. Arguably, the South produced enough food during the war to provide for both military and civilian needs, but mismanagement, parochial local interests, and the relatively underdeveloped transportation network often created havoc with distribution. In both armies, the Quartermaster, Ordnance, Subsistence, and Medical Bureaus procured and distributed equipment, food, and supplies. The items for which these bureaus were responsible are similar to the classes of supply used today. Some needs overlapped, such as the Quartermaster Bureau's procurement of wagons for medical ambulances, but conflicts of interest usually were manageable. Department and army commanders requested needed resources directly from the bureaus, and bureau chiefs wielded considerable power as they parceled out occasionally limited resources. Typically, materiel flowed from the factory to base depots as directed by the responsible bureaus. Supplies were then shipped to advanced depots, generally a city on a major transportation artery safely within the rear area of a department. During campaigns, the armies established temporary advance depots served by rail or river transportation—Grant's forces made particularly heavy use of resupply from the navy in the Overland Campaign. From these points, wagons carried the supplies forward to the field units. This principle is somewhat similar to the modern theater sustainment organization. The management of this logistics system was complex and crucial. 
A corps wagon train, if drawn by standard six-mule teams, would be spread out from five to eight miles, based on the difficulty of terrain, weather, and road conditions. The wagons, which were capable of hauling 4,000 pounds in optimal conditions, could carry only half that load in mountainous terrain. Sustenance for the animals was a major restriction, because each animal required up to 26 pounds of hay and grain a day to stay healthy and productive. Bulky and hard to handle, this forage was a major consideration in campaign planning. Wagons delivering supplies more than one day's distance from the depot could be forced to carry excessive amounts of animal forage. If full animal forage was to be carried, the required number of wagons to support a corps increased dramatically with each subsequent day's distance from the forward depot. Another problem was the herds of beef that often accompanied the trains or were appropriated en route. This provided fresh (though tough) meat for the troops, but slowed and complicated movement. The bulk-supply problems were alleviated somewhat by the practice of foraging, which, in the proper season, supplied much of the food for animals and men on both sides. Foraging was practiced with and without command sanction, wherever an army went, and it became command policy during Ulysses S. Grant's Vicksburg campaign and William T. Sherman's Atlanta campaign. Foraging was less prevalent in the east, especially by 1864, for the simple reason that northeastern Virginia had already been picked clean by three years of war. Both sides based their supply requirements on pre-war regulations and wartime improvisation. BUREAU SYSTEM. Bureau chiefs and heads of staff departments were responsible for various aspects of the Army's administration and logistics and reported directly to the Secretary of War. The division of responsibility and authority over them among the Secretary of War, the Assistant Secretaries, and the General in Chief was never spelled out, and the supply departments functioned independently and without effective coordination throughout most of the Civil War, although much improved after Grant took command. Logistical support was entrusted to the heads of four supply departments in Washington: the Quartermaster General, responsible for clothing and equipment, forage, animals, transportation, and housing; the Commissary General for rations; the Chief of Ordnance for weapons, ammunition, and miscellaneous related equipment; and the Surgeon General for medical supplies, evacuation, treatment, and hospitalization of the wounded. For other support there were the Adjutant General, the Inspector General, the Paymaster General, the Judge Advocate General, the Chief of Engineers, and the Chief of Topographical Engineers. The military department was the basic organizational unit for administrative and logistical purposes, and the commander of each department controlled the support in that area with no intervening level between his departmental headquarters and the bureau chiefs in Washington. There were six departments when the war started (East, West, Texas, New Mexico, Utah, and Pacific); however, later on, boundaries changed and several geographical departments might be grouped together as a military "division" headquarters. Army depots were located in major cities: Boston, New York, Baltimore, Washington, Cincinnati, Louisville, St. Louis, Chicago, New Orleans, and San Francisco. Philadelphia was the chief depot and manufacturing center for clothing. 
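The wagon-and-forage arithmetic described above can be made concrete with a rough sketch. The figures come from this section (up to 26 pounds of hay and grain per animal per day, standard six-mule teams, and a practical load of roughly 2,000 pounds per wagon on ordinary roads); the assumption that a wagon covers one day's march per leg of its round trip, and that the team's forage for the whole trip rides on the wagon, is an illustrative simplification, not a claim about any particular train.

# Illustrative only: how a wagon's useful payload shrinks as the haul
# from the forward depot lengthens, because the team eats part of the load.
FORAGE_PER_ANIMAL_LBS = 26    # hay plus grain, per animal, per day (upper figure cited above)
TEAM_SIZE = 6                 # standard six-mule team
PRACTICAL_LOAD_LBS = 2000     # typical load on ordinary roads (assumed here)

def net_delivery_lbs(days_one_way):
    """Pounds of supplies actually delivered if the team's round-trip forage must be carried."""
    round_trip_days = 2 * days_one_way
    team_forage = FORAGE_PER_ANIMAL_LBS * TEAM_SIZE * round_trip_days
    return max(PRACTICAL_LOAD_LBS - team_forage, 0)

for days in range(1, 6):
    print(days, "day(s) from the depot:", net_delivery_lbs(days), "lbs delivered")

On these assumptions, a wagon one day out delivers about 1,700 pounds of supplies, while a wagon five days out delivers only about 440; the rest of its load goes to feeding its own team. This is why the number of wagons required to sustain a corps rose so sharply with each additional day's distance from the forward depot.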
Advanced and temporary supply bases were established as needed to support active operations. Until 1864, most depot commanders held only the rank of captain; despite their low rank and meager pay, these officers controlled tremendous resources of men, money, and materiel. There were a few exceptions, notably COL Daniel H. Rucker at the Washington QM Depot and COL George D. Ramsay at the Washington Arsenal. The primary function of the depots was to procure supplies and prepare them for use in the field by repacking, assembling, or other similar tasks. Procurement was decentralized. Purchases were made on the market by low-bid contract in the major cities and producing areas by depot officers. Flour and some other commodities were procured closer to the troops when possible. Cattle were contracted for at specific points, and major beef depots were maintained at Washington (on the grounds of the unfinished Washington Monument), Alexandria, VA, and Louisville, KY. The Subsistence Department developed a highly effective system of moving cattle on the hoof to the immediate rear of the armies in the field, to be slaughtered by brigade butchers and issued to the troops the day before consumption. The Confederate Army used a similar system with depots at Richmond, Staunton, Raleigh, Atlanta, Columbus (GA), Huntsville, Montgomery, Jackson (MS), Little Rock, Alexandria (LA), and San Antonio.
SUPPLY OPERATIONS. Most unit logistics were accomplished at regimental level. The regimental QM was normally a line lieutenant designated by the regimental commander. His duties included submitting requisitions for all QM supplies and transport; accounting for regimental property, including tentage, camp equipment, extra clothing, wagons, forage, and animals; and issuing supplies and managing the regimental trains. The regimental commissary officer, also designated from the line, requisitioned, accounted for, and issued rations. The regimental ordnance officer had similar duties regarding arms and ammunition and managed the movement of the unit ammunition train. In theory, logistical staff positions above the regiment were filled by a fully qualified officer of the supply department concerned. In practice, experienced officers were perpetually in short supply, and many authorized positions were filled by officers and noncommissioned officers from line units or left vacant, with the duties performed by someone as an additional responsibility. This problem existed in both armies, where inexperience and ignorance of logistical principles and procedures generally reduced levels of support.
The Soldier's Load: About 45 lbs. (Union) - musket and bayonet (14 lbs.), 60 rounds, 3-8 days' rations, canteen, blanket or overcoat, shelter half, ground sheet, mess gear (cup, knife, fork, spoon, skillet), personal items (sewing kit, razor, letters, Bible, etc.). Confederates usually carried less, about 30 lbs.
Official US Ration: 20 oz. of fresh or salt beef or 12 oz. of pork or bacon; 18 oz. of flour or 20 oz. of corn meal (bread in lieu if possible); 1.6 oz. of rice, .64 oz. of beans, or 1.5 oz. of dried potatoes; 1.6 oz. of coffee or .24 oz. of tea; 2.4 oz. of sugar; .54 oz. of salt; .32 gill of vinegar.
Union Marching Ration: 16 oz. of "hardtack," 12 oz. of salt pork or 4 oz. of fresh meat, 1 oz. of coffee, 3 oz. of sugar, and salt.
Confederate Ration: Basically the same, but with slightly more sugar and less meat, coffee, vinegar, and salt, and seldom issued in full.
In the Army of Northern Virginia, usually only half of the meat ration was issued, and coffee was available only when captured or exchanged through the lines for sugar and tobacco. During the Maryland campaign, foraging was disappointing, so Confederate soldiers supplemented the issue ration with corn from the fields and fruit from the orchards.
Forage: Each horse required 14 lbs. of hay and 12 lbs. of grain per day; mules needed the same amount of hay and 9 lbs. of grain. No other item was so bulky and difficult to transport.
Union Annual Clothing Issue: 2 caps, 1 hat, 2 dress coats, 3 pr. trousers, 3 flannel shirts, 3 flannel drawers, 4 pr. stockings, and 4 pr. bootees (high-top shoes). Artillerymen and cavalrymen were issued jackets and boots instead of bootees. Allowance = $42.
Confederate: Officially, the Confederate soldier was almost equally well clothed, but the QM was seldom able to supply the required items, and soldiers wore whatever came to hand, the home-dyed butternut jackets and trousers being characteristic items. Shortages of shoes were a constant problem.
Tents: The Sibley (tepee) tent held 20 men, feet to the center pole; early in the war the Union introduced the tente d'abri (shelter half), used by the French Army and called the "dog" tent by witty soldiers, now the pup tent.
Baggage: Enlisted men of both armies were required to carry their own. A Union order of September 1862 limited officers to blankets, one small valise or carpet bag, and an ordinary mess kit. Confederate standards allowed generals 80 lbs., field officers 65 lbs., and captains and subalterns 50 lbs.
Wagons: The Union's standard 6-mule Army wagon could haul 4,000 lbs. on good roads in the best of conditions but seldom exceeded 2,000 lbs. (or 1,800 lbs. with 4 mules) at a rate of 12-24 miles a day. Confederates often used a 4-mule wagon with smaller capacity. The Army of the Potomac authorized wagons as follows: corps hq: 4; div and bde hq: 3; regt of inf: 6; arty bty and cav: 3. One wagon per regiment was reserved for hospital stores and one for grain for officers' horses. The Army of Northern Virginia used 4-mule wagons as follows: div hq: 3; bde hq: 2; regt hq: 1; regt's medical stores: 1; regt's ammunition: 1; plus 1 wagon per 100 men per regt for baggage, camp equipment, rations, etc.
Numbers of supply wagons per 1,000 men: Army of the Potomac (1862), 29; Jackson in the Valley (1862), 7; Army of Northern Virginia (1863), 28; Army of the Potomac (1864), 36; Sherman's March to the Sea (1864), 40; Napoleon's standard, 12.5.
Logistics in the Vicksburg Campaign
When Major General Earl Van Dorn's cavalry destroyed Grant's advance depot at Holly Springs in December 1862, it wrecked Grant's plan for an overland, railroad-centered attack to support Sherman's Chickasaw Bayou expedition. Although the outcome of that expedition would probably not have been altered, this episode illustrates how closely operational planning relied on a fixed logistical base for overland operations. Grant, in his memoirs, however, credits the Holly Springs raid with providing him the key to a less-conventional strategy. Forced to rely upon foraging and requisition in the surrounding countryside to feed his army in the weeks following Van Dorn's raid, Grant came to realize that the Mississippi valley, though relatively underpopulated, was indeed a rich agricultural area, abounding in beef, hogs, and grain. Thus, Grant credited Van Dorn with showing him the solution to his supply dilemma should he choose to operate far from any secure logistical pipeline. War materiel (weapons, ammunition, medical supplies, etc.)
would still have to be hauled by wagons, along with some limited food items such as coffee and bread. The countryside, however, could sustain his army with bulky animal forage, meat, and other provisions. In January 1863, Grant established an impressive logistics system running from his depots at Cairo, Illinois, and Memphis to advance bases established along the levees at Lake Providence, Milliken's Bend, and Young's Point, the latter being just ten river miles from Vicksburg. Supplies, as well as troops, moved downriver on a sizeable fleet of army-contracted riverboats. These transports varied considerably in size, but many were capable of carrying 300,000 pounds of supplies—the equivalent of 150 wagonloads. At the end of March, when Grant decided to move his army south of Vicksburg on the Louisiana side of the river, he hoped to have water transport most or all of the way. Union engineers, augmented by details from McClernand's and Sherman's corps, dug a canal at Duckport linking the Mississippi to the network of bayous paralleling the army's route of march. The canal was completed successfully, but falling water levels made it useless before it could do any good. As a last resort, Union logisticians pushed wagon trains along the sixty-three-mile route that McClernand's and McPherson's corps traveled, from Milliken's Bend to Bruinsburg. Some supplies were hauled by wagon from Milliken's Bend to Perkins' Plantation, just below New Carthage. There, they were loaded on riverboats that had run by the Vicksburg batteries for delivery to the army downstream. About 11 May, over a week after the bulk of the army had crossed to the east bank, Sherman's men completed a new road from Young's Point to Bower's Landing, across the base of De Soto Point. This road shortened the wagon haul to twelve miles, still a two-day haul over the rough roads. From Bower's Landing, steamers carried supplies down the river to the newly won logistical base at Grand Gulf. The net effect of these efforts was to give Grant two sets of well-stocked advance depots, one below Vicksburg and several just above the city. After Grant moved away from his new base at Grand Gulf, his army had only to reestablish links with the river, and its supply problems would essentially disappear. The Confederates knew this, and expected Grant to stay close to the river during his advance toward Vicksburg. Thus, his movement inland came as a surprise. In his postwar memoirs, Grant stated that he "cut loose" from his supply lines when he pushed inland from Grand Gulf. Many historians have taken those words at face value, asserting that Grant's men relied entirely upon food and forage gathered from the countryside. Grant, however, never cut completely loose from his supply lines, nor did he intend his words to convey that. As his army maneuvered east of the river, a steady stream of wagons carried supplies from Young's Point to Bower's Landing, where the supplies were loaded on steamboats and carried to Grand Gulf. From Grand Gulf, huge wagon trains, escorted by brigades hurrying forward to join the main force, carried supplies to the army. A "line of supply" was absent only in the sense that Union troops did not occupy and garrison the supply route. An aggressive Confederate thrust into the area between Grand Gulf and Grant's army might have thwarted the Union campaign; Grant's men could forage for food, but only so long as they moved forward. Moreover, the barns and fields of Mississippi did not provide any ammunition to the foragers.
One of the ironies of the campaign is that Pemberton's single offensive action, the attempt to strike south from Edwards toward Dillon's Plantation on 15 May, would probably have led him to Grant's ammunition train. However, heavy rains, confusion, and indecision led instead to the battle at Champion Hill. During the campaign of maneuver, Grant was well served by his logistical staff in the rear and by the aggressive support of Rear Admiral David Porter. As Grant's army neared Vicksburg, Porter sensed the opportunity to establish a logistic base just north of Vicksburg on the Yazoo River at Johnson's Plantation (the site of Sherman's landing in the abortive Chickasaw Bayou expedition). The Navy's initiative led to supplies being on the ground by 18 May when Grant's army reached the outer works around the city. That, and efficient construction of roads from the plantation by Federal engineers, enabled Grant to fulfill a promise to provide hardtack for his troops by 21 May. At the same time, Porter's gunboats reduced the Warrenton batteries just a few miles below the city and enabled Grant's logisticians to move the lower supply base from Grand Gulf to Warrenton. These two bases cut the overland wagon haul to a maximum of six miles for units manning the siege lines. Thus, as Grant closed on Vicksburg, his supply situation changed dramatically, almost overnight, whereas the Confederates then had to rely almost completely on whatever stores had been placed in the city in advance. Curiously, the Confederate logistical situation in the Vicksburg campaign was almost uniformly worse than that of the Union forces. The fact that the Confederates were conducting defensive operations within their own territory resulted in as many logistical problems as advantages. The bountiful forage discovered by Grant's troops was generally not available to the Confederate army, due in large part to the farmers' reluctance to part with their produce. In March, Pemberton complained of a shortage of beef, yet one of his staff officers noted an abundance of cattle in the region between Vicksburg and Jackson. Federal surgeons found apothecary shelves in Jackson well stocked with drugs, yet Confederate surgeons were critically short of medical supplies. The explanation, however, is simple: the invading Federals could take what they needed, whereas the defending Confederates could not so easily requisition from their own people. Thus, the Confederates had to rely upon their established logistical systems and procedures. Confederate logistical doctrine in the Civil War called for armies to supply themselves, as far as possible, from the resources of the area in which they were stationed. There was no shortage of basic supplies in the Vicksburg region. The Mississippi Delta (the area between the Mississippi and Yazoo Rivers) and farmlands to the east produced large quantities of food for man and beast. The transportation net, with the main rail line running from Vicksburg to the major rail nexus at Jackson, and the numerous navigable waterways, offered the Confederates the ability to stockpile or shift supplies quickly. The telegraph network provided communications that could support the management of logistical resources. Depots and manufacturing centers in Jackson, Enterprise, and Columbus, Mississippi, helped support a variety of Confederate needs. Three major factors, however, limited Pemberton's ability to optimize his logistical support.
The first problem was the inefficiency of, and competing priorities between, the Confederate quartermaster and commissary departments. Many of the supplies from Pemberton's area were needed to support other military departments. Even so, the management of these resources was inefficient, and not enough funds were available for local purchase of food. Pemberton also had concerns about his own staff; officials in Richmond had received civilian complaints about Pemberton's quartermaster. This problem, however vexing, did not prove insurmountable. The second problem, Union naval superiority, was largely beyond Pemberton's control. Prior to the war, most bulk commodities were moved by water. But in the course of the Vicksburg campaign, Porter's gunboats denied the Confederates the use of the Mississippi and its tributaries, thus throwing heavier demands on the overtaxed road and rail transport systems. Even before Grant's army crossed to the east bank of the Mississippi, Pemberton found it difficult to gather and distribute supplies. The third and greatest problem hampering Confederate logistical efforts was Pemberton's lack of overall vision for the campaign. In the absence of a campaign plan, the Confederate logisticians, like Pemberton himself, could only react to Union initiatives. Supplies could not be positioned to support any particular scheme of maneuver. After Grant seized and destroyed Jackson, all supplies became critical for Pemberton. With Porter on the Mississippi and with the eastward rail lines interdicted, Pemberton was effectively cut off from any resources beyond the immediate vicinity of his army. Fortunately, his largest supply depots were in Vicksburg, a fact that helps explain Pemberton's reluctance to risk the loss of the city. Rations that could be stretched out for perhaps two full months were stockpiled inside Vicksburg before 18 May. Ordnance officers had managed to gather significant quantities of small arms and ammunition as well. The main shortages in the city after the siege began were artillery, medical supplies, engineer tools, and percussion caps for rifle-muskets. The latter shortage was eased when couriers penetrated the Union siege lines with several hundred thousand caps. As the siege progressed, the contrast between Union and Confederate logistics became increasingly pronounced. Confederate stockpiles dwindled, rations were cut, and ammunition expenditure was curtailed. But the Union forces, situated as they were on North America's greatest transportation artery, received reinforcements and supplies in seemingly limitless quantities. Predictably, Confederate morale deteriorated until Pemberton felt that his troops had lost the ability and will to fight. Finally, logistics played a role in determining the final surrender terms. An important factor influencing Grant's decision to parole the entire Vicksburg garrison of over 29,000 men was the simple fact that the Confederate government, not the Federal army, would then have to deal with transporting and feeding those troops.
Logistics in the Overland Campaign
Logistics played a crucial role in the Overland Campaign in a variety of ways. First, the overall lack of resources for the Southern forces (coupled with manpower shortages) constrained the Confederate options and helped to keep Lee on the defense for most of the campaign. Second, Grant made extensive use of the Federal Navy's dominance of the sea and rivers to skillfully shift his bases to secured ports as he made his flanking moves to the south.
In fact, the tempo of Grant's moves was largely determined by the location and availability of his next base. Finally, Lee's forces relied almost totally on the railroads for their supplies, and thus crucial rail nodes like Hanover Junction and Petersburg were critical locations that Lee had to defend and Grant wanted to take. Looking first at the Northern perspective, supplies for the eastern theater came from all parts of the North across an extensive and effective rail net that eventually funneled to Baltimore and Washington, DC. The supplies then had to be transported from these major ports and railheads to the armies in the field. At the start of the Overland Campaign, Grant's main forces (the Army of the Potomac and the IX Corps) received their logistics support from the port of Alexandria (across the Potomac River from Washington). The Orange and Alexandria railroad connected the Union camps at Brandy Station with the supply base at Alexandria. In their initial move into the Wilderness, the Union forces needed an extensive wagon train to carry the minimum requirements expressed in the supply regulations (see table 4). The army's animals alone needed 477 tons of forage each day. Grant tried to cut back on nonessential items and decreed a rigorous reduction in wagons, but he still ended up with 4,300 wagons and 835 ambulances at the start of the campaign. After the Battle of the Wilderness, Grant decided to continue to the south in part driven by the desire to cut Lee's army from its rail supply lines: the Richmond, Fredericksburg, and Potomac (coming from Richmond), and the Virginia Central which brought supplies from the Shenandoah. In order to make this move, the Federals shifted their base to Aquia Landing and Belle Plain on the Potomac River. These ports were securely positioned behind the moving Union forces and connected by a short rail line to a forward position at Fredericksburg. After Spotsylvania, Grant again shifted to the south and southeast, all the time hoping to get astride the railroads that were Lee's lifeline. In particular, the fighting on the North Anna centered on the Federal attempt to seize Hanover Junction where the Virginia Central Railroad met the Richmond, Fredericksburg, and Potomac line. In these moves, first to the North Anna, then further south to Cold Harbor, the Union forces deftly executed two more base changes: first to Port Royal on the Rappahannock River and then to White House on the Pamunkey River (which in turn flows into the York River). There was no rail line from Port Royal to the army, but the distance from the port to the troops was a relatively short wagon haul for the trains. At White House, the same base used by McClellan in the Peninsula Campaign in 1862, the Union forces could use the Richmond and York River Railroad to bring supplies from the port closer to the front lines at Cold Harbor. Grant's final move in the campaign brought him to Petersburg, south of the James River. This final flanking movement was clearly aimed at the five rail lines that converged at Petersburg. For this final move, he had the advantage of shifting his base to City Point, a port on the James that was already in Union hands and had been supporting Butler's Army of the James in the Bermuda Hundred Campaign. During the siege at Petersburg, City Point would become one of the busiest ports in the world—a testimony to the ample resources and logistical might of the North. 
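The scale of these figures bears a quick check. The sketch below is illustrative only; it assumes short tons of 2,000 pounds and the roughly 2,000-pound practical wagon load cited earlier in this section, both simplifications rather than documented campaign figures.

# Illustrative only: the Army of the Potomac's daily forage requirement expressed in wagonloads.
FORAGE_TONS_PER_DAY = 477        # figure cited above for the army's animals
LBS_PER_TON = 2000               # assuming short tons
PRACTICAL_WAGON_LOAD_LBS = 2000  # typical load on ordinary roads (assumed)

forage_lbs_per_day = FORAGE_TONS_PER_DAY * LBS_PER_TON
wagonloads_per_day = forage_lbs_per_day // PRACTICAL_WAGON_LOAD_LBS
print(wagonloads_per_day, "wagonloads of forage per day")  # roughly 477

Under these assumptions, forage alone accounted for roughly 477 wagonloads every day, before a single ration or artillery round moved, which helps explain why even Grant's deliberately pared-down train still numbered some 4,300 wagons.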
In sum, even if Grant's central objective was Lee's army, his geographic goals were shaped by the Southerners’ own rail supply lines. At the same time, he made good use of sea lines of communications to keep his own forces well supplied and skillfully shifted his base with each new flanking movement. On the Southern side, Lee's logistical problems were at once simpler in concept but more difficult in execution. Lee's resupply system was relatively straightforward. The Army of Northern Virginia received a large amount of foodstuffs and forage from the Shenandoah Valley. Most of these supplies came via the Virginia Central Railroad. The remainder of his supplies came from the Deep South along several rail lines that converged at Petersburg. Then the supplies moved from Petersburg, through Richmond and Hanover Junction to Lee's army in the field on the Richmond, Fredericksburg, and Potomac Railroad. Lee did not have to worry about shifting bases; he simply needed to protect these rail lines to keep his army supplied. The difficulty for Lee was that the South was constantly strapped for resources, and the Army of Northern Virginia received just enough supplies to keep up its operations. Occasionally this affected Lee's planning, as when he was forced to keep a large part of his cavalry dispersed prior to the Wilderness to gather forage. Also, the Confederate commander's logistical weaknesses, when added to his manpower shortages, may have discouraged him from taking a more offensive approach after the Wilderness. On the other hand, while the Confederates never enjoyed the logistical plenty of their Union counterparts, Lee's army was never faced with starvation or a shortage of arms and ammunition during the Overland Campaign. Engineers Engineers on both sides performed many tasks essential to every campaign. Engineers trained at West Point were at a premium; thus, many civil engineers, commissioned as volunteers, supplemented the work being done by engineer officers. The Confederates, in particular, relied on civilian expertise because many of their trained engineer officers sought line duties. State or even local civil engineers planned and supervised much of the work done on local fortifications. In the prewar US Army, the Corps of Engineers contained a handful of staff officers and one company of trained engineer troops. This cadre expanded to a four-company Regular engineer battalion. Congress also created a single company of topographic engineers, which joined the Regular battalion when the engineer bureaus merged in 1863. In addition, several volunteer pioneer regiments, some containing up to 2,000 men, supported the various field armies. The Confederate Corps of Engineers, formed as a small staff and one company of sappers, miners, and pontoniers in 1861, grew more slowly and generally relied on details and contract labor rather than established units with trained engineers and craftsmen. Engineer missions for both sides included construction of fortifications; repair and construction of roads, bridges, and, in some cases, railroads; demolition; limited construction of obstacles; and construction or reduction of siege works. The Federal Topographic Engineers, a separate prewar bureau, performed reconnaissance and produced maps. The Confederates, however, never separated these functions in creating their Corps of Engineers. 
Experience during the first year of the war convinced the Federals that all engineer functions should be merged under a single corps because qualified engineer officers tended to perform all related functions. As a result, the Federals also merged the Topographic Engineers into their Corps of Engineers in March 1863. Bridging assets included wagon-mounted pontoon trains that carried either wooden or canvas-covered pontoon boats. Using this equipment, trained engineer troops could bridge even large rivers in a matter of hours. The most remarkable pontoon bridge of the war was the 2,200-foot-long bridge built by the Army of the Potomac engineers in 1864 over the James River at the culmination of the Overland Campaign. It was one of over three dozen pontoon bridges built in support of campaigns in the east that year. In 1862, the Confederates began developing pontoon trains after they had observed their effectiveness. Both sides in every campaign of the war traveled over roads and bridges built or repaired by their engineers. Federal engineers also helped clear waterways by dredging, removing trees, or digging canals. Fixed fortifications laid out under engineer supervision played critical roles in the Vicksburg campaign and in actions around Richmond and Petersburg. Engineers also supervised the siege works attempting to reduce those fortifications. While the Federal engineer effort expanded in both men and materiel as the war progressed, the Confederate efforts continued to be hampered by major problems. The relatively small number of organized engineer units available forced Confederate engineers to rely heavily on details or contract labor. Finding adequate manpower, however, was often difficult because of competing demands for it. Local slave owners were reluctant to provide labor details when slave labor was crucial to their economic survival. Despite congressional authorization to conscript 20,000 slaves as a labor force, state and local opposition continually hindered efforts to draft slave labor. Another related problem concerned the value of Confederate currency. Engineer efforts required huge sums for men and materiel, yet initial authorizations were small, and although congressional appropriations grew later in the war, inflation greatly reduced effective purchasing power. A final problem was the simple shortage of iron resources, which severely limited the Confederates’ ability to increase railroad mileage or even produce iron tools. In 1861, maps for both sides were also in short supply; for many areas in the interior, maps were nonexistent. As the war progressed, the Federals developed a highly sophisticated mapping capability. Federal topographic engineers performed personal reconnaissance to develop base maps, reproduce them by several processes, and distribute them to field commanders. Photography, lithographic presses, and eventually photochemical processes gave the Federals the ability to reproduce maps quickly. Western armies, which usually operated far from base cities, carried equipment in their army headquarters to reproduce maps during campaigns. By 1864, annual map production exceeded 21,000 copies. Confederate topographic work never approached the Federal effort in quantity. Confederate topographers initially used tracing paper to reproduce maps. Not until 1864 did the use of photographic methods become widespread in the South. However, the South had a large advantage in the quality of its maps in the eastern theater in the 1864 campaign. 
In particular, the Confederates were fighting on their own terrain (Virginia) where many officers knew the ground. In addition, prior to the war, Virginia had produced county maps of the state that proved to be a great advantage for Lee's army. Engineers in the Vicksburg Campaign The engineering operations conducted in support of the Vicksburg campaign were perhaps the most diverse and complex of the war. For much of the campaign, Federal engineers focused on mobility operations, while Confederate engineers emphasized countermobility, particularly in denying the Federals the use of streams and bayous in the swamps north of the city. Confederate engineers also supervised the construction and repair of the fortifications around the city. During the siege phase of the campaign, Grant's engineers focused on the reduction of those works, utilizing procedures such as sapping, mining, and other related tasks, as well as the improvement of roads and landings to enhance logistical support. This wide range of activities, which required engineers on both sides to construct roads, emplace or construct bridges, clear or obstruct waterways, construct field works, emplace batteries, divert the flow of rivers, and numerous other tasks, is made even more remarkable by the limited numbers of trained engineers available to accomplish them. Grant's Army of the Tennessee contained three formally organized engineer units. The largest was the Missouri Engineer Regiment of the West. Organized initially in July 1861, its ranks held skilled railroad men, engineers, and ironworkers recruited from St. Louis and surrounding areas. By the time of the Vicksburg campaign, it had extensive experience in a variety of construction operations and had been involved in some minor skirmishing. The regiment, with a strength of roughly 900 men, constructed roads around Young's Point in February 1863 and in March cut levees on the west side of the river and constructed casemated battery positions opposite Vicksburg. In April, six companies of the regiment returned to Memphis to begin the repair of the Memphis and Charleston Railroad. Companies A, D, F, and I, which were designated the 2d Battalion, remained with Grant's main force during the decisive phases of the campaign. The other two formally organized engineer units were the Kentucky Company of Engineers and Mechanics and Company I of the 35th Missouri, which was designated as the army's pontoon company. Since Grant then had barely 500 "trained" engineers at his disposal for his operations below Vicksburg, most of his divisions detailed men for engineer tasks or designated one of their infantry companies as engineer troops. Known as "pioneer" companies and detachments, or as the "pioneer corps" of their parent divisions, these ad hoc units generally undertook missions requiring higher degrees of skill than those assigned to normal labor details. The most strenuous engineer labors of the campaign took place between January and April 1863, as Grant sought ways to bypass the strong Confederate position at Vicksburg by creating flanking routes through the bayou country. Several of these efforts involved alternate water routes around the city. One scheme involved digging a canal that would divert the Mississippi through the peninsula directly opposite Vicksburg, a project initiated during Farragut's expedition in June 1862. Beginning in January 1863, details of infantry under engineer supervision labored the better part of two months before the rising river flooded them out. 
A month later, labor details working under engineer supervision cut the levee at Yazoo Pass to divert Mississippi River water into the Delta region in hopes that gunboats and transports could find a way to Vicksburg from the north. In March, the 1st Missouri Engineers used black powder to blow a gap in the western levee along the Mississippi River at Lake Providence. The plan was to flood enough of the countryside to link the bayous and rivers west of the Mississippi and thus provide an alternate route for steamboats all the way to the Red River. Once the levees were broken, the engineers used man-powered underwater saws, which swung pendulum-like from barge-mounted trestles, to cut off trees and stumps and allow passage of vessels. This backbreaking work required the men to spend much of their time in the water untangling the saws. It took the Missouri Engineers eight days to clear a two-mile stretch of bayou. Unfortunately, falling water levels led to the abandonment of the project. Grant's subsequent march from Milliken's Bend to Hard Times, a distance of sixty-three miles through the swampy floodplain, entailed a vast amount of engineering work. Much of the roadbed had to be corduroyed (paved with logs laid side-by-side); stretches of quicksand required layers of planking to create sufficient buoyancy for wagons; and numerous water courses had to be bridged using materials found on site. Engineers and infantry details constructed eight major bridges, totaling more than 1,700 feet, along the road to Hard Times. Again, the shortage of qualified engineer troops meant that most of the actual labor involved details of infantry, under the supervision of engineer-trained officers. This road-building effort continued on the west bank even after Grant crossed the river at Bruinsburg and pushed inland. During the campaign of maneuver on the east side of the river, Union bridge builders demonstrated their ingenuity to the fullest. Twenty-two trestle, suspension, pontoon, and raft bridges were employed in the campaign. Engineers used all available materials in their bridges, including boards pulled from buildings, cotton bales, telegraph wire, vines, cane, and flatboats, in addition to the supplies forwarded from engineer depots upriver. The pontoon company of Sherman's corps ultimately brought along its inflatable rubber pontoons, which were employed in the crossing of the Big Black River. Once Grant decided to initiate a formal siege to reduce Vicksburg, he was faced with a critical shortage of trained engineer officers. Grant ordered all officers with West Point training or civil engineer experience to assist chief engineer Captain Frederick E. Prime and the other three engineer officers on Grant's staff. These men supervised infantry details at the different approaches, while the trained engineer units worked in the saps and trenches. Captain Andrew Hickenlooper, Major General John A. Logan's chief engineer, was able to procure experienced coal miners, drawn from the ranks, to construct the mine undertaken by Logan's division. On the Confederate side, the engineering effort in this campaign came under the general authority of chief engineer Major Samuel H. Lockett, who arrived at Vicksburg in June 1862. At that time, Vicksburg's only fortifications consisted of a few batteries along the river. Union naval bombardments on 27–28 July 1862 persuaded the Confederate command to fortify the city on both the landward and riverfronts. 
Lockett spent the month of August surveying the rough terrain and planning on how best to utilize it for defensive purposes. On 1 September 1862, the actual construction began, using hired or impressed slave labor. Lockett's fortified line extended nine miles, from the river above Vicksburg to the river below. Thirteen river batteries studded the bluffs overlooking the Mississippi. Snyder's (Haynes') Bluff to the north and Warrenton to the south were also fortified. In addition, the Confederates also constructed a set of floating barriers called "rafts" across the Yazoo River to block incursions by Union gunboats. When Pemberton assumed command of the department on 1 November 1862, Lockett's responsibilities increased. He exercised authority over the entire area from Holly Springs to Port Hudson and from Vicksburg to Jackson. As part of his duties, Lockett surveyed defensive positions around Jackson and Edwards Station. In May 1863, after Grant had crossed the river, Lockett laid out defensive bridgeheads at several crossing sites along the Big Black River. One other Confederate engineering effort is worthy of note. Brigadier General John S. Bowen, given command of Grand Gulf in March 1863, used slave labor to shave the cliffs overlooking the mouth of the Big Black River and built a series of batteries and rifle pits that would withstand over one hundred tons of ordnance fired by Porter's gunboats during their unsuccessful bombardment of the position on 29 April. As the campaign unfolded, Lockett continued to support the Confederate army, often on his own initiative. It was Lockett who found and repaired the washed-out bridge over Baker's Creek that gave Pemberton a withdrawal route after the battle of Champion Hill on 16 May. Lockett later prepared the railroad bridge over the Big Black for demolition and fired it on 17 May just before the Federals reached it after their destruction of the Confederate bridgehead. Following that disastrous engagement, Lockett rushed back to Vicksburg to supervise the repair of fortifications damaged by the winter rains. Once the siege began, Lockett was busy supervising the repair of fortifications damaged by Union artillery. When the Federals began mining efforts, Lockett responded with at least fifteen countermines, three of which he exploded. Lockett operated with even fewer engineer assets than the meager number available to Grant. Although Lockett and his three-man staff equaled the number of engineers assigned to Grant's staff, and although he did have four other trained engineers as assistants, his troop assets included only one company of sappers and miners that numbered less than three dozen men. Most of the entrenching work had been done by a relatively small number of hired or impressed slave laborers. Apparently, Confederate infantrymen were less willing than their Union counterparts to dig and maintain earthworks. When Lockett reached Vicksburg on 18 May, he had only twenty-six sappers and miners, eight detailed mechanics, four overseers, and seventy-two slaves (twenty of whom were sick) to quickly repair nine miles of fortified lines. Lockett noted having only 500 shovels available. Although the Confederate army at Vicksburg was obviously blessed with an engineer staff officer of talent and initiative, not all of Lockett's countrymen appreciated his efforts. General Joseph E. Johnston, when he toured the works around Vicksburg in December 1862, felt that "[the usual error of Confederate engineering had been committed there. 
An immense, entrenched camp, requiring an army to hold it, had been made instead of a fort requiring only a small garrison." This defect, however, was not Lockett's fault. He received little command guidance; therefore, he planned his defenses to suit the best engineering aspects of the terrain. Topographical engineering played little role in this campaign for either side. Grant's topographic engineers became fully involved in the more crucial field engineering missions, and the speed of movements in May precluded useful mapping work. The Confederates, as was typical in most of the western theater, paid almost no attention to mapping or even detailed reconnaissance of their area of operations. As a result, Pemberton did not know the topography of his own department any better than Grant did during the campaign of maneuver.

Engineers in the Overland Campaign

Engineers on both sides played a significant role in several of the engagements of the Overland Campaign. In the Wilderness, Lee's chief engineer, Major General Martin L. Smith, conducted a reconnaissance that discovered an unfinished railroad bed on the open Union left flank on 6 May. He also plotted the route for the path cut by the Confederates for Major General Richard H. Anderson's move to Spotsylvania. On a less positive note, Smith also laid out the trace of the vulnerable Mule Shoe line at Spotsylvania (although, in Smith's defense, he did urge the heavy use of artillery to reinforce the exposed position). Note that engineers on both sides usually laid out the trace of field fortifications, but the infantry had to do the actual construction. On the Union side, the engineers' role in the tactical battles was sometimes less beneficial. On several occasions—for example, Barlow's night march for the attack on the Mule Shoe at Spotsylvania and the II Corps move on the night of 1 June at Cold Harbor—guides were wholly inadequate for the task. They were usually ignorant of the ground and even led Union units down incorrect routes. It did not help that Meade's staff engineers often provided the guides and corps commanders with poor maps (or none at all). On the other hand, the Federal engineers performed essential missions in upgrading roads, railroads, and supply depots, as well as bridging numerous rivers, including the magnificent pontoon bridge over the James River. The Federal rail system in occupied Virginia, which had been superbly organized by Brigadier General Herman Haupt in 1862–63, was a model of successful improvisation. The Confederates did not have the extensive resources of their Northern opponents, and, usually being on the defense, they did not construct as many railroads and bridges. However, the Southerners became masters at restoring broken rail lines after Union raids; for example, they repaired the Virginia Central to full operations within two weeks after Sheridan's raid in May.

Communications

Communications systems used during the Civil War consisted of line-of-sight signaling, telegraphic systems, and various forms of the time-honored courier method. The telegraph mainly offered viable strategic and operational communications, line-of-sight signaling provided operational and limited tactical possibilities, and couriers were most heavily used for tactical communications. The Federal Signal Corps was in its infancy during the Civil War. Major Albert J.
Myer was appointed the first signal chief in 1860; his organization grew slowly and became officially recognized as the Signal Corps in March 1863 and achieved bureau status by November of that year. Throughout the war, the Signal Corps remained small—its maximum strength reaching just 1,500 officers and men, most of whom were on detached service with the corps. Myer also indirectly influenced the formation of the Confederate Signal Service. Among the men who assisted Myer in his prewar testing of his wigwag signaling system (Myer's wigwag system, patented in 1858, used five separate numbered movements of a single flag) was Lieutenant E.P. Alexander. Alexander used wigwag signals to the Confederates’ advantage during the First Battle of Bull Run and later organized the Confederate Signal Corps. Officially established in April 1862, the Confederate Signal Corps was attached to the Adjutant and Inspector General Department. It attained the same size as its Federal counterpart, with nearly 1,500 men ultimately being detailed for service. Myer also fought hard to develop a Federal field telegraph service. This field service utilized the Beardslee device, a magneto‑powered machine operated by turning a wheel to a specific point, which sent an electrical impulse that keyed the machine at the other end to the same letter. Although less reliable than the standard Morse code telegraph key, the Beardslee could be used by an operator with only several hours’ training and did not require bulky batteries for a power source. Myer's field telegraph units carried equipment on wagons that enabled its operators to establish lines between field headquarters. The insulated wire used could also be hooked into existing trunk lines, thus offering the potential to extend the reach of the civilian telegraph network. Control over the existing fixed telegraph system, however, remained with the US Military Telegraph Service. Myer lost his struggle to keep the field telegraph service under the Signal Corps when Secretary of War Edwin M. Stanton relieved Myer as the signal chief in November 1863 and placed all telegraph activity under the Military Telegraph Service. Although the Confederate Signal Corps’ visual communications capabilities were roughly equal to that of the Federals, Confederate field telegraph operations remained too limited to be of operational significance. The Confederates’ existing telegraph lines provided strategic communications capabilities similar to those of the Federals, but the lack of resources and factories in the South for producing wire precluded their extending the prewar telegraph networks. The courier system, using mounted staff officers or detailed soldiers to deliver orders and messages, was the most viable tactical communications option short of commanders meeting face to face. Although often effective, this system was fraught with difficulties, as couriers were captured, killed, or delayed en route to their destinations; commanders misinterpreted or ignored messages; and situations changed by the time a message was delivered. The weaknesses of the courier system, though often not critical, did tend to compound other errors or misjudgments during campaigns. Communications in the Vicksburg Campaign Operating along river lines of communication meant that Grant's army often would leave behind its excellent strategic telegraph network. 
Memphis, two days by steamboat from Vicksburg, was the nearest telegraph station upriver, and the telegraph lines running north from Memphis often were cut by guerrillas. For much of the campaign, Cairo, Illinois, was the closest point that had reliable telegraph links with the East. Once Grant began operations south of Vicksburg, he essentially broke off his communications with Washington. President Lincoln, on 22 May 1863 (the day Grant launched his deliberate assault against Vicksburg), telegraphed Major General Stephen Hurlbutt at Memphis with a situation update based upon information gleaned from Confederate newspapers smuggled out of Richmond. The next day, Lincoln, who had not yet heard from Grant about his landing at Bruinsburg, finally received a telegraphic report. Grant's message, describing his operations since 30 April, had been sent upriver by courier on a steamer only after the Federal army had closed on the city on 18 May. As for Federal tactical communications, Grant's signal corps detachment struggled to fill its ranks with detailed officers and men, but the full complement of forty-five officers was not assigned until late in the campaign. Signal officers operating with the field army probably provided their best service as scouts, since they usually advanced ahead of the main force, reconnoitering potential signal sites. The nature of the terrain generally precluded communications by flag, but stations set up along the riverbanks and at key areas along the line of march offered some limited local communications. Admiral Porter early saw the value of the army signal system. He detailed seven Navy officers to work with the signal corps. Thus Porter, on the river, could maintain a link with the army as long as the gunboats operated within visual range of army signal stations on shore. Telegraph played no tactical role in the Vicksburg campaign. Although six field telegraph units were assigned to Grant's army, they did not arrive in Memphis until late June and did not reach Vicksburg until after the surrender. During the campaign of maneuver, Grant's most reliable means of tactical communication was the courier, and this method was fraught with problems. On 16 May, as the Federal army advanced on multiple routes toward Champion Hill, the courier system failed badly. When the northernmost of the three Union columns became fully engaged with the enemy, Grant, accompanying that column, sent a message to McClernand, three miles away, to bring the other two columns into action. But the courier carrying the message chose to take a twelve-mile route by road rather than riding three miles across country. As a result, four hours elapsed before McClernand's divisions pushed the enemy, and part of his force never attacked at all. Another problem arose during the deliberate assault of the Vicksburg works on 22 May, when Grant's inability to communicate directly with McClernand led to confusion about the need to support a supposed success in McClernand's sector. The Confederates, on the other hand, operated with an excellent network of fixed telegraphic communications until Grant cut the lines into Vicksburg as he advanced from the south and east. The existence of a civilian telegraph net allowed Pemberton to get by with a signal corps detachment of only three officers. Virtually every significant town was linked by telegraph line; thus, Pemberton initially had excellent operational as well as strategic communications. 
In December 1862, Confederate telegraphers, using a line running along the west bank of the Mississippi, alerted Pemberton to the approach of Sherman's Chickasaw Bayou expedition, enabling the Confederates to bring in reinforcements from other parts of the department. Ironically, the effectiveness of his telegraph communications may have worked to Pemberton's disadvantage as the campaign progressed because the telegraph system also allowed him to receive contradictory advice from two key subordinates, Bowen and Stevenson. Bowen argued that the main Federal effort was coming from below Vicksburg, while Stevenson argued that it was coming above Vicksburg. The telegraph also provided Pemberton with conflicting instructions from Joseph Johnston and Jefferson Davis about whether he should defend or evacuate Vicksburg as Grant advanced on the city. Most important, the allure of the telegraph may well have been a factor in keeping Pemberton tied to his headquarters long after he should have taken the field in person. After 4 May, when advancing Federals began to cut telegraph wires, the Confederates relied increasingly on couriers. This system also had its problems. One of the three couriers Johnston sent out on 13 May with an order directing Pemberton to join him at Clinton was actually a Federal spy, who instead delivered the message to the Federals. Thus Grant learned of the order before the other two couriers reached Pemberton! Once Pemberton withdrew behind the works at Vicksburg, couriers became his only means of communication with the outside world. Although a few men were able to slip through Federal lines early in the siege, couriers ultimately were forced to use the river, clinging to floating logs or pieces of debris in order to enter and leave the city. Messages conveyed by this dangerous route took from five to ten days to pass between Johnston and Pemberton, and often couriers destroyed their messages if capture seemed imminent. The last message Pemberton received from outside the city came in by courier on 23 June. Communications in the Overland Campaign On the Northern side, Grant had almost constant telegraphic communication with Halleck in Washington, which gave him a relatively good measure of strategic control over Union armies in other theaters of the war. Within the eastern theater, Grant could communicate with Sigel in the Valley and Butler on the Virginia Peninsula via his telegraph connections to Washington. Grant's communications with the Army of the Potomac and the initially separate IX Corps were affected more by the awkward Union command relationship than the technical means of communication. For the most part, Grant and Meade both relied heavily on couriers with some flag signaling. Initially, Grant, with his small staff and few aides, attempted to issue only broad orders to Meade and allow the army commander to execute tactical control. At the same time, Grant had to issue orders directly to the IX Corps (at least until late May) to coordinate Burnside's moves with the Army of the Potomac. On several occasions, Grant bypassed Meade and confusing or duplicate orders resulted. Lee also relied heavily on couriers at the tactical level, and his streamlined command structure minimized confusion over orders. Lee did use flag signals, especially at the beginning of the campaign at Clark's Mountain. The Union forces occasionally intercepted these signals, but they gained only a minor advantage from this. 
At a higher level, Lee had solid telegraph contact with his political leadership in Richmond. Indirectly, through the capital, he remained in contact with Breckinridge in the Valley and Beauregard in North Carolina (and later at Bermuda Hundred and Petersburg). Medical support Federal and Confederate medical systems followed a similar pattern. Surgeons general and medical directors for both sides had served many years in the prewar Medical Department, but were hindered by an initial lack of administrative experience in handling large numbers of casualties (see table 5), as well as the state of medical science in the mid‑19th century. Administrative procedures improved with experience, but throughout the war the simple lack of knowledge about the true causes of disease and infection led to many more deaths than direct battlefield action. After the disaster at the Battle of First Bull Run, the Federal Medical Department established an evacuation and treatment system developed by Surgeon Jonathan Letterman. At the heart of the system were three precepts: consolidation of field hospitals at division level, decentralization of medical supplies down to regimental level, and centralization of medical control of ambulances at all levels. A battle casualty evacuated from the front line normally received treatment at a regimental holding area immediately to the rear. From this point, wagons or ambulances carried wounded men to a division field hospital, normally within a mile of the battle lines. Seriously wounded men could then be further evacuated by wagon, rail, or watercraft to general hospitals located usually in towns along lines of communication in the armies’ rear areas. Although the Confederate system followed the same general principles, their field hospitals were often consolidated at brigade rather than division level. A second difference lay in the established span of control of medical activities. Unlike their Federal counterparts who had control over all medical activities within an army area, a Confederate army medical director had no control of activities beyond his own brigade or division field hospitals. A separate medical director for general hospitals was responsible for evacuation and control. In practice, both sets of medical directors resolved potential problems through close cooperation. By 1863, the Confederacy had also introduced rear area “wayside hospitals,” which were intended to handle convalescents en route home on furloughs. Procedures, medical techniques, and medical problems for both sides were virtually identical. Commanders discouraged soldiers from leaving the battle lines to escort wounded back to the rear, but such practice was common, especially in less-disciplined units. The established technique for casualty evacuation was to detail men for litter and ambulance duty. Both armies used bandsmen, among others, for this task. Casualties would move or be assisted back from the battle line, where litter bearers evacuated them to field hospitals using ambulances or supply wagons. Ambulances were specially designed two or four-wheel carts with springs to limit jolts, but rough roads made even short trips agonizing for wounded men. Brigade and division surgeons staffed consolidated field hospitals. Hospital site considerations were the availability of water, potential buildings to supplement the hospital tents, and security from enemy cannon and rifle fire. The majority of operations performed at field hospitals in the aftermath of battle were amputations. 
Approximately 70 percent of Civil War wounds occurred in the extremities, and the soft lead Minié ball shattered any bones it hit. Amputation was the best technique then available to limit the chance of serious infection. The Federals were generally well supplied with chloroform, morphine, and other drugs, though shortages did occur on the battlefield. Confederate surgeons were often short of critical drugs and medical supplies. Medical support in the Vicksburg Campaign Grant's Army of the Tennessee had adopted most of the Letterman system by March 1863. Thus, field hospitals were consolidated at the division echelon, and medical supplies were distributed down to regimental level. Ambulances were under positive medical control, with commissioned or noncommissioned officers in charge at division and brigade and ambulance drivers and assistants assigned to each regiment. When Regular army surgeon Madison Mills became Grant's medical director in March 1863, he inherited a growing field hospital established at Milliken's Bend. Mills established convalescent camps and opened more field hospitals there to support Grant's guidance that ill troops be kept with the command insofar as possible to enable them to rejoin their units upon recovery. Federal surgeons were able to stockpile a significant amount of medical supplies in the depot established at Young's Point. Most were kept on the steamer Des Arc, which could move supplies to any secure drop-off point along the river. By May, Mills estimated that six months of medical supplies had been stockpiled. He was assisted in this by Grant's standing order that any steamer with space that moved down river from Memphis was to bring additional medical supplies. The medical department also received invaluable assistance from the U.S. Sanitary Commission in the form of supplies and evacuation of sick and wounded. The river constituted an excellent evacuation as well as supply route. In addition to the 1,000-bed general hospital and convalescent camps established just north of Vicksburg, thousands of beds were available in general hospitals up river. Memphis alone had 5,000 available beds, with many more available in general hospitals in Cairo, Mound City, Paducah, Evansville, and St. Louis. Three steamers, R. C. Wood, D. A. January, and City of Memphis, served as hospital ships for evacuation to these upriver hospitals. A round trip to Memphis took four to five days. The most severe medical problem facing Grant's army between January and July 1863 was disease, a problem severely exacerbated early in the campaign when the army occupied swampy encampments along the river. From January to March, high water forced the troops to crowd together on the tops of the levees. Unfortunately, the levees also served as roads, latrines, and graves. Thus, Grant's army experienced over 170,000 cases of serious illness during this encampment. One should be skeptical of historians' assertions that work on projects such as the canal helped put Grant's men in excellent shape for the campaigning to come. Reports from regiments engaged in these projects routinely list more men on the sick lists than were present for duty. Once Grant began to maneuver, however, the combination of continual movement and healthier terrain led to dramatic decreases in serious disease. During the campaign of maneuver, surgeons were forced by the nature of operations to carry sick and wounded soldiers along with the marching columns or leave them behind to be captured. 
By the time Grant began the siege of Vicksburg, over 2,000 Federal wounded from the battles of Raymond, Jackson, and Champion Hill had been left under Confederate control. Nineteen Federal surgeons stayed behind to attend these men. Four additional Federal surgeons stayed to help attend the Confederate wounded from those battles, which indicates the critical shortage of doctors serving Pemberton's army. On 20 May, five wagons displaying a flag of truce and loaded with medical supplies rolled east from the Federal siege lines into Confederate territory to support the wounded from those earlier battles. After the surrender of Vicksburg on 4 July, fifty ambulances moved to Raymond under a flag of truce to recover many of these wounded. Although the Federal corps commanders' emphasis on medical support varied, medical officers had adequate supplies throughout the campaign. Sherman's corps allocated enough wagons for medical needs. McClernand, on the other hand, accorded low priority to medical requirements, thus Surgeon Mills had to scramble to support his XIII Corps surgeons. Shortages of medical supplies were partly made up in Jackson and other towns as surgeons raided the stocks of local drug stores. There also seemed to be no shortage of food for the wounded. Surgeons reported an abundance of beef for making soup and an adequate supply of hard bread and vegetables. After the supply line to the river was fully reestablished on 21 May, even ice became available. After Grant initiated the siege of Vicksburg, division hospitals were established a mile behind the lines, using combinations of buildings and tents. Water often came from cisterns because of a shortage of wells and springs. The policy of keeping wounded and sick soldiers close to their commands, whenever practicable, was maintained. A consolidated evacuation hospital near Johnson's plantation on the Yazoo River housed the seriously ill and wounded until medical steamers could move them up the Mississippi to general hospitals. Except for the assaults of 19 and 22 May, when more than 3,000 Union soldiers were wounded, battle casualties averaged close to a hundred per week, numbers that the medical staffs could manage effectively. Upon the Confederate surrender on 4 July, however, the Federal surgeons were confronted with over 6,000 Confederate sick and wounded from the city. The well-established Federal hospital, supply, and evacuation network proved adequate to meet this new demand. Relatively little specific information is available concerning Confederate medical efforts during the campaign. However, it is safe to assume that problems with sickness and disease, particularly for those units posted in the Delta, were of similar magnitude to those encountered by Union troops when they, too, camped on the floodplain. It is clear that the Confederate army suffered from supply shortages and from an inadequate number of trained surgeons. Since Federal surgeons reported finding large stocks of medical supplies in Jackson, it would seem that some of Pemberton's logistical problems hindered his medical staff as well. Reports on the medical condition of the army at the time of the surrender reveal that, within the city, the Confederates were "almost destitute" of medical supplies. Medical support in the Overland Campaign By 1864, almost all Union forces generally conformed to the Letterman medical system. 
The Federals had long since established extensive hospitals in the Washington area, and their command of the sea greatly aided in evacuation to these facilities. Even so, the unprecedented number of sustained casualties in May and the first half of June put considerable strain on the Union medical effort. After the bloody fights at the Wilderness and Spotsylvania, the Federals established an extensive field hospital and evacuation center at Fredericksburg, probably the largest of its kind in the war. The Confederates were able to take advantage of protected rail lines to evacuate most of their casualties to Richmond. Their bigger problem was a lack of trained surgeons and medical supplies. The Southern medical facilities were meager compared to their Union counterparts and barely adequate for the needs of the campaign.

See also
List of American Civil War battles
Union Army
Confederate States Army
Cavalry in the American Civil War
Cavalry Corps, Army of Northern Virginia
Cavalry Corps (Union Army)
Field artillery in the American Civil War
Siege artillery in the American Civil War
Bibliography of the American Civil War
Bibliography of Abraham Lincoln
Bibliography of Ulysses S. Grant
Signal Corps
Balloon Corps

References
Ballard, Ted, and Billy Arthur. Chancellorsville Staff Ride: Briefing Book. Washington, DC: United States Army Center of Military History, 2002.
Ballard, Ted. Battle of Antietam: Staff Ride Guide. Washington, DC: United States Army Center of Military History, 2006.
Gabel, Christopher R. Staff Ride Handbook for the Vicksburg Campaign, December 1862 – July 1863. Fort Leavenworth, Kan.: Combat Studies Institute Press, 2001.
King, Curtis S., William G. Robertson, and Steven E. Clay. Staff Ride Handbook for the Overland Campaign, Virginia, 4 May to 15 June 1864: A Study on Operational-Level Command. Fort Leavenworth, Kan.: Combat Studies Institute Press, 2006.

External links
Civil War Field Fortifications Web Site
Civil War Artillery Projectiles
The Civil War Artillery Compendium
CivilWarArtillery.com Profiles of Civil War Field Artillery
The Danville Artillery Confederate Reenactor Website
The 1841 Mountain Howitzer
29975400
https://en.wikipedia.org/wiki/KU%20Leuven
KU Leuven
KU Leuven (Katholieke Universiteit Leuven) is a Catholic research university in the city of Leuven, Belgium. It conducts teaching, research, and services in computer science, engineering, natural sciences, theology, humanities, medicine, law, canon law, business, and social sciences. In addition to its main campus in Leuven, it has satellite campuses in Kortrijk, Antwerp, Ghent, Bruges, Ostend, Geel, Diepenbeek, Aalst, Sint-Katelijne-Waver, and in Belgium's capital, Brussels. KU Leuven is the largest university in Belgium and the Low Countries. In 2017–18, more than 58,000 students were enrolled. Its primary language of instruction is Dutch, although several programs, particularly graduate degrees, are taught in English. KU Leuven consistently ranks among the top 100 universities in the world in major ranking tables. As of 2021, it ranks 42nd in the Times Higher Education rankings, 70th according to the QS World University Rankings, and 87th according to the Shanghai Academic Ranking of World Universities. For four consecutive years starting in 2016, Thomson Reuters ranked KU Leuven as Europe's most innovative university, with its researchers having filed more patents than any other university in Europe; its patents are also the most cited by external academics. Although Catholic in theology and heritage, KU Leuven operates independently of the Church. The university, which previously accepted only baptized Catholics, is now open to students of all faiths and life stances. The university's legal name is Katholieke Universiteit Leuven, officially Katholieke Universiteit te Leuven, which translates into English as Catholic University of Leuven. However, the name is not translated in official communications, like that of its similarly named French-language sister university, the Université catholique de Louvain (UCLouvain).

History

Previous universities in Leuven

The city of Leuven has been the seat of three different universities. The old University of Leuven (or Studium Generale Lovaniense) was founded in 1425 by the civil authorities of Brabant, namely Duke John IV of Brabant and the municipal administration of the city of Leuven, despite the initial opposition of the chapter of Sint-Pieter. It was formally integrated into the French Republic when the Holy Roman Emperor Francis II ceded the Southern Netherlands to France by the Treaty of Campo Formio, signed on 17 October 1797. A law dating from 1793, which mandated the closure of all universities in France, thereby came into effect, and the old University of Leuven was abolished by decree of the Département of the Dyle on 25 October 1797. A few years after French rule came to an end, when Belgium was part of the United Kingdom of the Netherlands, King William I of the Netherlands founded a secular university in Leuven in 1817, the State University of Leuven, where many professors of the old University of Leuven had taught. This university was also abolished in 1835.

Present university

The Catholic University of Leuven was founded in 1834 in Mechelen by the bishops of Belgium, after an official papal brief of Pope Gregory XVI. The new Catholic university stayed only briefly in Mechelen, as the bishops moved its headquarters to Leuven on 1 December 1835, where it took the name Catholic University of Leuven. The move followed the closure of the State University of Leuven earlier that year.
KU Leuven is generally (but controversially) identified as a continuation of the older institution; the controversy lies in the fact that the continuation is mainly of a sociocultural and ecclesiastical nature but cannot be maintained from a purely juridical perspective, as the old university was suppressed under French rule. In its statutes, KU Leuven officially declares itself to be the continuation of the Studium Generale Lovaniense established in 1425, and it plans to celebrate its 600th anniversary in 2025. In 1968, tensions between the Dutch-speaking and French-speaking communities led to the splitting of the bilingual Catholic University of Leuven into two "sister" universities, with the Dutch-language university becoming a fully functioning independent institution in Leuven in 1970, and the Université catholique de Louvain departing to a newly built greenfield campus in the French-speaking part of Belgium. KU Leuven's first rector after the split was Pieter De Somer. In 1972, the KUL set up a separate entity, Leuven Research & Development (LRD), to support industrial and commercial applications of university research. LRD has led to numerous spin-offs, such as the technology company Metris, and manages tens of millions of euros in investments and venture capital. The university's electronic learning environment, TOLEDO, was launched in September 2001 and gradually developed into the central learning platform at the KUL. The name is an acronym for TOetsen en LEren Doeltreffend Ondersteunen (English: "effectively supporting testing and learning"). It is the collective name for a number of commercial software programs and tools, such as Blackboard. The project offers the Question Mark Perception assignment software to all institution members and has implemented the Ariadne KPS to reuse digital learning objects inside the Blackboard environment. On 11 July 2002, KU Leuven became the dominant institution in the "KU Leuven Association" (see below). KU Leuven is a member of the Coimbra Group (a network of leading European universities) as well as of the LERU Group (League of European Research Universities). Since November 2014, KU Leuven's Faculty of Economics and Business has been accredited by the European Quality Improvement System, a leading accreditation system for higher education institutions in management and business administration. As of the 2012–2013 academic year, the university held Erasmus contracts with 434 European establishments. It also had 22 central bilateral agreements in 8 countries: the United States, China, South Africa, Japan, the Democratic Republic of Congo, Vietnam, Poland, and the Netherlands. The vast majority of international EU students came from the Netherlands, while most non-EU students came from China. KU Leuven is financially independent of the Catholic Church. Although a representative of the Church sits on its Board of Governors, the role is observational and carries no voting power. Its management and academic decisions are similarly autonomous. In December 2011, the university changed its official name to KU Leuven in all official communications and branding. While its legal name remains Katholieke Universiteit Leuven, the university uses KU Leuven in all communications, including academic research publications. The long name is used only in legally binding documents such as contracts, and then only at first mention, according to the university's communication guidelines.
According to its then rector, the change was intended to emphasize the university's history of freedom of academic inquiry and its independence from the Church, without erasing its Catholic heritage. Since August 2017, the university has been led by Luc Sels, who replaced former rector Rik Torfs. The Belgian archbishop André-Joseph Léonard is the current Grand Chancellor and a member of the university board. KU Leuven hosts the world's largest banana genebank, the Bioversity International Musa Germplasm Transit Centre, which celebrated its 30th anniversary in 2017 and was visited by Deputy Prime Minister and Minister for Development Cooperation Alexander De Croo. In 2018, a student of African origin died during a cruel hazing ritual to enter the Reuzegom fraternity. The perpetrators, most of whom come from upper-class families, are being prosecuted but have so far been only lightly sanctioned by the university authorities. As a consequence of these events, which attracted international media coverage, the institution received criticism over how it handled the matter. Historically, the Catholic University of Leuven has been a major contributor to the development of Catholic theology. The university is dedicated to Mary, the mother of Jesus, under her traditional attribute as "Seat of Wisdom", and organizes an annual celebration on 2 February in her honour. On that day, the university also awards its honorary doctorates. The neo-Gothic seal created in 1909 and used by the university shows the medieval statue Our Lady of Leuven in a vesica piscis shape. The version used by KU Leuven dates from the 1990s and features the date 1425 in Times New Roman.

Campus

KU Leuven's main campus is in Leuven, where faculties, libraries, institutes, residence halls, the university hospital UZ Leuven, and other facilities are interspersed throughout the city proper, as well as just outside its ring road in the borough of Heverlee. Its intercultural meeting center, Pangaea, is located in the city center. The University Sports Centre, which includes the Univ-Fit gym, is located in Heverlee. In addition, the UNESCO World Heritage Site Groot Begijnhof, a historic beguinage in the south of the city, is owned by the university and functions as one of its many residence halls. Public transport within the city is primarily served by the De Lijn bus system. Leuven is a main hub in the train networks of Belgium and neighbouring countries; Leuven station is located on the northeastern edge of the city. KU Leuven also has campuses in Kortrijk, Antwerp, Ghent, Bruges, Ostend, Geel, Diepenbeek, Aalst, Sint-Katelijne-Waver, and Brussels.

Organization and academics

Academic activity at KU Leuven is organized into three groups, each with its own faculties, departments, and schools offering programs up to doctoral level. While most courses are taught in Dutch, many are offered in English, particularly the graduate programs. Notable divisions of the university include the Higher Institute of Philosophy and the Rega Institute for Medical Research. The university's students are represented by the students' council, Studentenraad KU Leuven, which has representatives in most meetings at the university, including those of the board of directors. It was separated from LOKO in 2012–2013, when the different campuses outside Leuven became part of the university; LOKO remained the overarching student organisation for all students in the city of Leuven.
Biomedical Sciences Group Faculty of Medicine Faculty of Pharmaceutical Sciences Faculty of Kinesiology and Rehabilitation Sciences Department of Cardiovascular Sciences Department of Oral Health Sciences Department of Pharmaceutical and Pharmacological Sciences Department of Human Genetics Department of Imaging and Pathology Department of Kinesiology Department of Microbiology and Immunology Department of Cellular and Molecular Medicine Department of Neurosciences Department of Oncology Department of Clinical and Experimental Medicine Department of Rehabilitation Sciences Department of Development and Regeneration Department of Public Health and Primary Care Doctoral School of Biomedical Sciences Humanities and Social Sciences Group Institute of Philosophy Faculty of Theology and Religious Studies Faculty of Canon Law Faculty of Law Faculty of Economics and Business Faculty of Social Sciences Faculty of Arts Faculty of Psychology and Educational Sciences Documentation and Research Center for Religion, Culture, and Society (KADOC) Leuven Language Institute Doctoral School for the Humanities and Social Sciences Science, Engineering and Technology Group Faculty of Architecture Faculty of Science Faculty of Engineering Science Faculty of Bioscience Engineering Faculty of Engineering Technology Department of Earth and Environmental Sciences Department of Architecture Department of Biology Department of Biosystems Department of Civil Engineering Department of Chemistry Department of Chemical Engineering Department of Computer Science Department of Electrical Engineering (ESAT) Department of Materials Engineering Department of Microbial and Molecular Systems Department of Physics and Astronomy Department of Mechanical Engineering Department of Mathematics Center for Science, Technology, and Ethics (CWTE) Arenberg Doctoral School of Science, Engineering, and Technology Science, Engineering and Technology Group European Centre for Ethics HIVA — Research Institute for Work and Society Interfaculty Centre for Agrarian History University Statistics Centre (UCS) KU Leuven University Energy Institute LUCAS — Centre for Care Research and Consultancy Libraries KU Leuven has 24 libraries and learning centers across its 12 campuses, containing millions of books and other media. Its theology library alone hold 1.3 million volumes, including works dating from the 15th century. The following libraries are found at its Leuven campus: 2Bergen — Biomedical Library 2Bergen — Campuslibrary Arenberg (exact sciences, engineering sciences, industrial engineering sciences, bio—engineering sciences, architecture and kinesiology and rehabilitation sciences) Artes — Ladeuze & Erasmushuis (Humanities & Social Sciences Group and the Faculty of Arts) Library of Psychology and Educational Sciences Law Library Library of Social Sciences Library of the Institute of Philosophy AGORA Learning Centre EBIB Learning Centre MATRIX (music and audio recordings library) Maurits Sabbe Library (Library of the Faculty of Theology and Religious Studies) University hospital Universitair ziekenhuis Leuven (UZ Leuven) is the teaching hospital associated with the KU Leuven. Its most well known and largest campus is Gasthuisberg, which also houses the faculty of pharmaceutical sciences and most of the faculty of medicine. 
Breakthrough and notable research KU Leuven scientists have managed to produce a solar hydrogen panel, which is able to directly convert no less than 15 per cent of sunlight into hydrogen gas, which according to them is a world record. In the solar hydrogen panel the hydrogen and oxygen evolution reactions are performed in the gas phase in cathode and anode compartments separated by a membrane. Anion exchange membranes provide an alkaline environment enabling the use of earth abundant materials as electrocatalysts. According to IEEE Spectrum in 2019 this is a giant leap from 0.1% efficiency 10 years earlier. This technology bypasses the conversion losses of the classical solar–hydrogen energy cycle where solar power is first harvested via a solar panel and only then to converted to hydrogen with electrolysis plants. Affiliations Since July 2002, thirteen higher education institutes have formed the KU Leuven Association. Members include: KU Leuven LUCA School of Arts Odisee Thomas More UC Leuven Limburg Vives KU Leuven is a member of a number of international university affiliations including the League of European Research Universities, Coimbra Group, UNA Europa, Universitas 21, and Venice International University, among others. The university is a member of the Flanders Interuniversity Institute of Biotechnology. The Interuniversity Microelectronics Centre is a spin-off company of the university. Rankings As of 2021, KU Leuven ranks in the world 45th in the Times Higher Education rankings, 84th according QS World University Rankings, 97th according to the Shanghai Academic Ranking of World Universities. KU Leuven ranked first in Thomson Reuters' list of Europe's most innovative universities four times in a row since it began in 2016. As of 2019, also ranks 52nd according to the CWTS Leiden Ranking and 56th according U.S. News & World Report Best Colleges Ranking. According to QS World University Rankings by Subject in 2019, KU Leuven ranked within the world's top 50 universities in the following fields: Sports-related Subjects (11), Theology (14), Dentistry (17), Classics and Ancient History (22), Library and Information Management (23), Psychology (24), Statistics and Operational Research (26), Mechanical Engineering (30), Philosophy (31), Geography (34), Pharmacy & Pharmacology (35), Education and Training (36), Law (37), Social Policy and Administration (39), Development Studies (43), Materials sciences (45), Chemical Engineering (46), Politics (49), Sociology (50), Life Sciences and Medicine (56), Social Sciences and Management (60), Arts and Humanities (61), Engineering and Technology (61). Also according to QS, many other KU Leuven programs rank within the top 100 in the world, including Linguistics, English Language and Literature, History, Anatomy and Physiology, Architecture, Anthropology, Computer Science and Information System, Biological Sciences, Civil and Structural Engineering, Electrical and Electronic Engineering, Business and Management Studies, Mathematics, Economics and Econometrics, Chemistry, Accounting and Finance . Rectors Notable alumni Leon Bekaert (b. 1958), economics, businessman Paul Bulcke (b. 1954), economics, businessman, CEO of Nestlé Jan Callewaert, economics, founder of Option N.V. Peter Carmeliet, physician and medical scientist Mathew Chandrankunnel (b. 1958), professor of philosophy of science at Dharmaram Vidya Kshetram Mathias Cormann (b. 
1970), Belgian-born Australian senator and Minister for Finance Jo Cornu, engineer, previous CEO of the National Railway Company of Belgium Joan Daemen (b. 1965), cryptographer, one of the designers of Advanced Encryption Standard (AES). Julien De Wilde (b. 1967), civil engineer, businessman Noël Devisch (b. 1943), agriculture Shelton Fabre (b. 1963), American Roman Catholic bishop Gabriel Fehervari (b. 1970) law, businessman Willy Geysen, law, head of the Centre for Intellectual Property Rights (CIR) Abdul Qadeer Khan (b. 1936), founder of Pakistan's Nuclear Program Koen Lamberts (b. 1964), President and Vice-Chancellor, University of Sheffield (United Kingdom) Koen Lenaerts, law, president of the European Court of Justice Georges Meekers (b. 1965), Belgian-born wine writer and educator Simon Mignolet (b. 1988), goalkeeper Martin Moors, philosopher Rudi Pauwels (b. 1960), pharmacologist, co-founder of Tibotec and Virco Vincent Rijmen (b. 1970), cryptographer, one of the designers of Advanced Encryption Standard (AES). Guðmundur Steingrímsson (b. 1972), Icelandic politician Francine Swiggers, economics, businesswoman Wu Rong-i, economics, former Vice Premier of Taiwan, Taiwanese politician Andreas Utermann, finance, former CEO of Allianz Global Investors Jef Valkeniers, doctor and politician Marc Van Ranst (b. 1965), physician, virologist Herman Van Rompuy (b. 1947), Belgian statesman, appointed President of the European Council in November 2009 Frans Vanistendael, law Catherine Verfaillie (b. 1957) physician, stem cell scientist Koen Vervaeke (b. 1959), history, diplomat Honorary doctorates Notable recipients of honorary doctorates at the KU Leuven include: Albert II of Belgium (1961), King of the Belgians James P. Allison (2017), Immunologist, Nobel Prize in Physiology or Medicine 2018 Timothy Garton Ash (2011), British historian and Professor of European Studies, University of Oxford Michelle Bachelet (2015), President of Chile Abhijit Banerjee (2014), Indian economist, Massachusetts Institute of Technology Patriarch Bartholomew I of Constantinople (1996) Baudouin of Belgium (1951), King of the Belgians Roberto Benigni (2007), Italian actor, comedian, screenwriter and director of film, theatre and television. John Braithwaite (2008), Australian criminologist (application of the idea of restorative justice to business regulation and peacebuilding) Manuel Castells (2004), Professor of Sociology, Open University of Catalonia, University of Southern California Leon O. 
Chua (2013), professor in the electrical engineering and computer sciences department at the University of California, Berkeley Carla Del Ponte (2002), former Chief prosecutor of two United Nations international criminal law tribunals Jared Diamond (2008), professor of Geography and Physiology, UCLA Jacques Derrida (1989), French philosopher John Kenneth Galbraith (1972), Canadian economist Nadine Gordimer (1980), South African author, Booker Prize 1974, Nobel Prize in Literature 1991 Alan Greenspan (1997), economist, former chairman of the Board of Governors of the US Federal Reserve Eugène Ionesco (1977), Romanian and French playwright Ban Ki-moon (2015), Secretary-General of the United Nations Helmut Kohl (1996), former Chancellor of Germany Christine Lagarde (2012), Managing Director of the International Monetary Fund (IMF) Mario Vargas Llosa (2003), Peruvian writer Angela Merkel (2017), German politician, Chancellor of Germany Michael Marmot (2014), British epidemiologist, University College London Martha Nussbaum (1997), American philosopher, University of Chicago Dirk Obbink, Lecturer in Papyrology and Greek Literature at Oxford University and the head of the Oxyrhynchus Papyri Project. Roger Penrose (2005), professor in Mathematical Physics, University of Oxford Navi Pillay (2012), United Nations High Commissioner for Human Rights Thomas S. Popkewitz (2004), professor of curriculum theory, University of Wisconsin-Madison School of Education Mary Robinson (2000), former President of Ireland Jacques Rogge (2012), President of the International Olympic Committee (IOC) Oscar Arnulfo Romero (1980), archbishop of San Salvador (El Salvador), human rights activist Helmut Schmidt (1983), former Chancellor of Germany Nate Silver (2013), American author and statistician Fiona Stanley (2014), Australian epidemiologist Rowan Williams (2011), Archbishop of Canterbury Bibliography 1860 : Souvenir du XXVe anniversaire de la fondation de l'Université catholique: Novembre 1859, Louvain, typographie Vanlinthout et Cie, 1860 Souvenir du XXVe anniversaire de la fondation de l'Université catholique: Novembre 1859. See also Academic libraries in Leuven Arenberg Research-Park Haasrode Research-Park List of universities in Belgium Science and technology in Flanders University Foundation List of oldest universities in continuous operation Footnotes A. According to the university's style guidelines, KU Leuven is the university's name in all languages. However, according to the university's statutes, the university's legal name by the law of 28 May 1970 issuing legal personality to the institution is Katholieke Universiteit te Leuven, which is used in the university's own official publications, with a variant Katholieke Universiteit Leuven according to the Flemish Community of Belgium. B. , C. The Old University of Leuven (1425–1797) is the oldest university in the low countries, and the Catholic University of Leuven (1834) is generally, yet controversially, identified as a continuation of it. In the mid-1800s, Belgium's highest court, the Court of Cassation, ruled that the 1834 "Catholic University of Leuven" was a different institution created under a different charter and thus cannot be regarded as continuing the 1425 "University of Leuven". See also: History of the Old University of Leuven. 
References

External links
KU Leuven: History of KU Leuven / KU Leuven, zes eeuwen geschiedenis
International Ranking of Katholieke Universiteit Leuven (2008)

Catholic University of Leuven Buildings and structures in Leuven Education in Leuven Catholic universities and colleges in Belgium Pontifical universities 1970 establishments in Belgium Educational institutions established in 1834
41564486
https://en.wikipedia.org/wiki/Kung%20Fury
Kung Fury
Kung Fury is a 2015 English-language Swedish martial arts comedy featurette film written and directed by David Sandberg. It pays homage to 1980s martial arts and police action films. The film stars Sandberg in the title role, Jorma Taccone, Leopold Nilsson, and a cameo appearance by David Hasselhoff. The film was crowdfunded through Kickstarter starting in the beginning of 2014, with pledges reaching US$630,019, exceeding the original target goal of $200,000, but short of the feature film goal of $1 million. It was selected to screen in the Directors' Fortnight section at the 2015 Cannes Film Festival, losing to Rate Me from the United Kingdom. Plot Sometime in the early 1980s, Miami-Dade Police Department detective Kung Fury and his partner Dragon apprehend a red ninja in a back alley, but Dragon is sliced in half by the ninja while Kung Fury is suddenly struck by lightning and bitten by a cobra, giving him extraordinary kung fu powers that enable him to defeat his foe. Years later in 1985, after defeating a rogue arcade machine robot, Kung Fury quits the force when he is assigned to partner with Triceracop, fearing that he would lose another partner in the line of duty. Meanwhile, Adolf Hitler, a.k.a. "Kung Führer", enters the timeline and remotely guns down the police chief and attacks the precinct through a mobile phone. Intent on avenging the chief, Kung Fury has computer whiz Hackerman send him back in time to kill Hitler in Nazi Germany. A glitch in the system, however, sends him back into the Viking Age. After Kung Fury meets the Viking valkyries Barbarianna and Katana, the Norse god Thor sends him to Nazi Germany for him to finish his job. Upon his arrival, Kung Fury singlehandedly mows down dozens of Nazi soldiers with his kung fu skills, but is gunned down by Hitler using a Gatling-type gun from inside his podium. Suddenly, Thor, Hackerman, Triceracop, the Viking valkyries, and a tyrannosaurus hack into the timeline and kill the rest of the Nazi army while the tyrannosaurus squares off against Hitler's robotic Reichsadler. After being revived by Hackerman, Kung Fury gives Hitler an uppercut to the testicle before Thor drops his hammer on the Nazi leader and his robotic eagle. Seeing his mission as accomplished, Kung Fury returns to his timeline. Back in 1985 Miami, Kung Fury once again battles and defeats the arcade machine robot, but notices a Swastika on the robot's body while Hitler and his Reichsadler enter the timeline, vowing revenge on Kung Fury. Cast David Sandberg as Kung Fury, a Miami detective who possesses a new and powerful form of kung fu after being struck by lightning and bitten by a cobra, thus becoming "The Chosen One" as foretold by an ancient prophecy Jorma Taccone as Adolf Hitler, a.k.a. 
"Kung Führer", who aims to become the greatest martial artist by traveling through time to kill "The Chosen One" Steven Chew as Dragon, Kung Fury's partner who is killed by a red ninja Leopold Nilsson as Hackerman, a computer whiz who can transform into a Hackerbot Andreas Cahling as Thor (voiced by Per-Henrik Arvidius), the Norse god of thunder Erik Hornqvist as Triceracop (voiced by Frank Sanderson), a half-man, half-Triceratops cop who is assigned as Kung Fury's new partner Per-Henrik Arvidius as Chief Eleni Young as Barbarianna, a Viking warrior who rides a giant wolf and wields a Minigun Helene Ahlson as Katana (voiced by Yasmina Suhonen), a Viking warrior who rides a talking Tyrannosaurus and uses an Uzi Eos Karlsson as the Red Ninja Magnus Betnér as Colonel Reichstache Björn Gustafsson as Private Lahmstache David Hasselhoff as Hoff 9000 (voice) Frank Sanderson as Cobra (voice), Kung Fury's spirit animal; and Dinomite (voice), Katana's pet Tyrannosaurus Production David Sandberg is a Swedish filmmaker who had previously directed television commercials and music videos. In 2012, he quit the commercial directing business and focused on writing a script for an action comedy film set in the 1980s, inspired by action films of that era. He initially spent US$5,000 on producing and shooting footage with his friends, which became the trailer. In December 2013, Sandberg released the trailer and began a Kickstarter campaign to crowdfund the film's production with the goal of raising US$200,000 to produce a 30-minute version of the film and stream it online for free. A second goal was added with the target set to $1 million to rewrite the story into a full-length feature and a possible distribution deal. Most of the raw footage over green screen had been filmed using a Canon EOS 5D Mark III and a Sony FS700, but additional funding was required for post-production. The Kickstarter project ended on 25 January 2014, with $630,019 pledged by 17,713 backers. Filming Due to a limited budget, Sandberg shot the majority of the film at his office in Umeå, Sweden, using digital effects to replicate the streets of Miami. As he could only afford one police uniform during the production of the trailer, he filmed the police precinct scene by shooting each extra separately and compositing them in the scene. The single-shot scene where Kung Fury dispatches dozens of Nazi soldiers was achieved by combining the primary take of Sandberg's moves with over 60 takes of individual extras attacking him. On 30 July 2014, Sandberg announced that he and his crew had begun filming new footage, with the 30 backers who pledged to be in the film as extras. Filming was also done in Stockholm for additional scenes and stunts. For the scene with Barbarianna riding a giant wolf, Sandberg used stock footage of a black wolf from the website GreenScreen Animals, as sourcing a real wolf was impossible in Sweden. Miniatures were used in Kung Fury's fight scenes involving the arcade machine robot and the Red Ninja. The animated "Heaven" sequence was produced by French video game developer Old Skull Games. In keeping with the film's '80s theme, the visual effects artists softened the film clarity and added videotape wear effects to give the illusion of it being a worn VHS copy being played on an old VCR. One instance of this effect is in the scene where the Viking Babe Katana summons Thor. 
Soundtrack
The soundtrack score was composed by the Swedish synthwave musicians Mitch Murder and Lost Years, with additional music by Patrik Öberg, Christoffer Ling, Highway Superstar, and Betamaxx. The official soundtrack album was released on vinyl record on 8 July 2015.

Release
The film made its debut at the 2015 Cannes Film Festival and premiered on 28 May 2015 on YouTube, the Steam PC gaming platform, SVT2 in Sweden, and the El Rey Network in the United States. By 1 June, the film had received over 10 million views on YouTube, and it has since passed 37 million views. The film has also been made available on video-on-demand platforms through a distribution deal with Under the Milky Way.

Critical reception
Kung Fury was met with positive reviews from critics. Tyler Richardson of Latino-Review gave the film an A, commenting that "What Black Dynamite got so perfect about Blaxploitation films, this does wonderfully for 80s cop movies." Jonny Bunning of Bloody Disgusting gave the film a score of three-and-a-half out of five skulls, saying that "Kung Fury is the Avengers Assemble if it had been made in the 90s." Todd Brown of Twitch Film also praised the film, calling it "a thirty-minute long, nonstop assault of some of the most astounding visual gags ever assembled in one place. Kung Fury knows its audience, knows it damn well, and while it has little to offer to anyone outside of its particular niche, for people within that niche this is absolute gold." Scott Weinberg of Nerdist Industries called it "a 31-minute masterpiece that feels like it fell right out of 1985 and hit just about every awesome b-movie genre on the way down." Melissa Locker of Vanity Fair praised the film, jokingly calling it "the best movie ever, of course."

Awards and nominations
Kung Fury received the following awards and nominations:
Mexico City International Contemporary Film Festival - Best Film - Won
Guldbagge Awards - Best Short Film - Won
Cannes Film Festival - Directors' Fortnight (Short Film) - Nominated
Empire Awards - Best Short Film - Nominated
European Film Awards - European Short Film - Nominated

Video game
Kung Fury: Street Rage is the companion video game to the film, published by Hello There AB and released in June 2015; it pays homage to classic beat 'em up games such as Streets of Rage, Double Dragon, and Final Fight. The gameplay also resembles One Finger Death Punch by Silver Dollar Games, with the player pressing left or right of the character to attack in either direction. It is available on Windows and macOS through Steam. An upgraded version of the game, titled Kung Fury: Street Rage - The Arcade Strikes Back, was released on PlayStation 4 and Windows in December 2015. It features additional boss fights and lets players fight as Kung Fury's allies Triceracop, Barbarianna, and Hackerman. Another expansion, a DLC titled A Day at the Beach, was released on Windows and macOS in November 2021; the update features a new story, a co-op mode, and David Hasselhoff as a new playable character.
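The two-direction mechanic is described above only at a high level. Purely as an illustration of that idea, and not code from the actual game, a minimal sketch might track enemies approaching from both sides and count a hit only when the player's input matches the side of the nearest enemy; all names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Enemy:
    side: str       # "left" or "right"
    distance: int   # steps away from the player

def resolve_press(enemies: list, pressed: str) -> str:
    """Resolve one button press in a two-direction brawler.

    The press defeats the nearest enemy only if it comes from the
    matching side. Hypothetical sketch, not the game's actual logic.
    """
    if not enemies:
        return "whiff"
    nearest = min(enemies, key=lambda e: e.distance)
    if pressed == nearest.side:
        enemies.remove(nearest)
        return "hit"
    return "miss"

# Example: one enemy on each side; the right-hand one is closer.
wave = [Enemy("left", 3), Enemy("right", 1)]
print(resolve_press(wave, "right"))  # "hit"
print(resolve_press(wave, "right"))  # "miss" (only a left-side enemy remains)
```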
Sequel
On 28 May 2016, it was announced on the Laser Unicorns Facebook page that Kung Fury II The Movie was in development, with Sandberg working alongside producers Seth Grahame-Smith and David Katzenberg on a full-length feature film version of Kung Fury. In an interview with Entertainment Weekly, he stated that the project would be a "clean slate", containing no footage from the short film but taking place in the same universe. In February 2018, Michael Fassbender and Arnold Schwarzenegger were confirmed to star alongside Hasselhoff in the upcoming film. On 16 May 2019, it was announced that Creasun Entertainment USA had purchased a majority stake in the film's rights and would co-produce it with Argent Pictures, with filming set to commence on 29 July 2019 in Bulgaria and Germany. On 13 September 2019, Schwarzenegger confirmed the film's title as Kung Fury 2, that he would portray the President of the United States, and that filming was to begin that day. On 25 September, the official Laser Unicorns Instagram page confirmed that Kung Fury 2 had wrapped filming. It is scheduled to be released sometime in 2022.

See also
Commando Ninja
Turbo Kid

References

External links
Lampray Productions official website
Kickstarter

2015 films 2015 martial arts films 2015 action comedy films 2015 independent films Cultural depictions of Adolf Hitler Alternate Nazi Germany films English-language films English-language Swedish films Fictional portrayals of the Miami-Dade Police Department Fictional Vikings Films about dinosaurs Films about Nazis Films about Thor Films about time travel Films set in the 1940s Films set in 1985 Films set in Germany Films set in Miami Films shot in Stockholm Films shot in Sweden Kickstarter-funded films Kung fu films Martial arts comedy films Robot films Retrofuturism Swedish films Swedish action films Swedish alternate history films Swedish comedy films Swedish films about revenge Swedish independent films Swedish short films 2010s exploitation films
44770
https://en.wikipedia.org/wiki/Segesta
Segesta
Segesta (also Egesta, Ségesta, or Aígesta) was one of the major cities of the Elymians, one of the three indigenous peoples of Sicily. The other major cities of the Elymians were Eryx and Entella. It is located in the northwestern part of Sicily in Italy, near the modern commune of Calatafimi-Segesta in the province of Trapani. The Hellenization of Segesta happened very early and had a profound effect on its people.

History

Origins
The origin and foundation of Segesta are extremely obscure. The tradition current among the Greeks, and adopted by Thucydides, ascribed its foundation to a band of Trojan settlers, fugitives from the destruction of their city; and this tradition was readily welcomed by the Romans, who in consequence claimed a kindred origin with the Segestans. Thucydides seems to have considered the Elymians, a barbarian tribe in the neighborhood of Eryx and Segesta, as descended from the Trojans in question; but another account represents the Elymi as a distinct people, already existing in this part of Sicily when the Trojans arrived there and founded the two cities. A different story seems also to have been current, according to which Segesta owed its origin to a band of Phocians who had been among the followers of Philoctetes; and, as usual, later writers sought to reconcile the two accounts. Another version of the Trojan story, related in Virgil's Aeneid and seemingly adopted by the inhabitants themselves, ascribed the foundation of the city jointly to the territorial king Egestus or Aegestus (the Acestes of Virgil), who was said to be the offspring of a Dardanian damsel named Segesta by the river god Crinisus, and to those of Aeneas' folk who wished to remain behind with Acestes to found the city of Acesta. We are told also that the names of Simois and Scamander were given by the Trojan colonists to two small streams which flowed beneath the town, and the latter name is mentioned by Diodorus Siculus as one still in use at a much later period. The belief that the name of the city was originally Acesta or Egesta and was changed to Segesta by the Romans to avoid its ill-omened meaning in Latin (egestas means "poverty" or "lack") is disproved by coins showing that, considerably before the time of Thucydides, it was called Segesta by the inhabitants themselves, though this form seems to have been softened by the Greeks into Egesta. It is clear, on the one hand, that the city was occupied by a people distinct from the Sicanians, the native race of this part of Sicily, and, on the other, that it was not a Greek colony. Thucydides, in enumerating the allies of the Athenians at the time of the Peloponnesian War, distinctly calls the Segestans barbarians. At the same time they appear to have been, from a very early period, in close connection with the Greek cities of Sicily, entering into relations of both hostility and alliance with the Hellenic states in a manner wholly different from the other barbarians of the island. The early influence of Greek civilisation is shown also by their coins, which are inscribed with Greek characters and bear the unquestionable impress of Greek art.

In historical accounts
The first historical notice of the Segestans transmitted to us represents them as already engaged (as early as 580 BC) in hostilities with Selinus (modern Selinunte), which would appear to prove that both cities had already extended their territories so far as to come into contact with each other.
By the timely assistance of a body of Cnidian and Rhodian emigrants under Pentathlus, the Segestans at this time obtained the advantage over their adversaries. A more obscure statement of Diodorus relates that again in 454 BC, the Segestans were engaged in hostilities with the Lilybaeans for the possession of the territory on the river Mazarus. The name of the Lilybaeans is here certainly erroneous, as no town of that name existed till long afterwards; but we know not what people is really meant, though the presumption is that it is the Selinuntines, with whom the Segestans seem to have been engaged in almost perpetual disputes. It was doubtless with a view to strengthen themselves against these neighbors that the Segestans took advantage of the first Athenian expedition to Sicily under Laches (426 BC), and concluded a treaty of alliance with Athens. This, however, seems to have led to no result, and shortly after, hostilities having again broken out, the Selinuntines called in the aid of the Syracusans, with whose assistance they obtained great advantages, and were able to press Segesta closely both by land and sea. In this extremity the Segestans, having in vain applied for assistance to Agrigentum, and even to Carthage, again had recourse to the Athenians, who were, without much difficulty, persuaded to espouse their cause, and send a fleet to Sicily in 416 BC. It is said that this result was in part attained by fraud, the Segestans having deceived the Athenian envoys by a fallacious display of wealth, and led them to conceive a greatly exaggerated notion of their resources. They, however, actually furnished 60 talents in ready money, and 30 more after the arrival of the Athenian armament. But though the relief of Segesta was thus the original object of the great Athenian expedition to Sicily (415-413 BC), that city bears little part in the subsequent operations of the war. Nicias, indeed, on arriving in the island, proposed to proceed at once to Selinus, and compel that people to submission by the display of their formidable armament. But this advice was overruled: the Athenians turned their arms against Syracuse, and the contest between Segesta and Selinus was almost forgotten in the more important struggle between those two great powers. In the summer of 415 BC an Athenian fleet, proceeding along the coast, took the small town of Hyccara, on the coast, near Segesta, and made it over to the Segestans. The latter people are again mentioned on more than one occasion as sending auxiliary troops to assist their Athenian allies; but no other notice occurs of them. The final defeat of the Athenians left the Segestans again exposed to the attacks of their neighbors, the Selinuntines. Feeling themselves unable to cope with them, they again had recourse to the Carthaginians, who determined to espouse their cause, and sent them, in the first instance, an auxiliary force of 5000 Africans and 800 Campanian mercenaries, which sufficed to ensure them a victory over their rivals in 410 BC. This was followed the next year by a vast armament under Hannibal Mago, who landed at Lilybaeum, and, proceeding direct to Selinus, took and destroyed that city as well as Himera. The Carthaginian power now became firmly established in the western portion of Sicily. Segesta, surrounded on all sides by this formidable neighbor, naturally fell gradually into the position of a dependent ally of Carthage. 
It was one of the few cities that remained faithful to this alliance even in 397 BC, when the great expedition of Dionysius I of Syracuse to the West of Sicily and the siege of Motya seemed altogether to shake the power of Carthage. Dionysius in consequence laid siege to Segesta, and pressed it with the utmost vigor, especially after the fall of Motya. The city, however, was able to defy his efforts, until the landing of Himilco with a formidable Carthaginian force changed the aspect of affairs, and compelled Dionysius to raise the siege. From this time there are few mentions of Segesta till the time of Agathocles of Syracuse, under whom it suffered a great calamity. The despot landed in the West of Sicily on his return from Africa (307 BC), and was received into the city as a friend and ally. He suddenly turned upon the inhabitants on a pretence of disaffection, and put the whole of the citizens (said to amount to 10,000 in number) to the sword, plundered their wealth, and sold the women and children into slavery. He then changed the name of the city to Dicaeopolis, and assigned it as a residence to the fugitives and deserters that had gathered around him. It is probable that Segesta never altogether recovered this blow; but it soon resumed its original name, and again appears in history as an independent city. Thus it is mentioned in 276 BC, as one of the cities which joined Pyrrhus of Epirus during his expedition into the West of Sicily. It, however, soon after fell again under the power of the Carthaginians; and it was probably on this occasion that the city was taken and plundered by them, as alluded to by Cicero; a circumstance of which there are no other account. It continued subject to, or at least dependent on that people, till the First Punic War. In the first year of that war (264 BC) it was attacked by the consul Appius Claudius Caudex, but without success; but shortly after the inhabitants put the Carthaginian garrison to the sword, and declared for the alliance of Rome. They were in consequence besieged by a Carthaginian force, and were at one time reduced to great straits, but were relieved by the arrival of Gaius Duilius, after his naval victory in 260 BC. Segesta seems to have been one of the first of the Sicilian cities to set the example of defection from Carthage; on which account, as well as of their pretended Trojan descent, the inhabitants were treated with great distinction by the Romans. They were exempted from all public burdens, and even as late as the time of Cicero continued to be "sine foedere immunes ac liberi" - a free and immune city. After the destruction of Carthage, Scipio Africanus restored to the Segestans a statue of Diana which had been carried off by the Carthaginians, probably when they obtained possession of the city after the departure of Pyrrhus. During the Second Servile War also, in 102 BC, the territory of Segesta is again mentioned as one of those where the insurrection broke out with the greatest fury. But with the exception of these incidental notices we hear little of it under the Roman government. It seems to have been still a considerable town in the time of Cicero, and had a port or emporium of its own on the bay about 10 km distant. 
This emporium seems to have grown up in the days of Strabo to be a more important place than Segesta itself: but the continued existence of the ancient city is attested both by Pliny and Ptolemy; and we learn from the former that the inhabitants, though they no longer retained their position of nominal independence, enjoyed the privileges of the Latin citizenship. It seems, however, to have been a decaying place, and no trace of it is subsequently found in history. The site is said to have been finally abandoned, in consequence of the ravages of the Saracens, in 900 AD, and is now wholly desolate. The modern town of Castellammare del Golfo, about 10 km distant, occupies nearly, if not precisely, the same site as the ancient emporium or port of Segesta. Situation The ruins of the city are located on the top of Monte Bàrbaro at 305 m above sea level. The city was protected by steep slopes on several sides and by walls on the more gentle slope towards the temple. The hilltop offers a view over the valley towards the Gulf of Castellamare. The city controlled several major roads between the coast to the north and the hinterland. Very little is known about the city plan. Aerial photography indicates a regular city plan, built in part on terraces to overcome the natural sloping terrain. The current remains might be from the reconstruction after the destruction of the city by Agathocles. Current archaeological work indicates that the site was reoccupied by a Muslim community in the Norman period. Excavations have unearthed a Muslim necropolis and a mosque from the 12th century next to a Norman castle. Evidence suggests that the mosque was destroyed after the arrival of a new Christian overlord at the beginning of the 13th century. The city appears to have been finally abandoned by the second half of the 13th century. The temple On a hill just outside the site of the ancient city of Segesta lies an unusually well-preserved Doric temple. Some think it to have been built in the 420s BC by an Athenian architect, despite the city not having any Greek population. The prevailing view is that it was built by the indigenous Elymians. The temple has six by fourteen columns on a base measuring 21 by 56 metres, on a platform three steps high. Several elements suggest that the temple was never finished. The columns have not been fluted as they normally would have been in a Doric temple, and there are still bosses present in the blocks of the base (used for lifting the blocks into place but then normally removed). The temple also lacks a cella, any ornamentation, altar or deity dedication, and was never roofed over. The temple was never completed due to the war between Segesta and Selinunte. It managed to escape destruction by the Carthaginians in the late 5th century. Further reading References Sources External links Official website Photos of the site Segesta See Palermo's Segesta Page Panoramic virtual tour inside the Doric temple Ancient cities in Sicily Archaeological sites in Sicily Elymians Ionian colonies in Magna Graecia Former populated places in Italy Ancient Greek archaeological sites in Italy Archaeological parks Roman towns and cities in Italy
60065139
https://en.wikipedia.org/wiki/Political%20positions%20of%20Amy%20Klobuchar
Political positions of Amy Klobuchar
Amy Jean Klobuchar (born May 25, 1960) is an American lawyer and politician serving as the senior United States senator from Minnesota. A member of the Minnesota Democratic-Farmer-Labor Party, Minnesota's affiliate of the Democratic Party, she previously served as the Hennepin County Attorney. She ran for the Democratic nomination for President of the United States in the 2020 election, before withdrawing in March 2020 and endorsing Joe Biden. Klobuchar's political positions have generally been in line with modern American liberalism. She is pro-choice on abortion, supports LGBT rights and the Affordable Care Act, and was critical of the Iraq War. According to GovTrack, Klobuchar passed more legislation than any other senator by the end of the 114th Congress in late 2016. According to Congress.gov, she had sponsored or co-sponsored 111 pieces of legislation that became law. During the 115th Congress, she voted in line with President Donald Trump's position on legislation 31.1 percent of the time.

Crime

Criminal justice reform
In December 2018, Klobuchar voted for the First Step Act, legislation aimed at reducing recidivism rates among federal prisoners by expanding job training and other programs, expanding early-release programs, and modifying sentencing laws, such as mandatory minimum sentences for nonviolent drug offenders, "to more equitably punish drug offenders." On July 31, 2019, following Attorney General William Barr's announcement that the United States federal government would resume the use of the death penalty for the first time in over 20 years, Klobuchar was a cosponsor of a bill banning the death penalty.

Gun laws
As of October 2018, Klobuchar had an "F" rating from the National Rifle Association (NRA) for supporting gun control legislation. In the wake of the 2016 Orlando nightclub shooting, Klobuchar participated in the Chris Murphy gun control filibuster. Following the Las Vegas shooting in October 2017, Klobuchar was one of twenty-four senators to sign a letter to National Institutes of Health Director Dr. Francis Collins espousing the view that it was critical the NIH "dedicate a portion of its resources to the public health consequences of gun violence" at a time when 93 Americans were dying each day from gun violence, and noting that the Dickey Amendment did not prohibit objective, scientific inquiries into shooting death prevention. In an October 2017 interview, after the NRA publicly agreed with the notion by some Republicans that bump stocks may need to be regulated, Klobuchar called it "one of the first times we've heard Republicans even open to talking about any kind of restrictions like this" and said that there were a number of her fellow members of Congress, including Republicans, "who have supported sensible background checks in addition to, of course, banning things like bump stocks." In November 2017, Klobuchar was a cosponsor of the Military Domestic Violence Reporting Enhancement Act, a bill that would create a charge of domestic violence under the Uniform Code of Military Justice (UCMJ) and stipulate that convictions be reported within three days to federal databases with the authority to keep abusers from purchasing firearms, in an attempt to close a loophole in the UCMJ through which convicted abusers retained the ability to purchase firearms.
In January 2019, Klobuchar was one of forty senators to introduce the Background Check Expansion Act, a bill that would require background checks for either the sale or transfer of all firearms including all unlicensed sellers. Exceptions to the bill's background check requirement included transfers between members of law enforcement, loaning firearms for either hunting or sporting events on a temporary basis, providing firearms as gifts to members of one's immediate family, firearms being transferred as part of an inheritance, or giving a firearm to another person temporarily for immediate self-defense. In February 2018, after the Stoneman Douglas High School shooting, Klobuchar was one of four Democratic senators to sign a letter to President Trump asserting that were he "to endorse legislation to require a background check on every gun purchase, without other poison pill provisions attached, we could finally move much closer towards the comprehensive system that you called for after the Stoneman Douglas attack" and that there was no justification in allowing individuals denied firearms by federally licensed dealers being able to "simply visit a gun show or go online to purchase the same gun that they were denied at the store." In February 2019, Klobuchar was one of thirty-eight senators to sign a letter to Senate Judiciary Committee Chairman Lindsey Graham calling on him to "hold a hearing" on universal background checks and noted Graham's statement in the press that he "intended to have the Committee work on ‘red flag’ legislation and potentially also background checks", actions the senators indicated their support for. In May 2019, as Senate Democrats pressured Senate Majority Leader Mitch McConnell to allow a vote on a bill to renew the Violence Against Women Act, which included an amendment closing the "boyfriend loophole" which barred those convicted of abusing, assaulting or stalking a dating partner from buying or owning a firearm, Klobuchar advocated for the Senate passing the bill and called the measure crucial to protect women in the United States who died from gun violence at larger rates there than in other high-income countries. At the Detroit Democratic debate on July 31, 2019, Klobuchar invoked the death of 6-year-old Stephen Romero in the Gilroy Garlic Festival shooting and noted that President Trump supported background checks in a meeting before changing his position following a separate meeting with pro-gun group before stating that she would not fold if elected president and would secure the passage of universal background checks. Crime control According to her Senate website, while serving as Attorney of Hennepin County, Klobuchar was "a leading advocate for successful passage of Minnesota's first felony DWI law". She also focused on the prosecution of violent and career criminals while serving as County Attorney. Eric T. Schneiderman, the New York State Attorney General, praised Klobuchar's efforts for legislation against phone theft. In 2017 she took over sponsorship from Al Franken of a bill to provide grants for law enforcement personnel to receive training in how to question survivors of sexual assault and other trauma, after Franken was accused of sexual misconduct. In 2011, Klobuchar introduced S.978, the Commercial Felony Streaming Act, a bill that would make unauthorized streaming of copyrighted material for the purpose of "commercial advantage or personal financial gain" a felony under US copyright law. Backed by the U.S. 
Chamber of Commerce and praised by industry groups, the legislation has been enormously unpopular among critics who believe it would apply to those who stream or post videos of copyrighted content on public sites such as YouTube. Economy Agriculture In March 2018, Klobuchar and Republican John Thune introduced the Agriculture Data Act, a bill that would direct the Secretary of Agriculture "to collect, collate, integrate, and link data relating to the impacts of covered conservation practices on enhancing crop yields, soil health, and otherwise reducing risk and improving farm and ranch profitability" in addition to granting the Agriculture Secretary the ability to form a data warehouse on the subjects of confidential cloud-based conservation and farm productivity that would reserve records of multiple analysis. Klobuchar stated that the bill would "ensure hardworking farmers are able to capitalize on the United States Department of Agriculture’s vast resources to streamline their operations, enhance yields, and increase profits." In March 2019, Klobuchar was one of 38 senators to sign a letter to United States Secretary of Agriculture Sonny Perdue warning that dairy farmers "have continued to face market instability and are struggling to survive the fourth year of sustained low prices" and urging his department to "strongly encourage these farmers to consider the Dairy Margin Coverage program." In May 2019, Klobuchar and eight other Democratic senators sent a letter to Perdue in which they criticized the USDA for purchasing pork from JBS USA and wrote that it was "counterproductive and contradictory" for companies to receive funding from "U.S. taxpayer dollars intended to help American farmers struggling with this administration's trade policy." The senators requested the department "ensure these commodity purchases are carried out in a manner that most benefits the American farmer’s bottom line—not the business interests of foreign corporations." Company mergers and regulations In February 2019, Klobuchar was one of eight senators to sign a letter to the Federal Communications Commission and Department of Justice advocating for regulators to reject a proposed $26 billion merger between T-Mobile and Sprint, writing that American enforcers have understood for the last 30 years "that fostering robust competition in telecommunications markets is the best way to provide every American with access to high-quality, cutting-edge communications at a reasonable price" and the merger would result in a return for "Americans to the dark days of heavily consolidated markets and less competition, with all of the resulting harms." In March 2019, Klobuchar was one of six senators to sign a letter to the Federal Trade Commission (FTC) requesting it "use its rulemaking authority, along with other tools, in order to combat the scourge of non-compete clauses rigging our economy against workers" and espousing the view that non-compete clauses "harm employees by limiting their ability to find alternate work, which leaves them with little leverage to bargain for better wages or working conditions with their immediate employer." The senators furthered that the FTC had the responsibility of protecting both consumers and workers and needed to "act decisively" to address their concerns over "serious anti-competitive harms from the proliferation of non-competes in the economy." 
In June 2019, Klobuchar led six other Senate Democrats in signing letters to the FTC and the Department of Justice recounting that many of them had "called on both the FTC and the Justice Department to investigate potential anticompetitive activity in these markets, particularly following the significant enforcement actions taken by foreign competition enforcers against these same companies" and requested both agencies confirm whether or not opened antitrust investigations had been opened by them regarding each of the companies and for both agencies to pledge they will publicly release any such investigation's findings. Housing In April 2019, Klobuchar was one of forty-one senators to sign a bipartisan letter to the housing subcommittee praising the United States Department of Housing and Urban Development's Section 4 Capacity Building program as authorizing "HUD to partner with national nonprofit community development organizations to provide education, training, and financial support to local community development corporations (CDCs) across the country" and expressing disappointment that President Trump's budget "has slated this program for elimination after decades of successful economic and community development." The senators wrote of their hope that the subcommittee would support continued funding for Section 4 in Fiscal Year 2020. Labor laws In July 2019, Klobuchar signed a letter to United States Secretary of Labor Alexander Acosta that advocated for the U.S. Occupational Safety and Health Administration to initiate a full investigation into a complaint filed on May 20 by a group of Chicago-area employees of McDonald's, which detailed workplace violence incidents that included interactions with customers such as customers throwing hot coffee and threatening employees with firearms and more. The senators argued that McDonald's could and needed to "do more to protect its employees, but employers will not take seriously their obligations to provide a safe workplace if OSHA does not enforce workers rights to a hazard-free workplace." In July 2019, when asked by Royceann Porter of the Teamsters 238 union what she would do as president to stand up for temporary workers, Klobuchar replied that the first thing needed was to "make it easy for people to organize" and that union organizers would "have to explain to Iowa and to the rest of the country — this president has not kept his promise to those workers where he said he was going to stand up for them." Taxation In July 2017, during a tour of Canal Park Brewing as part of her promotion of a tax cut for microbreweries, Klobuchar said they were jobs in the United States that were not going anywhere and one thing needed was the encouragement of "more of these small businesses and especially these small breweries." She acknowledged the tax cut had been introduced in the past, but stated the possibility of the Craft Beverage Modernization and Tax Reform Act being passed as part of broader tax reform scheduled for later that year. In 2019 Klobuchar was a sponsor of the Gold Star Family Tax Relief Act, a bill that would undo a provision in the Tax Cuts and Jobs Act of 2017 that raised the tax on the benefit children receive from a parent's Department of Defense survivor benefits plan up to 37% compared with an average of 12% to 15% prior to the 2017 tax bill. 
In a news release, Klobuchar reflected that when she "learned about the unacceptable mistake in the 2017 tax law that unduly burdened our Gold Star families, my colleagues and I moved immediately to fix the problem". The bill passed in the Senate in May 2019. Trade In 2010, Klobuchar opposed the Trans-Pacific Partnership "because she [had] concerns about whether the proposed legislation [was] strong enough for American workers". Education In February 2019 Klobuchar came out against tuition free, four-year college for all, while saying she supports free community colleges. In March 2019, Klobuchar was one of 13 senators to sign a letter to United States Secretary of Education Betsy Devos calling for the Education Department to do more to assist Argosy University students as they faced campus closures across the US and critiquing the Education Department as failing to provide adequate measures to protect students or keep them notified of ongoing updates. In July 2019, Klobuchar addressed the National Education Association, saying that her first plan was to increase pay for teachers that would be financed through changes to the estate tax and warned that the United States was "not going to compete with the rest of the world if we don’t invest in our schools." Environment Climate change In December 2014, Klobuchar was one of six Democratic senators to sign a letter to the Environmental Protection Agency urging the agency to give states more time to comply with its rule on power plants as the final rule "must provide adequate time for the design, permitting and construction of such large scale capital intensive infrastructure" and calling for an elimination of the 2020 targets in the final rule, a mandate that states take action by 2020 as part of the EPA's goal to reach a 30 percent carbon cut by 2030. In November 2018, the Trump administration released a climate change report warning of dire consequences if the US did not change its policies. The report was released the day after Thanksgiving, generally one of the slowest news days of the year. Klobuchar stated administration officials "couldn't pick a day where they tried to get less attention" and expressed her view that the attempt backfired given "a lot of people are signing up to get the overview of the report" as a result of no other news receiving attention the day the report was made public. She advocated for the implementation of greenhouse gas rules and gas mileage standards along with the United States reentering the Paris Agreement. In November 2018, Klobuchar was one of 25 Democratic senators to cosponsor a resolution in response to findings of the Intergovernmental Panel on Climate Change (IPCC) report and National Climate Assessment. The resolution affirmed the senators' acceptance of the findings and their support for bold action to address climate change. In a February 2019 interview with Bret Baier, Klobuchar was asked how she would vote on the Green New Deal if it came up for a vote in the Senate, replying, "I see it as aspirational, I see it as a jump start. So I would vote yes, but I would also, if it got down to the nitty-gritty of an actual legislation as opposed to, ‘oh, here are some goals we have,’ that would be different for me." Klobuchar added that she was "for a jump-start of the discussion" as espoused in the Green New Deal's framework by fellow senator Ed Markey. 
In June 2019, Klobuchar was one of forty-four senators to introduce the International Climate Accountability Act, legislation that would prevent President Trump from using funds in an attempt to withdraw from the Paris Agreement and directing the president's administration to instead develop a strategic plan for the United States that would allow it to meet its commitment under the Paris Agreement. In June 2019, after Politico reported that the United States Department of Agriculture had mostly ceased promoting its own climate science, Klobuchar sent a letter to Agriculture Secretary Sonny Perdue requesting the Agriculture Department offer an explanation for not publicizing certain studies and called for an immediate release of "any [Agricultural Research Service] study related to climate science that was ignored, downplayed, or its findings held back." Conservation In January 2017, Klobuchar was one of five senators to cosponsor a bill that would lift federal protections for gray wolves in both the Midwest and Wyoming in an attempt to prevent courts from overruling an Interior Department decision to remove wolves in Wyoming, Wisconsin, Minnesota, and Michigan from the endangered species list. In May 2019, Klobuchar and Republican James Risch reintroduced the Recreational Trails Program Funding Transparency Act, a bipartisan bill mandating the United States Secretary of Transportation give a report regarding the amount of non-highway recreational fuel taxes administered to the Recreational Trails Program for the purpose of assisting Congress in determining the appropriate funding level for the program. Klobuchar called environmental conservation "a fundamental part of Minnesota's heritage" and said the bill would "help ensure that states receive the resources they need to protect and improve these trails for generations to come." Water pollution In May 2019, Klobuchar and Republican Marco Rubio introduced the Local Water Protection Act, a bipartisan bill that would reauthorize the Environmental Protection Agency's (EPA) Section 319 grant program, which had provided funding opportunities for states to develop and implement their own programs for managing non-point source water pollution. The bill also increased funding for Section 319 from $70 million a year to $200 million for Fiscal Years 2020 through 2024. Klobuchar said her state took the quality of its "10,000 lakes very seriously, and we all want to preserve the quality of these important waterways for generations to come" and that the bill "would give local and state governments the resources they need to create the best voluntary conservation programs to ensure that their water is clean and free of harmful pollutants." Wood building In May 2016, Klobuchar was a cosponsor of the Timber Innovation Act, a bipartisan bill introduced by Mike Crapo and Debbie Stabenow that would incentivize investment by the National Forest Products Lab as well as American colleges and universities to conduct both research and development on alternative methods for wood building construction. The bill also supported attempts by the Agriculture Department to further support wood products being used as a building material for tall buildings. Government oversight Campaign finance In June 2019, Klobuchar and Senator Mark Warner (D-Virginia) introduced the Preventing Adversaries Internationally from Disbursing Advertising Dollars (PAID AD) Act, a bill that would modify U.S. 
federal campaign finance laws to outlaw the purchase, by foreign nationals, of ads that name a political candidate and appear on online platforms during an election year. Klobuchar issued a statement saying that the intelligence community had identified foreign powers as continuing to interfere in American elections and that changing the law to ban foreign officials and governments from buying political advertisements "is necessary to ensure American elections are free and fair."

Election security
In January 2017, Klobuchar was one of six Democratic senators to introduce legislation that would establish an independent counsel with the ability to probe potential Russian cyberattacks on political systems and investigate Russian efforts to interfere in American elections, with roughly eighteen months to hand over its findings and recommendations to Congress. In February 2017, Klobuchar led 25 senators in signing a letter to the Election Assistance Commission (EAC) requesting that the commission detail the cybersecurity challenges facing state and local officials in their attempts to safeguard future elections and to secure the 2016 election from Russian hackers. In June 2017, Klobuchar sent a letter to National Security Advisor H.R. McMaster requesting that he meet with the Senate Rules Committee on the subject of allegations that Russia attempted to interfere in the 2016 election, citing the necessity for the Senate to "have all of the information necessary to ensure that future elections are safeguarded from foreign interference" amid its investigation into the extent to which Russia interfered in the election, and calling for McMaster to consider "making information that could be helpful to protecting critical infrastructure publicly available immediately." In October 2017, Klobuchar and Virginia Senator Mark Warner unveiled the Honest Ads Act at a news conference, legislation that mandated that large digital platforms keep a public repository of paid political advertising that appears on their sites, abolishing "what has been a major discrepancy between how political activity is regulated online compared with broadcast television and radio." The bill was a response to a revelation by Facebook, Inc. the previous month, in which the company disclosed the discovery of roughly 500 inauthentic accounts responsible for more than $100,000 in advertising associated with the election and opined that the accounts were linked to Russia. On December 21, 2017, Klobuchar was one of six senators to introduce the Secure Elections Act, legislation authorizing block grants for states to update outdated voting technology, as well as forming a program for an independent panel of experts to develop cybersecurity guidelines for election systems that states could then implement, along with offering states resources to adopt the recommendations. In June 2018, Klobuchar and Sherrod Brown introduced the Save Voters Act, a bill that would amend the National Voter Registration Act to assert that a state cannot use an individual's failure to vote or to respond to a state notice as a reason for removing them from voter rolls. In reference to the U.S. Supreme Court ruling earlier that month upholding Ohio's "use it or lose it" policy, Klobuchar said, "We should be doing everything we can to encourage participation in elections and strengthen voting rights, yet last week's Supreme Court decision will allow states to make it harder — not easier — for more Americans to vote."
In August 2018, during an interview on NBC's Meet the Press, Klobuchar stated that President Trump's downplaying of Russian interference in the 2016 election posed a threat to national security and she wished he would listen to members of the intelligence community "but what we have right now is a common set of facts between at least Democrats and Republicans in the Senate, and a common purpose to protect our democracy." Klobuchar added that she would "love to see this broadened out so we start to discuss also the threats to our power grid system, the threats to our financial system, because the Russians aren't just stopping at the election equipment." In November 2018, Klobuchar and Republican Dan Sullivan introduced legislation to create a new State Department program offering grants to American nonprofit groups for working on election security and sharing information with similar groups in other countries. In April 2019, Klobuchar was one of twelve Democratic senators to sign a letter led by Mazie Hirono that questioned the decision of Attorney General William Barr to offer "his own conclusion that the President’s conduct did not amount to obstruction of justice" and called for both the Justice Department's inspector general and the Office of Professional Responsibility to launch an investigation into whether Barr's summary of the Mueller Report and his April 18 news conference were misleading. During the June 27, 2019 Miami Democratic debate, Klobuchar warned, "We let the Republicans run our elections, and if we do not do something about Russian interference in the election, and we let Mitch McConnell stop all the back-up paper ballots, then we are not going to get to do what we want to do." In July 2019, Klobuchar and Rhode Island Senator Jack Reed sent a letter to Acting Homeland Security (DHS) Secretary Kevin McAleenan requesting an explanation of the actions taken by the DHS in response to "unexpected behavior" of voting equipment in Durham County, North Carolina during the 2016 presidential election and opined that it was "critical that we learn as much as we can about the extent of the attacks we faced in 2016, and that these lessons be shared as widely as possible so that our nation is fully prepared for the 2020 elections." Government surveillance In August 2007, Klobuchar was one of 16 Democratic senators and 41 Democratic congresspeople to vote for the Protect America Act of 2007, which was heavily criticized by civil libertarians. Klobuchar did, however, vote against granting legal immunity to telecom corporations that cooperated with the NSA warrantless surveillance program. Klobuchar voted in favor of the Intelligence Authorization Act of 2008, which included a provision to ban the use of waterboarding by the United States. During the hearing of U.S. Supreme Court nominee Elena Kagan, Klobuchar disagreed with Senator Tom Coburn (R-Oklahoma) when he questioned the nominee about his perception that Americans were "losing freedom" over the past 30 years. Klobuchar argued that the "free society" Coburn favored was one in which women were underrepresented in government, including no representation on the Supreme Court or the Senate Judiciary Committee. Census In June 2019, Klobuchar was one of 28 senators to sign a letter led by Brian Schatz to United States Secretary of Commerce Wilbur Ross warning that Ross would "further delay and jeopardize the Census Bureau’s ability to conduct a full, fair, and accurate decennial census as required by the U.S. 
Constitution and the Census Act" by continuing to attempt adding the citizenship question to 2020 census materials. The senators urged Ross to "allow the Census Bureau to proceed with preparation for a 2020 census without a citizenship question on the questionnaire." Government shutdown In March 2019, Klobuchar and thirty-eight other senators signed a letter to the Appropriations Committee opining that contractor workers and by extension their families "should not be penalized for a government shutdown that they did nothing to cause" while noting that there were bills in both chambers of Congress that if enacted would provide back pay to compensate contractor employees for lost wages before urging the Appropriations Committee "to include back pay for contractor employees in a supplemental appropriations bill for FY2019 or as part of the regular appropriations process for FY2020." Foreign policy In March 2007, Klobuchar went on an official trip to Iraq with Senate colleagues Sheldon Whitehouse, John E. Sununu, and Lisa Murkowski. She noted that U.S. troops were completing their job and working arduously to train the Iraqis. Klobuchar opposed President George W. Bush's plan to increase troop levels in Iraq in January 2007. In May 2007, after Bush vetoed a bill (which Klobuchar voted for) that would fund the troops but impose time limits on the Iraq War, and supporters failed to garner enough congressional votes to override his veto, Klobuchar voted for additional funding for Iraq without such time limits, saying she "simply could not stomach the idea of using our soldiers as bargaining chips". In December 2010, Klobuchar voted for the ratification of New START, a nuclear arms reduction treaty between the United States and the Russian Federation obliging both countries to have no more than 1,550 strategic warheads and 700 launchers deployed during the next seven years, and providing for a continuation of on-site inspections that halted when START I expired the previous year. It was the first arms treaty with Russia in eight years. In 2011, Klobuchar supported American military intervention in Libya. In October 2016, Klobuchar was one of 12 senators to sign a letter to President Barack Obama urging his administration "to consider all options for increasing China’s compliance with its international trade obligations, including a potential case brought with our allies at the World Trade Organization and a pause of other trade negotiations with China, such as the Bilateral Investment Treaty talk" and asserting that U.S. steel companies and steelworkers would only get the relief they needed though the implementation of "strong enforcement measures into our strategy to reduce excess global capacity". In March 2017, Klobuchar and Senator John Cornyn (R-Texas) introduced a bill that would mandate the Defense Department reclassify deployments to the Sinai Peninsula as combat zone assignments, complete with the corresponding tax breaks that come from the proposed arrangement. Klobuchar observed, "As terrorist groups like ISIS spread throughout the region, the dangers these service members face has increased. Current rules regarding benefits for those serving in the Sinai do not reflect these new threats." In November 2017, in response to efforts by China to purchase tech companies based in the US, Klobuchar was one of nine senators to cosponsor a bill that would broaden the federal government's ability to prevent foreign purchases of U.S. 
firms by strengthening the Committee on Foreign Investment in the United States (CFIUS). The bill would expand the scope of CFIUS, allowing it to review, and possibly decline, smaller investments, and would add national security factors for CFIUS to consider, including whether information about Americans would be exposed as part of a transaction or whether a deal would facilitate fraud. In May 2018, Klobuchar was one of 12 senators to sign a letter to President Donald Trump urging him to remain in the Iran nuclear deal on the grounds that "Iran could either remain in the agreement and seek to isolate the United States from our closest partners, or resume its nuclear activities" if the US pulled out, and that both possibilities "would be detrimental to our national security interests." In March 2018, Klobuchar voted against tabling a resolution spearheaded by Bernie Sanders, Chris Murphy, and Mike Lee that would have required Trump to withdraw American troops in or influencing Yemen within the next 30 days unless they were combating Al-Qaeda. In October 2018, Klobuchar was one of eight senators to sign a letter to Director of National Intelligence Dan Coats requesting a classified briefing on what the intelligence community knew about threats to Saudi journalist Jamal Khashoggi so that the senators could fulfill their "oversight obligation". In January 2019, following Juan Guaidó's self-declaration as interim President of Venezuela, Klobuchar told HuffPost that she supported the opposition to Nicolás Maduro. In February 2019, Klobuchar supported the Israel Anti-Boycott Act, which would make it legal for states to refuse to do business with contractors that engage in boycotts against Israel. In April 2019, Klobuchar was one of 34 senators to sign a letter to President Trump encouraging him "to listen to members of your own Administration and reverse a decision that will damage our national security and aggravate conditions inside Central America", asserting that Trump had "consistently expressed a flawed understanding of U.S. foreign assistance" since becoming president and that he was "personally undermining efforts to promote U.S. national security and economic prosperity" by preventing the use of Fiscal Year 2018 national security funding. The senators argued that foreign assistance to Central American countries reduced migration to the U.S., citing the funding's role in improving conditions in those countries. Klobuchar supports ending all U.S. involvement in the Saudi Arabian-led intervention in Yemen.

Health and safety

Drug policy
In June 2016, along with Republicans Chuck Grassley and Mike Lee and fellow Democrat Patrick Leahy, Klobuchar was one of four senators to introduce a bill that would authorize generic manufacturers to file a lawsuit mandating access to a drug sample or forcing negotiations over a safety protocol when brand-name drug companies misuse Food and Drug Administration rules to keep them from obtaining samples or refuse to share a safety protocol. The bill's intent was to prevent big pharmaceutical companies from using safety rules to keep generic drugs from coming to market. In November 2016, along with Chuck Grassley and Richard Blumenthal, Klobuchar sent a letter to Mylan Chief Executive Heather Bresch expressing concern that Mylan may have overcharged military members for EpiPen and questioning when Mylan would reimburse the Defense Department.
In December 2016, Klobuchar was one of 17 senators to sign a letter to President-elect Trump asking him to fulfill a campaign pledge to bring down the cost of prescription drugs, stating their willingness "to advance measures to achieve this goal", and calling on Trump "to partner with Republicans and Democrats alike to take meaningful steps to address the high cost of prescription drugs through bold administrative and legislative actions." In February 2017, Klobuchar and thirty other senators signed a letter to Kaléo Pharmaceuticals in response to the opioid-overdose-reversing device Evzio rising in price from $690 in 2014 to $4,500, requesting that the company explain the detailed price structure for Evzio, the number of devices Kaléo set aside for donation, and the total federal reimbursements Evzio had received in the previous year. In March 2017, Klobuchar was one of twenty-one senators to sign a letter led by Ed Markey to Senate Majority Leader Mitch McConnell which noted that 12 percent of adult Medicaid beneficiaries had some form of substance use disorder, that one third of treatment administered for opioid and other substance use disorders in the United States was financed through Medicaid, and which argued that the American Health Care Act could "very literally translate into a death spiral for those with opioid use disorders", since individuals who lack insurance coverage or adequate funds to afford care often abandon substance use disorder treatment. In June 2018, Klobuchar co-sponsored the bipartisan STATES Act, proposed in the 115th U.S. Congress by Massachusetts Senator Elizabeth Warren and Colorado Senator Cory Gardner, which would exempt individuals or corporations in compliance with state cannabis laws from federal enforcement of the Controlled Substances Act. In February 2018, Klobuchar introduced CARA 2.0, a follow-up to the Comprehensive Addiction and Recovery Act (CARA) that would impose a three-day initial prescribing limit on opioids for acute pain, increase services to promote recovery, and widen the availability of treatment. At a news conference, Klobuchar described CARA 2.0 as "a blueprint for the country in terms of training on naloxone and in terms of authorization for money." In December 2018, Klobuchar was one of 21 senators to sign a letter to Commissioner of Food and Drugs Scott Gottlieb stating their approval of the actions of the Food and Drug Administration to hinder youth access to e-cigarettes and urging the FDA "to take additional, stronger steps to prevent and reduce e-cigarette use among youth." In January 2019, Catherine Cortez Masto announced that she and Klobuchar were sponsoring legislation authorizing "the largest purchaser of prescription medications, Medicare, to negotiate drug prices and to hold pharmaceutical companies accountable for the rising prices of prescription drugs." In June 2019, Klobuchar was one of fifteen senators to introduce the Affordable Medications Act, legislation intended to promote transparency by mandating that pharmaceutical companies disclose the amount of money going toward research and development as well as marketing and executives' salaries.
The bill would also abolish the restriction that stops the federal Medicare program from using its buying power to negotiate lower drug prices for beneficiaries, and would curb drug company monopoly practices used to keep prices high and to keep less expensive generics from entering the market.

Food policy
When the Healthy, Hunger-Free Kids Act of 2010 raised the possibility that pizza sauce would no longer be counted as a serving of vegetables in school lunches, threatening the $3 billion Schwan Company of Minnesota, Klobuchar, along with Minnesota Senator Al Franken and six Minnesota representatives, petitioned the USDA not to change the rating of tomato paste. The proposed change didn't pass.

Health insurance
Klobuchar voted for the Patient Protection and Affordable Care Act in December 2009 and the Health Care and Education Reconciliation Act of 2010. In December 2012, she advocated to "repeal or reduce" the tax on medical devices included in the Affordable Care Act, arguing it would be harmful to businesses in her state. Despite this, on September 30, 2013, Klobuchar voted to remove from a government funding bill a provision that would have repealed the medical device tax, in opposition to the provision being used as a condition for keeping the government open. In January 2015, Klobuchar was one of 17 senators to co-sponsor S. 149, a bill to retroactively repeal the device excise tax. She has said that the medical device tax threatens jobs, although her statements have been questioned by investigative journalists. Medtronic spent more than any other medical device company to lobby against the device tax in 2014, with Klobuchar among the top recipients of Medtronic's political action committee (PAC) donations. In December 2018, after U.S. District Court Judge Reed O'Connor ruled that the Affordable Care Act was unconstitutional, Klobuchar called the ruling "absurd" and said that at a time when the Trump administration "seems bound and determined to take away people's health care, we have to protect the ability of people to even have their health care even exist." In December 2018, Klobuchar was one of 42 senators to sign a letter to Trump administration officials Alex Azar, Seema Verma, and Steve Mnuchin arguing that the administration was improperly using Section 1332 of the Affordable Care Act to authorize states to "increase health care costs for millions of consumers while weakening protections for individuals with pre-existing conditions." The senators requested that the administration withdraw the policy and "re-engage with stakeholders, states, and Congress." In January 2019, during the 2018–19 United States federal government shutdown, Klobuchar was one of 34 senators to sign a letter to Commissioner of Food and Drugs Scott Gottlieb recognizing the efforts of the FDA to address the shutdown's effect on public health and on the agency's employees, while remaining alarmed "that the continued shutdown will result in increasingly harmful effects on the agency's employees and the safety and security of the nation's food and medical products." In February 2019, Klobuchar was one of twenty-three Democratic senators to introduce the State Public Option Act, a bill that would authorize states to create a Medicaid buy-in program open to all residents, thereby allowing anyone in the state to buy into a state-run Medicaid health insurance plan if they wished.
Brian Schatz, a bill cosponsor, said the legislation would "unlock each state's Medicaid program to anyone who wants it, giving people a high-quality, low-cost public health insurance option" and that its goal was "to make sure that every single American has comprehensive health care coverage."

In August 2019, Klobuchar was one of nineteen senators to sign a letter to United States Secretary of the Treasury Steve Mnuchin and United States Secretary of Health and Human Services Alex Azar requesting data from the Trump administration to help states and Congress understand the potential consequences if the Texas v. United States Affordable Care Act (ACA) lawsuit prevailed in the courts, arguing that overturning the present health care system would create "an enormous hole in the pocketbooks of the people we serve as well as wreck state budgets".

Railroad safety

In June 2019, Klobuchar was one of ten senators to cosponsor the Safe Freight Act, a bill that would require all freight trains to have on board at least one certified conductor and one certified engineer who can collaborate on protecting the safety of both the train and the people living near the tracks. The legislation was meant to counter the Federal Railroad Administration's rollback of a proposed rule intended to establish safety standards.

Recreation advocacy

Klobuchar has been an active supporter of outdoor recreation legislation, including the Recreational Trails Program (RTP). When the Senate Environment and Public Works Committee passed MAP-21, trail interests and state park officials warned that the new policy could effectively end the program by relegating recreational trail projects to competition for funding among a broad category of authorized non-highway projects. Klobuchar led efforts to alter the proposal, working closely with recreation interests to develop a floor amendment that would reauthorize the RTP unchanged. Although the leadership of both parties supported the committee's proposal, Klobuchar secured acceptance of her new language by the legislation's floor manager and won strong bipartisan support for her amendment. The result was Senate passage in early 2012 of new surface transportation legislation, which continued the RTP with $85 million in guaranteed annual funds and no significant change in its operations.

As chair of the Subcommittee on Competitiveness, Innovation, and Export Promotion, she played a key role in the 2010 passage of the Travel Promotion Act and the creation of Brand USA, an advertising effort to recover the traditional U.S. share of the international tourism market that will highlight national parks and their natural treasures. With Klobuchar's active support, the program has been granted $100 million per year in matching federal funding, is widely expected to bring millions of additional visitors and billions of dollars to the U.S. and its parks each year, and has become the focus of a major White House initiative.

On June 6, 2012, Klobuchar received the Sheldon Coleman Great Outdoors Award at a Great Outdoors Week celebration presented by the American Recreation Coalition. The award, created in 1989 to honor the lifelong efforts of Sheldon Coleman, is presented to individuals whose personal efforts have contributed substantially to enhancing outdoor experiences across America.
The winner is selected by a panel of 100 national recreation community leaders, ranging from corporate executives to key federal and state officials and nonprofit organization leaders. Klobuchar is the fifth woman, and the first woman serving in Congress, to receive the honor.

Immigration

In January 2013, Klobuchar was one of four senators to sponsor the Immigration Innovation Act, legislation intended to increase STEM visas and use the fees from those visa applications to fund STEM education programs within the US.

In a July 2018 interview, when asked about eliminating Immigration and Customs Enforcement as other Democrats were calling for abolishing the agency, Klobuchar stated, "I think what has to change are the policies, and the people that are making these policies are making horrendous decisions like separating kids from their parents." She added that the US would always "need immigration enforcement" and referred to America as "a major country with major borders", then condemned the Trump administration's rhetoric on immigration: "I am just appalled by how this administration has been talking about immigrants. They don't diminish America, they are America."

In January 2019, Klobuchar was one of 20 senators to sponsor the Dreamer Confidentiality Act, a bill banning the Department of Homeland Security (DHS) from passing information collected on DACA recipients to Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), the Department of Justice, or any other law enforcement agency, with exceptions for fraudulent claims, national security issues, or non-immigration-related felonies.

In April 2019, Klobuchar signed a letter led by Catherine Cortez Masto to Immigration and Customs Enforcement and Customs and Border Protection asserting that "the civil detention of an expectant mother for potential immigration offenses is never justified" given the "absence of compelling evidence that the detention of a pregnant woman is necessary because she is a threat to herself or others, or is a threat to public safety or national security". The senators requested that CBP enact measures ensuring "timely and appropriate treatment" for pregnant women in custody and that both agencies provide information on the availability of facilities and doctors for pregnant immigrants as well as complete data on the number currently in custody.

In April 2019, Klobuchar was one of nineteen senators to sign a letter to Richard Shelby and Patrick Leahy, the top members of the Appropriations Committee, and Shelley Moore Capito and Jon Tester, the top members of its Homeland Security subcommittee, indicating that they could not "support the appropriation of funds that would expand this administration's unnecessarily cruel immigration enforcement policies, its inhumane immigrant detention systems, or its efforts to build the president's vanity projects" and urging Congress to "resist efforts to raid critical and effective public safety programs in order to pay for political theatrics", as President Trump's "manufactured emergency" did not justify "spending taxpayer dollars on an ineffective wall."
In May 2019, after President Trump announced an immigration plan that would move the U.S. toward a "merit-based" system favoring highly skilled workers over migrants with family members already in the country, Klobuchar said she was bothered by the plan because of "the fact that he doesn't deal with the Dreamers, he doesn't deal with the millions of people who came here with no fault of their own, he doesn't deal with the 10 million people that are here now, many of whom would like to see if they follow the law, learn English, they want to be on a path to citizenship."

In June 2019, following the Housing and Urban Development Department's confirmation that DACA recipients were not eligible for federally backed loans, Klobuchar and eleven other senators introduced the Home Ownership Dreamers Act, legislation providing that the federal government may not deny mortgage loans backed by the Federal Housing Administration, Fannie Mae, Freddie Mac, or the Agriculture Department solely because of an applicant's immigration status.

In June 2019, Klobuchar and six other Democratic senators, led by Hawaii Senator Brian Schatz, sent letters to the Government Accountability Office and to the suspension and debarment official and inspector general at the US Department of Health and Human Services citing recent reports that showed "significant evidence that some federal contractors and grantees have not provided adequate accommodations for children in line with legal and contractual requirements" and urging government officials to determine whether federal contractors and grantees were violating contractual obligations or federal regulations and should therefore face financial consequences.

In July 2019, along with Kamala Harris and Kirsten Gillibrand, Klobuchar sent a letter to the Office of Refugee Resettlement asserting that the agency "should be prioritizing reunification of every child as soon as possible as opposed to prolonging stays in government custody."

In July 2019, following reports that the Trump administration intended to end protections from deportation for spouses, parents, and children of active-duty service members, Klobuchar was one of twenty-two senators to sign a letter led by Tammy Duckworth arguing that the program allowed service members "to fight for the United States overseas and not worry that their spouse, children, or parents will be deported while they are away" and that ending the program would cause personal hardship and negatively affect service members in combat.

In July 2019, Klobuchar and fifteen other Senate Democrats introduced the Protecting Sensitive Locations Act, which would require ICE agents to obtain supervisor approval before engaging in enforcement actions at sensitive locations except in special circumstances, to receive annual training, and to report annually on enforcement actions at those locations.

Journalism

In July 2019, Klobuchar was one of eight senators to cosponsor the Fallen Journalists Memorial Act, a bill introduced by Ben Cardin and Rob Portman to create a privately funded memorial on federal land in Washington, D.C., honoring journalists, photographers, and broadcasters who have died in the line of duty.
Social issues

Child care

In 2019, Klobuchar and 34 other senators introduced the Child Care for Working Families Act, a bill intended to create 770,000 new child care jobs and to ensure that families earning under 75 percent of the state median income do not pay for child care, with higher-earning families paying "their fair share for care on a sliding scale, regardless of the number of children they have." The legislation also supported universal access to high-quality preschool programs for all 3- and 4-year-olds and improved compensation and training for the child care workforce to aid both teachers and caregivers.

Children's programming

In 2019, following the Federal Communications Commission's announcement of rule changes to children's programming that would modify the Children's Television Act of 1990, Klobuchar and eight other Democratic senators signed a letter to FCC Chairman Ajit Pai expressing concern that the proposed changes "would limit the reach of educational content available to children and have a particular damaging effect on youth in low-income and minority communities" and asserting that the new rules would reduce access to valuable educational content through over-the-air services.

LGBT rights

In May 2017, Klobuchar was one of 46 senators to introduce the Equality Act of 2017, described by Representative David Cicilline as ensuring "that every LGBT person can live their lives free from the fear of discrimination. Above all, it's about honoring the values that have guided our nation since its founding. It's critical that Congress pass the Equality Act into law."

Human trafficking

Klobuchar has sponsored and co-sponsored several pieces of legislation aimed at stopping human trafficking that have become law, including the Combating Human Trafficking in Commercial Vehicles Act; the No Human Trafficking on Our Roads Act; S.2974, which funded the U.S. National Human Trafficking Hotline; and the Justice for Victims of Trafficking Act of 2015.

Workers' rights

In March 2018, along with Kirsten Gillibrand and Patty Murray, Klobuchar led a letter signed by all 22 female U.S. senators to Senate Majority Leader Mitch McConnell and Senate Minority Leader Chuck Schumer calling for changes to the 1995 statute that established the current policy for addressing workplace misconduct complaints on Capitol Hill, and expressing support for an update that would streamline the process of reporting sexual harassment and grant staffers more resources for filing reports.

In April 2019, Klobuchar signed onto the Be HEARD Act, legislation intended to abolish the tipped minimum wage and to end mandatory arbitration and pre-employment nondisclosure agreements. The bill would also give workers additional time to report harassment, and co-sponsor Patty Murray said it came at a time when too many workers are "still silenced by mandatory disclosure agreements that prevent them from discussing sexual harassment and longstanding practices like the tipped wages that keep workers in certain industries especially vulnerable."
Veterans

In December 2018, Klobuchar was one of 21 senators to sign a letter to United States Secretary of Veterans Affairs Robert Wilkie calling it "appalling that the VA is not conducting oversight of its own outreach efforts" even though suicide prevention is the VA's highest clinical priority, and requesting that Wilkie "consult with experts with proven track records of successful public and mental health outreach campaigns with a particular emphasis on how those individuals measure success."

In 2019, Klobuchar and Marco Rubio unveiled the Supporting Veterans in STEM Careers Act, a bill aimed at assisting veterans returning to the workforce by directing the National Science Foundation to encourage veterans to study and pursue careers in science, technology, engineering, and math (STEM). The bill also directed the Office of Science and Technology Policy (OSTP) to create a working group to coordinate federal programs that help transition and train veterans for STEM careers. The bill unanimously cleared the Commerce, Science, and Transportation Committee in July 2019. Klobuchar said she was pleased by the committee's passage of the bill, as it would allow them to "help support veterans in their transition to civilian life – benefiting veterans, their families, communities, and our whole economy."

In March 2019, Klobuchar and Republican Thom Tillis introduced the Newborn Care Improvement Act, a bill that would double, from seven to fourteen, the number of days of newborn care veterans receive, as existing law did not provide health coverage for the baby if the parent was unable to find outside care after the first week. Klobuchar cited the sacrifices made by service members and said the bill "will help provide men and women in the military with the resources they need to start a happy, healthy family."

Technology

Cyber bullying

Klobuchar was one of 14 co-sponsors, led by Senate Majority Whip John Cornyn, of the PROTECT Our Children Act of 2017. The law added online protections for children beyond those provided by the PROTECT Our Children Act of 2008, which had 60 cosponsors, including Klobuchar, and was sponsored by Joe Biden. Alongside 20 other senators led by Chuck Schumer, she also co-sponsored the KIDS Act of 2008, which added protections against online sexual predators who target children.

Data privacy

In June 2019, Klobuchar and Republican Lisa Murkowski introduced the Protecting Personal Health Data Act, legislation directing the United States Secretary of Health and Human Services to create regulations for health-tracking apps, wearable devices, and genetic testing kits, and to form a National Task Force on Health Data Protection that would evaluate and advise on potential cybersecurity and privacy risks related to consumer products that use customer health data. In a statement, Klobuchar said new technology had "made it easier for people to monitor their own health, but health tracking apps and home DNA testing kits have also given companies access to personal, private data with limited oversight."

In October 2018, Klobuchar and Catherine Cortez Masto sent a letter to Google CEO Sundar Pichai charging Google with failing "[to] protect consumers' data" while keeping "consumers in the dark about serious security risks", and noting that Google had found no evidence that developers had taken advantage of the reported vulnerability or that profile data had been misused.
Klobuchar and Cortez Masto expressed their dismay "that more care was not taken to inform consumers about threats to their personal information."

Net neutrality

In December 2017, in response to the Federal Communications Commission's repeal of net neutrality rules, Klobuchar said, "It's against the concept of the internet, which was all about letting everyone access the internet and letting everyone be able to compete on an equal playing field." In February 2018, during a discussion with Mayor of Duluth Emily Larson, Klobuchar stated that net neutrality "is not just about one company having access, it's about everyone having equal access" and that they "want small- and mid-size cities like Duluth to be able to compete against metro areas where they might be able to have the benefits of being bigger."

In April 2018, Klobuchar, fellow Minnesota senator Tina Smith, and Ed Markey met with local business owners, experts, and advocates to discuss the Congressional Review Act resolution and its potential to overturn the FCC's net neutrality repeal. Klobuchar said that Senate approval of the resolution would give it momentum to pass in the House. In May 2018, Klobuchar voted for the measure, which would reinstate net neutrality rules and thereby overturn the FCC's repeal under a law that authorizes Congress to reverse regulatory actions by a simple majority vote.

Telecommunications

In April 2019, Klobuchar, Tina Smith, and Patty Murray introduced the Digital Equity Act of 2019, legislation establishing a $120 million grant program to fund the creation and implementation of "comprehensive digital equity plans" in each U.S. state, along with a second $120 million grant program to support projects developed by individuals and groups. The bill also gave the National Telecommunications and Information Administration (NTIA) the role of evaluating and providing guidance on digital equity projects. Klobuchar argued that the bill would provide communities across the US with high-speed internet.