https://en.wikipedia.org/wiki/SAP%20ERP
SAP ERP
SAP ERP is enterprise resource planning software developed by the German company SAP SE. SAP ERP incorporates the key business functions of an organization. The latest version of SAP ERP (V.6.0) was made available in 2006. The most recent SAP enhancement package 8 for SAP ERP 6.0 was released in 2016. Business processes included in SAP ERP are Operations (Sales & Distribution, Materials Management, Production Planning, Logistics Execution, and Quality Management), Financials (Financial Accounting, Management Accounting, Financial Supply Chain Management), Human Capital Management (Training, Payroll, e-Recruiting) and Corporate Services (Travel Management; Environment, Health and Safety; and Real-Estate Management).

Development
SAP ERP was built based on the former SAP R/3 software. SAP R/3, which was officially launched on 6 July 1992, consisted of various applications on top of SAP Basis, SAP's set of middleware programs and tools. All applications were built on top of the SAP Web Application Server. Extension sets were used to deliver new features and keep the core as stable as possible. The Web Application Server contained all the capabilities of SAP Basis. A complete architecture change took place with the introduction of mySAP ERP in 2004. R/3 Enterprise was replaced with the introduction of ERP Central Component (SAP ECC). The SAP Business Warehouse, SAP Strategic Enterprise Management and Internet Transaction Server were also merged into SAP ECC, allowing users to run them under one instance. The SAP Web Application Server was wrapped into SAP NetWeaver, which was introduced in 2003. Architectural changes were also made to support an enterprise service architecture to transition customers to a service-oriented architecture. The latest version, SAP ERP 6.0, was released in 2006. SAP ERP 6.0 has since been updated through SAP enhancement packages; the most recent, SAP enhancement package 8 for SAP ERP 6.0, was released in 2016.

Implementation
SAP ERP consists of several modules, including Financial Accounting (FI), Controlling (CO), Asset Accounting (AA), Sales & Distribution (SD), SAP Customer Relationship Management (SAP CRM), Material Management (MM), Production Planning (PP), Quality Management (QM), Project System (PS), Plant Maintenance (PM), Human Resources (HR) and Warehouse Management (WM). Traditionally an implementation is split into five phases:
Phase 1 – Project Preparation
Phase 2 – Business Blueprint
Phase 3 – Realization
Phase 4 – Final Preparation
Phase 5 – Go Live Support

Deployment and maintenance costs
It is estimated that "for a Fortune 500 company, software, hardware, and consulting costs can easily exceed $100 million (around $50 million to $500 million). Large companies can also spend $50 million to $100 million on upgrades. Full implementation of all modules can take years", which also adds to the end price. Midsized companies (fewer than 1,000 employees) are more likely to spend around $10 million to $20 million at most, and small companies are unlikely to need a fully integrated SAP ERP system unless they are likely to become midsized, in which case the same figures apply as for a midsized company. Independent studies have shown that deployment and maintenance costs of an SAP solution can vary depending on the organization. For example, some point out that because of the rigid model imposed by SAP tools, a lot of customization code may have to be developed and maintained to adapt to the business process.
Some others pointed out that a return on investment could only be obtained when there was both a sufficient number of users and sufficient frequency of use.

SAP Transport Management System
SAP Transport Management System (STMS) is a tool within SAP ERP systems to manage software updates, termed transports, on one or more connected SAP systems. The tool can be accessed from transaction code STMS. This should not be confused with SAP Transportation Management, a stand-alone module for facilitating logistics and supply chain management in the transportation of goods and materials.

SAP Enhancement Packages for SAP ERP 6.0 (SAP EhPs)
The latest version (SAP ERP 6.0) was made available in 2006. Since then, additional functionality for SAP ERP 6.0 has been delivered through SAP Enhancement Packages (EhP). These Enhancement Packages allow SAP ERP customers to manage and deploy new software functionality. Enhancement Packages are optional; customers choose which new capabilities to implement. SAP EhPs do not require a classic system upgrade. The installation process of an Enhancement Package consists of two steps:
Technical installation of an Enhancement Package
Activation of new functions
The technical installation of business functions does not change the system behavior. The installation of new functionality is decoupled from its activation, and companies can choose which business functions they want to activate. This means that even after installing a new business function, there is no change to existing functionality before activation. Activating a business function for one process has no effect on users working with other functionalities. EhP8 served as a foundation for the transition to SAP's new business suite, SAP S/4HANA.

See also
GuiXT
List of ERP software packages
SAP NetWeaver
SAP GUI
SOA
Secure Network Communications
Secure Sockets Layer
T-code
UK & Ireland SAP Users Group
Kirsta, "Managing of the Business Processes in Enterprise by Moving to SAP ERP System," 2019 Institute of Electrical and Electronics Engineers Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2019, pp. 1467-1470, In White Paper Review, Industry Week OCT 2009, ‘ERP Best Practices: The SaaS Difference, Plex Systems, Retrieved 21/04/2012. Shim, Sung J. and Minsuk K. Shim. “Effects of user perceptions of SAP ERP system on user learning and skills.” Journal of Computing in Higher Education 32 (2020): 41-56. . . Angolia, Mark and Leslie Pagliari. “Experiential Learning for Logistics and Supply Chain Management Using an SAP ERP Software Simulation.” Decision Sciences Journal of Innovative Education 16 (2018): 104-125. . . Lorenc, Augustyn and Maciej Szkoda. “Customer logistic service in the automotive industry with the use of the SAP ERP system.” 2015 4th International Conference on Advanced Logistics and Transport (ICALT) (2015): 18-23. . . Al-Sabri, Hamdan Mohammed, Majed A. Al-Mashari and Azeddine Chikh. “A comparative study and evaluation of ERP reference models in the context of ERP IT-driven implementation: SAP ERP as a case study.” Bus. Process. Manag. J. 24 (2018): 943-964. . . Vlasov, Vladimir, Victoria Chebotareva, Marat Rakhimov and Sergey Kruglikov. “AI User Support System for SAP ERP.” (2017). . . Gottschalk, Friederike. "Validation of SAP R/3 and Other ERP Systems: Methodology and Tools." Pharmaceutical Technology Europe, vol. 12, no. 12, Dec. 2000, p. 26. Gale Academic OneFile, . Accessed 26 Jan. 2022. VAN EVERDINGEN, YVONNE, et al. "ERP ADOPTION BY EUROPEAN MIDSIZE COMPANIES." Communications of the ACM, vol. 43, no. 4, Apr. 2000, p. 27. Gale Academic OneFile, . Accessed 26 Jan. 2022. Lui, Kim Man, and Keith C.C. Chan. "Capability maturity model and SAP: toward a universal ERP implementation model." International Journal of Enterprise Information Systems, vol. 1, no. 3, July-Sept. 2005, pp. 69+. Gale Academic OneFile, . Accessed 26 Jan. 2022. Laosethakul, Kittipong, and Thaweephan Leingpibul. "Investigating Student Perceptions and Behavioral Intention to Use Multimedia Teaching Methods for the SAP ERP System." e-Journal of Business Education and Scholarship Teaching, vol. 15, no. 1, June 2021, pp. 1+. Gale Academic OneFile, . Accessed 26 Jan. 2022. J. Nađ and M. Vražić, "Decision making in transformer manufacturing companies with help of ERP business software," 2017 15th International Conference on Electrical Machines, Drives and Power Systems (ELMA), 2017, pp. 379-382, . C. F. Vera, R. T. Carmona, J. Armas-Aguirre and A. B. Padilla, "Technological architecture to consume On-Premise ERP services from a hybrid cloud platform," 2019 IEEE XXVI International Conference on Electronics, Electrical Engineering and Computing (INTERCON), 2019, pp. 1-4, . R. R. Savchuk and N. A. Kirsta, "Managing of the Business Processes in Enterprise by Moving to SAP ERP System," 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2019, pp. 1467-1470, . Sung J. Shim and Minsuk K. Shim. 2018. How user perceptions of SAP ERP system change with system experience. In Proceedings of the First International Conference on Data Science, E-learning and Information Systems (DATA '18). Association for Computing Machinery, New York, NY, USA, Article 20, 1–4. Pliskin, Nava, and Marta Zarotski. "Big-bang ERP implementation at a global company." Journal of Cases on Information Technology, vol. 2, annual 2000. 
https://en.wikipedia.org/wiki/Garry%20Kasparov
Garry Kasparov
Garry Kimovich Kasparov (Russian: Гарри Кимович Каспаров, Russian pronunciation: [ˈɡarʲɪ ˈkʲiməvʲɪtɕ kɐˈsparəf], born Garik Kimovich Weinstein, Гарик Кимович Вайнштейн; 13 April 1963) is a Russian chess grandmaster, former World Chess Champion, writer, political activist and commentator. His peak rating of 2851, achieved in 1999, was the highest recorded until it was surpassed by Magnus Carlsen in 2013. From 1984 until his retirement in 2005, Kasparov was ranked world No. 1 for a record 255 months overall, far longer than any other player before or since. Kasparov also holds records for the most consecutive professional tournament victories (15) and Chess Oscars (11). Kasparov became the youngest ever undisputed World Chess Champion in 1985 at age 22 by defeating then-champion Anatoly Karpov. He held the official FIDE world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. In 1997 he became the first world champion to lose a match to a computer under standard time controls when he lost to the IBM supercomputer Deep Blue in a highly publicized match. He continued to hold the "Classical" World Chess Championship until his defeat by Vladimir Kramnik in 2000. Despite losing the title, he continued winning tournaments and was the world's highest-rated player when he retired from professional chess in 2005. Since retiring, he has devoted his time to politics and writing. He formed the United Civil Front movement and joined as a member of The Other Russia, a coalition opposing the administration and policies of Vladimir Putin. He announced an intention to run as a candidate in the 2008 Russian presidential race, but after encountering logistical problems in his campaign, for which he blamed "official obstruction", he withdrew. In the wake of the Russian mass protests that began in 2011, he announced in 2013 that he had left Russia for the immediate future out of fear of persecution. Since leaving Russia, he has lived in New York City with his family. In 2014, he obtained Croatian citizenship, and has maintained a residence in Podstrana near Split. Kasparov is currently chairman of the Human Rights Foundation and chairs its International Council. In 2017, he founded the Renew Democracy Initiative (RDI), an American political organization promoting and defending liberal democracy in the U.S. and abroad. He serves as chairman of the group. Kasparov is also a Security Ambassador for the software company Avast.

Early life and career
Kasparov was born Garik Kimovich Weinstein (Russian: Гарик Ки́мович Вайнштейн, Garik Kimovich Vainshtein) in Baku, Azerbaijan SSR (now Azerbaijan), Soviet Union. His father, Kim Moiseyevich Weinstein, was Jewish, and his mother, Klara Shagenovna Gasparian, was Armenian. Kasparov has described himself as a "self-appointed Christian", although "very indifferent", and identifies as Russian: "although I'm half-Armenian, half-Jewish, I consider myself Russian because Russian is my native tongue, and I grew up with Russian culture." Kasparov began the serious study of chess after he came across a chess problem set up by his parents and proposed a solution. When Garry was seven years old, his father died of leukaemia. At the age of twelve, Garry, at the request of his mother Klara and with the consent of the family, adopted Klara's surname Kasparov, a change made to avoid the antisemitic tensions that were common in the USSR at the time.
From age 7, Kasparov attended the Young Pioneer Palace in Baku and, at 10, began training at Mikhail Botvinnik's chess school under coach Vladimir Makogonov. Makogonov helped develop Kasparov's positional skills and taught him to play the Caro-Kann Defence and the Tartakower System of the Queen's Gambit Declined. Kasparov won the Soviet Junior Championship in Tbilisi in 1976, scoring 7 points out of 9, at age 13. He repeated the feat the following year, winning with a score of 8½ out of 9. He was being trained by Alexander Shakarov during this time. In 1978, Kasparov participated in the Sokolsky Memorial tournament in Minsk. He had been invited as an exception but took first place and became a chess master. Kasparov has repeatedly said that this event was a turning point in his life and that it convinced him to choose chess as his career. "I will remember the Sokolsky Memorial as long as I live", he wrote. He has also said that after the victory, he thought he had a very good shot at the World Championship. He first qualified for the Soviet Chess Championship at age 15 in 1978, the youngest ever player at that level. He won the 64-player Swiss system tournament at Daugavpils on tiebreak over Igor V. Ivanov to capture the sole qualifying place. Kasparov rose quickly through the FIDE world rankings. Starting with oversight by the Russian Chess Federation, he participated in a grandmaster tournament in Banja Luka, SR Bosnia and Herzegovina (part of Yugoslavia at the time), in 1979 while still unrated (he was a replacement for the Soviet defector Viktor Korchnoi, who was originally invited but withdrew due to the threat of a boycott from the Soviets). Kasparov won this high-class tournament, emerging with a provisional rating of 2595, enough to catapult him to the top group of chess players (at the time, number 15 in the world). The next year, 1980, he won the World Junior Chess Championship in Dortmund, West Germany. Later that year, he made his debut as the second reserve for the Soviet Union at the Chess Olympiad at Valletta, Malta, and became a Grandmaster.

Towards the top
As a teenager, Kasparov tied for first place in the USSR Chess Championship in 1981–82. His first win in a superclass-level international tournament was scored at Bugojno, SR Bosnia and Herzegovina, Yugoslavia, in 1982. He earned a place in the 1982 Moscow Interzonal tournament, which he won, to qualify for the Candidates Tournament. At age 19, he was the youngest Candidate since Bobby Fischer, who was 15 when he qualified in 1958. At this stage, he was already the No. 2-rated player in the world, trailing only World Chess Champion Anatoly Karpov on the January 1983 list. Kasparov's first (quarter-final) Candidates match was against Alexander Beliavsky, whom he defeated 6–3 (four wins, one loss). Politics threatened Kasparov's semi-final against Viktor Korchnoi, which was scheduled to be played in Pasadena, California. Korchnoi had defected from the Soviet Union in 1976 and was at that time the strongest active non-Soviet player. Various political manoeuvres prevented Kasparov from playing Korchnoi, and Kasparov forfeited the match. This was resolved by Korchnoi allowing the match to be replayed in London, along with the previously scheduled match between Vasily Smyslov and Zoltán Ribli. The Kasparov–Korchnoi match was put together on short notice by Raymond Keene. Kasparov lost the first game but won the match 7–4 (four wins, one loss). In January 1984, Kasparov became the No.
1 ranked player in the world, with a FIDE rating of 2710. He became the youngest ever world No. 1, a record that lasted 12 years until being broken by Vladimir Kramnik in January 1996; the record is currently held by Magnus Carlsen. Later in 1984, he won the Candidates' final 8½–4½ (four wins, no losses) against the resurgent former world champion Vasily Smyslov, at Vilnius, thus qualifying to play Anatoly Karpov for the World Championship. That year he joined the Communist Party of the Soviet Union (CPSU), as a member of which he was elected to the Central Committee of Komsomol in 1987. 1984 World Championship The World Chess Championship 1984 match between Anatoly Karpov and Garry Kasparov had many ups and downs, and a very controversial finish. Karpov started in very good form, and after nine games Kasparov was down 4–0 in a "first to six wins" match. Fellow players predicted he would be whitewashed 6–0 within 18 games. In an unexpected turn of events, there followed a series of 17 successive draws, some relatively short, and others drawn in unsettled positions. Kasparov lost game 27 (5–0), then fought back with another series of draws until game 32 (5–1), earning his first-ever win against the World Champion. Another 14 successive draws followed, through game 46; the previous record length for a world title match had been 34 games, the match of José Raúl Capablanca vs. Alexander Alekhine in 1927. Kasparov won games 47 and 48 to bring the scores to 5–3 in Karpov's favour. Then the match was ended without result by Florencio Campomanes, the President of the Fédération Internationale des Échecs (FIDE), and a new match was announced to start a few months later. The termination was controversial, as both players stated that they preferred the match to continue. Announcing his decision at a press conference, Campomanes cited the health of the players, which had been strained by the length of the match. The match became the first, and so far only, world championship match to be abandoned without result. Kasparov's relations with Campomanes and FIDE were greatly strained, and the feud between them finally came to a head in 1993 with Kasparov's complete break-away from FIDE. World Champion The second Karpov-Kasparov match in 1985 was organized in Moscow as the best of 24 games where the first player to win 12½ points would claim the World Champion title. The scores from the terminated match would not carry over; however, in the event of a 12–12 draw, the title would remain with Karpov. On 9 November 1985, Kasparov secured the title by a score of 13–11, winning the 24th game with Black, using a Sicilian defence. He was 22 years old at the time, making him the youngest ever World Champion, and breaking the record held by Mikhail Tal for over 20 years. Kasparov's win as Black in the 16th game has been recognized as one of the all-time masterpieces in chess history. As part of the arrangements following the aborted 1984 match, Karpov had been granted (in the event of his defeat) a right to rematch. Another match took place in 1986, hosted jointly in London and Leningrad, with each city hosting 12 games. At one point in the match, Kasparov opened a three-point lead and looked well on his way to a decisive match victory. But Karpov fought back by winning three consecutive games to level the score late in the match. 
At this point, Kasparov dismissed one of his seconds, grandmaster Evgeny Vladimirov, accusing him of selling his opening preparation to the Karpov team (as described in Kasparov's autobiography Unlimited Challenge, chapter Stab in the Back). Kasparov scored one more win and kept his title by a final score of 12½–11½. A fourth match for the world title took place in 1987 in Seville, as Karpov had qualified through the Candidates' Matches to again become the official challenger. This match was very close, with neither player holding more than a one-point lead at any time during the contest. Kasparov was down one full point at the time of the final game and needed a win to draw the match and retain his title. A long tense game ensued in which Karpov blundered away a pawn just before the first time control, and Kasparov eventually won a long ending. Kasparov retained his title as the match was drawn by a score of 12–12. (All this meant that Kasparov had played Karpov four times in the period 1984–87, a statistic unprecedented in chess. Matches organized by FIDE had taken place every three years since 1948, and only Botvinnik had a right to a rematch before Karpov.) The fifth match between Kasparov and Karpov was held in New York and Lyon in 1990, with each city hosting 12 games. Again, the result was a close one with Kasparov winning by a margin of 12½–11½. In their five world championship matches, Kasparov had 21 wins, 19 losses, and 104 draws in 144 games. Break with and ejection from FIDE With the World Champion title in hand, Kasparov began opposing FIDE. In November 1986, he created the Grandmasters Association (GMA), an organization to represent professional chess players and give them more say in FIDE's activities. Kasparov assumed a leadership role. GMA's major achievement was in organizing a series of six World Cup tournaments for the world's top players. This caused a somewhat uneasy relationship to develop with FIDE. This stand-off lasted until 1993, by which time a new challenger had qualified through the Candidates cycle for Kasparov's next World Championship defence: Nigel Short, a British grandmaster who had defeated Anatoly Karpov in a qualifying match and then Jan Timman in the finals held in early 1993. After a confusing and compressed bidding process produced lower financial estimates than expected, the world champion and his challenger decided to play outside FIDE's jurisdiction, under another organization created by Kasparov called the Professional Chess Association (PCA). At this point, a great fracture occurred in the lineage of the FIDE World Championship. In an interview in 2007, Kasparov called the break with FIDE the worst mistake of his career, as it hurt the game in the long run. Kasparov and Short were ejected from FIDE and played their well-sponsored match in London in 1993. Kasparov won convincingly by a score of 12½–7½. The match considerably raised the profile of chess in the UK, with an unprecedented level of coverage on Channel 4. Meanwhile, FIDE organized a World Championship match between Jan Timman (the defeated Candidates finalist) and former World Champion Karpov (a defeated Candidates semi-finalist), which Karpov won. FIDE removed Kasparov and Short from the FIDE rating lists. Until this happened, there was a parallel rating list presented by PCA which featured all the world top players, regardless of their relation to FIDE. There were now two World Champions: PCA champion Kasparov, and FIDE champion Karpov. The title remained split for 13 years. 
Kasparov defended his title in a 1995 match against Viswanathan Anand at the World Trade Center in New York City. Kasparov won the match by four wins to one, with thirteen draws. Kasparov tried to organize another World Championship match, under another organization, the World Chess Association (WCA) with Linares organizer Luis Rentero. Alexei Shirov and Vladimir Kramnik played a candidates match to decide the challenger, which Shirov won in a surprising upset. But when Rentero admitted that the funds required and promised had never materialized, the WCA collapsed. This left Kasparov stranded, and yet another organization stepped in: BrainGames.com, headed by Raymond Keene. No match against Shirov was arranged, and talks with Anand collapsed, so a match was instead arranged against Kramnik. During this period, Kasparov was approached by Oakham School in the United Kingdom, at the time the only school in the country with a full-time chess coach, and developed an interest in the use of chess in education. In 1997, Kasparov supported a scholarship programme at the school. Kasparov also won the Marca Leyenda trophy that year. Losing the title and aftermath The Kasparov-Kramnik match took place in London during the latter half of 2000. Kramnik had been a student of Kasparov's at the famous Botvinnik/Kasparov chess school in Russia and had served on Kasparov's team for the 1995 match against Viswanathan Anand. The better-prepared Kramnik won game 2 against Kasparov's Grünfeld Defence and achieved winning positions in Games 4 and 6, although Kasparov held the draw in both games. Kasparov made a critical error in Game 10 with the Nimzo-Indian Defence, which Kramnik exploited to win in 25 moves. As White, Kasparov could not crack the passive but solid Berlin Defence in the Ruy Lopez, and Kramnik successfully drew all his games as Black. Kramnik won the match 8½–6½. After losing the title, Kasparov won a series of major tournaments, and remained the top-rated player in the world, ahead of both Kramnik and the FIDE World Champions. In 2001 he refused an invitation to the 2002 Dortmund Candidates Tournament for the Classical title, claiming his results had earned him a rematch with Kramnik. Kasparov and Karpov played a four-game match with rapid time controls over two days in December 2002 in New York City. Karpov surprised the experts and emerged victorious, winning two games and drawing one. Due to Kasparov's continuing strong results, and status as world No. 1 in much of the public eye, he was included in the so-called "Prague Agreement", masterminded by Yasser Seirawan and intended to reunite the two World Championships. Kasparov was to play a match against the FIDE World Champion Ruslan Ponomariov in September 2003. But this match was called off after Ponomariov refused to sign his contract for it without reservation. In its place, there were plans for a match against Rustam Kasimdzhanov, winner of the FIDE World Chess Championship 2004, to be held in January 2005 in the United Arab Emirates. These also fell through due to a lack of funding. Plans to hold the match in Turkey instead came too late. Kasparov announced in January 2005 that he was tired of waiting for FIDE to organize a match and so had decided to stop all efforts to regain the World Championship title. Retirement from chess After winning the prestigious Linares tournament for the ninth time, Kasparov announced on 10 March 2005 that he would retire from serious competitive chess. 
He cited as the reason a lack of personal goals in the chess world (he commented when winning the Russian championship in 2004 that it had been the last major title he had never won outright) and expressed frustration at the failure to reunify the world championship. Kasparov said he might play in some rapid chess events for fun, but he intended to spend more time on his books, including the My Great Predecessors series, and work on the links between decision-making in chess and other areas of life. He also stated that he would continue to involve himself in Russian politics, which he viewed as "headed down the wrong path." Post-retirement chess On 22 August 2006, in his first public chess games since his retirement, Kasparov played in the Lichthof Chess Champions Tournament, a blitz event played at the time control of 5 minutes per side and 3-second increments per move. Kasparov tied for first with Anatoly Karpov, scoring 4½/6. Kasparov and Anatoly Karpov played a 12-game match from 21 to 24 September 2009, in Valencia, Spain. It consisted of four rapid (or semi rapid) games, in which Kasparov won 3–1, and eight blitz games, in which Kasparov won 6–2, winning the match with a total result of 9–3. The event took place exactly 25 years after the two players' legendary encounter at World Chess Championship 1984. Kasparov actively coached Magnus Carlsen for approximately one year beginning in February 2009. The collaboration remained secret until September 2009. Under Kasparov's tutelage, Carlsen in October 2009 became the youngest ever to achieve a FIDE rating higher than 2800 and rose from world number four to world number one. While the pair initially planned to work together throughout 2010, in March of that year it was announced that Carlsen had split from Kasparov and would no longer be using him as a trainer. According to an interview with the German magazine Der Spiegel, Carlsen indicated that he would remain in contact and that he would continue to attend training sessions with Kasparov, but in fact, no further training sessions were held and the cooperation gradually fizzled out over the course of the spring. In May 2010 he played 30 games simultaneously, winning each one, against players at Tel Aviv University in Israel. In the same month, it was revealed that Kasparov had aided Viswanathan Anand in preparation for the World Chess Championship 2010 against challenger Veselin Topalov. Anand won the match 6½–5½ to retain the title. In January 2011, Kasparov began training the U.S. grandmaster Hikaru Nakamura. The first of several training sessions were held in New York just before Nakamura participated in the Tata Steel Chess tournament in Wijk aan Zee, the Netherlands. In December 2011, it was announced that the cooperation had come to an end. Kasparov played two blitz exhibition matches in the autumn of 2011. The first was in September against French grandmaster Maxime Vachier-Lagrave, in Clichy (France), which Kasparov won 1½–½. The second was a longer match consisting of eight blitz games played on 9 October, against English grandmaster Nigel Short. Kasparov won again by a score of 4½–3½. A little after that, in October 2011, Kasparov played and defeated fourteen opponents in a simultaneous exhibition that took place in Bratislava. On 25 and 26 April 2015, Kasparov played a mini-match against Nigel Short. The match consisted of two rapid games and eight blitz games and was contested over the course of two days. 
Both commentators GM Maurice Ashley and Alejandro Ramirez remarked how Kasparov was an 'initiative hog' throughout the match, consistently not allowing Short to gain any foothold in the games, and won the match decisively with a score of 8½–1½. Kasparov also managed to win all five games on the second day, with his victories characterised by aggressive pawn moves breaking up Short's position, thereby allowing Kasparov's pieces to achieve positional superiority. On Wednesday 19 August 2015, he played and won all 19 games of a simultaneous exhibition in Pula, Croatia. On Thursday 28 April and Friday 29 April 2016 at the Chess Club and Scholastic Center of Saint Louis, Kasparov played a 6-round exhibition blitz round-robin tournament with Fabiano Caruana, Wesley So, and Hikaru Nakamura in an event called the Ultimate Blitz Challenge. He finished the tournament third with 9.5/18, behind Hikaru Nakamura (11/18) and Wesley So (10/18). At the post-tournament interview, he considered the possibility of playing future top-level blitz exhibition matches. On 2 June 2016, Kasparov played against fifteen chess players in a simultaneous exhibition in the Kaiser-Friedrich-Halle of Mönchengladbach. He won all games. Candidate for FIDE presidency On 7 October 2013, Kasparov announced his candidacy for World Chess Federation president during a reception in Tallinn, Estonia, where the 84th FIDE Congress took place. Kasparov's candidacy was supported by his former student, reigning World Chess Champion and FIDE#1 ranked player Magnus Carlsen. At the FIDE General Assembly in August 2014, Kasparov lost the presidential election to incumbent FIDE president Kirsan Ilyumzhinov, with a vote of 110–61. A few days before the election took place, the New York Times Magazine had published a lengthy report on the viciously fought campaign. Included was information about a leaked contract between Kasparov and former FIDE Secretary General Ignatius Leong from Singapore, in which the Kasparov campaign reportedly "offered to pay Leong US$500,000 and to pay $250,000 a year for four years to the ASEAN Chess Academy, an organization Leong helped create to teach the game, specifying that Leong would be responsible for delivering 11 votes from his region [...]". In September 2015, the FIDE Ethics Commission found Kasparov and Leong guilty of violating its Code of Ethics and later suspended them for two years from all FIDE functions and meetings. Return from retirement In 2017, Kasparov came out of retirement to participate in the inaugural St. Louis Rapid and Blitz tournament from 14 to 19 August, scoring 3.5/9 in the rapid and 9/18 in the blitz, finishing eighth out of ten participants, which included Nakamura, Caruana, former world champion Anand, and the eventual winner, Aronian. Any tournament money that he earned would go towards charities to promote chess in Africa. In 2020 he participated in 9LX tournament in Chess 960. He finished eighth in a field of 10 players. Notably he drew a game against Magnus Carlsen, who tied for first place. In 2021 he launched Kasparovchess, a subscription-based online chess community featuring documentaries, lessons and puzzles, podcasts, articles, interviews, and playing zones. In 2021, Kasparov played in the blitz section of the Grand Chess Tour event in Zagreb, Croatia. He performed poorly, however, scoring 0.5/9 on the first day and 2.0/9 on the second day, getting his only win against Jorden Van Foreest. 
He also participated in 9XL 2 - Chess 960 tournament - finishing fifth in a field of 10 players, with a score of 5/9. Politics 1980s Kasparov's grandfather was a staunch communist but Kasparov gradually began to have doubts about the Soviet Union's political system at age 13 when he travelled abroad for the first time to Paris for a chess tournament. In 1981, at age 18 he read Solzhenitsyn's The Gulag Archipelago, a copy of which he bought while abroad. Kasparov joined the Communist Party of the Soviet Union (CPSU) in 1984, and in 1987 was elected to the Central Committee of Komsomol. However, in 1990, he left the party. 1990s In January 1990, Kasparov and his family fled Baku to escape pogroms against Armenians. In May 1990, Kasparov took part in the creation of the Democratic Party of Russia, which at first was a liberal anti-communist party, later shifting to centrism. Kasparov left the party on 28 April 1991, after its conference. In 1991, Kasparov received the Keeper of the Flame award from the Center for Security Policy, a Washington, D.C. based far-right, anti-Muslim think tank. In his acceptance speech Kasparov lauded the defeat of communism while also urging the United States to give no financial assistance to central Soviet leaders. In June 1993, Kasparov was involved with the creation of the "Choice of Russia" bloc of parties and in 1996 took part in the election campaign of Boris Yeltsin. In 2001 he voiced his support for the Russian television channel NTV. In 1997, Kasparov was awarded honorary citizenship of Bosnia and Herzegovina for his support of Bosnian people during the Bosnian War. 2000s In 2002, he called for Turkey to be admitted to the European Union if Turkey recognizes the Armenian genocide. After his retirement from chess in 2005, Kasparov turned to politics and created the United Civil Front, a social movement whose main goal is to "work to preserve electoral democracy in Russia". He has vowed to "restore democracy" to Russia by restoring the rule of law. Kasparov was instrumental in setting up The Other Russia, a coalition which opposes Putin's government. The Other Russia has been boycotted by the leaders of Russia's mainstream opposition parties, Yabloko and Union of Right Forces due to its inclusion of both nationalist and radical groups. Kasparov has criticized these groups as being secretly under the auspices of the Kremlin. In April 2005, Kasparov was in Moscow at a promotional event when he was struck over the head with a chessboard he had just signed. The assailant was reported to have said "I admired you as a chess player, but you gave that up for politics" immediately before the attack. Kasparov has been the subject of a number of other episodes since, including police brutality and alleged harassment from the Russian secret service. Kasparov helped organize the Saint Petersburg Dissenters' March on 3 March 2007 and The March of the Dissenters on 24 March 2007, both involving several thousand people rallying against Putin and Saint Petersburg Governor Valentina Matviyenko's policies. In April 2007, Kasparov led a pro-democracy demonstration in Moscow. Soon after the demonstration's start, however, over 9,000 police descended on the group and seized almost everyone. Kasparov, who was briefly arrested by the Moscow police, was warned by the prosecution office on the eve of the march that anyone participating risked being detained. He was held for some 10 hours and then fined and released. 
He was later summoned by the FSB for violations of Russian anti-extremism laws. Speaking about Kasparov, former KGB defector Oleg Kalugin in 2007 remarked, "I do not talk in details – people who knew them are all dead now because they were vocal, they were open. I am quiet. There is only one man who is vocal and he may be in trouble: [former] world chess champion [Garry] Kasparov. He has been very outspoken in his attacks on Putin and I believe that he is probably next on the list." Kasparov gave speeches at think tanks such as the Hoover Institution. On 30 September 2007, Kasparov entered the Russian presidential race, receiving 379 of 498 votes at a congress held in Moscow by The Other Russia. In October 2007, Kasparov announced his intention of standing for the Russian presidency as the candidate of the "Other Russia" coalition and vowed to fight for a "democratic and just Russia". Later that month he traveled to the United States, where he appeared on several popular television programs, which were hosted by Stephen Colbert, Wolf Blitzer, Bill Maher, and Chris Matthews. In November 2007, Kasparov and other protesters were detained by police at an Other Russia rally in Moscow, which drew 3,000 demonstrators to protest election rigging. Following an attempt by about 100 protesters to march through police lines to the electoral commission, which had barred Other Russia candidates from parliamentary elections, arrests were made. The Russian authorities stated a rally had been approved but not any marches, resulting in several detained demonstrators. He was subsequently charged with resisting arrest and organizing an unauthorized protest and given a jail sentence of five days. Kasparov appealed the charges, citing that he had been following orders given by the police, although it was denied. He was released from jail on 29 November. Putin criticized Kasparov at the rally for his use of English when speaking rather than Russian. In December 2007, Kasparov announced that he had to withdraw his presidential candidacy due to inability to rent a meeting hall where at least 500 of his supporters could assemble. With the deadline expiring on that date, he explained it was impossible for him to run. Russian election laws required sufficient meeting hall space for assembling supporters. Kasparov's spokeswoman accused the government of using pressure to deter anyone from renting a hall for the gathering and said that the electoral commission had rejected a proposal that would have allowed for smaller gathering sizes rather than one large gathering at a meeting hall. 2010s—2020s Kasparov was among the 34 first signatories and a key organizer of the online anti-Putin campaign "Putin must go", started on 10 March 2010. The campaign was begun by a coalition of opposition to Putin who regard his rule as lacking any rule of law. Within the text is a call to Russian law enforcement to ignore Putin's orders. By June 2011, there were 90,000 signatures. While the identity of the petition author remained anonymous, there was wide speculation that it was indeed Kasparov. Kasparov was named Chairman of the Human Rights Foundation in 2011. On 31 January 2012, Kasparov hosted a meeting of opposition leaders planning a mass march on 4 February 2012, the third major opposition rally held since the disputed State Duma elections of December 2011. Among other opposition leaders attending were Alexey Navalny and Yevgenia Chirikova. 
On 17 August 2012, Kasparov was arrested and beaten outside the Moscow court while attending the sentencing in the case involving the all-female punk band Pussy Riot. On 24 August, he was cleared of charges that he took part in an unauthorized protest against the conviction of three members of Pussy Riot. Judge Yekaterina Veklich said there were "no grounds to believe the testimony of the police". He could still face criminal charges over a police officer's claims that the opposition leader bit his finger while he was being detained. He later thanked all the bloggers and reporters who provided video evidence that contradicted the testimony of the police. Kasparov wrote in February 2013 that "fascism has come to Russia. ... Project Putin, just like the old Project Hitler, is but the fruit of a conspiracy by the ruling elite. Fascist rule was never the result of the free will of the people. It was always the fruit of a conspiracy by the ruling elites!" In April 2013, Kasparov joined in an HRF condemnation of Kanye West for having performed for the leader of Kazakhstan in exchange for a $3 million paycheck, saying that West "has entertained a brutal killer and his entourage" and that his fee "came from the loot stolen from the Kazakhstan treasury". Kasparov denied rumors in April 2013 that he planned to leave Russia for good. "I found these rumors to be deeply saddening and, moreover, surprising," he wrote. "I was unable to respond immediately because I was in such a state of shock that such an incredibly inaccurate statement, the likes of which is constantly distributed by the Kremlin's propagandists, came this time from Ilya Yashin, a fellow member of the Opposition Coordination Council (KSO) and my former colleague from the Solidarity movement." In an April 2013 op-ed piece, Kasparov accused prominent Russian journalist Vladimir Posner of failing to stand up to Putin and to earlier Russian and Soviet leaders. Kasparov was presented with the Morris B. Abram Human Rights Award, UN Watch's annual human-rights prize, in 2013. The organization praised him as "not only one of the world's smartest men" but "also among its bravest". At the 2013 Women in the World conference, Kasparov told The Daily Beast's Michael Moynihan that democracy no longer existed in what he called Russia's "dictatorship". Kasparov said at a press conference in June 2013 that if he returned to Russia he doubted he would be allowed to leave again, given Putin's ongoing crackdown against dissenters. "So for the time being," he said, "I refrain from returning to Russia." He explained shortly thereafter in an article for The Daily Beast that this had not been intended as "a declaration of leaving my home country, permanently or otherwise", but merely an expression of "the dark reality of the situation in Russia today, where nearly half the members of the opposition's Coordinating Council are under criminal investigation on concocted charges". He noted that the Moscow prosecutor's office was "opening an investigation that would limit my ability to travel", making it impossible for him to fulfill "professional speaking engagements" and hindering his "work for the nonprofit Kasparov Chess Foundation, which has centers in New York City, Brussels, and Johannesburg to promote chess in education". Kasparov further wrote in his June 2013 Daily Beast article that the mass protests in Moscow 18 months earlier against fraudulent Russian elections had been "a proud moment for me".
He recalled that after joining the opposition movement in March 2005, he had been criticized for seeking to unite "every anti-Putin element in the country to march together regardless of ideology". Therefore, the sight of "hundreds of flags representing every group from liberals to nationalists all marching together for 'Russia Without Putin' was the fulfillment of a dream." Yet most Russians, he lamented, had continued to "slumber" even as Putin had "taken off the flimsy mask of democracy to reveal himself in full as the would-be KGB dictator he has always been". Kasparov responded with several sardonic Twitter postings to a September 2013 The New York Times op-ed by Putin. "I hope Putin has taken adequate protections," he tweeted. "Now that he is a Russian journalist his life may be in grave danger!" Also: "Now we can expect NY Times op-eds by Mugabe on fair elections, Castro on free speech, & Kim Jong-un on prison reform. The Axis of Hypocrisy." In a 12 May 2013 op-ed for The Wall Street Journal, Kasparov questioned reports that the Russian security agency, the FSB, had fully cooperated with the FBI in the matter of the Boston bombers. He noted that the elder bomber, Tamerlan Tsarnaev, had reportedly met in Russia with two known jihadists who "were killed in Dagestan by the Russian military just days before Tamerlan left Russia for the U.S." Kasparov argued, "If no intelligence was sent from Moscow to Washington" about this meeting, "all this talk of FSB cooperation cannot be taken seriously." He further observed, "This would not be the first time Russian security forces seemed strangely impotent in the face of an impending terror attack," pointing out that in both the 2002 Moscow theater siege and the 2004 Beslan school attack, "there were FSB informants in both terror groups – yet the attacks went ahead unimpeded." Given this history, he wrote, "it is impossible to overlook that the Boston bombing took place just days after the U.S. Magnitsky List was published, creating the first serious external threat to the Putin power structure by penalizing Russian officials complicit in human-rights crimes." In sum, Putin's "dubious record on counterterrorism and its continued support of terror sponsors Iran and Syria mean only one thing: common ground zero". Kasparov wrote in July 2013 about the trial in Kirov of fellow opposition leader Alexei Navalny, who had been convicted "on concocted embezzlement charges", only to see the prosecutor, surprisingly, ask for his release the next day pending appeal. "The judicial process and the democratic process in Russia," wrote Kasparov, "are both elaborate mockeries created to distract the citizenry at home and to help Western leaders avoid confronting the awkward fact that Russia has returned to a police state". Still, Kasparov felt that whatever had caused the Kirov prosecutor's about-face, "my optimism tells me it was a positive sign. After more than 13 years of predictable repression under Putin, anything different is good." Kasparov had maintained a summer home in the Croatian city of Makarska. In February 2014, he applied for citizenship by naturalisation in Croatia, according to media reports, claiming he was finding it increasingly difficult to live in Russia. According to an article in The Guardian, Kasparov was "widely perceived" as having been a vocal supporter of Croatian independence during the early 1990s. 
Later in February 2014, his application for naturalisation was approved and he had a meeting with Croatian prime minister Zoran Milanović on 27 February. Croatian press cited his "lobbying for Croatia in 1991" as grounds for the expedited naturalisation. Subsequent publications in Croatian press suggested that his lobbying for Croatia "was handsomely paid for". In an interview for a Croatian daily published in February 2022, Kasparov said he was "very grateful" to Croatian president Zoran Milanović for the help rendered by him (then as prime minister) in obtaining Croatian citizenship. Political views In September 2013, Kasparov wrote in Time magazine that in Syria, Putin and Bashar al-Assad "won by forfeit when President Obama, Prime Minister Cameron and the rest of the so-called leaders of the free world walked away from the table." Kasparov lamented the "new game at the negotiating table where Putin and Assad set the rules and will run the show under the protection of the U.N." Kasparov said in September 2013 that Russia was now a dictatorship. In the same month he told an interviewer that "Obama going to Russia now is dead wrong, morally and politically," because Putin's regime "is behind Assad". Kasparov has been outspoken against Putin's antigay laws, describing them as "only the most recent encroachment on the freedom of speech and association of Russia's citizens" which the international community had largely ignored. Regarding Russia's hosting of the 2014 Winter Olympics, Kasparov explained in August 2013 that he had opposed Russia's bid from the outset, since it would "allow Vladimir Putin's cronies to embezzle hundreds of millions of dollars" and "lend prestige to Putin's authoritarian regime". Kasparov did not support the proposed Sochi Olympics boycott—writing that it would "unfairly punish athletes"—but called for athletes and others to "transform Putin's self-congratulatory pet project into a spotlight that exposes his authoritarian rule" to the world. In September, Kasparov called upon politicians to refuse to attend the games and the public to pressure sponsors and the media, such that Coca-Cola, for example, could put "a rainbow flag on each Coca-Cola can" and NBC could "do interviews with Russian gay activists or with Russian political activists". Kasparov also emphasized that although he was "still a Russian citizen", he had "good reason to be concerned about my ability to leave Russia if I returned to Moscow". Kasparov has spoken out against the 2014 Russian annexation of Crimea and has stated that control of Crimea should be returned to Ukraine after the overthrow of Putin without additional conditions. Kasparov's website was blocked by the Russian government censorship agency, Roskomnadzor, at the behest of the public prosecutor, allegedly due to Kasparov's opinions on the Crimean crisis. Kasparov's block was made in unison with several other notable Russian sites that were accused of inciting public outrage. Reportedly, several of the blocked sites received an affidavit noting their violations. However, Kasparov stated that his site had received no such notice of violations after its block. In 2015 a whole note on Kasparov was removed from a Russian language encyclopedia of greatest Soviet players after an intervention from "senior leadership". In October 2015, Kasparov published a book titled Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped. 
In the book, Kasparov likens Putin to Adolf Hitler, and explains the need for the west to oppose Putin sooner, rather than appeasing him and postponing the eventual confrontation. According to his publisher, "Kasparov wants this book out fast, in a way that has potential to influence the discussion during the primary season." In 2018, he said that "anything is better than Putin because that eliminates the probability of a nuclear war. Putin is insane." In the 2016 United States presidential election, Kasparov described Republican Donald Trump as "a celebrity showman with racist leanings and authoritarian tendencies" and criticised Trump for calling for closer ties with Putin. After Trump's running mate, Mike Pence, called Putin a strong leader, Kasparov said that Putin is a strong leader "in the same way arsenic is a strong drink". He also criticised the economic policies of Democratic primary candidate Bernie Sanders, but showed respect for Sanders as "a charismatic speaker and a passionate believer in his cause". Kasparov opined that Henry Kissinger "was selling the Trump Administration on the idea of a mirror of 1972 [Richard Nixon's visit to China], except, instead of a Sino-U.S. alliance against the U.S.S.R., this would be a Russian-American alliance against China." In 2017, he condemned the violence unleashed by the Spanish police against the independence referendum in Catalonia. He criticized the Spanish PM Mariano Rajoy and accused him of "betraying" the European promise of peace. After the Catalan regional election held later the same year, Kasparov wrote: "Despite unprecedented pressure from Madrid, Catalonian separatists won a majority. Europe must speak and help find a peaceful path toward resolution and avoid more violence". Kasparov recommended that Spain look to how Britain handled the 2014 Scottish independence referendum, adding: "look only at how Turkey and Iraq have treated the separatist Kurds. That cannot be the road for Spain and Catalonia." Kasparov supports recognition of the Armenian Genocide. He welcomed the Velvet Revolution in Armenia in 2018, just a few days after it happened. Kasparov condemned the assassination of Saudi journalist Jamal Khashoggi. In October 2018, he wrote that Erdoğan's regime in Turkey "has jailed more journalists than any country in the world and scores of them remain in prison in Turkey. Since 2016, Turkey's intelligence agency has abducted at least 80 people in operations in 18 countries." In 2021, Kasparov stated that "the only language that Putin understands is power, and his power is his money," arguing that the United States should target the bank accounts of Russian oligarchs to force Russia to rein in its criminals' cyberattacks against American agencies and companies. Playing style Kasparov's attacking style of play has been compared by many to Alekhine's. Kasparov has described his style as being influenced chiefly by Alekhine, Tal and Fischer. Kramnik has opined that "[Kasparov's] capacity for study is second to none", and said "There is nothing in chess he has been unable to deal with." Magnus Carlsen, whom Kasparov coached from 2009 to 2010, said of Kasparov, "I've never seen someone with such a feel for dynamics in complex positions." Kasparov was known for his extensive opening preparation and aggressive play in the opening. Olympiads and other major team events Kasparov played in a total of eight Chess Olympiads. 
He represented the Soviet Union four times and Russia four times, following the breakup of the Soviet Union in 1991. In his 1980 Olympiad debut he became, at age 17, the youngest player ever to represent the Soviet Union or Russia at that level, a record broken by Vladimir Kramnik in 1992. In 82 games he scored (+50−3=29), a 78.7% result, and won a total of 19 medals, including team gold all eight times he competed. For the 1994 Moscow Olympiad he played a significant organizational role, helping to put the event together on short notice after Thessaloniki canceled its offer to host a few weeks before the scheduled dates. Kasparov's detailed Olympiad record follows: Valletta 1980, USSR 2nd reserve, 9½/12 (+8−1=3), team gold, board bronze; Lucerne 1982, USSR 2nd board, 8½/11 (+6−0=5), team gold, board bronze; Dubai 1986, USSR 1st board, 8½/11 (+7−1=3), team gold, board gold, performance gold; Thessaloniki 1988, USSR 1st board, 8½/10 (+7−0=3), team gold, board gold, performance gold; Manila 1992, Russia board 1, 8½/10 (+7−0=3), team gold, board gold, performance silver; Moscow 1994, Russia board 1, 6½/10 (+4−1=5), team gold; Yerevan 1996, Russia board 1, 7/9 (+5−0=4), team gold, board silver, performance gold; Bled 2002, Russia board 1, 7½/9 (+6−0=3), team gold, performance gold. Kasparov made his international team debut for the USSR at age 16 in the 1980 European Team Championship and played for Russia in the 1992 edition of that championship, winning a total of five medals. His detailed Euroteams record follows: Skara 1980, USSR 2nd reserve, 5½/6 (+5−0=1), team gold, board gold; Debrecen 1992, Russia board 1, 6/8 (+4−0=4), team gold, board gold, performance silver. Kasparov also represented the USSR once in Youth Olympiad competition, but the detailed data at Olimpbase is incomplete; the Chessmetrics Garry Kasparov player file has his individual score from that event: Graz 1981, USSR board 1, 9/10 (+8−0=2), team gold.
Records and achievements
Chess ratings achievements
Kasparov holds the record for the longest time as the No. 1 rated player in the world, from 1984 to 2005 (Vladimir Kramnik shared the No. 1 ranking with him once, on the January 1996 FIDE rating list). He was also briefly removed from the list following his split from FIDE in 1993, but during that time he headed the rating list of the rival PCA. At the time of his retirement he was still ranked No. 1 in the world, with a rating of 2812; his rating has been listed as inactive since the January 2006 rating list. In January 1990, Kasparov achieved the then-highest FIDE rating ever, passing 2800 and breaking Bobby Fischer's old record of 2785. On the July 1999 and January 2000 FIDE rating lists Kasparov reached an Elo rating of 2851, at that time the highest ever achieved. He held that record until Magnus Carlsen attained a new record high rating of 2861 in January 2013.
Other records
Kasparov holds the record for the most consecutive professional tournament victories, placing first or equal first in 15 individual tournaments from 1981 to 1990. The streak was broken by Vasyl Ivanchuk at Linares 1991, where Kasparov placed second, half a point behind him after losing their individual game.
The details of this record winning streak follow: Frunze 1981, USSR Championship, 12½/17, tie for 1st; Bugojno 1982, 9½/13, 1st; Moscow 1982, Interzonal, 10/13, 1st; Nikšić 1983, 11/14, 1st; Brussels OHRA 1986, 7½/10, 1st; Brussels SWIFT 1987, 8½/11, tie for 1st; Amsterdam Optiebeurs 1988, 9/12, 1st; Belfort (World Cup) 1988, 11½/15, 1st; Moscow 1988, USSR Championship, 11½/17, tie for 1st; Reykjavík (World Cup) 1988, 11/17, 1st; Barcelona (World Cup) 1989, 11/16, tie for 1st; Skellefteå (World Cup) 1989, 9½/15, tie for 1st; Tilburg 1989, 12/14, 1st; Belgrade (Investbank) 1989, 9½/11, 1st; Linares 1990, 8/11, 1st. For nine years Kasparov won every supertournament he played, in addition to contesting his series of five consecutive matches with Anatoly Karpov. His only failure in this period, in either tournament or match play, came in the World Chess Championship 1984, when the 21-year-old Kasparov was trailing (+3−5=40) against the defending champion Karpov before the match was abruptly cancelled. Later in his career, Kasparov went on another long streak of consecutive supertournament wins: Wijk aan Zee Hoogovens 1999, 10/13, 1st; Linares 1999, 10½/14, 1st; Sarajevo 1999, 7/9, 1st; Wijk aan Zee Corus 2000, 9½/13, 1st; Linares 2000, 6/10, tie for 1st; Sarajevo 2000, 8½/11, 1st; Wijk aan Zee Corus 2001, 9/13, 1st; Linares 2001, 7½/10, 1st; Astana 2001, 7/10, 1st; Linares 2002, 8/12, 1st. In these 10 consecutive classical supertournament wins, Kasparov scored 53 wins, 61 draws and 1 loss in 115 games, with his only loss coming against Ivan Sokolov at Wijk aan Zee 1999. Kasparov won the Chess Oscar a record eleven times.
Chess and computers
In 1983, Acorn Computers acted as one of the sponsors of Kasparov's Candidates semi-final match against Viktor Korchnoi. Kasparov was awarded a BBC Micro, which he took back with him to Baku, making it perhaps the first Western-made microcomputer to reach Baku at that time. In 1985, computer chess magazine editor Frederic Friedel invited Kasparov to his house, and the two of them discussed how a chess database program would be useful for preparation. Two years later, Friedel founded Chessbase and gave a copy of the program to Kasparov, who started using it in his preparation. In 1985, Kasparov played against thirty-two different chess computers in Hamburg, winning all games, but with some difficulty. On 22 October 1989, Kasparov defeated the chess computer Deep Thought in both games of a two-game match. In December 1992, Kasparov visited Frederic Friedel in his hotel room in Cologne and played 37 blitz games against Fritz 2, winning 24, drawing 4 and losing 9. Kasparov cooperated in producing video material for the computer game Kasparov's Gambit, released by Electronic Arts in November 1993. In April 1994, Intel acted as a sponsor for the first Professional Chess Association Grand Prix event in Moscow, played at a time control of 25 minutes per game. In May, Chessbase's Fritz 3, running on an Intel Pentium PC, defeated Kasparov in their first game at the Intel Express blitz tournament in Munich, but Kasparov managed to tie it for first place and then win the playoff with 3 wins and 2 draws. The next day, Kasparov lost to Fritz 3 again in a game on ZDF TV. In August, Kasparov was knocked out of the London Intel Grand Prix by Richard Lang's ChessGenius 2 program in the first round.
In 1995, during Kasparov's world title match with Viswanathan Anand, he unveiled an opening novelty that had been checked with a chess engine, an approach that would become increasingly common in subsequent years. Kasparov played in a pair of six-game chess matches with an IBM supercomputer called Deep Blue. The first match was played in Philadelphia in 1996 and won by Kasparov. The second was played in New York City in 1997 and won by Deep Blue. The 1997 match was the first defeat of a reigning world chess champion by a computer under tournament conditions. In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a highly publicized six-game match. The match was even after five games but Kasparov lost quickly in Game 6. This was the first time a computer had ever defeated a world champion in a match. A documentary film was made about this famous match entitled Game Over: Kasparov and the Machine. Kasparov said that he was "not well prepared" to face Deep Blue in 1997. He said that based on his "objective strengths" his play was stronger than that of Deep Blue. Kasparov claimed that several factors weighed against him in this match. In particular, he was denied access to Deep Blue's recent games, in contrast to the computer's team, which could study hundreds of Kasparov's. After the loss, Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, suggesting that during the second game, human chess players, in contravention of the rules, intervened. IBM denied that it cheated, saying the only human intervention occurred between games. The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play revealed during the course of the match. Kasparov requested printouts of the machine's log files but IBM refused, although the company later published the logs on the Internet. Much later, it was suggested that the behavior Kasparov noted had resulted from a glitch in the computer program. Although Kasparov wanted another rematch, IBM declined and ended their Deep Blue program. In January 2003, he engaged in a six-game classical time control match with a $1 million prize fund which was billed as the FIDE "Man vs. Machine" World Championship, against Deep Junior. The engine evaluated three million positions per second. After one win each and three draws, it was all up to the final game. After reaching a decent position Kasparov offered a draw, which was soon accepted by the Deep Junior team. Asked why he offered the draw, Kasparov said he feared making a blunder. Deep Junior was the first machine to beat Kasparov with black and at a standard time control. In June 2003, Mindscape released the computer game Kasparov Chessmate with Kasparov himself listed as a co-designer. In November 2003, he engaged in a four-game match against the computer program X3D Fritz, using a virtual board, 3D glasses and a speech recognition system. After two draws and one win apiece, the X3D Man–Machine match ended in a draw. Kasparov received $175,000 for the result and took home the golden trophy. Kasparov continued to criticize the blunder in the second game that cost him a crucial point. He felt that he had outplayed the machine overall and played well. "I only made one mistake but unfortunately that one mistake lost the game." Books and other writings Early writings Kasparov has written books on chess. 
He published a controversial autobiography when still in his early 20s, originally titled Child of Change and later retitled Unlimited Challenge. The book was subsequently updated several times after he became World Champion. Its content is mainly literary, with a small chess component of key unannotated games. He published an annotated games collection in 1983, Fighting Chess: My Games and Career, which has been updated several times in further editions. He also wrote a book annotating the games from his World Chess Championship 1985 victory, World Chess Championship Match: Moscow, 1985. He has annotated his own games extensively for the Yugoslav Chess Informant series and for other chess publications. In 1982, he co-authored Batsford Chess Openings with British grandmaster Raymond Keene, and the book sold extremely well; it was updated into a second edition in 1989. He also co-authored two opening books with his trainer Alexander Nikitin in the 1980s for the British publisher Batsford, one on the Classical Variation of the Caro-Kann Defence and one on the Scheveningen Variation of the Sicilian Defence. Kasparov has also contributed extensively to the five-volume openings series Encyclopedia of Chess Openings. In 2000, Kasparov co-authored Kasparov Against the World: The Story of the Greatest Online Challenge with grandmaster Daniel King. The 202-page book analyzes the 1999 Kasparov versus the World game and holds the record for the longest analysis devoted to a single chess game.
My Great Predecessors series
In 2003, the first volume of his five-volume work Garry Kasparov on My Great Predecessors was published. This volume, which deals with the world chess champions Wilhelm Steinitz, Emanuel Lasker, José Raúl Capablanca and Alexander Alekhine, and some of their strong contemporaries, received lavish praise from some reviewers (including Nigel Short), while attracting criticism from others for historical inaccuracies and for analysis of games copied directly from unattributed sources. Through suggestions on the book's website, most of these shortcomings were corrected in subsequent editions and translations. Despite this, the first volume won the British Chess Federation's Book of the Year award in 2003. Volume two, covering Max Euwe, Mikhail Botvinnik, Vasily Smyslov and Mikhail Tal, appeared later in 2003. Volume three, covering Tigran Petrosian and Boris Spassky, appeared in early 2004. In December 2004, Kasparov released volume four, which covers Samuel Reshevsky, Miguel Najdorf and Bent Larsen (none of them World Champions) but focuses primarily on Bobby Fischer. The fifth volume, devoted to the chess careers of World Champion Anatoly Karpov and challenger Viktor Korchnoi, was published in March 2006.
Modern Chess series
His book Revolution in the 70s (published in March 2007) covers "the openings revolution of the 1970s–1980s" and is the first book in a new series called "Modern Chess Series", which is intended to cover his matches with Karpov and selected games. Revolution in the 70s concerns the revolution in opening theory witnessed in that decade. Systems such as the then-controversial "Hedgehog" opening plan of passively developing the pieces no further than the first three ranks are examined in great detail. Kasparov also analyzes some of the most notable games played in that period. In a section at the end of the book, top opening theoreticians provide their own "take" on the progress made in opening theory in the 1980s.
Garry Kasparov on Garry Kasparov series Kasparov published three volumes of his games, spanning his entire career. Winter Is Coming In October 2015, Kasparov published a book titled Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped. The title is a reference to the HBO television series Game of Thrones. In the book, Kasparov writes about the need for an organization composed solely of democratic countries to replace the United Nations. In an interview, he called the United Nations a "catwalk for dictators". Historical revision Kasparov believes that the conventional history of civilization is radically incorrect. Specifically, he believes that the history of ancient civilizations is based on misdatings of events and achievements that actually occurred in the medieval period. He has cited several aspects of ancient history that he says are likely to be anachronisms. Kasparov has written in support of New Chronology (Fomenko), although with some reservations. In 2001, Kasparov expressed a desire to devote his time to promoting the New Chronology after his chess career. "New Chronology is a great area for investing my intellect ... My analytical abilities are well placed to figure out what was right and what was wrong." "When I stop playing chess, it may well be that I concentrate on promoting these ideas... I believe they can improve our lives." Later, Kasparov renounced his support of Fomenko theories but reaffirmed his belief that mainstream historical knowledge is highly inconsistent. Other post-retirement writing In 2007, he wrote How Life Imitates Chess, an examination of the parallels between decision-making in chess and in the business world. In 2008, Kasparov published a sympathetic obituary for Bobby Fischer, writing: "I am often asked if I ever met or played Bobby Fischer. The answer is no, I never had that opportunity. But even though he saw me as a member of the evil chess establishment that he felt had robbed and cheated him, I am sorry I never had a chance to thank him personally for what he did for our sport." He is the chief advisor for the book publisher Everyman Chess. Kasparov works closely with Mig Greengard and his comments can often be found on Greengard's blog (apparently no longer active). Kasparov collaborated with Max Levchin and Peter Thiel on The Blueprint, a book calling for a revival of world innovation, planned to release in March 2013 from W. W. Norton & Company. The book was never released, as the authors disagreed on its contents. Kasparov argued that chess has become the model for reasoning in the same way that the fruit fly Drosophila melanogaster became a model organism for geneticists, in an editorial comment on Google's AlphaZero chess-playing system. "I was pleased to see that AlphaZero had a dynamic, open style like my own," he wrote in late 2018. Kasparov served as a consultant for the 2020 Netflix miniseries The Queen's Gambit. He gave an extended interview to Slate describing his contributions. In 2020, Kasparov collaborated with Matt Calkins, founder and CEO of Appian, on HYPERAUTOMATION, a book about low-code development and the future of business automation. Kasparov wrote the foreword where he discusses his experiences with human-machine relationships. 
Bibliography Kasparov Teaches Chess (1984–85, Sport in the USSR Magazine; 1986, First Collier Books) The Test of Time (Russian Chess) (1986, Pergamon Pr) World Chess Championship Match: Moscow, 1985 (1986, Everyman Chess) Child of Change: An Autobiography (1987, Hutchinson) London–Leningrad Championship Games (1987, Everyman Chess) Unlimited Challenge (1990, Grove Pr) The Sicilian Scheveningen (1991, B.T. Batsford Ltd) The Queen's Indian Defence: Kasparov System (1991, B.T. Batsford Ltd) Kasparov Versus Karpov, 1990 (1991, Everyman Chess) Kasparov on the King's Indian (1993, B.T. Batsford Ltd) Kasparov, Garry. Jon Speelman and Bob Wade. 1995. Garry Kasparov's Fighting Chess. Henry Holt. Garry Kasparov's Chess Challenge (1996, Everyman Chess) Lessons in Chess (1997, Everyman Chess) Kasparov Against the World: The Story of the Greatest Online Challenge (2000, Kasparov Chess Online) My Great Predecessors Part I (2003, Everyman Chess) My Great Predecessors Part II (2003, Everyman Chess) Checkmate!: My First Chess Book (2004, Everyman Mindsports) My Great Predecessors Part III (2004, Everyman Chess) My Great Predecessors Part IV (2004, Everyman Chess) My Great Predecessors Part V (2006, Everyman Chess) How Life Imitates Chess (2007, William Heinemann Ltd.) Garry Kasparov on Modern Chess, Part I: Revolution in the 70s (2007, Everyman Chess) Garry Kasparov on Modern Chess, Part II: Kasparov vs Karpov 1975–1985 (2008, Everyman Chess) Garry Kasparov on Modern Chess, Part III: Kasparov vs Karpov 1986–1987 (2009, Everyman Chess) Garry Kasparov on Modern Chess, Part IV: Kasparov vs Karpov 1988–2009 (2010, Everyman Chess) Garry Kasparov on Garry Kasparov, part I (2011, Everyman Chess) Garry Kasparov on Garry Kasparov, part II (2013, Everyman Chess) Garry Kasparov on Garry Kasparov, part III (2014, Everyman Chess) Winter Is Coming: Why Vladimir Putin and the Enemies of the Free World Must Be Stopped (2015, Public Affairs) Deep Thinking with Mig Greengard (2017, Public Affairs) Videos Kasparov, Garry, Nigel Short, Raymond Keene and Daniel King. 1993. Kasparov Short The Inside Story. Grandmaster Video. Kasparov, Garry, Jonathan Tisdall and Jim Plaskett. 2000. My Story. Grandmaster Video. Kasparov, Garry. 2004. How to Play the Queen's Gambit. Chessbase. Kasparov, Garry. 2005. How to Play the Najdorf. Chessbase. vol. 1 , vol. 2 Kasparov, Garry. 2012. How I Became World Champion 1973–1985. Chessbase. Kasparov, Garry. 2017. Garry Kasparov Teaches Chess. Masterclass.com. Personal life Kasparov has lived in New York City since 2013. He has been married three times: to Masha, with whom he had a daughter before divorcing; to Yulia, with whom he had a son before their 2005 divorce; and to Daria (Dasha), with whom he has two children, a daughter born in 2006 and a son born in 2015. Kasparov's wife manages his business activities worldwide as the founder of Kasparov International Management Inc. See also Kasparov Chess, Internet chess club. 
Kasparov versus the World List of chess games between Kasparov and Kramnik Committee 2008 Putinism References Further reading External links Garry Kasparov, "Man of the Year?", OpinionJournal, 23 December 2007 Edward Winter, List of Books About Fischer and Kasparov Kasparov's "Deep Thinking" talk at Google Garry Kasparov's best games analyzed in video Articles about Garry Kasparov by Edward Winter 1963 births Living people 2011–2013 Russian protests 20th-century male writers 21st-century Russian politicians Azerbaijan University of Languages alumni Chess coaches Chess grandmasters Chess Olympiad competitors Chess historians Communist Party of the Soviet Union members Croatian activists Croatian chess writers Croatian people of Armenian descent Croatian people of Russian-Jewish descent Naturalized citizens of Croatia Chess players from Baku Recipients of the Order of Friendship of Peoples Russian anti-communists Russian chess players Russian chess writers Russian dissidents Russian liberals Russian people of Armenian descent Russian people of Jewish descent Russian political activists Russian sportsperson-politicians Solidarnost politicians Soviet chess players Soviet chess writers Soviet male writers The Other Russia (coalition) World chess champions World Junior Chess Champions
48824910
https://en.wikipedia.org/wiki/Configure%2C%20price%20and%20quote
Configure, price and quote
Configure, price, quote (CPQ) software is a term used in business to describe software systems that help sellers quote complex and configurable products. An example could be a maker of heavy trucks. If the customer chooses a certain chassis (the base frame of a motor vehicle), the choice of engines may be limited, because certain engines might not fit a certain chassis. Given a certain choice of engine, the choice of trailer may be limited (e.g. a heavy trailer requires a stronger engine), and so on. If the product is highly configurable, the user may face combinatorial explosion, which means the rapid growth of the complexity of a problem. Thus a configuration engine is employed to alleviate this problem. Configuration engines The "configure" in CPQ deals with the complex challenges of combining components and parts into a more viable product. There are three main approaches used to alleviate the problem of combinatorial explosion: Rule-based truth-maintenance systems: These systems were the first generation of configuration engines, launched in the 1970s based on research results in artificial intelligence going back to the 1960s. Constraint satisfaction engines: These engines were developed in the 1980s and 1990s. They can handle the full set of configuration rules to alleviate the problem of combinatorial explosion but can be complex and difficult to maintain as rules have to be written to accommodate the intended use. Compile-based configurators: These configurators build upon constraint-based engines and research in Binary Decision Diagrams. This approach compiles all possible combinations in a single distributable file and is agnostic to how rules are expressed by the author. This enables businesses to import rules from legacy systems and handle increasingly more complex sets of rules and constraints tied to increasingly more customizable products. The concept of configuration lifecycle management (CLM), of which CPQ is a component, describes how compile-based configuration can further be leveraged to address most of the problems related to product configuration for business employing mass customization. Industry The CPQ industry has many vendors. Some vendors focus more on one component, for example, a price optimization provider may integrate their pricing software with another provider's configuration engine - and vice versa. The IT research and advisory firm Gartner estimated that the CPQ market was $1.4 billion in 2019, up 16% year over year. The market is fragmented, with the largest vendor capturing only a 17% share. Gartner identified the following vendors as leaders in the CPQ software market in 2019, while G2, another peer-reviewed software comparison tool for businesses, identified the following vendors as leaders in the CPQ software market in 2021: DealHub Conga Oracle Salesforce (Salesforce CPQ and Salesforce Vlocity) SAP Tacton References Business software
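Returning to the truck example in the introduction, the combinatorial problem and the "compile the valid combinations up front" idea behind compile-based configurators can be illustrated with a small sketch. The components, rules and function names below are invented for the example and are not taken from any particular CPQ product.

```python
# Minimal illustration of the configuration problem from the truck example above.
# Components and compatibility rules are invented for the sake of the sketch.
from itertools import product

chassis_options = ["light", "heavy"]
engine_options = ["E300", "E500"]
trailer_options = ["none", "standard", "heavy_duty"]

def is_valid(chassis, engine, trailer):
    # Rule 1: the big E500 engine does not fit the light chassis.
    if chassis == "light" and engine == "E500":
        return False
    # Rule 2: a heavy-duty trailer requires the stronger E500 engine.
    if trailer == "heavy_duty" and engine != "E500":
        return False
    return True

# "Compile" step: enumerate the valid combinations once, up front.
valid_configs = {
    combo
    for combo in product(chassis_options, engine_options, trailer_options)
    if is_valid(*combo)
}

# Interactive step: after the customer picks a chassis, show only the
# engines that still appear in at least one valid configuration.
def available_engines(chosen_chassis):
    return sorted({e for (c, e, t) in valid_configs if c == chosen_chassis})

print(len(valid_configs), "valid configurations out of",
      len(chassis_options) * len(engine_options) * len(trailer_options))
print("Engines available with a light chassis:", available_engines("light"))
```

Real configuration engines avoid brute-force enumeration; constraint solvers and BDD-based compilation exist precisely to represent the valid set compactly when it is far too large to list, but the user-facing idea of pruning the remaining choices after each selection is the same.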
698198
https://en.wikipedia.org/wiki/Sri%20Venkateswara%20College%20of%20Engineering
Sri Venkateswara College of Engineering
Sri Venkateswara College of Engineering (SVCE) is an engineering institute in Tamil Nadu, at Pennalur, Sriperumbudur, near Chennai. SVCE was founded in 1985 and was established by the Southern Petrochemical Industries Corporation (SPIC) group. SVCE is among the top engineering colleges affiliated to Anna University in Tamil Nadu and a Tier-I institution among self-financing colleges.
History
In November 1985 the college was founded and granted permission by Vishnu Vardhan JP to conduct engineering courses in Mechanical Engineering, Electronics and Communication Engineering and Computer Science Engineering, with degrees awarded by the University of Madras. The college complex at Nazarathpet (near Poonamallee) was inaugurated on 8 April 1985 by the former Governor of Tamil Nadu. In 1991, the college shifted to its new campus at Pennalur, near the town of Sriperumbudur, and received approval from the All India Council for Technical Education the same year. Courses in Electrical & Electronics Engineering and Chemical Engineering were started in 1994. SVCE celebrated its decennial in March 1995 in the presence of the former Minister of State for Commerce and present Home Minister of the Government of India. In 1996 it began a course in Information Technology, the first college in the country to do so. The college obtained ISO 9001:2000 certification in 2002, and SVCE obtained autonomy from the UGC in 2016.
Rankings
The National Institutional Ranking Framework (NIRF) ranked it 176th among engineering colleges in 2020.
Library
SVCE has a library spanning three floors that covers the major fields of science and engineering and includes a conference room, a study space and seating for over 200. The library is air-conditioned and has conferencing, multimedia, internet and reprography facilities, a CD-ROM collection, and a book bank for deserving students. The college subscribes to most major technical journals, including those of the IEEE, ACM and ASME. Over 1,04,694 volumes and 1,725 CDs are available, and around 400 print and online journals are subscribed to. Apart from the central library, all departments maintain their own libraries. SVCE is an institutional member of the British Council Library, IIT Madras, the American Council Library, DELNET and MALIBNET. The library uses multimedia computers with internet connectivity, computer-based training, CDs, e-books and e-journals to promote e-learning. A new department of Information Management System (IMS) was started in 2012.
Sports infrastructure
The facility has a swimming pool, synthetic and clay tennis courts, a turf cricket ground, basketball and football grounds, badminton and volleyball courts, and a 400-meter track. An indoor area is dedicated to a table tennis hall, a gymnasium, and carom and chess rooms. The college conducts inter-college tournaments in basketball, volleyball, cricket and ball badminton.
Research
Dr R Muthucumaraswamy, Dean (Research), oversees research activities at Sri Venkateswara College of Engineering. The following ten departments are recognised as research centres by Anna University to conduct PhD programmes: (i) Electrical and Electronics Engineering, (ii) Mechanical Engineering, (iii) Chemical Engineering, (iv) Applied Mathematics, (v) Applied Physics, (vi) Applied Chemistry, (vii) Information Technology, (viii) Biotechnology, (ix) Computer Science and Engineering, and (x) Electronics and Communication Engineering. These departments have also been recognised by Anna University as collaborative research centres for PhD degrees.
R&D is aided by a technology innovation centre (TIC), which houses SPIC's research centre and an interdisciplinary centre for nanotechnology research. The management of SVCE sanctions Rs. 2,00,000 every year for innovative projects by final-year UG and PG students. A students' research day (SVCE Innovates) is conducted in the third week of March every year to motivate and nurture innovative ideas among students, and a faculty research day is conducted in the third week of April every year for faculty and research scholars to share their research ideas. More than 60 faculty members have received recognised-supervisor status from Anna University to guide research scholars, and more than 127 faculty members hold PhD qualifications. More than 226 research scholars, including 27 full-time scholars, are pursuing PhDs in SVCE research centres. To date, 97 scholars (full-time, part-time external and part-time internal) have completed their PhDs through SVCE research centres. Of the faculty, 17 completed their PhDs at SVCE, 25 at Anna University or its other affiliated colleges, and 10 at other universities (IITM, Bharathiar University, MSU, JNTU). The management of SVCE encourages and motivates faculty members to do research and offers incentives for research output: (i) a performance-based research incentive for PhD-qualified faculty, (ii) an incentive for research publications in reputed national and international journals, and (iii) an additional incentive of 2% of the amount received through funded projects, sanctioned after the successful completion of projects funded by external agencies.
Recent externally funded projects include the following. The Department of Electronics and Communication Engineering (Dr P Jothilakshmi, Professor, PI) received a research grant of Rs. 4.1 lakhs from the Tamil Nadu State Council for Science and Technology (TNSCST) for a period of two years (2020–2022). The Department of Applied Mathematics (Dr R Muthucumaraswamy, Dean (Research), Professor and Head, PI) received a research grant of Rs. 17.79 lakhs from the National Board of Higher Mathematics (DAE-NBHM) under the JRF scheme for a period of three years (2021–2024). The Department of Biotechnology (Dr V Sumitha, Associate Professor and HOD in charge, SVCE, and Dr Gugan Jayaraman, Professor, IITM) received a grant of Rs. 18.30 lakhs from DST-SERB under the Teacher Associateship for Research Excellence (TARE) scheme for a period of three years (2021–2024). The Department of Information Technology (Dr C Yaaswanth, Associate Professor, PI) received a research grant of Rs. 17.27 lakhs from ISRO under the RESPOND JRF scheme for a period of three years (2021–2024). Recently, the Department of Biotechnology (Dr P K Praveen Kumar, Associate Professor, and Dr Michael Gromiha, Professor, IITM) received a grant of Rs. 18.30 lakhs from SERB-DST under the Teacher Associateship for Research Excellence (TARE) scheme for a period of three years (2021–2024).
Patents
Four patents have been granted by IPR to the following members of the faculty:
Year 2020 (1): Dr M Gajendran, Assistant Professor, Department of Mechanical Engineering.
Year 2021 (2): Dr N Meyappan, Professor and Head, Department of Chemical Engineering; Dr C Gopinath, Associate Professor, Department of Electrical and Electronics Engineering.
Year 2022 (1): Dr Prem Anand, Associate Professor, Department of Mechanical Engineering.
iGEM participation
Students from the Department of Biotechnology formed teams and participated in the international Genetically Engineered Machine (iGEM) competition in 2015 and 2016, the only Anna University-affiliated college to do so. They were placed in the bronze medal category in 2015 and the silver medal category in 2016. In 2015 they worked on finding alternatives to antibiotics, and in 2016 on developing a system to prolong the shelf life of milk without refrigeration.
Entrepreneurship Development Cell
The Entrepreneurship Development Cell was formed in 1996 and has organized awareness camps to motivate students to become entrepreneurs. An Entrepreneurship Promotion and Incubation Center (EPIC) has been formed with support from the Entrepreneurship Development Institute and MSME to incubate start-up companies with innovative ideas. The campus is large, with sprawling greenery and individual blocks for departments and research and development centres. The cafeteria functions throughout the day and serves snacks, drinks and packed food items alongside South and North Indian breakfast and lunch choices; its puffs and birinji are particularly popular, and the recipes have been maintained for decades. For undergraduates at SVCE, there are many subject-based activities, lectures and workshops between exams to keep students engaged. The college has signed memoranda of understanding with industries and companies, including multinationals, to enable research and internship opportunities for professional development.
Housing
SVCE housing comprises two clusters, with seven blocks for men and three blocks for women. Marine Engineering students are required to live in the hostels. A total of around 380 women and 850 men can stay in the hostels, and the hostel area has indoor games, TV, WiFi and laundry facilities.
Forum for Economic Studies by Engineers
A club where students are taught how to manage finances and study economics. Every year the forum conducts mock placements, bringing in HR professionals to hold mock interviews for pre-final-year students and help them learn how to face real placements.
Curtain Call SVCE
The dramatics club representing the college. It aims to improve extracurricular participation among students by allowing them to join the club for activities such as script writing, acting, direction and production. The club has performed plays to large audiences through a partnership with the company Crea Shakthi.
Speakers' Forum SVCE
A club where students are trained in interpersonal skills, public speaking, group discussion, debate and stand-up. The forum concentrates on students' communication skills and peer mentoring.
Notes
Engineering colleges in Chennai Academic institutions formerly affiliated with the University of Madras
226605
https://en.wikipedia.org/wiki/3dfx%20Interactive
3dfx Interactive
3dfx Interactive was an American technology company headquartered in San Jose, California. Founded in 1994, it specialized in the manufacture of 3D graphics processing units and, later, video cards, and was a pioneer in the field from the late 1990s until 2000. The company's original product was the Voodoo Graphics, an add-in card that implemented hardware acceleration of 3D graphics. The hardware accelerated only 3D rendering, relying on the PC's existing video card for 2D support. Despite this limitation, the Voodoo Graphics product and its follow-up, Voodoo2, were popular, and it became standard for 3D games to offer support for the company's Glide API. The success of the company's products led to renewed interest in 3D gaming, and by the second half of the 1990s products combining a 2D output with reasonable 3D performance were appearing. This was accelerated by the introduction of Microsoft's Direct3D, which provided a single high-performance API that could be implemented on these cards, seriously eroding the value of Glide. While 3dfx continued to offer high-performance options, the value proposition was no longer compelling. 3dfx declined rapidly in the late 1990s, and most of the company's assets were acquired by Nvidia Corporation on December 15, 2000, mostly for the intellectual property rights. The acquisition was accounted for as a purchase by Nvidia and was completed by the first quarter of its 2002 fiscal year. 3dfx ceased supporting its products on February 15, 2001 and filed for bankruptcy on October 15, 2002.
History
Early history
The company was founded on August 24, 1994, as 3D/fx, Inc. by Ross Smith, Gary Tarolli and Scott Sellers, all former employees of Silicon Graphics Inc. They were soon joined by Gordie Campbell of TechFarm. 3dfx released its first product, the Voodoo Graphics 3D chip, to manufacturing on November 6, 1995. The chip was a VGA 3D accelerator featuring rendering methods such as point-sampled texture mapping, Z- and double buffering, Gouraud shading, subpixel correction, alpha compositing, and anti-aliasing. Alongside the chip came 3dfx's Glide API, designed to take full advantage of the Voodoo Graphics' features. The company stated that it created Glide because no existing API at the time could fully utilize the chip's capabilities: DirectX 3.0 was deemed to be lacking, and OpenGL was regarded as suitable only for CAD/CAM workstations. The first graphics card to use the chip was Orchid Technology's Righteous 3D, released on October 7, 1996. The company manufactured only the chips and some reference boards, and initially did not sell any product to consumers; rather, it acted as an OEM supplier for graphics card companies, which designed, manufactured, marketed and sold their own graphics cards built around the Voodoo chipset. 3dfx gained initial fame in the arcade market. The first arcade machine to use 3dfx Voodoo Graphics hardware was ICE Home Run Derby, a 1996 baseball game featuring a bat controller with motion-sensing technology. Later that year the hardware was featured in more popular titles, such as Atari's San Francisco Rush and Wayne Gretzky's 3D Hockey. 3dfx also developed MiniGL after id Software's John Carmack released a 1997 version of Quake that used the OpenGL API. The MiniGL driver translated OpenGL commands into Glide and gave 3dfx the advantage of being the sole consumer chip company to deliver a functional graphics library driver until 1998.
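The basic idea behind MiniGL, accepting a small subset of OpenGL's immediate-mode calls and re-expressing them as the triangle-level operations a Glide-based driver works with, can be illustrated with a minimal sketch. This is only a conceptual illustration, not the real MiniGL or Glide interface: the class, the backend callback and all names here are hypothetical stand-ins.

```python
# Conceptual sketch of an API translation layer in the spirit of MiniGL.
# The OpenGL-style entry points and the glide_backend callback below are
# simplified stand-ins, not the real MiniGL or Glide interfaces.

GL_TRIANGLES = 0x0004  # value matches the OpenGL constant

class MiniGLSketch:
    def __init__(self, glide_backend):
        # glide_backend is any callable taking three (x, y, z) vertices,
        # standing in for a Glide-style triangle-drawing entry point.
        self.glide_backend = glide_backend
        self.mode = None
        self.vertices = []

    def glBegin(self, mode):
        self.mode = mode
        self.vertices = []

    def glVertex3f(self, x, y, z):
        self.vertices.append((x, y, z))
        # Every three vertices in GL_TRIANGLES mode form one triangle,
        # which is handed straight to the backend.
        if self.mode == GL_TRIANGLES and len(self.vertices) == 3:
            self.glide_backend(*self.vertices)
            self.vertices = []

    def glEnd(self):
        self.mode = None
        self.vertices = []

# Usage: route the translated triangles to a stub "driver".
def fake_glide_draw_triangle(a, b, c):
    print("draw triangle:", a, b, c)

gl = MiniGLSketch(fake_glide_draw_triangle)
gl.glBegin(GL_TRIANGLES)
gl.glVertex3f(0.0, 0.0, 0.0)
gl.glVertex3f(1.0, 0.0, 0.0)
gl.glVertex3f(0.0, 1.0, 0.0)
gl.glEnd()
```

A real translation layer of this kind also has to map texture, blending and state calls, which is where most of the work lies, but the batching pattern above is the core of the idea.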
Voodoo Graphics PCI Towards the end of 1996, the cost of EDO DRAM dropped significantly and 3dfx was able to enter the consumer PC hardware market with aggressive pricing compared to the few previous 3D graphics solutions for computers. Prior to affordable 3D hardware, games such as Doom and Quake had compelled video game players to move from their 80386s to 80486s, and then to the Pentium. A typical Voodoo Graphics PCI expansion card consisted of a DAC, a frame buffer processor and a texture mapping unit, along with 4 MB of EDO DRAM. The RAM and graphics processors operated at 50 MHz. It provided only 3D acceleration and as such the computer also needed a traditional video controller for conventional 2D software. A pass-through VGA cable daisy-chained the video controller to the Voodoo, which was itself connected to the monitor. The method used to engage the Voodoo's output circuitry varied between cards, with some using mechanical relays while others utilized purely electronic components. The mechanical relays emitted an audible "clicking" sound when they engaged and disengaged. By the end of 1997, the Voodoo Graphics was by far the most widely adopted 3D accelerator among both consumers and software developers. The Voodoo's primary competition was from PowerVR and Rendition. PowerVR produced a similar 3D-only add-on card with capable 3D support, although it was not comparable to Voodoo Graphics in either image quality or performance. 3dfx saw intense competition in the market from cards that offered the combination of 2D and 3D acceleration. While these cards, such as Matrox Mystique, S3 ViRGE and ATI 3D Rage, offered inferior 3D acceleration, their lower cost and simplicity often appealed to OEM system builders. Rendition's Vérité V1000 was an integrated (3D+VGA) single-chip solution, but it did not have comparable 3D performance, and its 2D capabilities were considered merely adequate relative to other 2D cards of the time. Voodoo Rush In August 1997, 3dfx released the Voodoo Rush chipset, combining a Voodoo chip with a 2D chip that lay on the same circuit board, eliminating the need for a separate VGA card. Most cards were built with an Alliance Semiconductor AT25/AT3D 2D component, but there were some built with a Macronix chip and there were initial plans to partner with Trident but no such boards were ever marketed. The Rush had the same specifications as Voodoo Graphics, but did not perform as well because the Rush chipset had to share memory bandwidth with the CRTC of the 2D chip. Furthermore, the Rush chipset was not directly present on the PCI bus but had to be programmed through linked registers of the 2D chip. Like the Voodoo Graphics, there was no interrupt mechanism, so the driver had to poll the Rush in order to determine whether a command had completed or not; the indirection through the 2D component added significant overhead here and tended to back up traffic on the PCI interface. The typical performance hit was around 10% compared to Voodoo Graphics, and even worse in windowed mode. Later, Rush boards were released by Hercules featuring 8 MiB VRAM and a 10% higher clock speed, in an attempt to close this performance gap. Some manufacturers bundled a PC version of Atari Games' racing game San Francisco Rush, the arcade version of which utilised a slightly upgraded Voodoo Graphics chipset with an extra texture mapping unit and additional texture memory. Sales of the Voodoo Rush cards were very poor, and the cards were discontinued within a year. 
The Voodoo Rush was 3dfx's first commercial failure. Voodoo2 The 3Dfx Voodoo2, the successor to the Voodoo Graphics chipset released in March 1998, was architecturally similar, but the basic board configuration added a second texturing unit, allowing two textures to be drawn in a single pass. The Voodoo2 required three chips and a separate VGA graphics card, whereas new competing 3D products, such as the ATI Rage Pro, Nvidia RIVA 128, and Rendition Verite 2200, were single-chip products. Despite some shortcomings, such as the card's dithered 16-bit 3D color rendering and 800x600 resolution limitations, no other manufacturers' products could match the smooth framerates that the Voodoo2 produced. It was a landmark (and expensive) achievement in PC 3D-graphics. Its excellent performance, and the mindshare gained from the original Voodoo Graphics, resulted in its success. Many users even preferred Voodoo2's dedicated purpose, because they were free to use the quality 2D card of their choice as a result. Some 2D/3D combined solutions at the time offered quite sub-par 2D quality and speed. The Voodoo2 introduced Scan-Line Interleave (SLI), in which two Voodoo2 boards were connected together, each drawing half the scan lines of the screen. SLI increased the maximum resolution supported to 1024×768. Because of the high cost and inconvenience of using three separate graphics cards (two Voodoo 2 SLI plus the general purpose 2D graphics adapter), the Voodoo2 SLI scheme had minimal effect on total market share and was not a financial success. SLI capability was not offered in subsequent 3dfx board designs, although the technology would be later used to link the VSA-100 chips on the Voodoo 5. It was on this technology that Nvidia based its own SLI, rebranded Scalable Link Interface, which debuted on the GeForce 6 series in 2004. The arrival of the Nvidia RIVA TNT with integrated 2D/3D chipset would offer minor challenge to the Voodoo2's supremacy months later. Banshee Near the end of 1998, 3dfx released the Banshee, which featured a lower price achieved through higher component integration, and a more complete feature-set including 2D acceleration, to target the mainstream consumer market. A single-chip solution, the Banshee was a combination of a 2D video card and partial (only one texture mapping unit) Voodoo2 3D hardware. Due to the missing second TMU, in 3D scenes which used multiple textures per polygon, the Voodoo2 was significantly faster. However, in scenes dominated by single-textured polygons, the Banshee could match or exceed the Voodoo2 due to its higher clock speed and resulting greater pixel fillrate. Banshee's 2D acceleration was the first such hardware from 3dfx and it was very capable. It rivaled the fastest 2D cores from Matrox, Nvidia, and ATI. It consisted of a 128-bit 2D GUI engine and a 128-bit VESA VBE 3.0 VGA core. The graphics chip capably accelerated DirectDraw and supported all of the Windows Graphics Device Interface (GDI) in hardware, with all 256 raster operations and tertiary functions, and hardware polygon acceleration. The 2D core achieved near-theoretical maximum performance with a null driver test in Windows NT. 3dfx announced in January 1998 that the Banshee had sold about one million units. While Nvidia had yet to launch a product in the add-in board market that sold as well as 3dfx's Voodoo line, the company was gaining steady ground in the OEM market. 
The Nvidia RIVA TNT was a similar, highly integrated product that had two major advantages in greater 3D speed and 32-bit 3D color support. 3dfx, by contrast, had very limited OEM sales, as the Banshee was adopted only in small numbers by OEMs. Rampage In early 1998, 3dfx embarked on a new development project. The Rampage development project was new technology for use in a new graphics card that would take approximately two years to develop, and would supposedly be several years ahead of the competition once it debuted. The company hired hardware and software teams in Austin, Texas to develop 2D and 3D Windows device drivers for Rampage in the summer of 1998. The hardware team in Austin initially focused on Rampage, but then worked on transform and lighting (T&L) engines and on MPEG decoder technology. (Later, these technologies were part of the Nvidia asset purchase in December 2000.) The software team developed both device drivers and a binary-compatible soft emulation of the Rampage function set. Thus, there were working Windows NT device drivers within a few days of the power on of the Rampage system on the 2nd week of December, 2000. Dreamcast In 1997, 3dfx was working with entertainment company Sega to develop a new video game console hardware platform. Sega solicited two competing designs: a unit code-named "Katana", developed in Japan using NEC and Imagination Technologies (then VideoLogic) technology, and "Blackbelt", a system designed in the United States using 3dfx technology. However, on July 22, 1997, 3dfx announced that Sega was terminating the development contract. Sega chose to use NEC's PowerVR chipset for its game console, though it still planned to purchase the rights to 3dfx's technology in order to prevent competitors from acquiring it. 3dfx said Sega has still not given a reason as to why it terminated the contract or why it chose NEC's accelerator chipset over 3dfx's. According to Dale Ford, senior analyst at Dataquest, a market research firm based in San Jose, California, a number of factors could have influenced Sega's decision to move to NEC, including NEC's proven track record of supplying chipsets for the Nintendo 64 and the demonstrated ability to be able to handle a major influx of capacity if the company decided to ramp up production on a moment's notice. "This is a highly competitive market with price wars happening all the time and it would appear that after evaluating a number of choices—and the ramifications each choice brings—Sega went with a decision that it thought was best for the company's longevity," said Mr. Ford. "Sega has to make a significant move to stay competitive and they need to make it soon. Now whether this move is to roll out another home console platform or move strictly to the PC gaming space is unknown." Sega quickly quashed 3dfx's "Blackbelt" and used the NEC-based "Katana" as the model for the product that would be marketed and sold as the Dreamcast. 3dfx sued Sega for breach of contract, accusing Sega of starting the deal in bad faith in order to take 3dfx technology. The case was settled out of court. Voodoo3 and strategy shift 3dfx executed a major strategy change just prior to the launch of Voodoo3 by purchasing STB Systems for US $141 million on December 14, 1998. STB Systems was one of the larger graphics card manufacturers at the time; the intent was for 3dfx to start manufacturing, marketing, and selling its own graphics cards, rather than functioning only as an OEM supplier. 
Purchase of STB was intended to give 3dfx access to that company's considerable OEM resources and sales channels, but the intended benefits of the acquisition never materialized. The two corporations were vastly different entities, with different cultures and structures, and they never integrated smoothly. STB prior to the 3dfx acquisition also approached Nvidia as a potential partner to acquire the company. At the time, STB was Nvidia's largest customer and was only minimally engaged with 3dfx. 3dfx management mistakenly believed that acquiring STB would ensure OEM design wins with their products and that product limitations would be overcome with STB's knowledge in supporting the OEM sales/design win cycles. Nvidia decided not to acquire STB and to continue to support many brands of graphics board manufacturers. After STB was acquired by 3dfx, Nvidia focused on being a virtual graphics card manufacturer for the OEMs and strengthened its position in selling finished reference designs ready for market to the OEMs. STB's manufacturing facility in Juarez, Mexico was not able to compete from either a cost or quality point of view when compared to the burgeoning original design manufacturers (ODMs) and Contract electronic manufacturers (CEMs) that were delivering solutions in Asia for Nvidia. Prior to the STB merger finalizing, some of 3dfx's OEMs warned the company that any product from Juarez will not be deemed fit to ship with their systems, however 3dfx management believed these problems could be addressed over time. Those customers generally became Nvidia customers and no longer chose to ship 3dfx products. The acquisition of STB was one of the main contributors to 3dfx's downfall; the Voodoo 3 became the first 3dfx chip to be developed in-house rather than by third-party manufacturers, which were a significant source of revenue for the company. These third-party manufacturers turned into competitors and began sourcing graphics chips from Nvidia. This also further alienated 3dfx's remaining OEM customers, as they had a single source for 3dfx products and could not choose an OEM to provide cost flexibility. With the purchase of STB, 3dfx created two cards targeting the low-end market, the Velocity 100, which has 8 MB of SDRAM, and the Velocity 200, which has 16 MB of SGRAM. The cards both used a chipset based on the Voodoo3 2000, and it was claimed that they were "underclocked". However, it was revealed by testing that the Velocity 100 chipset has the same clock speed as a typical Voodoo3 2000—at 143 MHz—and that, while one of its two TMUs is disabled in OpenGL and Glide applications for memory management, it can be re-enabled to increase those applications' performance, and AnandTech found no side effects of enabling the component. As 3dfx focused more on the retail graphics card space, further inroads into the OEM space were limited. A significant requirement of the OEM business was the ability to consistently produce new products on the six-month product refresh cycle the computer manufacturers required; 3dfx did not have the methodology nor the mindset to focus on this business model. In the end, 3dfx opted to be a retail distribution company manufacturing their own branded products. The Voodoo 3 was hyped as the graphics card that would make 3dfx the undisputed leader, but the actual product was below expectations. Though it was still the fastest as it edged the RIVA TNT2 by a small margin, the Voodoo3 lacked 32-bit color and large texture support. 
Though at that time few games supported large textures and 32-bit color, and those that did generally were too demanding to be run at playable framerates, the features "32-bit color support" and "2048×2048 textures" were much more impressive on paper than 16-bit color and 256×256 texture support. The Voodoo3 sold relatively well, but was disappointing compared to the first two models and 3dfx lost the market leadership to Nvidia. As 3dfx attempted to counter the TNT2 threat, it was surprised by Nvidia's GeForce 256. The GeForce was a single-chip processor with integrated transform, lighting, triangle setup/clipping (hardware T&L), and rendering engines, giving it a significant performance advantage over the Voodoo3. The 3dfx Voodoo3 2000 PCI was the highest-performance 2D/3D card available for the Apple Macintosh at the time of its release, though support from 3dfx was labeled as 'beta' and required a firmware reflash. As game developers switched to DirectX and OpenGL, which respectively had become the industry standard and were becoming increasingly popular, 3dfx released its Glide API under the General Public License on December 6, 1999. Downfall The company's final product was code-named Napalm. Originally, this was just a Voodoo3 modified to support newer technologies and higher clock speeds, with performance estimated to be around the level of the RIVA TNT2. However, Napalm was delayed, and in the meantime Nvidia brought out their landmark GeForce 256 chip, which shifted even more of the computational work from the CPU to the graphics chip. Napalm would have been unable to compete with the GeForce, so it was redesigned to support multiple chip configurations, like the Voodoo2 had. The end-product was named VSA-100, with VSA standing for Voodoo Scalable Architecture. 3dfx was finally able to have a product that could defeat the GeForce. However, by the time the VSA-100 based cards made it to the market, the GeForce 2 and ATI Radeon cards had arrived and were offering higher performance for the same price. The only real advantage the Voodoo 5 5500 had over the GeForce 2 GTS or Radeon was its superior spatial anti-aliasing implementation, and the fact that, relative to its peers, it didn't suffer such a large performance hit when anti-aliasing was enabled. 3dfx was fully aware of the Voodoo 5's speed deficiency, so they touted it as quality over speed, which was a reversal of the Voodoo 3 marketing which emphasized raw performance over features. 5500 sales were respectable but volumes were not at a level to keep 3dfx afloat. The Voodoo 5 5000, which had 32 MB of VRAM to the 5500's 64 MB, was never launched, as the smaller frame buffer didn't significantly reduce cost over the Voodoo 5 5500. The only other member of the Voodoo 5 line, the Voodoo 4 4500, was as much of a disaster as Voodoo Rush, because it had performance well short of its value-oriented peers combined with a late launch. Voodoo 4 was beaten in almost all areas by the GeForce 2 MX—a low-cost board sold mostly as an OEM part for computer manufacturers—and the Radeon VE. One unusual trait of the Voodoo 4 and 5 was that the Macintosh versions of these cards had both VGA and DVI output jacks, whereas the PC versions had only the VGA connector. Also, the Mac versions of the Voodoo 4 and 5 had a vulnerability in that they did not support hardware-based MPEG2 decode acceleration, which hindered the playback of DVDs on a Mac equipped with a Voodoo graphics card. 
The Voodoo 5 6000 never made it to market, due to a severe bug that caused data corruption on the AGP bus on certain boards; it was also limited to AGP 2x and was thus incompatible with the new Pentium 4 motherboards. Only slightly more than one thousand units of the graphics card were ever produced. Later tests showed that the Voodoo 5 6000 outperformed not only the GeForce 2 GTS and ATI Radeon 7200, but also the faster GeForce 2 Ultra and Radeon 7500. In some cases it was shown to compete well with the GeForce 3, trading performance places with that card in various tests. However, the prohibitively high production cost of the card, particularly the four-chip setup, external power supply and 128 MB of VRAM (which would have made it the first consumer card with that amount of memory), would likely have hampered its competitiveness.
Acquisition and bankruptcy
On March 28, 2000, 3dfx bought GigaPixel for US$186 million, in order to help bring its products to market more quickly. In late 2000, not long after the launch of the Voodoo 4, several of 3dfx's creditors decided to initiate bankruptcy proceedings. 3dfx, as a whole, would have had virtually no chance of successfully contesting these proceedings, and instead opted to be bought by Nvidia, thus ceasing to exist as a company. The history of, and participants in, the 3dfx/Nvidia deal making can be read in the respective companies' financial filings from that period. The resolution and legality of those arrangements (with respect to the purchase, 3dfx's creditors and its bankruptcy proceedings) were still being worked through the courts nearly nine years after the sale. A majority of the engineering and design team that had been working on "Rampage" (the successor to the VSA-100 line) and remained through the transition were asked to stay in-house to work on what became the GeForce FX series. Others accepted employment with ATI, bringing their knowledge to the creation of the X series of video cards and the development of CrossFire, ATI's own version of SLI and yet another interpretation of 3dfx's SLI ideal. After Nvidia acquired 3dfx, mainly for its intellectual property, it announced that it would not provide technical support for 3dfx products. As of 2019, drivers and support are still offered by community websites; however, while functional, these drivers do not carry a manufacturer's backing and are considered beta software. For a limited time, Nvidia offered a program under which 3dfx owners could trade in their cards for Nvidia cards of similar performance. On December 15, 2000, 3dfx apologized to its customers in a final press release. In 2003, the source code for 3dfx drivers leaked, resulting in fan-made, updated drivers and further support. An appeal relating to the 3dfx bankruptcy was filed in the United States Court of Appeals for the Ninth Circuit (Docket No. 11–15189).
Although 1997 was marked by analysts as a turning point for 3dfx due to the marketing led by the new CEO Greg Ballard, there was criticism of Ballard's understanding of R&D in the graphics industry. Single-card 2D/3D solutions were taking over the market, and although Ballard saw the need and attempted to direct the company there with the Voodoo Banshee and the Voodoo3, both of these cost the company millions in sales and lost market share while diverting vital resources from the Rampage project.
Then, in early 1999, 3dfx announced that the still-competitive Voodoo2 would support only OpenGL and Glide under Microsoft's Windows 2000 operating system, and not Direct3D. Many games were transitioning to Direct3D at this point, and the announcement caused many PC gamers – the core demographic of 3dfx's market – to switch to Nvidia or ATI offerings for their new machines. Ballard resigned shortly after, in January 2000. Products References Further reading External links Greg Ballard discusses some of the reasons for 3dfx's decline, Stanford University, November 2006 Interview with AVOC Companies based in San Jose, California Computer companies disestablished in 2002 Companies that filed for Chapter 11 bankruptcy in 2002 Defunct companies based in California Defunct computer companies of the United States Defunct computer hardware companies Graphics hardware companies Nvidia Computer companies established in 1994 1994 establishments in California 2002 disestablishments in California
9311992
https://en.wikipedia.org/wiki/Catalyst%206500
Catalyst 6500
The Catalyst 6500 is a modular chassis network switch manufactured by Cisco Systems since 1999, capable of delivering speeds of up to "400 million packets per second". A 6500 comprises a chassis, power supplies, one or two supervisors, line cards and service modules. A chassis can have 3, 4, 6, 9 or 13 slots (Catalyst models 6503, 6504, 6506, 6509 or 6513, respectively), with the option of one or two modular power supplies. The supervisor engine provides centralised forwarding information and processing; up to two of these cards can be installed in a chassis to provide active/standby or stateful failover. The line cards provide port connectivity, and service modules allow devices such as firewalls to be integrated within the switch. Supervisor The 6500 Supervisor comprises a Multilayer Switch Feature Card (MSFC) and a Policy Feature Card (PFC). The MSFC runs all software processes, such as routing protocols. The PFC makes forwarding decisions in hardware. The supervisor has connections to the switching fabric and the classic bus, as well as bootflash for the Cisco IOS software. The latest generation supervisor is the Supervisor 2T, introduced at Cisco Live Las Vegas in July 2011. It provides 80 gigabits per second per slot on all slots of a 6500-E chassis. Operating systems The 6500 currently supports three operating systems: CatOS, Native IOS and Modular IOS. CatOS CatOS is supported for Layer 2 (switching) operations only. To perform routing (Layer 3) operations, the switch must be run in hybrid mode. In this case, CatOS runs on the Switch Processor (SP) portion of the Supervisor, and IOS runs on the Route Processor (RP), also known as the MSFC. To make configuration changes, the user must then manually switch between the two environments. CatOS lacks some functionality and is generally considered obsolete compared with running the switch in native mode. Native IOS Cisco IOS can be run on both the SP and the RP. In this case the user is unaware of where a command is being executed on the switch, even though technically two IOS images are loaded—one on each processor. This is the default shipping mode for Cisco products and enjoys support for all new features and line cards. Modular IOS Modular IOS is a version of Cisco IOS that employs a modern UNIX-based kernel to overcome some of the limitations of IOS. It also allows individual processes to be patched and in-service upgrades to be performed without rebooting the device. Methods of operation The 6500 has five major modes of operation: Classic, cef256, dcef256, cef720 and dcef720. Classic Bus The 6500 classic architecture provides 32 Gbit/s of centralised forwarding performance. An incoming packet is first queued on the line card, then placed onto the global data bus (dBus) and copied to all other line cards, including the supervisor. The supervisor then looks up the correct egress port, access lists, policing and any relevant rewrite information on the PFC. The result is placed on the result bus (rBus) and sent to all line cards. Line cards for which the data is not required terminate processing; the others continue forwarding and apply the relevant egress queuing. The classic bus runs at 32 Gbit/s half duplex (since it is a shared bus) and is the only supported way of connecting a Supervisor 32 (or Supervisor 1) engine to a 6500. cef256 This method of forwarding was first introduced with the Supervisor 2 engine.
When used in combination with a switch fabric module, each line card has an 8 Gbit/s connection to the switch fabric and, additionally, a connection to the classic bus. In this mode, assuming all line cards have a switch fabric connection, an ingress packet is queued as before and its headers are sent along the dBus to the supervisor. They are looked up in the PFC (including ACLs etc.) and the result is placed on the rBus. The ingress line card takes this information and forwards the data to the correct line card across the switch fabric. The main advantage is that there is a dedicated 8 Gbit/s connection between the line cards. The receiving line card queues the egress packet before sending it from the desired port. The '256' is derived from a chassis using 2x8 Gbit/s ports on 8 slots of a 6509 chassis: 16 × 8 = 128, and 128 × 2 = 256, the figure being doubled because the switch fabric is full duplex. dcef256 dcef256 uses distributed forwarding. These line cards have 2x8 Gbit/s connections to the switch fabric and no classic bus connection. Only modules that have a DFC (Distributed Forwarding Card) can use dcef. Unlike in the previous modes, each line card holds a full copy of the supervisor's routing tables locally, as well as its own Layer 2 adjacency table (i.e. MAC addresses). This eliminates the need for any connection to the classic bus or any requirement to use the shared resources of the supervisor. In this case, an ingress packet is queued, but its destination is looked up locally. The packet is then sent across the switch fabric and queued in the egress line card before being sent. cef720 This mode of operation acts identically to cef256, except with 2x20 Gbit/s connections to the switch fabric, and there is no need for a switch fabric module (it is now integrated into the supervisor). It was first introduced with the Supervisor Engine 720. The '720' is derived from a chassis using 2x20 Gbit/s ports on 9 slots of a 6509 chassis: 40 × 9 = 360, and 360 × 2 = 720, the figure again being doubled because the switch fabric is full duplex. Nine slots are used for the calculation instead of the eight used for cef256 because no slot needs to be given up to a switch fabric module. dcef720 This mode of operation acts identically to dcef256, except with 2x20 Gbit/s connections to the switch fabric.
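The per-chassis figures embedded in the mode names above follow from simple arithmetic on slots, fabric channels per slot, channel speed and the doubling for full duplex. The short Python sketch below reproduces that calculation; the function name and parameterisation are purely illustrative and are not taken from any Cisco tool.

def fabric_bandwidth_gbps(slots, channels_per_slot, gbps_per_channel):
    """Aggregate switch-fabric bandwidth in Gbit/s, doubled for full duplex."""
    per_slot = channels_per_slot * gbps_per_channel
    return slots * per_slot * 2

# cef256: 8 usable slots of a 6509 (one slot is taken by the fabric module),
# with 2 x 8 Gbit/s fabric channels per slot
print(fabric_bandwidth_gbps(slots=8, channels_per_slot=2, gbps_per_channel=8))   # 256

# cef720: all 9 slots (the fabric is integrated into the supervisor),
# with 2 x 20 Gbit/s fabric channels per slot
print(fabric_bandwidth_gbps(slots=9, channels_per_slot=2, gbps_per_channel=20))  # 720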
Power supplies The 6500 is able to deliver high densities of Power over Ethernet across the chassis. Because of this, power supplies are a key element of configuration. Chassis support The following covers the various 6500 chassis and their supported power supplies and loads. 6503 The original chassis permits up to 2800W and uses rear-inserted power supplies different from the others in the series. 6504-E This chassis permits up to 5000W (119A @ 42V) of power and, like the 6503, uses rear-inserted power supplies. 6506, 6509, 6506-E and 6509-E The original chassis can support a maximum of 4000W (90A @ 42V) of power because of backplane limitations. If a larger power supply is inserted, it will deliver power only up to this limit (i.e. a 6000W power supply is supported in these chassis, but will output a maximum of 4000W). The 6509-NEB-A supports a maximum of 4500W (108A @ 42V). With the introduction of the 6506-E and 6509-E series chassis, the maximum supported power has been increased to more than 14,500W (350A @ 42V). 6513 This chassis can support a maximum of 8000W (180A @ 42V); however, to obtain this it must be run in combined mode. If full redundancy is desired, it is instead suggested that the chassis be run in redundant mode, which provides a maximum of 6000W (145A @ 42V). Power redundancy options The 6500 supports dual power supplies for redundancy. These may be run in one of two modes: redundant or combined. Redundant mode When running in redundant mode, each power supply provides approximately 50% of its capacity to the chassis. In the event of a failure, the unaffected power supply then provides 100% of its capacity and an alert is generated. Because enough capacity to power the chassis was already available, there is no interruption to service in this configuration. This is also the default and recommended way to configure the power supplies. Combined mode In combined mode, each power supply provides approximately 83% of its capacity to the chassis. This allows for greater utilisation of the power supplies and potentially increased PoE densities. In systems equipped with two power supplies, if one power supply fails and the other cannot fully power all of the installed modules, system power management will shut down devices in the following order: Power over Ethernet (PoE) devices— The system will power down PoE devices in descending order, starting with the highest numbered port on the module in the highest numbered slot. Modules—If additional power savings are needed, the system will power down modules in descending order, starting with the highest numbered slot. Slots containing supervisor engines or Switch Fabric Modules are bypassed and are not powered down. This shutdown order is fixed and cannot be changed. Online Insertion & Removal OIR is a feature of the 6500 that allows most line cards to be hot-swapped without first powering down the chassis, so that an in-service upgrade can be performed. Before attempting this, however, it is important to understand the OIR process and how it may still require a reload. To prevent bus errors, the chassis has three pins in each slot that mate with the line card. Upon insertion, the longest of these makes first contact and stalls the bus (to avoid corruption). As the line card is pushed in further, the middle pin makes the data connection. Finally, the shortest pin removes the bus stall and allows the chassis to continue operation. If any part of this sequence is skipped, errors will occur, resulting in a stalled bus and ultimately a chassis reload. Common problems include: Line cards being inserted incorrectly (making contact with only the stall and data pins and therefore not releasing the bus) Line cards being inserted too quickly (so that the stall-removal signal is not received) Line cards being inserted too slowly (so that the bus is stalled for too long, forcing a reload). See also Supervisor Engine (Cisco) References Cisco products
173596
https://en.wikipedia.org/wiki/Wang%20Laboratories
Wang Laboratories
Wang Laboratories was a computer company founded in 1951 by An Wang and G. Y. Chu. The company was successively headquartered in Cambridge, Massachusetts (1954–1963), Tewksbury, Massachusetts (1963–1976), and finally in Lowell, Massachusetts (1976–1997). At its peak in the 1980s, Wang Laboratories had annual revenues of $3 billion and employed over 33,000 people. It was one of the leading companies during the time of the Massachusetts Miracle. The company was directed by An Wang, who was described as an "indispensable leader" and played a personal role in setting business and product strategy until his death in 1990. Under his direction, the company went through several distinct transitions between different product lines, beginning with typesetters, calculators and word processors, then adding computers, copiers and laser printers. Wang Laboratories filed for bankruptcy protection in August 1992. After emerging from bankruptcy, the company eventually changed its name to Wang Global. Wang Global was acquired by Getronics of the Netherlands in 1999, becoming Getronics North America, and was then sold to KPN in 2007 and to CompuCom in 2008, after which it no longer existed as a distinct brand or division. Public stock listing An Wang took steps to ensure that the Wang family would retain control of the company even after going public. He created a second class of stock, class B, with higher dividends but only one-tenth the voting power of class C. The public mostly bought class B shares; the Wang family retained most of the class C shares. (The letters B and C were used to ensure that brokerages would fill any Wang stock orders with class B shares unless class C was specifically requested.) Wang stock had been listed on the New York Stock Exchange, but this maneuver was not quite acceptable under the NYSE's rules, and Wang was forced to delist from the NYSE and relist on the more liberal American Stock Exchange. After Wang's 1992 bankruptcy, holders of class B and C common stock were treated the same. Products Typesetters The company's first major project was the Linasec in 1964. It was an electronic special-purpose computer designed to justify paper tape for use on automated Linotype machines. It was developed under contract to Compugraphic, which manufactured phototypesetters. Compugraphic retained the rights to manufacture the Linasec without royalty, and exercised those rights, effectively forcing Wang out of the market. Calculators The Wang LOCI-2 Logarithmic Computing Instrument desktop calculator was introduced in January 1965 (the earlier LOCI-1, from September 1964, was not a real product). Using factor combining, it was probably the first desktop calculator capable of computing logarithms, quite an achievement for a machine without any integrated circuits. The electronics included 1,275 discrete transistors. It actually performed multiplication by adding logarithms, and roundoff in the display conversion was noticeable: 2 times 2 yielded 3.999999999. From 1965 to about 1971, Wang was a well-regarded calculator company. Wang calculators cost in the mid four figures, used Nixie tube readouts, performed transcendental functions, had varying degrees of programmability, and exploited magnetic core memory. The 200 and 300 calculator models were available as timeshared simultaneous (SE) packages that had a central processing unit (the size of a small suitcase) connected by cables leading to four individual desktop display/keyboard units.
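The 3.999999999 result mentioned above is the classic symptom of multiplying via logarithms at limited precision: the machine adds log(2) and log(2), takes the antilogarithm, and a tiny truncation error in the summed logarithm surfaces as a product just under 4. The following Python sketch reproduces the effect with base-10 logarithms truncated to ten decimal places; it is only a toy illustration of the failure mode under those assumed precisions, not a model of the LOCI-2's actual factor-combining circuitry or digit handling.

import math

def truncate(x, places):
    """Keep `places` decimal places, dropping (not rounding) the rest."""
    factor = 10 ** places
    return math.floor(x * factor) / factor

def log_multiply(a, b, places=10):
    """Multiply two positive numbers by adding truncated base-10 logarithms."""
    log_sum = truncate(math.log10(a), places) + truncate(math.log10(b), places)
    return 10 ** log_sum

print("%.10g" % log_multiply(2, 2))   # prints 3.999999999 rather than 4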
Competition included HP, which introduced the HP 9100A in 1968, and old-line calculator companies such as Monroe and Marchant. Wang calculators were at first sold to scientists and engineers, but the company later won a solid niche in financial-services industries, which had previously relied on complicated printed tables for mortgages and annuities. One perhaps apocryphal story tells of a banker who spot-checked a Wang calculator against a mortgage table and found a discrepancy. The calculator was right, the printed tables were wrong, and the company's reputation was made. In the early seventies, Wang believed that calculators would become unprofitable low-margin commodities, and decided to exit the calculator business. Word processors The Wang 1200 Wang's first attempt at a word processor was the Wang 1200, announced in late 1971 but not available until 1972. The design consisted of the logic of a Wang 500 calculator hooked up to an OEM-manufactured IBM Selectric typewriter for keying and printing, and dual cassette decks for storage. Harold Koplow, who had written the microcode for the Wang 700 (and its derivative, the Wang 500), rewrote the microcode to perform word processing functions instead of number crunching. The operator of a Wang 1200 typed text on a conventional IBM Selectric keyboard; when the Return key was pressed, the line of text was stored on a cassette tape. One cassette held roughly 20 pages of text, and could be "played back" (i.e., the text retrieved) by printing the contents on continuous-form paper in the 1200 typewriter's "print" mode. The stored text could also be edited, using keys on a simple, six-key array. Basic editing functions included Insert, Delete, Skip (character, line), and so on. The labor and cost savings of this device were immediate and remarkable: pages of text no longer had to be retyped to correct simple errors, and projects could be worked on, stored, and then retrieved for use later on. The rudimentary Wang 1200 machine was the precursor of the Wang Office Information System (OIS), which revolutionized the way typing projects were performed in the American workplace. Wang OIS Following the Wang 1200, Harold Koplow and David Moros made another attempt at designing a word processor. They began by writing the user's manual for the product. A 2002 Boston Globe article refers to Koplow as a "wisecracking rebel" who "was waiting for dismissal when, in 1975, he developed the product that made computers popularly accessible." In Koplow's words, "Dr. Wang kicked me out of marketing. I, along with Dave Moros was relegated to Long Range Planning — 'LRPed'. This ... was tantamount to being fired: 'here is a temporary job until you find another one in some other company.'" Although he and Moros perceived the assignment to design a word processing machine as busywork, they went ahead anyway. They wrote the manual and convinced An Wang to turn it into a real project. The word processing machine, the Wang 1200 WPS, was introduced in June 1976 and was an instant success, as was its successor, the 1977 Wang OIS (Office Information System). These products were technological breakthroughs. They were multi-user systems. Each workstation looked like a typical terminal, but contained its own Intel 8080 microprocessor (later versions used a Z80) and 64 KB of RAM (comparable to, though less powerful than, the original IBM PC, which came out in 1981).
Disk storage was centralized in a master unit and shared by the workstations, with connection via the high-speed dual coaxial cable "928 Link". Multiple OIS masters could be networked to each other, allowing file sharing among hundreds of users. The systems were user-friendly and fairly easy to administer, with the latter task often performed by office personnel, in an era when most machines required trained administrators. Copiers/printers Wang, ahead of IBM and Xerox, captured the lead for "the 'intelligent' printer: a reasonably priced, high-speed office copier that can be linked electronically to ..." PCs "and other automated equipment". A year later, The New York Times described the IBM 6670 Information Distributor as "closer to the standard envisioned." Early computer models Wang 3300 Wang's first computer, the Wang 3300, was an 8-bit, integrated-circuit, general-purpose minicomputer specifically designed to be the central processor of a multi-terminal time-sharing system. Byte-oriented, it also provided a number of double-byte operand memory commands. Core memory ranged from 4,096 to 65,536 bytes in 4,096-byte increments. Development began shortly after Rick Bensene was hired in June 1968. The product was announced in February 1969 and shipped to its first customer on March 29, 1971. Wang 2200 Wang developed and marketed several lines of small computer systems for both word processing and data processing. Instead of a clear, linear progression, the product lines overlapped and in some cases borrowed technology from each other. The most identifiable Wang minicomputer performing recognizable data processing was the Wang 2200, which appeared in May 1973. Unlike some other desktop computers such as the HP 9830, it had a CRT in a cabinet that also included an integrated, computer-controlled Compact Cassette storage unit and a keyboard. Microcoded to run interpretive BASIC, the 2200 sold about 65,000 systems over its lifetime and found wide use in small and medium-size businesses worldwide. The original 2200 was a single-user system. The improved VP model increased performance more than tenfold and enhanced the language (renamed Basic-2). The 2200 VP evolved into a desktop computer and a larger MVP system supporting up to 16 workstations, and utilized commercial disk technologies that appeared in the late 1970s and early 1980s. The disk subsystems could be attached to up to 15 computers, giving a theoretical upper limit of 240 workstations in a single cluster. Unlike with the other product lines such as the VS and OIS (both described below), Wang aggressively used value-added resellers (VARs) to customize and market 2200 systems. One such creative solution deployed dozens of 2200 systems and was developed in conjunction with the Hawaii- and Hong Kong-based firm Algorithms, Inc. It provided paging (beeper) services for much of the Hong Kong market in the early 1980s. Overshadowed by the Wang VS, the 2200 languished as a cost-effective but largely forgotten solution in the hands of the customers who had it. In the late 1980s Wang revisited the 2200 series one last time, offering 2200 customers a new 2200 CS with bundled maintenance for less than customers were then paying just for maintenance of their aging 2200s. The 2200 CS came with an Intel 386 processor, updated disk units and other peripherals. Most 2200 customers upgraded to the 2200 CS, after which Wang never again developed or marketed any new 2200 products.
In 1997, Wang reported having about two hundred 2200 systems still under maintenance around the world; Wang had offered maintenance services for the 2200 throughout its life. The 2200 Basic2 language was ported to be compiled and run on non-Wang hardware and operating systems by at least two companies. Niakwa Inc. created a product named NPL (originally named Basic-2C), and Kerridge Computer (now a part of ADP) created a product named KCML. Both products support DOS, Windows and various Unix systems. The Basic2 language has been substantially enhanced and extended by both companies to meet modern needs. Compared with the 2200 Wang hardware, the compiled solutions improve all factors (speed, disk space, memory, user limits) by tens to hundreds of times, so while Wang support for the 2200 is gone, many software applications continue to function. During the 1970s, about 2,000 Wang 2200T computers were shipped to the USSR. Due to the Afghan war in the 1980s, US and COCOM export restrictions ended shipments of Wang computers. The Soviets were in great need of computers. In 1981, Russian engineers at Minpribor's Schetmash factory in Kursk reverse-engineered the Wang 2200T and created a computer they named the Iskra 226. The "COCOM restrictions" theory, though popular in the West, is challenged by some Russian computer historians on the grounds that development of the Iskra-226 started in 1978, two years before the Afghan war; one possible reason might instead have been a Soviet fear of backdoors in Western hardware. The Iskra 226 is also significantly different from the Wang 2200 in its internals, being inspired by it rather than a direct clone. It used the same Basic language (named T-Basic) with a few enhancements. Many research papers reference calculations done on the Iskra 226. The machine's designers were nominated for a 1985 State Prize. Later, a somewhat scaled-down Unix implementation was created for the Iskra-226, which was widely used in the Soviet Union but is virtually unknown in the West. Wang OIS Wang had a line called Alliance, which was based on the high-end OIS (140/145) hardware architecture. It had more powerful software than the OIS word processing and list processing packages. The system was Tempest certified, leading to global deployment in American embassies after the Iran hostage crisis. The Z80 platform on which Alliance ran forced it to remain an 8-bit application in a 64 KB workstation. The Wang VS computer line The first Wang VS computer was introduced in 1977, the same year as Digital Equipment Corporation's VAX; both continued for decades. The VS instruction set was compatible with the IBM 360 series, but it did not run any IBM 360 system software. Software The VS operating system and all system software were built from the ground up to support interactive users as well as batch operations. The VS was aimed directly at the business data processing market in general, and IBM in particular. While many programming languages were available, the VS was typically programmed in COBOL. Other languages supported in the VS integrated development environment included Assembler, COBOL 74, COBOL 85, BASIC, Ada, RPG II, C, PL/I, FORTRAN, Glossary, MABASIC, SPEED II and Procedure (a scripting language). Pascal was also supported for I/O co-processor development.
The Wang PACE (Professional Application Creation Environment) 4GL and database were used from the mid-1980s onward by customers and third-party developers to build complex applications, sometimes involving many thousands of screens and hundreds of distinct application modules and serving many hundreds of users. Substantial vertical applications were developed for the Wang VS by third-party software houses throughout the 1980s in COBOL, PACE, BASIC, PL/I and RPG II. The Wang OFFICE family of applications and Wang WP were both popular applications on the VS. Word Processing ran on the VS through services that emulated the OIS environment and downloaded the WP software as "microcode" (in Wang terminology) to VS workstations. Hardware The press and the industry referred to the class of machines made by Wang, including the VS, as "minicomputers," and Kenney's 1992 book refers to the VS line as "minicomputers" throughout. Although some argue that the high-end VSes and their successors should qualify as mainframes, Wang avoided this term. In his autobiography, An Wang, rather than calling the VS 300 a mainframe, said that it "verges on mainframe performance". He went on to draw a distinction between the "mainframes" at the high end of IBM's line ("just as Detroit would rather sell large cars ... so IBM would rather sell mainframes")—in which IBM had a virtual monopoly—and the "mid-sized systems" in which IBM had not achieved dominance: "The minicomputer market is still healthy. This is good for the customer and good for minicomputer makers." The VS7000 series was introduced in 1987. These machines were a renumbering of the VS100 and VS300, with the more powerful counterpart upgrades identified as the VS7110, VS7120 and VS7130 (for the VS100) and the VS7310 as the new high-end machine in the series. Later models, the small VS5000 series, launched in approximately 1988, were user-installable, the smallest being physically similar in size to PCs of the era. The largest iteration, the VS10000, supported a substantially larger number of users. The VS10000 was a technological departure from earlier models in that it used emitter-coupled logic (ECL), a very fast current-based logic that necessitated the use of very large 375-amp, 3-volt power supplies, massive heat sinks and large squirrel-cage blowers. It was stated that the VS10000 drew up to ten kilowatts of power. For anyone who serviced these devices, the most comfortable place (also the noisiest) in the computer room was behind the computer. The design also led to embarrassing situations in which the horizontally mounted CPU and memory boards, because of their significant weight, would contact the lower cage mount if not properly supported during withdrawal. The VS10000 was designed to run multiple concurrent operating systems, and was piloted with VS version 7 and a Unix operating system, with strong rumors of a Novell research group attempting to rough out a design for a large enterprise Novell OS. Going after IBM An Wang felt a personal sense of rivalry with IBM, partly as a result of heavy-handed treatment by IBM in 1955/56 over the rights to his magnetic-core patents. (This encounter formed the subject of a long chapter in Wang's own book, Lessons.) According to Charles C. Kenney, "Jack Connors remembers being in Wang's office one day when the Doctor pulled out a chart on which he had plotted Wang's growth and projected that Wang Laboratories would overtake IBM sometime in the middle of the 1990s.
'He had kept it a long time,' says Connors. 'And he believed it.'" Wang was one of the first computer companies to advertise on television, and the first to run an ad during the Super Bowl in 1978. Their first ad literally cast Wang Laboratories as David and IBM as Goliath, several years before the famous 1984 Apple Computer ad. A later ad depicted Wang Laboratories as a helicopter gunship taking aim at IBM. Wang wanted to compete against IBM as a computer company, selling directly to management information system departments. Before the VS, however, Wang Laboratories was not taken seriously as a computer company. The calculators, word processing systems and OIS were sold into individual departments, bypassing the corporate data-processing decision-makers. The chapter in Wang's book dealing with them shows that he saw them only as "a beachhead in the Fortune 1000." The Wang VS was Wang's entry into IT departments. In his book, An Wang notes that, to sell the VS, "we aggressively recruited salesmen with strong backgrounds in data processing ... who had experience dealing with MIS executives, and who knew their way around Fortune 1000 companies." As the VS took hold, the word processor and OIS lines were phased out. The word processing software continued, in the form of a loadable-microcode environment that allowed VS workstations to take on the behavior of traditional Wang WP terminals to operate with the VS and use it as a document server. Wang made inroads into IBM and DEC markets in the 1980s, but did not have a serious impact on IBM's mainframe market due to self-limiting factors. Even though An Wang wanted to compete with IBM, too many Wang salespeople were incompletely trained on the significant DP capabilities of the VS. In many instances the VS ran smaller enterprises up to about $500 million/year and in larger organizations found use as a gateway to larger corporate mainframes, handling workstation pass-through and massive print services. At Exxon Corporation, for instance, thirteen 1985 top-of-the-line VS300s at the Houston headquarters were used in the 1980s and into the 1990s to receive mainframe reports and make them viewable online by executives. At Mellon Mortgage 18 VS systems from the smallest to the largest were used as the enterprise mortgage origination, servicing, finance, documentation and hedge system and also for mainframe gateway services for logon and printing. Between Mellon Mortgage and parent Mellon Bank, their network contained 45 VS systems and the Bank portion of the network supported about 16,000 Wang Office users for email, report distribution and scheduling. At Kent and KTec Electronics, two related Houston companies, separate VS clusters were the enterprise systems, handling distribution, manufacturing and accounting, with significant EDI capability for receiving customer forecasts, sending invoices, and sending purchase orders and receiving shipping notifications. Both systems ran the GEISCO EDI package. Kent, which grew to $600 million/year, ran the Arcus distribution software in COBOL and KTec, which grew to $250 million/year, ran the CAELUS MRP system for manufacturing in BASIC. Aggressive marketing In the late 1980s, a British television documentary accused the company of targeting a competitor, Canadian company AES Wordplex, in an attempt to take it out of the market. However, the documentary came to no conclusion regarding this. 
Wang's approach was internally called "The Gas Cooker Program", named after similar programs that offered discounts on new gas stoves for trading in an old one. Wang was accused of targeting Wordplex by offering a large discount on Wang OIS systems with a trade-in of Wordplex machines, regardless of the age or condition of the trade-in machine. Based on its good reputation with users and its program of aggressive discounts, Wang gained an increasing share of a shrinking market. Wordplex was subsequently taken over by Norsk Data. Word processing market collapse The market for standalone word processing systems collapsed with the introduction of the personal computer. MultiMate, on the IBM PC and MS-DOS PC clones, replicated the keyboard and screen interface and functions of the Wang word processor, and was actively marketed to Wang corporate users, while several other WYSIWYG word processing programs also became popular. Wang did make one last play in this arena, producing a dedicated Intel-based word processor called the Wang Office Assistant in 1984. This was marketed and sold very successfully in the UK to a specific few office equipment dealers, who were able to upgrade their clients from electronic typewriters to the Office Assistant. The machines proved very reliable and fast when connected to the Wang bi-directional printer, providing fairly cheap but very fast word processing to small firms such as solicitors. The US side of the company was surprised at the machine's success in the UK, but could not supply a spell-check programme before the PC, with its flexibility in combining word processing with other programs such as spreadsheets, had rendered such a single-purpose machine largely unsellable. As a result, the Wang Office Assistant had a life span of only four years. The Digital Voice Exchange The Wang DVX was one of the first integrated switchboard and voicemail systems. In the United Kingdom it was selected for the DTI Office Automation Pilot schemes at the National Coal Board in about 1980. Wang, which had added DVX Message Waiting in 1984, named its 1989 announcement DVX II. Internal research on speech recognition was carried out and implemented for discrete word recognition, but was never released to the field. At one point there were 50 members of the Voice Engineering Department. Lawrence E. Bergeron was instrumental in managing the Voice Engineering Department at Wang Labs. He promoted the purchase of a VAX-11/780 for 'real-time' signal processing research and created the Peripheral Signal Processor (PSP) board. The PSP was placed into 16 racks to handle 128 phone lines for the DVX (Digital Voice Exchange). Wang's Digital Voice Exchange supported the renting of voice mailboxes. Voice prompts were created by a hired voice specialist to give a melodic presentation for the DVX. To avoid false triggering of touch-tones by the prompts (due to input/output cross talk), notch filters were created to remove the touch-tone frequencies from the prompts. Some of the prompt languages supported were German, Spanish, French, British English, American English, and Portuguese. PCs and PC-based products Despite the release of the 2200 PCS (Personal Computer System) and 2200 PCS-II models as early as 1976, the history of computing regards the earliest PCs as machines containing a microprocessor, which the 2200 PCS did not.
However, the self-contained PCS-II incorporated many of the innovations that would later be seen in PCs, including the first 5.25-inch floppy drives, which were specially designed for the PCS-II by Shugart Associates. The original Wang PC An Wang had initially refused to develop an IBM-compatible PC, due to his personal dislike of IBM, even though his son Fred had pitched the idea. The original Wang PC was released in April 1982 to counter the IBM PC, which had been released the previous August and had gained wide acceptance in the market in which Wang traditionally positioned the OIS system. It was based on the Intel 8086 microprocessor, a slightly faster CPU than the IBM PC's 8088. A hardware/software package that permitted the Wang PC to act as a terminal to the OIS and VS products was available. The hardware component, which in its first version consisted of two large add-in boards called the WLOC (Wang Local Office Connection), contained a Z-80 processor and 64 KB of memory. The original PC-VS hardware used the 928 terminal emulator board; the WLOC boards were used in the subsequent 80286 machines. These PCs later formed the basis for the system console on the VS7000 and later Wang midframe series, being used to initialise the boot process. One of the distinguishing features of the Wang PC was the system software. As on the Wang VS minicomputer, the command line was not immediately evident. Everything could be run from menus, including formatting a disk. Furthermore, each item on a menu could be explained by hitting a dedicated Help key on the keyboard. This software was later sold in MS-DOS-compatible form for non-Wang hardware. The Wang word processing software was also very graphical. The keyboard had 16 function keys and, unlike with WordStar, the popular word processor of the day, control-key combinations were not required to navigate the system; the function keys had the word processing functions labeled on them. However, the biggest stumbling block was that, despite being a fully compliant MS-DOS system, it was not compatible with the IBM PC at the hardware level. This was a problem because MS-DOS was rarely used as anything more than a simple program loader; complex software (spreadsheets, Flight Simulator, etc.) could only obtain acceptable performance by direct manipulation of the hardware. Wang used a 16-bit data bus instead of the 8-bit data bus used by IBM, arguing that applications would run much faster since most operations required I/O (disk, screen, keyboard, printer). With this 16-bit design, Wang used peripheral hardware devices, such as the Wang PC display adapter, that were not compatible with their counterparts in the IBM PC line. This meant that the vast library of software available for the IBM PC could not be run directly on the Wang PC; only programs that were either written specifically for the Wang PC or ported from the IBM PC were available. A basic word processing package developed by Wang and Microsoft's Multiplan spreadsheet were the two commonly marketed software products. Lotus 1-2-3 and dBase II were also available. This dearth of application software led to the early demise of the original Wang PC, and it was replaced by an Intel 80286-based product that was fully plug-compatible with the IBM PC. The unique system software was available at extra cost.
Most Wang PCs were released with a monochrome graphics adapter that supported a single video mode with text and graphics planes that could be scrolled independently, unlike IBM-compatible PCs of the time, which required selecting a specific video mode to allow graphics. A color graphics adapter and a Wang-branded color monitor were also available. An ergonomic feature of the Wang PC was the monitor arm, which clamped to the desk and held the monitor above it, and a matching system clamp that attached to the side of the desk and held the rather large computer box. With these, there was nothing on the desktop except the keyboard. IBM-compatible Wang PCs After it became clear that it had been a mistake to ignore the issue of PC compatibility, Wang belatedly released an emulation board for the Wang PC that enabled operation of many PC-compatible software packages. The board accomplished this by monitoring all I/O and memory transactions (visible, in those days before North/South bridge chips, to any board plugged into a slot on the expansion bus) and generating a non-maskable interrupt (NMI) whenever an operation was deemed to involve an incompatible device and therefore to require emulation. For example, the floppy controller circuitry on the Wang PC was similar to that of the IBM PC but involved enough design differences that PC-compatible software attempting to manipulate it directly would fail. Wang's PC emulation hardware would detect I/O and memory operations involving the addresses associated with the floppy controller in the IBM PC and generate an NMI. The NMI handler would immediately be activated (the exception vector having been appropriated during system init to point to ROM routines on the emulation board instead of the NMI routine in the PC BIOS) and would then update an internal representation of the IBM PC floppy controller and manipulate the real controller to reflect its state. Reads were satisfied in a similar way, by forcing an NMI, decoding the machine code indicated by the instruction pointer at the time of the fault, and then obtaining the desired information and updating the CPU registers accordingly before resuming the executing program. The PC emulation board thus enabled execution of an impressive number of applications by presenting "virtualized" PC-compatible hardware devices to them: a monochrome text-only video controller, a floppy controller, UARTs, a DMA controller, a parallel port, a keyboard controller, and so on. IBM PC emulation on the 8086-based Wang PC was working fairly reliably when IBM released its 80286-based PC-AT. Wang's answer was the 80286-based Wang APC (Advanced Professional Computer), which not only perpetuated the PC incompatibility at the hardware level but revealed an unexpected wrinkle in the hitherto successful PC emulation strategy: the 8086 decoded and executed instructions on the fly and was therefore able to respond to an NMI on every instruction boundary, but the 80286 decoded and queued instructions in a pipeline whose depth varied with each CPU mask stepping and which always prevented timely NMI execution. Wang's engineers came up with (and later patented) a rather devious instruction-prefetch monitor that managed to cope with that challenge at the expense of a modest performance penalty.
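In modern terms, the emulation board implemented a trap-and-emulate scheme: an access to an "IBM" device address raises a fault, and the handler keeps a shadow copy of the IBM-style device in sync with the real Wang hardware. The Python sketch below is only a schematic of that idea; the class name, the native-controller interface and the dispatch logic are invented for illustration, and only the IBM PC floppy-controller port range (0x3F0 through 0x3F7) is a real-world detail. It does not reflect Wang's actual ROM routines.

# Hypothetical sketch of trap-and-emulate I/O; names and interfaces are invented.
IBM_FDC_PORTS = range(0x3F0, 0x3F8)   # IBM PC floppy controller I/O ports

class ShadowFloppyController:
    """Keeps an IBM-style register image and drives the real, incompatible controller."""

    def __init__(self, native_controller):
        self.regs = {}                   # shadow of the virtual IBM PC registers
        self.native = native_controller  # assumed wrapper around the Wang hardware

    def io_write(self, port, value):
        self.regs[port] = value
        self.native.apply(self.regs)     # translate shadow state into native commands

    def io_read(self, port):
        # derive what an IBM PC controller would have returned from native status
        return self.native.status_as_ibm(port, self.regs)

def nmi_handler(port, value, is_write, shadow_fdc):
    """Called when the board raises an NMI for an access to an 'IBM' address."""
    if port in IBM_FDC_PORTS:
        if is_write:
            shadow_fdc.io_write(port, value)
            return None                      # writes return nothing to the program
        return shadow_fdc.io_read(port)      # reads are placed back in CPU registers
    raise NotImplementedError("other virtualized devices are handled elsewhere")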
Further iterations of the PC line were released, commencing with the model number PC-240. These later models booted directly into MS-DOS (or another compatible operating system) and supported ISA-standard expansion cards. The PC-240 still was not entirely IBM PC-standard: its keyboard, although a standard PC/AT device, provided VS compatibility with 24 function keys rather than the usual 12, plus a number of Wang VS-specific keys. There was also a slight difference from the IBM standard in CPU interrupts, so some software had compatibility issues. VS connectivity was via an ISA-based VS-terminal card, or via LightSpeed, the networked VS terminal emulator, over an IPX-based Ethernet connection. Originally the PC-240 came with a Wang-specific Hercules Graphics Card and compatible screen, which also acted as a keyboard extension, meaning the base unit could be kept some distance from the screen. This was later replaced with an EGA card and screen. Around 1991, Wang released the PC350-16 and PC350-40, which were Intel 80386-based, clocked at 16 MHz and 40 MHz respectively. They used the same VS-compatible keyboard as the PC-240, had a maximum of 4 megabytes of RAM, and came with VGA screens as standard. They were generally supplied with MS-DOS and Windows 3.0. The 350-16 had an interesting bug whereby, occasionally, the machine would freeze entirely and not boot up if power-cycled at the mains: although it would power on, the BIOS would not start. The solution was to turn on the machine at the mains and hold down the power button for 30 seconds, at which point it would normally start. It was suggested that this was due to a capacitor of insufficient value in the power circuit. The problem appeared to be resolved in the 350-40, which had a different PSU. In 1992, Wang marketed a PC-compatible based on the Intel 80386SX processor, which it called the Alliance 750CD. It was clocked at 25 MHz and had a socket for an 80387 math coprocessor. It came with 2 megabytes of installed RAM, expandable to 16 megabytes using SIMM memory cards, and had a 1.44 megabyte floppy disk drive, an internal 80 megabyte hard disk, and a CD-ROM drive. Five expansion slots were built in. It came with the MS-DOS and Windows 3.1 operating systems. In 1994, Wang released the slimline Alliance 750CD 80486-based PC in the United Kingdom. These machines used standard PC/AT keyboards and were entirely IBM compatible, shipping with MS-DOS 5.0 and Windows 3.11 as standard. The only unusual feature was that the system BIOS settings and the real-time clock were maintained by four standard AA batteries, as opposed to a more typical solution such as a specialty battery pack or lithium battery. While originally offered with a 33 MHz 80486DX, the 750CD could be upgraded to later Socket 3 processors such as the 80486DX2 through the use of third-party CPU upgrade adapters or interposers. This allowed upgrading to speeds beyond 50 MHz without overclocking, or alternatively to speeds in excess of 100 MHz with overclocking, depending on the processor used. Wang Freestyle Wang Freestyle was a 1988 product consisting of: A touch-sensitive tablet and special stylus for written annotation of any file that could be displayed on the PC. A phone handset for voice annotation (but not voice communication), demonstrated to strong effect in conjunction with the tablet for explaining the written annotations. Email, via Wang OFFICE, of the resulting document set. Although the low-end product was priced at $2,000, this excluded important features such as the "facsimile and voice options" (priced at $12,000). Freestyle was not a success in anything except marketing terms.
A description of the system at the University of Southern California (USC) shows the symptoms of the failure: The $1.2 million USC system includes a VS 7150 mid-range computer; 30 image workstations, 25 with Freestyle capabilities; a laser printer; five endorsers; and five document scanners. Initial storage for document images is eight gigabytes of magnetic disk storage. Only 25 of the stations were Freestyle stations. The system was so costly, even in the context of a Wang Integrated Imaging System, that Freestyle was only affordable for highly specialized or very senior staff. Apart from the USC installation, which was unusual in that the system was deployed at clerical level, Freestyle was sold as a C-level tool for C-level executives to communicate with other C-level executives. This immediately shrank the potential marketplace from the mass market in which the system would have been effective. Decline and fall Wang Labs was only one of a large number of New England-based computer companies that faltered in the late 1980s and 1990s, marking the end of the Massachusetts Miracle. For instance, the struggling Digital Equipment Corporation also significantly downsized in the 1990s and was acquired by Compaq. A common view within the PC community is that Wang Labs failed because it specialized in computers designed specifically for word processing and did not foresee (and was unable to compete against) general-purpose personal computers with word processing software in the 1980s. Word processing was not actually the mainstay of Wang's business by the time desktop computers began to gain in popularity. Although Wang manufactured desktops, its main business by the 1980s was its VS line of minicomputer and "midframe" systems. The market for these minicomputers was ultimately conquered by enhanced microcomputers like the Apple Macintosh and the "Wintel" PC on one end, and Sun, IBM and Hewlett-Packard servers on the other. An Wang's insistence that his son, Fred Wang, succeed him contributed to the company's failure. Fred Wang was a business school graduate, "but by almost any definition", wrote Charles C. Kenney, "unsuited for the job in which his father had placed him". His assignment, first as head of research and development, then as president of the company, led to resignations by key R&D and business personnel. Amid declining revenues, John F. Cunningham, an 18-year employee of the firm, resigned as president and COO of Wang Labs to become chairman and chief executive of Computer Consoles Inc. Cunningham resigned due to disagreement with An Wang on how to pull the company out of the slump, as well as being upset that Fred Wang was positioned as the successor, an instance of nepotism. One turning point occurred when Fred Wang was head of R&D. On October 4, 1983, Wang Laboratories announced fourteen major hardware and software products and promised dates of delivery. The announcement was well received, but even at the time there were warning signs. According to Datamation, Wang announced "everything but the kitchen sink. And if you could attach the kitchen sink to a personal computer they would announce that too." Very few of the products were close to completion, and many of them had not even been started. All were delivered late and some were never delivered at all. In retrospect this was referred to as the "vaporware announcement", and it hurt the credibility of Fred Wang and Wang Laboratories. In 1986, Fred Wang, then 36 years old, was installed as president of Wang Laboratories. However, the company's fortunes continued to decline.
Unlike most computer companies, which funded their growth by issuing stock, An Wang had used debt to avoid further dilution of family control of the company. By August 1989, that debt was causing conflicts with the company's creditors. On August 4, 1989, An Wang fired his son. Richard W. Miller, who had been with the company since 1988, replaced him as president of Wang Laboratories. Miller announced in December 1989 that the company would start to embrace established software standards rather than use traditional proprietary designs. An Wang died in March 1990, and Miller took on the additional posts of chairman and CEO. The company underwent massive restructuring, and in August 1990 it eliminated its bank debt, but it still ended the year with a record net loss. In November 1990, Wang announced its first personal computers running Unix. Previously, Wang's presence in the UNIX and open systems markets had been modest. In 1987, Wang developed a new typesetting system in conjunction with Arlington, Massachusetts-based Texet Corp. The system used Xerox printers and UNIX workstations from Sun, but the product vanished before coming to market, partially because few Wang employees could use or support UNIX. UNIX did run on the VS: Interactive Systems first ported IN/ix (its IBM 360 version of System V UNIX) to run in a VSOS virtual machine circa 1985, and Wang engineers then completed the port so that it ran "native" on the VS hardware soon thereafter. Performance, however, was always sub-par, as UNIX was never a good fit for the inherently batch-mode nature of the VS hardware and the line-at-a-time processing approach taken by the VS workstations; indeed, the workstation code had to be largely rewritten to bundle up each keystroke into a frame sent back to the host when running UNIX, so that "tty"-style processing could be implemented. PACE, which offered a data dictionary, excellent referential integrity, and speedy application development, was in the process of being ported to UNIX under the name OPEN Pace. A client-server RDBMS built on the original product's ideology, OPEN Pace was demonstrated at the North American PACE User Group conferences in both Boston and Chicago. OPEN Pace, along with a new Windows-based word processor called UpWord (at the time considered a strong contender to retake Wang's original market leadership from Microsoft), was touted as the company's new direction. However, after a marketing study suggested that large capital investments would be required for the products to compete viably with Microsoft, both were simply abandoned. Ira Magaziner, who was brought in by Miller in 1990, proposed to take Wang out of the manufacture of computers altogether and to go big into imaging software instead. In March 1991, the company introduced its Office 2000 marketing strategy, focusing on office productivity. In June 1991, Wang started reselling IBM computers, in exchange for IBM investing in Wang stock. Wang's hardware strategy of reselling IBM RS/6000s also included further pursuit of UNIX software. In August 1991, Wang won a suit against NEC and Toshiba claiming violation of Wang's patents on single in-line memory modules (SIMMs). The company still recorded a net loss for the 1991 fiscal year. Wang Laboratories filed for bankruptcy protection on August 18, 1992, at a time when the company's attempted transition from proprietary to open systems was deemed by some analysts as "too little and too late." Final years Wang Labs emerged from bankruptcy on September 20, 1993.
As part of its bankruptcy reorganization, the company's iconic headquarters, Wang Towers in Lowell, was sold at auction. The complex, which originally cost $60 million to build and housed 4,500 workers in over a million square feet (100,000 m2) of office space, was sold in 1994 for $525,000. The renovated complex, now known as Cross Point, was subsequently sold in 1998 to a joint venture of Yale Properties and Blackstone Real Estate Advisors for a price reported to be over $100 million. The company emerged from bankruptcy with $200 million in hand and embarked on a course of acquisition and self-reinvention, eschewing its former role as an innovative designer and manufacturer of computers and related systems. Later in the 1990s, under the guidance of then-CEO Joe Tucci and with the acquisition of Olivetti's Olsy division, the company changed its name to Wang Global. By then Wang had settled on "network services" as its chosen business. The most advanced VS system, the VS18000 Model 950, capable of supporting over 1,000 users, was released in 1999, and smaller models based on the same CPU chip, the VS6760 and the VS6780, were released in 2000. These were the last VS-based hardware systems. Kodak acquired the Wang Software arm in 1997, strengthening its position in the then-booming document imaging and workflow market. In 1999, Wang Global, by then back up to $3.5 billion in annual revenues, was acquired by Getronics of the Netherlands, a $1.5 billion network services company active only in parts of Europe and Australia. Joe Tucci departed Wang after the acquisition, and Wang Labs then became Getronics North America. In 2005, Getronics announced the New VS (VSGX), a product designed to seamlessly run the VS operating system and all VS software on Intel 80x86 and IBM POWER machines under Linux or Unix, using a hardware abstraction layer. The product was a joint commercial effort of Getronics and TransVirtual Systems, the developer of the Wang VS virtualization technology that makes the New VS possible. VS software can be run under the New VS without program or data conversion. The New VS consists of very specifically configured mainstream PC or PowerPC server hardware running virtualization software. It is interoperable with SCSI-based Wang VS tape and disk drives, which provide a means of restoring VS files from standard backup tapes or copying VS disk drives. Wang networking and clustering are supported over TCP/IP. In 2007, Getronics operations worldwide were divided and sold to companies in the respective local geographies. Dutch telecommunications operator KPN acquired Getronics in North America and some parts of Europe. In July 2008, Getronics North America (by then an arm of KPN) announced the ending of support for the legacy VS line as existing contracts expired, and that TransVirtual Systems would be the exclusive reseller of the New VS platform. Shortly after, in August 2008, KPN sold Getronics North America to CompuCom of Dallas, Texas. The Wang VS product line, not actively marketed since the 1993 bankruptcy and a tiny portion of the Getronics business, survived in use into the 21st century; by 2006 about 1,000 to 2,000 systems remained in service worldwide. In 2014, CompuCom announced that all support for legacy VS systems would cease at the end of 2014, while support for New VS systems would continue through TransVirtual Systems.
See also Wang International Standard Code for Information Interchange Notes and references External links TransVirtual Systems website, TransVirtual being the suppliers of New VS, a Linux-based hardware abstraction that runs VS OS Small WANG museum, showing early products such as the 700 and 2200 series History, pictures, and user manuals for the Wang 1200 Information about the Wang 2200 The Wang LOCI-2 at the Old Calculator Web Museum Wang 452 calculator with photos and operating and programming instructions Wang 2200 emulator Wang VS emulator Wang VS news and forums Wang Laboratories, Inc. Records at Baker Library Historical Collections, Harvard Business School Defunct computer companies based in Massachusetts Defunct computer hardware companies Electronic calculator companies Minicomputers Programmable calculators Manufacturing companies based in Massachusetts Computer companies established in 1951 Manufacturing companies established in 1951 Technology companies established in 1951 Manufacturing companies disestablished in 1997 1951 establishments in Massachusetts 1997 disestablishments in Massachusetts Companies based in Lowell, Massachusetts American companies established in 1951 Wang Laboratories
15089290
https://en.wikipedia.org/wiki/Web%20testing
Web testing
Web testing is software testing that focuses on web applications. Complete testing of a web-based system before going live can help address issues before the system is revealed to the public. Issues may include the security of the web application, the basic functionality of the site, its accessibility to users with disabilities as well as fully able users, its ability to adapt to the multitude of desktops, devices, and operating systems, as well as readiness for expected traffic and numbers of users and the ability to survive a massive spike in user traffic, both of which are related to load testing.
Web application performance tool
A web application performance tool (WAPT) is used to test web applications and web-related interfaces. These tools are used for performance, load and stress testing of web applications, web sites, web APIs, web servers and other web interfaces. A WAPT simulates virtual users that repeat either recorded URLs or a specified URL, and allows the tester to specify the number of times or iterations that the virtual users will have to repeat the recorded URLs. This makes the tool useful for checking for bottlenecks and performance leakage in the website or web application being tested.
A WAPT faces various challenges during testing and should be able to conduct tests for:
Browser compatibility
Operating system compatibility
Windows application compatibility where required
A WAPT allows a user to specify how virtual users are involved in the testing environment, i.e. an increasing, constant or periodic user load. Increasing the user load step by step is called RAMP, where the number of virtual users is increased from 0 to hundreds. A constant user load maintains the specified user load at all times. A periodic user load increases and decreases the user load from time to time (see the illustrative sketch at the end of this article).
Web security testing
Web security testing tells us whether a web-based application's requirements are met when it is subjected to malicious input data. There is a web application security testing plug-in collection for Firefox.
See also
List of web testing tools
Software performance testing
Software testing
Web server benchmarking
References
Further reading
Hung Nguyen, Bob Johnson, Michael Hackett: Testing Applications on the Web (2nd Edition): Test Planning for Mobile and Internet-Based Systems.
James A. Whittaker: How to Break Web Software: Functional and Security Testing of Web Applications and Web Services, Addison-Wesley Professional, February 2, 2006.
Lydia Ash: The Web Testing Companion: The Insider's Guide to Efficient and Effective Tests, Wiley, May 2, 2003.
S. Sampath, R. Bryce, Gokulanand Viswanath, Vani Kandimalla, A. Gunes Koru: Prioritizing User-Session-Based Test Cases for Web Applications Testing. Proceedings of the International Conference on Software Testing, Verification, and Validation (ICST), Lillehammer, Norway, April 2008.
Cyntrica Eaton and Atif M. Memon: "An Empirical Approach to Testing Web Applications Across Diverse Client Platform Configurations". International Journal on Web Engineering and Technology (IJWET), Special Issue on Empirical Studies in Web Engineering, vol. 3, no. 3, 2007, pp. 227–253, Inderscience Publishers.
Software testing
Web development
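The RAMP pattern described in the article above can be illustrated with a minimal, hypothetical sketch. It is not taken from any particular WAPT product; the target URL, user counts, iteration counts and timings are placeholder assumptions, and real tools record and replay full user sessions rather than a single URL.

```python
# Minimal ramp-up load test sketch (illustrative only, not a production WAPT).
# Assumptions: TARGET_URL, user counts, iterations and delays are placeholders.
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
ITERATIONS_PER_USER = 10                # how often each virtual user repeats the URL
RAMP_STEPS = 5                          # add virtual users in 5 steps (the "RAMP")
USERS_PER_STEP = 10                     # 10 new virtual users per step
STEP_DELAY_SECONDS = 2.0                # pause between ramp steps

results = []                            # (status_code or None, elapsed_seconds)
results_lock = threading.Lock()

def virtual_user():
    """One virtual user repeatedly requesting the recorded URL."""
    for _ in range(ITERATIONS_PER_USER):
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                status = resp.status
        except Exception:
            status = None               # record failures as well
        with results_lock:
            results.append((status, time.time() - start))

threads = []
for step in range(RAMP_STEPS):          # increase the user load step by step
    for _ in range(USERS_PER_STEP):
        t = threading.Thread(target=virtual_user)
        t.start()
        threads.append(t)
    time.sleep(STEP_DELAY_SECONDS)

for t in threads:
    t.join()

ok = [r for r in results if r[0] is not None]
print(f"requests: {len(results)}, succeeded: {len(ok)}")
if ok:
    print(f"mean response time: {sum(t for _, t in ok) / len(ok):.3f}s")
```

A constant-load variant would start all virtual users at once, and a periodic-load variant would alternate between starting and pausing batches of users; the measurement loop stays the same.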
35376358
https://en.wikipedia.org/wiki/CronLab
CronLab
CronLab Limited is a privately held limited company which provides information security web filtering software to businesses and the public, either directly or via integration into third-party products. CronLab has offices in London, United Kingdom and Gothenburg, Sweden.
History
CronLab was founded in Gothenburg, Sweden and introduced its first anti-spam hardware appliances in 2009, which were reviewed by CRN Magazine and SC Magazine. In 2010 the company moved its headquarters to London, United Kingdom. It has also released software-as-a-service hosted models of the spam filtering technology, which were awarded a recommendation by Techworld, as well as hosted email archiving solutions. CIDE Group, a toy manufacturer, partnered with CronLab on the development of a children's tablet computer. The tablet, called "Kurio", features CronLab's hosted web-filtering and parental controls technology. CronLab has agreements for distribution in France, Belgium, Switzerland, Germany, Norway, Bulgaria, Moldova, Romania, Ireland and the United Kingdom.
The company's security products aim to protect against email and web threats such as spam, spyware, trojans and viruses, and they also provide an email archiving solution. CronLab's products support multi-tenancy and are marketed to ISPs, MSPs and IT consultants. Their products can all also be white labelled.
References
Software companies established in 2009
Companies based in Gothenburg
Software companies of Sweden
Computer security software companies
Privately held companies of Sweden
Computer security companies
Content-control software
Swedish companies established in 2009
63916316
https://en.wikipedia.org/wiki/Distributed%20agile%20software%20development
Distributed agile software development
Distributed agile software development is a research area that considers the effects of applying the principles of agile software development to a globally distributed development setting, with the goal of overcoming challenges in projects which are geographically distributed. The principles of agile software development provide structures to promote better communication, which is an important factor in successfully working in a distributed setting. However, not having face-to-face interaction takes away one of the core agile principles. This makes distributed agile software development more challenging than agile software development in general.
History / Research
Increasing globalization, aided by the novel capabilities of the Internet, has led software development companies to offshore their development efforts to more economically attractive areas. This phenomenon began in the 1990s, while its strategic importance was realized in the 2000s. Most initial related studies also date from around this time. During this time, the Agile Manifesto was released, which represents an evolution from the prevailing heavyweight approaches to software development. This naturally led to the question, "can distributed software development be agile?". One of the first comprehensive reviews trying to answer this question was done in 2006. By studying three organizations, its authors found that "careful incorporation of agility in distributed software development environments is essential in addressing several challenges to communication, control, and trust across distributed teams". Later, in 2014, a systematic literature review (SLR) was done to identify the main problems in getting agile to work in a distributed fashion. In 2019, a similar SLR was done, and a general review of the subject has also been published. The results of some of this research will be discussed in the section Challenges & Risks. In all, distributed agile software development remains a highly dynamic field. Research continues to be done on all of its facets, indicating that it offers unique opportunities and advantages over more traditional methods, but not without imposing its own challenges and risks.
Opportunities
In a distributed environment, one might have difficulties in keeping track of everyone's workload and contribution towards the deliverable. Through the adoption of agile principles and practices, visibility becomes clearer, as there are multiple iterations in which issues or criticalities can be spotted at the early stages of the project. Continuous integration of programming code, which is one of the central pieces of agile software development, additionally helps to reduce configuration management issues. The adoption of agile principles appears to positively affect communication between groups, as working in iterations makes it simpler for members to see the short-term objectives. Sprint reviews can be seen as a powerful method to improve external communication, as they help to share information about features and requirements between partners and stakeholders. Agile practices also assist in building trust between the various teams involved in the process by stimulating consistent communication and delivery of software deliverables.
According to a study by Paasivaara, Durasiewicz and Lassenius, software quality and communication are improved, and communication and collaboration are more frequent, as a result of the Scrum approach used in the project. Additionally, the motivation of team members was reported to have increased. Adopting agile practices in a distributed environment has thus proven to be valuable for the quality of the project and its execution. These can be seen as some of the advantages achieved by combining agile with distributed development, although the list is not exhaustive. The main benefits can be listed as follows:
Enhanced inter- and intra-cultural diversity
The distributed environment brings about a sense of a global mindset over a local mindset, where the team can exchange and accept each other's ideas, perceptions, culture, aesthetics, etc. Members from a wide range of cultures get the opportunity to gain and share knowledge with their associates from a different point of view; in this manner, they can bring new ideas to the project by thinking outside the box.
Flexible working plans
Team members can benefit from ample freedom and opportunities in the way they work, with the sole aim being to complete the tasks and hand in the deliverables on time. This also makes way for an expanded commitment to the organization. Employees can balance their professional and personal lives, and hence a work-life balance can be achieved.
Traversing time zones
Teams can span multiple time zones; in this manner, coverage of up to the full 24-hour day can be achieved. This increases productivity, as people are hired from all around the globe. The work is never put on hold, because someone is always around to handle an issue, which ensures that work is carried out around the clock with almost no downtime. As a distributed environment focuses more on productivity and performance, the handing-off of work helps in accomplishing the tasks.
Individuals with disabilities and mobility limitations
As mentioned, the distributed agile environment places more importance on productivity and performance than on presence. This benefits people with disabilities, as they have the freedom to work from an environment that is comfortable for them and still contribute to the deliverable. The same applies when an employee cannot be present in the office to clock in the hours: they can work from home to complete their tasks without affecting the deliverable.
Increased levels of well-being
Working in a distributed agile environment supports the well-being both of individuals and of the company. There is less stress on any one individual to complete the work, as the work is distributed to multiple people across the globe, which supports physical and mental well-being. Also, as multiple people contribute their part and the work goes through a number of iterations, the end quality of the work is enhanced, which is beneficial for the company. Hence, it is a win-win situation for both the company and its employees.
Reduced travel costs
Working in a distributed environment often brings up the need for discussions and meetings about targets, deadlines, work, etc.
However, the adoption of agile principles and practices in a distributed environment helps in reducing travel costs, as it opens up ways to communicate via video conferencing and other feasible options. This reduces the need for physical presence while approximating face-to-face interaction, so meetings can be conducted from any part of the world and made accessible to the others in the team.
Iterative nature of agile
As the work progresses iteratively, regular checks can be done to track the status of the deliverable and whether all members share the same level of understanding. This also makes it easier to identify errors and bugs, which can be corrected at earlier stages as the process goes through multiple iterations. The increased input at each stage of the work results in an improved quality of the deliverable.
Extensive pool of human resources
As the same work is carried out in different parts of the world, the range of abilities of the group increases through access to a wider pool of human resources worldwide. This introduces the need for all of these people to act as one mind, to enable collaboration and decision-making across the different verticals and horizontals of an organization, as well as to communicate with stakeholders and prioritize the deliverable.
Reduced office space
The distributed agile environment embraces remote working, so the need to expand office space to accommodate more employees disappears. Likewise, work-related overheads such as electricity, computers and car-parking lots are of less concern, as employees have the liberty to work from their preferred environment. This helps save a substantial amount of money that would otherwise be spent on overhead expenses.
Iterative improvement with continuous delivery to the client is a central practice in agile software development, and one that directly addresses one of the significant difficulties of offshore development: diminished visibility into project status. Regular meetings allow team leaders, project managers, clients, and customers to keep track of the progress of the project by the measure of working software they have received.
Challenges & Risks
Distributed software development has its own inherent challenges due to spatial, temporal, and socio-cultural differences between distributed teams. Combining it with agile principles and practices, in turn, increases the severity of the risks involved, as the two approaches are in many ways in direct contrast with each other: agile software development was originally designed for co-located teams, as it is based on informal communication and close collaboration, whereas distributed development requires formal communication, clear standards, set guidelines and a rigid structure. This section describes the risks and challenges involved in distributed agile software development that result from these compatibility issues.
Challenges
As a result of the incompatibility one faces when combining agile principles and practices with a distributed setting, some of the challenges which can arise are as follows:
Documentation
Offshore organizations favor plan-driven design, where detailed requirements are sent offshore to be constructed. This conflicts with the common practice of agile teams, who give documentation a lower priority.
The result of this situation is that misunderstandings are a lot more likely to arise.
Pair programming
Pair programming, where two programmers work side by side on a particular problem, is a common agile practice. It has been shown to yield better products in less time while keeping the programmers content in the process. Because of the distance between teams, this is a lot harder to achieve.
Different time zones
Depending on the time zone of each distributed team, it can be challenging to arrange meetings at times when both teams are available. The situation can easily arise in which one team is available for a meeting and the other is not. This is especially a problem if an immediate task involves components of the program which are tightly coupled; in such a case, one team would not be able to proceed without the feedback of the other.
Teaching
In a distributed setting, the downside of not being able to practice close communication is felt most with inexperienced developers who need to go through a training phase. Training employees who are not co-located is challenging; differences in background and culture make it difficult to bring these inexperienced team members up to speed. Because of this, alternative ways of teaching need to be considered.
Distribution of work
With regard to the distribution of work, the architecture should not come to reflect the team's geographical distribution through work being divided up by location. It is better to distribute tasks relating to a single user story across the whole team, thinking in terms of the stories, not the components. Over-specialization by geographical location and/or component is a sign that a team is dealing badly with the communication challenges posed to distributed teams. This over-specialization has the unintended consequence of changing the product to suit the development, not the customer's requirements.
Risks
A study done in 2013 tried to consolidate the literature on risk management in distributed agile development. A more comprehensive study tried to categorize the risk factors for distributed agile projects, utilizing both the research literature and real-world experience from thirteen IT organizations. For the sake of brevity, the full list of 45 risk factors with corresponding management techniques is omitted; instead, a brief summary of the main categories and overall management techniques is given.
Software Development Life Cycle
This category comprises the risk factors related to various activities of software development, such as customer specification of requirements and planning, modeling, construction and deployment of the software application. Many of the risk factors in this category stem from ineffective knowledge sharing: unclear objectives or requirements, differences in practices of standard processes, or inconsistencies across designs, to name a few. Many of these risks can be managed by making sure that knowledge is shared effectively. More specifically, make sure that the objective of the project, as well as the requirements, is crystal clear across teams. Automate and standardize as much of the development cycle as possible, so that each team is working with the same technology stack and infrastructure. In short, ensure that everyone is on the same page.
Project Management
Project management relates to tasks such as project planning, project organizing, project staffing, and project directing and control.
This category involves risks due to interactions between development activities and managerial activities. The adoption of distributed agile development will transform the way in which the project needs to be managed. If this is not done carefully, risks might include a lower initial velocity, teams reorganizing every sprint, or a lack of uniformity in multisite teams' capabilities.
Group Awareness
Risk factors related to a lack of group awareness are grouped in this category. Group awareness requires intensive communication, coordination, collaboration, and trust among the group members. Co-located teams achieve this awareness more easily, as it flows more naturally from being in the same physical location. To manage the risks involved with a lack of group awareness, spatially dispersed teams will have to use a more disciplined approach to communication, using the latest technological tools. Practices such as co-locating initially, to set the track for the project, have proved to be effective in managing risk.
External Stakeholder Collaboration
These factors relate to the collaboration with customers, vendors, and third-party developers. Managing these risks boils down to making sure that the coordination and communication with these external actors are done efficiently and clearly.
Technology Setup
Risk factors that arise due to inappropriate tool usage are grouped in this category. For example, a lack of communication structure can be solved by providing the teams with the means to hold video conference calls. Besides that, choosing the right tools to use during a project is important. This can vary across projects, teams and use cases, so an analysis of the tools to use is recommended beforehand.
Tools and best practices
Communication
One of the most important factors in overcoming the challenges faced in distributed agile software development is to improve communication. This means minimizing the time it takes to set up and tear down a communication session and favoring video conferencing over voice conferencing where available. Face-to-face contact opportunities with the whole team should be encouraged in order to help build rapport. It is beneficial to do this at the start, to set out a plan to which the team can adhere throughout the project, as well as in the last few iterations before the release of the final deliverable.
Time-zone differences
One option for dealing with the problem of availability for meetings due to time zones is to appoint a representative for the team who serves as an intermediary between the two teams, having formed good rapport with both. Another option is to use nested Scrum with multilevel reporting and multiple daily Scrum meetings. A solution for holding Scrum meetings in teams which cope with time-zone differences is to make a distinction between local team meetings and global Scrum meetings: each team has a local meeting at the start of its day and a global meeting at another time of the day. This is only possible if their working days have overlapping time (see the illustrative sketch below).
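The overlap requirement mentioned above can be checked with a small sketch. The team locations (Vienna and a team in the Asia/Kolkata time zone) and the 9:00 to 17:00 working hours are placeholder assumptions, not part of any Scrum tooling described in this article.

```python
# Sketch: find the daily overlap window between two distributed teams'
# working hours, e.g. to schedule a global Scrum meeting.
# Assumptions: team locations and 9:00-17:00 working hours are placeholders.
from datetime import datetime
from zoneinfo import ZoneInfo

def working_window(date, tz_name, start_hour=9, end_hour=17):
    """Return (start, end) of a team's working day as timezone-aware datetimes."""
    tz = ZoneInfo(tz_name)
    start = datetime(date.year, date.month, date.day, start_hour, tzinfo=tz)
    end = datetime(date.year, date.month, date.day, end_hour, tzinfo=tz)
    return start, end

def overlap(window_a, window_b):
    """Return the overlapping interval of two windows, or None if they do not meet."""
    start = max(window_a[0], window_b[0])
    end = min(window_a[1], window_b[1])
    return (start, end) if start < end else None

if __name__ == "__main__":
    day = datetime(2023, 3, 1)
    team_a = working_window(day, "Europe/Vienna")
    team_b = working_window(day, "Asia/Kolkata")
    window = overlap(team_a, team_b)
    utc = ZoneInfo("UTC")
    if window:
        print("Global Scrum meeting possible between",
              window[0].astimezone(utc).strftime("%H:%M"), "and",
              window[1].astimezone(utc).strftime("%H:%M"), "UTC")
    else:
        print("No overlapping working hours; asynchronous hand-off needed.")
```

If the two windows do not overlap at all, the nested-Scrum or intermediary approaches described above become the more practical options.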
Keeping up with agile practices
Due to its distributed nature, a team might veer off of solidly established agile practices. Therefore, there should be someone in the role of coach who keeps the team on track. They should also take it upon themselves to think of alternatives for the distributed work environment using agile practices. To keep every team member informed about the adopted agile approach, it is important to maintain documentation for the project. This improves the group's collaboration in using agile principles and practices in a distributed software development setting. For this, various tools can be used which support the team in maintaining the documentation.
Use of tools
Various tools and platforms can be used to improve communication in a distributed setting. These are even more essential than in a non-distributed setting in order to minimize the virtual distance between the distributed teams.
Communication
There are various tools available to support communication in distributed software development. Asynchronous tools like e-mail, synchronous tools like audio and video conferencing software, and hybrid tools like instant messaging provide team members with the means to have the necessary meetings and communications. Another example is tools that support social networking to create a shared experience between team members across locations.
Project management
To guide the project and make sure that all teams and team members have a clear vision of what work has to be done, project management platforms such as issue management tools should be used.
Development tools
To provide a shared experience, every team member should have access to the same tools for their development. Having the same software configuration management tools linked to the project management tools enables developers to work at the same pace and communicate about the development in a similar way.
Knowledge management
To give every team member access to the same knowledge about the product and the development, tools like wiki software or knowledge bases can be used.
Compatibility with the Agile Manifesto
The values and principles of the Agile Manifesto have been explored for their applicability in a distributed work environment in 12 case studies. The studies followed software companies that applied distributed agile software development in their projects. Among the 12 cases, 10 onshore companies were located in the U.S. and seven offshore companies were located in India. The findings are summarized in the following table:
From this we learn that all case studies emphasized the first value of the Agile Manifesto, which states that individuals and interactions should be valued over processes and tools. The Agile Manifesto prefers working software over comprehensive documentation, without necessarily negating documentation completely; this value is also reflected in the majority of the cases. Only four cases were identified which emphasize the importance of customer collaboration over contract negotiation. As can be seen from the table, the fourth value has been adopted the least of all the values by the software companies: "instead of strictly following the agile development practices as commonly defined, the companies continuously tweak them to fit the evolving needs of their projects". With regard to the agile principles, it is no surprise that face-to-face conversation with the development team was valued by all the studies; this was simulated electronically between the onshore and offshore teams. On whether to be open to changing requirements even late in development, none of the software companies in the study provided details, from which we can assume that it was not considered as important as some of the other principles.
References
External links
Agile Manifesto
Agile Glossary
Agile Patterns
Software project management
Agile software development
1029281
https://en.wikipedia.org/wiki/Monochrom
Monochrom
Monochrom (stylised as monochrom) is an international art-technology-philosophy group, publishing house and film production company. It was founded in 1993, and defines itself as "an unpeculiar mixture of proto-aesthetic fringe work, pop attitude, subcultural science and political activism". Its main office is located at the Museumsquartier in Vienna (at 'Q21'). The group's members are Johannes Grenzfurthner, Evelyn Fürlinger, Harald Homolka-List, Anika Kronberger, Franz Ablinger, Frank Apunkt Schneider, Daniel Fabry, Günther Friesinger and Roland Gratzer. The group is known for working with different media and entertainment formats, although many projects are performative and have a strong focus on a critical and educational narrative. Johannes Grenzfurthner calls this "looking for the best weapon of mass distribution of an idea". Monochrom is openly left-wing and tries to encourage public debate, sometimes using subversive affirmation or over-affirmation as a tactic. The group popularized the concept of "context hacking". On the occasion of Monochrom's 20th birthday in 2013, several high-profile Austrian media outlets paid tribute to the group's pioneering contributions within the field of contemporary art and discourse.
History and philosophy
In the early 1990s, Johannes Grenzfurthner was an active member of several BBS message boards. He used his online connections to create a zine or alternative magazine that dealt with art, technology and subversive cultures, and was influenced by US magazines like Mondo 2000. Grenzfurthner's motivations were to react to the emerging conservatism in the cyber-cultures of the early 1990s and to combine his political background in the Austrian punk and antifa movement with discussion of new technologies and the cultures they create. Franz Ablinger joined Grenzfurthner and they became the publication's core team. The first issue was released in 1993. Over the years the publication featured many interviews and essays, for example by Bruce Sterling, HR Giger, Richard Kadrey, Arthur Kroker, Negativland, Kathy Acker, Michael Marrak, DJ Spooky, Geert Lovink, Lars Gustafsson, Tony Serra, Friedrich Kittler, Jörg Buttgereit, Eric Drexler, Terry Pratchett, Jack Sargeant and Bob Black, in its specific experimental layout style. In 1995 the group decided to cover new artistic practices and started experimenting with different media: performances, computer games, robots, puppet theater, musicals, short films, pranks, conferences, online activism. As the group put it: In 1995 we decided that we didn't want to constrain ourselves to just one media format (the "fanzine"). We knew that we wanted to create statements, create viral information. So a quest for the best "Weapon of Mass Distribution" started, a search for the best transportation mode for a certain politics of philosophical ideas. This was the Cambrian Explosion of monochrom. We wanted to experiment, try stuff, find new forms of telling our stories. But, to be clear, it was (and still is) not about keeping the pace, of staying up-to-date, or (even worse) staying "fresh". The emergence of new media (and therefore artistic) formats is certainly interesting. But etching information into copper plates is just as exciting. We think that the perpetual return of 'the new', to cite Walter Benjamin, is nothing to write home about - except perhaps for the slave-drivers in the fashion industry. We've never been interested in the new just in itself, but in the accidental occurrence.
In the moment where things don't tally, where productive confusion arises. All the other core team members joined between 1995 and 2006. Grenzfurthner is the group's artistic director. He defines Monochrom's artistic and activist approach as 'context hacking' or 'urban hacking'. The group monochrom refers to its working method as "Context Hacking," thus referencing the hacker culture, which propagates a creative and emancipatory approach to the technologies of the digital age, and in this way turns against the continuation into the digital age of centuries-old technological enslavement perpetrated through knowledge and hierarchies of experts. ... Context hacking transfers the hackers' objectives and methods to the network of social relationships in which artistic production occurs, and upon which it is dependent. ... One of context hackers' central ambitions is to bring the factions of counterculture, which have veered off along widely diverging trajectories, back together again.
Community and network
From its very foundation, the group defined itself as a movement, culture (referring to Iain M. Banks's sci-fi series) and "open field of experimentation". Monochrom supported and supports various artists, activists, researchers and communities with an online publishing platform and a print publishing service (edition mono), and organizes in-person meetings, screenings, radio shows, debate circles, conferences and online platforms. It is fundamental for the group's core members to combine artistic and educational endeavors with community work (cf. social practice). Some collaborations have been rather short-lived (for example the publication of a 1993 fringe science paper by Jakob Segal, projects with the Billboard Liberation Front and Ubermorgen, or the administration of Dorkbot Vienna), while some have been going on for many years and decades (for example with Michael Marrak, Cory Doctorow, Jon Lebkowsky, Fritz Ostermayer, V. Vale, eSeL, Scott Beale/Laughing Squid, Machine Project, Emmanuel Goldstein, Jason Scott, Jonathan Mann, Jasmin Hagendorfer and the Porn Film Festival Vienna, Michael Zeltner, Anouk Wipprecht, and VSL Lindabrunn). Monochrom supports initiatives like the Radius Festival, Play:Vienna, the Buckminster Fuller Institute Austria, RE/Search, the Semantic Web Company and the Vienna hackerspace Metalab. For a couple of years, Monochrom ran the DIY project "Hackbus" in cooperation with David "Daddy D" Dempsey (of FM4). Since 2007, Monochrom has been the European correspondent for Boing Boing Video.
Art residency
Monochrom offers a collaborative art residency in Vienna. Since 2003 the group has invited and created projects with artists, researchers, and activists like Suhrkamp's Johannes Ullmaier, pop theorist Stefan Tiron, performance artist Angela Dorrer, DIY blogger (and later entrepreneur) Bre Pettis, photographer and activist Audrey Penven, digital artist Eddie Codel, sex work activist Maggie Mayhem, glitch artist Phil Stearns, illustrator Josh Ellingson, DIY artist Ryan Finnigan, digital artist Jane Tingley, digital rights activist Jacob Appelbaum, sex tech expert Kyle Machulis, hacker Nick Farr, filmmakers Sophia Cacciola and Michael J. Epstein, writer Jack Sargeant, and others. All former resident artists are considered ambassadors. Johannes Grenzfurthner sees Monochrom as a community and a social incubator of critical and subversive thinkers. An example is Bre Pettis of MakerBot Industries, who was inspired to create 3D printers during his art residency with Monochrom in 2007.
Pettis wanted to create a robot that could print shot glasses for Monochrom's cocktail-robot event Roboexotica and did research about the RepRap project at Metalab. Shot glasses remained a theme throughout the history of MakerBot.
Main projects (in chronological order)
Mackerel Fiddlers (1996-)
A radical anti-representation/anti-recording music movement that partially refers to Hakim Bey's Temporary Autonomous Zone. To quote the manifesto: "We set value on developing a form of viral resistance by systematic infiltration of symphonic orchestras. A New Year's Concert of the Vienna Philharmonic Orchestra (1984) could have been transformed by at least one Mackerel Fiddler and Austria's image would have been ruined worldwide. ... These days, self-production and 'embarrassment sells' have become the golden rules of media, be it radio, TV, or telegraph. Thus it is not only legitimate to be ashamed of ones activity as a Mackerel Fiddler, it is also thankworthy. Failure is beautiful! Disgrace is sunshine!"
Schubumkehr (1995–1996)
A manifesto propagating 'internet demarketing' and dealing with negative aspects of early net culture. Paz Sastre reprinted and contextualized the manifesto in the 2021 publication "Manifiestos sobre el arte y la red 1990-1999", published by Exit Media.
Der Exot (1997–2012)
A telerobot remotely controlled via a web interface/chat forum. The robot was supported and operated by a large community. The robot's basic structure was built out of remodeled Lego bricks and equipped with a fisheye lens camera. The project was the first telerobot/tele-community project of its kind. It was presented at art festivals and technology presentations. Monochrom relaunched the project in 2011, calling it a "resurrection", and emphasizing the social aspect of the project: "A mobile robot with a mounted camera that can be controlled via web interface. But that's tricky. If too many people try to control the robot at the same time it is counter-productive. ... Der Exot is the anti-crowd source robot. The users have to discuss and cooperate via a chat interface to communicate where they want to go, what corners they want to explore, what to crush." The project was presented at the 'Robotville' Exhibition of the Science Museum London.
Wir kaufen Seelen (We Buy Souls) (1998)
A "spirituo-capitalist" booth where project members tried to buy the souls of passers-by for US$5 per soul. A total of fifteen were purchased and registered. These souls are still being offered for sale to third parties with a power of disposal.
Minus 24x (2001)
Monochrom's pro-failure/pro-error/pro-inability manifesto, hailing the "Luddites of inability". Quote: "Turning an object against the use inscribed in it (as sociolect of the world of things) means probing its possibilities. ... The information age is an age of permanently getting stuck. Greater and greater speed is demanded. New software, new hardware, new structures, new cultural techniques. Lifelong learning? Yes. But the company can't fire the secretary every six months, just because she can't cope with the new version of Excel. They can count their keystrokes, measure their productivity ... but! They will never be able to sanction their inability! Because that is imminent."
Scrotum gegen votum (Scrotum for a vote) (2000-)
A form of political commentary for "about fifty percent of the population". Masculine individuals (whether in sex or gender) are seated nude in a special chair attached to a flatbed scanner.
The scans then may or may not be sent to various politicians. The project won the NEBAPOMIC 2000 (Network-based Political Minimalism Counteraction Award) in the category of small country with political tendencies towards the conservative right.
Soviet Unterzoegersdorf (1999-)
The project is built around the fake history of the "last existing appanage republic of the USSR", Soviet Unterzoegersdorf, created to discuss topics such as the theoretical problems of historiography, the concept of the "socialist utopia" and the political struggles of postwar Europe. The theoretical concept was transformed into an improvisational theatre/performance/LARP that lasted two days. In 2005 Monochrom presented the first part of a computer game trilogy, "Soviet Unterzoegersdorf - The Adventure Game" (using AGS). To Monochrom it was clear that the adventure game, an almost extinct form of computer game, would provide the perfect media platform to communicate the idea of "Soviet Unterzoegersdorf". Edge chose the game as its 'internet game of the month' for November 2005. In March 2009 Monochrom presented 'Soviet Unterzoegersdorf: Sector II'. The game features special guest appearances by Cory Doctorow, Bruce Sterling, Jello Biafra, Jason Scott, Bre Pettis and MC Frontalot. In 2011 Monochrom and the Austrian production company Golden Girls Filmproduktion announced that they were working on the feature film Sierra Zulu, which would deal with Soviet Unterzoegersdorf. In 2012 Monochrom presented the 16-minute short film "Earthmoving", a prequel to the feature film Sierra Zulu featuring actors Jeff Ricketts, Martin Auer, Lynsey Thurgar, Adrienne Ferguson and Alexander Fennon.
Georg Paul Thomann (2002–2005)
Monochrom was chosen to represent the Republic of Austria at the São Paulo Art Biennial, São Paulo (Brazil) in 2002. However, the political climate in Austria (at that time, the center-right People's Party had recently formed a coalition with Jörg Haider's radical-right Austrian Freedom Party) gave the left-wing art group concerns about acting as wholehearted representatives of their nation. Monochrom dealt with the conundrum by creating the persona of Georg P. Thomann, an irascible, controversial (and completely fictitious) artist of longstanding fame and renown. Through this ironic mechanism - even the catalogue included the biography of the non-existent artist - the group solved with pure fiction the philosophical and bureaucratic dilemma attached to the system of representation presented to them by the Biennial. An interesting story related to the Thomann project took place once the São Paulo Art Biennial was underway. The artist Chien-Chi Chang was invited as the representative of Taiwan, but the country's name was removed by the administration from his cube overnight and replaced by the label "Museum of Fine Arts, Taipei". As the members of Monochrom discovered, China had threatened to withdraw from the Biennial (and create massive diplomatic problems) if the organizers of the Biennial were thought to be challenging the "One-China policy". Chang's open letter remained unanswered. Under the guise of Thomann, Monochrom invited artists from several countries to show their solidarity with Chang by taking the adhesive letters from their countries' name tags and giving them to Chang so that he could remount "Taiwan" outside his room.
Monochrom wanted to show that artists do not necessarily have to internalize the fragmentation and isolation imposed by the rat-race of art markets and exhibitions as society-controlling imperatives. Several Asian newspapers reported on the performance. One Taiwanese newspaper headlined: "Austrian artist Georg Paul Thomann saves 'Taiwan'". In 2005 Monochrom released press information stating that "Austrian artist and writer Prof. Georg Paul Thomann died in a tragic accident at the tender age of 60". On 29 July 2005 they staged his funeral in Hall in Tirol. Thomann's gravesite remains in Hall, and his tombstone shows an engraved URL of the Thomann project page. Georg Paul Thomann is featured in RE/Search's "Pranks 2" book.
452 x 157 cm^2 global durability (2002-)
A project created together with Patrick Hoenninger. Milk packages are collected in many countries. The standardized format of the Tetra Pak offers a worldwide frame for creative variation, which becomes visible on the 9.5 by 16.5 cm front of the packaging. According to the group, the relation to pop art exists not only in an aesthetic but also in a social dimension, reminiscent of Walter Benjamin's "The Work of Art in the Age of Its Technological Reproducibility".
Roboexotica (2002-)
An annual festival where scientists, researchers, computer geeks and artists from all over the world build cocktail robots and discuss technological innovation, futurology and science fiction. Roboexotica is also an ironic attempt to criticize techno-triumphalism and to dissect technological hypes. In 2002 Monochrom teamed up with Shifz in the organization of the events. Roboexotica has been featured on Slashdot, Wired News, Reuters, the New York Times and outlets like Boing Boing and New Scientist.
The Absent Quintessence (2002)
Feature films were drastically cut and thereby wrenched out of their genres (hardcore porn, splatter, eastern/kung fu, zombie, etc.). These genre films – all of which are characterized by a certain anonymity and a mass-produced look – were stripped of their "essential" scenes (for example, all sex scenes in pornography, all fight scenes in the kung fu films). Thus, the material was reduced to a bare-bones plot that had actually been conceived only as filler, but its aesthetics and stereotypical narrative patterns now make it easy to contextualize. The project tried to analyze these "re-released" shorts and to filter out interesting subtexts.
Towers of Hanoi (2002)
Members of the group entered a bank and exchanged 50 euros for dollars, then back again to euros - and so on - until the money was gone. Afterward, the group calculated how many times one would have to exchange the global amount of cash (20 trillion euros) from euros to dollars and back until it vanished completely. It was calculated that if this process were repeated a total of 849 times with the global amount of cash, 18 cents would remain (see the worked example below).
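The arithmetic behind the figures quoted above can be illustrated with a short, hypothetical sketch. The per-exchange loss rate is not stated in the original description; it is inferred here from the quoted start amount, number of exchanges, and remainder, under the assumption that each exchange loses a constant fraction of the money.

```python
# Worked example for the "Towers of Hanoi" calculation: from the figures cited
# above (20 trillion euros, 849 exchanges, 18 cents left), derive the implied
# loss per exchange and verify it by compounding.
# Assumption: every exchange keeps the same constant fraction of the money.
START_EUROS = 20e12      # global amount of cash assumed by the group
REMAINING_EUROS = 0.18   # 18 cents
EXCHANGES = 849

# If each exchange keeps a fraction r, then START * r**EXCHANGES = REMAINING.
retention = (REMAINING_EUROS / START_EUROS) ** (1.0 / EXCHANGES)
print(f"implied loss per exchange: {(1 - retention) * 100:.2f}%")  # roughly 3.7%

# Verify by compounding the loss exchange by exchange.
amount = START_EUROS
for _ in range(EXCHANGES):
    amount *= retention
print(f"amount left after {EXCHANGES} exchanges: {amount:.2f} euros")
```

The implied loss of a few percent per exchange is consistent with typical currency-exchange spreads, which is what makes the repeated round trips eat the money so quickly.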
Blattoptera (2003–2005)
Artists were invited to design a gallery space for the group's tribe of South American cockroaches. Each month a different international artist, or arts group, was invited to design an environment in which the cockroaches were placed, to act as audience for, and as aesthetic judges of, the work.
Brandmarker (2003-)
How well do people remember the logos of large corporations that sell consumer goods? An attempt to evaluate the actual power of commercial brands by making people draw famous logos from memory.
Eignblunzn (2003)
Members of the group prepared blood sausage out of their own blood and ate it ('auto blood sausage'). The performance was accompanied by political essays about the 'autocannibalistic' tendencies of the global economy. The event can also be interpreted as a critical statement about art, art history, and the art market (Viennese Actionism).
Instant Blitz Copy Fight (2004-)
People from all over the world are asked to take flash pictures of copyright warnings in movie theaters. Monochrom (in cooperation with Cory Doctorow) collects and exhibits those pictures as a copyleft/Free Culture statement.
The Flower Currency (2005)
A project to explore a value exchange system, created and owned by children, to enable artists to collaborate on the creation of interdisciplinary artworks.
Udo 77 (2004)
A musical about Udo Proksch, a criminal figure in recent Austrian history. Born to a poor family, he rose to become the darling of Austrian high society before landing in jail on a life sentence for sinking a ship and its crew in order to cash in on insurance of nonexistent goods. His perfectly tuned network of sponsors, friends, and political functionaries could not hush up the scandal, and many of his associates joined him in his fall from grace.
1 Baud (2005)
Monochrom held workshops in San Francisco to teach people semaphore communication techniques ('International Code of Signals'). After a few days set aside for study and practice, they started a citywide performance to send messages through town at a speed of 1 baud. ("1 Baud" was part of the "Experience The Experience" tour.)
Brick Of Coke (2005)
Monochrom created a 'Brick Of Coke': they put twenty gallons of Coca-Cola into a pot and boiled it down for a week until the residue left behind could be molded into a brick. The performance and talk dealt with the sugar industry and other multinational corporation policies, and with Coca-Cola as a symbol of corporate power. ("Brick Of Coke" was part of the "Experience The Experience" tour.)
Buried Alive/Six Feet Under Club (2005-)
In 2010 Monochrom created the "Six Feet Under Club". Couples could volunteer to be buried together in a casket beneath the ground to perform sexual acts. In a press release they explained that the space they occupy is "extremely private and intimate". The coffin "is a reminder of the social norm of exclusive pair bonding 'till death do us part'." However, this intimate scene was corrupted by the presence of a night vision webcam which projected the scene onto an outside wall. The scenario kept the intimacy of a sexual moment intact while moving the private act into public space. Monochrom's performance can be seen as an absurd parody of pornographic cinema or an examination of the high value placed on sexual privacy. "Six Feet Under Club" performances took place in San Francisco in 2010 and Vienna in 2013 and 2014. People in Los Angeles, San Francisco, Vancouver and Toronto had the opportunity to be buried alive in a real coffin for fifteen minutes. As a framework program, Monochrom members held lectures about the history of the science of determining death and the medical cultural history of being "buried alive". ("Buried Alive" was launched as part of the "Experience The Experience" tour in 2005, but was extended beyond the tour and became a permanent coffin installation at VSL Lindabrunn in Lower Austria in 2013.)
Catapulting Wireless Devices (2005)
The catapult is one of the oldest machines in the history of technology. Monochrom created an ironic statement about progress.
The group built a small medieval trebuchet and used a couple of issues of the techno-utopist magazine Wired as a counterweight to catapult wireless devices (e.g. cell phones or PDAs) over the greatest possible distance. ("Catapulting Wireless Devices" was part of the "Experience The Experience" tour.)
Farewell to Overhead (2005)
The group created a melancholic electro pop song about the "dead medium" overhead projector and adolescence/socialisation.
Growing Money (2005)
To quote Monochrom's press statement: "Money is frozen desire. Thus it governs the world. Money is used for all forms of trade, from daily shopping at the supermarket to trafficking in human beings and drugs. In the course of all these transactions, our money wears out quickly, especially the smaller banknotes that are changing hands constantly. ... Money is dirty, and thus it is a living entity. This is something we take literally: money is an ideal environment for microscopic organisms and bacteria. We want to make your money grow. In a potent nutrient fluid under heat lamps we want to get as much life as we can out of your dollar bills." ("Growing Money" was part of the "Experience The Experience" tour.)
Illegal Space Race (2005)
Monochrom placed the planets true to scale (with the sun, 4 meters in diameter, at Machine Gallery, Alvarado Street, near Echo Park) throughout the Los Angeles cityscape. Then they conducted an 'illegal space car race' through the solar system. ("Illegal Space Race" was part of the "Experience The Experience" tour.)
Magnetism Party (2005)
In the form of a staged college party, Monochrom used a couple of heavy-duty neodymium magnets to erase all the electromagnetic storage media they could find. Monochrom stated that the Magnetism Party was an attempt to actively come to terms with one aspect of the information society that is almost completely ignored by our epistemological machinery: forgetting. The slogan was "Delete is just another word for nothing left to lose". ("Magnetism Party" was part of the "Experience The Experience" tour.)
Arad-II (2005)
The members of Monochrom staged a fake deadly virus outbreak, as a public theatre performance, at 'Art Basel Miami Beach', one of the biggest art fairs in North America. Monochrom dealt with the networking/business aspect of the art market, the post-September 11 hysteria about biological warfare, and the media coverage of avian influenza (bird flu). Press release quote: "In mid-November 2005, Günther Friesinger visited the Ulaangom Biennial in the Republic of Mongolia. ... He directly departed to Miami to attend some meetings at Art Basel Miami Beach. ... There is acute evidence that he is carrying a rare, but highly contagious sub-form of the Arad-II Virus (family Onoviridae), of which Freiburg virus is also a member. ... Friesinger is walking around the different art fairs in Miami Beach and is spreading the pathogen. The situation is critical. A worldwide outbreak – due to the many visitors from all over the world – is imminent. ... We want to find all the people that Günther Friesinger small talked to and handshaked with. We want to retrieve and destroy the business cards he has spread. Additionally, we must take him into custody and in the event of his death cremation is absolutely necessary."
Café King Soccer (Café König Fußball) (2006)
In June 2006, Monochrom created the art installation 'Café King Soccer' at NGBK Gallery in Berlin. The installation deals with the soccer corruption case at whose centre stands referee Robert Hoyzer.
Monochrom reflect on the fact that soccer has at all times mirrored the dialectics between the culture of subjectivity of the working class and the assertion of objectivity of middle-class culture. The former is represented by the collectives that meet in the game, the latter by the referee, an exemplary civil subject conducting the game by acting as its objective opponent. The Hoyzer case violated this agreement. In it, Hoyzer is – especially in the run-up to the 2006 FIFA World Cup in Germany – also a tragic character, because he acted out his inner self-contradiction as an exemplary civil subject in a publicly effective way. At the same time, the Hoyzer case is itself an integral part of the game – merely because of his exemplary immolation as a scapegoat, which seems to correspond exactly to his role on the field - and a conditio sine qua non of its perpetuation.
Campaign for the Abolition Of Personal Pronouns (2006)
Monochrom propagates the creation of gender-neutral personal pronouns. In an activist way, the group states that there is a relationship between the structure of language and the way people think and act (see Constructivism).
Waiting for GOTO (2006)
The reference point Monochrom chose for their theatre project 'Waiting for GOTO' (Volkstheater Wien) is the theatre classic 'Waiting for Godot', which is projected into the future by modernistic references to science fiction. In 'Waiting for GOTO' we meet 'ideological delinquents' in a distant interstellar future who are separated from their bodies and locked up in the bodies of two female students, who are able to earn their college fees and make ends meet thanks to this job. The play presents us with Monochrom's portrayal of everyday work in a neo-liberal society, double consciousness, the endurance of incorporated contradictions by fragmented subjects, the exploitation of the living body, and self-alienation.
Lord Jim Lodge powered by monochrom (2006-)
The Lord Jim Lodge was founded during the 1980s by the artists Jörg Schlick, Martin Kippenberger, Albert Oehlen and Wolfgang Bauer. Every member was obliged to use the lodge logo and/or the "Sun Breasts Hammer" symbol and the slogan "No one helps nobody" in his work. The group's declared goal was to make the logo "more well known than that of Coca-Cola". Thanks to the international recognition received by the oeuvres of Kippenberger, Oehlen, and Schlick, the Lord Jim Lodge has already attained a relatively high degree of notoriety. Still, the logo's dissemination has remained – despite the international reputation that these artists have achieved – within the framework of the art system and its peripheral importance. As an intentional addition to works of visual art, it was in the end limited by their material form of existence. In March 2006 it was announced that Monochrom had assumed ownership of all trademark and usage rights of the artist Jörg Schlick's Lord Jim Lodge. Monochrom took part in a contest by 'Coca-Cola Light' ('Coca-Cola Light Art Edition 2006'). To quote Monochrom: "This puts us in a position to set in motion long overdue synergy effects between Coca-Cola and the Lord Jim Lodge. The only possibility for realizing the challenge formulated in the lodge logo is to use habitat in the merchandise world as a vehicle of transmission for guiding the message through that world's channels of distribution and into public consciousness. ... Thus we would like to use the prize as a trial run for such a form of cooperation/competition. Coca-Cola and Lord Jim Lodge – together at last!
The symbolic-economic capital of the Lord Jim Lodge and the economic-symbolic capital of Coca-Cola will be brought together, paving the way for a better future. For a world of radical beauty and exclusive bottles in small editions! In the end, we are all individuals – at least as long as nobody comes along and proves the contrary." Monochrom won the prize. The logo of "Lord Jim Lodge powered by monochrom" was printed on 50,000 Coca-Cola Light bottles.
Taugshow (2006-)
Monochrom produces a regular TV talk show for a Viennese community TV station and puts it online on its page under a Creative Commons license. The name Taugshow refers to the Viennese slang term 'taugen' (to dig something, to adore something). Quote: "Our guests are geeks, heretics, and other coevals. Taugshow is a tour-de-farce, condensed into the well-known cultural technique of a prime time TV show." Guests are people like underground publisher V. Vale, sex activist and author Violet Blue, Chaos Computer Club spokesman Andy Müller-Maguhn, RepRap designer Vik Olliver, fashion researcher Adia Martin, media activist Eddie Codel, blog researcher Klaus Schönberger, computer crime lawyer Jennifer Granick, bondage instructor J. D. Lenzen, science researcher Karin Harrasser, blogger Regine Debatty, IT expert Emmanuel Goldstein, DEF CON founder Jeff Moss, Tim Pritlove and blogger/writer Cory Doctorow.
Arse Elektronika (2007-)
Monochrom organizes a series of conferences about sex and technology. The first conference was held in October 2007 in San Francisco; it dealt with pr0nnovation (the history of pornography and technological innovation) and featured speakers such as Mark Dery, Violet Blue and Eon McKai. Arse Elektronika 2008 dealt with sex and science fiction ('Do Androids Sleep With Electric Sheep?') and was held in San Francisco in October 2008, featuring speakers like Rudy Rucker and Constance Penley. The general theme of Arse Elektronika 2009 was 'Of Intercourse and Intracourse' (genetics, biotechnology, wetware, body modifications), and it took place in October 2009 in San Francisco, with featured guests R. U. Sirius, Annalee Newitz and Allen Stein. In 2010 the first Arse Elektronika exhibition was presented in the city of Hong Kong.
The theme of Arse Elektronika 2010 in San Francisco was "Space Racy" (Sex, Tech and Spaces).
The theme of Arse Elektronika 2011 in San Francisco was "Screw the System" (Sex, Tech, Class and Culture).
The theme of Arse Elektronika 2012 in San Francisco was "4PLAY: Gamifuckation and Its Discontents" (Sex, Tech and Games).
The theme of Arse Elektronika 2013 in San Francisco was "id/entity" (Sex, Tech and Identity).
The theme of Arse Elektronika 2014 in San Francisco was "trans*.*" (Sex, Tech and Transformations).
The theme of Arse Elektronika 2015 in San Francisco was "Shoot Your Workload" (Sex, Tech and Work).
Arse Elektronika compilations
There are currently four compilations or proceedings of essays presented at, or relevant to, the themes of Arse Elektronika.
Sculpture Mobs (2008-)
Monochrom promotes a concept called Sculpture Mobs. At the 2008 Maker Faire in San Mateo, California, Monochrom trained attendees to erect public sculptures in a simulated Wal-Mart parking lot in just 5 minutes before "security" was called. Quote: "No one is safe from public sculptures, those endless atrocities! All of them are labeled 'art in public space'. Unchallenging hunks of aesthetic metal in business parks, roundabouts, in shopping malls! It is time to create DIY public art! Get your hammers! Get your welding equipment!"
Monochrom teamed up with the Billboard Liberation Front to create an illegal political public sculpture called "The Great Firewall of China" at the Google Campus in Mountain View, California. Monochrom created additional Sculpture Mobs and Sculpture Mob Training Camps in various cities: Graz (2008), Ljubljana (2008) and Barcelona (2010). Der Streichelnazi / Nazi Petting Zoo (2008): The group staged a public "Nazi petting" or "hugging" on a heavily frequented Viennese shopping street. The piece is a political and ironic statement about Austria's Nazi past and how Austria deals with it. Quote from their video documentation: "In 1938 Austria joined the Third Reich. Millions cheered Hitler and in the referendum, 99.75% said 'yes' to 'Greater Germany'. But after World War II, many Austrians sought comfort in the idea of Austria as "the Nazis' first victim". Factions of Austrian society tried for a long time to advance the view that it was only annexation at the point of a bayonet(te). But it's time to embrace history. It's time to remember the feel-good days of 1938. It's time to let our real feelings out! It's time to hug the Nazi, Austria! Finally!" Carefully Selected Moments (2008): Monochrom publishes a Best-Of CD featuring re-recorded versions of some of the group's favorite songs. Hacking the Spaces (2009-): Monochrom publishes a much-debated pamphlet by Johannes Grenzfurthner, who (in collaboration with Frank Apunkt Schneider) offers a critical study of hackerspaces. Tracing the historical context of hackerspaces, which originally grew out of the counterculture movement and were conceived as niches against bourgeois society, Grenzfurthner and Schneider argue that hackerspaces today function quite differently than they initially did. Back in the seventies, these open spaces were imagined as tiny worlds to escape from capitalism or authoritarian regimes. The idea was based much more on micro-political tactics than on hippie spirit: instead of trying to transform the old world into a new one, people started to build up tiny new worlds inside the old one. They set up open spaces where people could come together and try out different forms of living, working, maybe loving and whatever people do when they want to do something. In a capitalist society, alternative concepts always end up being commodified, such as "indie music" becoming mainstream. According to Grenzfurthner and Schneider, the same happened to hackerspaces when "the political approach faded away on en route into tiny geeky workshop paradises". Kiki and Bubu (2008-): Invited by Boing Boing's Xeni Jardin, Monochrom created a sock puppet show focussing on the characters of Kiki and Bubu, an orange-red bird and a brown bear. Kiki is the well-read one, while Bubu is portrayed as a little slow, but often surprises with deep insights. Kiki and Bubu are fond of the ideology of Neo-Marxism and the series is based on the idea of explaining leftist terms (like commodification, neoliberalism, alienation, planned economy) in an entertaining yet surreal way. The first installments were short films (2008), but Monochrom also created live puppet shows (2008, 2010, 2014) and a 50-minute feature video called Kiki and Bubu: Rated R Us (2011): "Kiki and Bubu have some feelings, so they sign up for an online dating site. When the People of China want to become their friends, they are excited. However, sending the People of China a video of themselves proves to be difficult: Their content gets flagged as inappropriate and taken down from YouTube.
On the long quest for knowledge that follows, Kiki and Bubu learn all about Internet censorship. And love." Antidev - God Hates Game Designers (2012): Monochrom member Johannes Grenzfurthner staged a fundamentalist Christian protest, holding signs like "God Hates Game Designers" and "Thou Shalt Not Monetize Thy Neighbor" at the Game Developers Conference 2012 in San Francisco, attacking the focus on marketing and monetization. The images went viral and provoked much controversy. Die Gstettensaga: The Rise of Echsenfriedl (2014): A sci-fi fantasy comedy about the post-apocalyptic world after the so-called "Google Wars". The movie was produced for Austria's TV station ORF and deals with the politics and hype behind media technology and nerd culture. The film was directed by Johannes Grenzfurthner. Hedonistika (2014-): Monochrom's "smorgastic Festival for Gastrobots, Culinatronics, Advanced Snackhacks and Nutritional Mayhem", an event dedicated to approaches in gastronomical robots, cooking machines, molecular cuisine and experimental food performances. The first installment was presented in Montréal at the 'Biennale internationale d'art numérique'. The second installment was presented in Holon, near Tel Aviv, at 'Print Screen Festival'. monochrom's ISS (2011): Monochrom creates an improv reality sitcom for theater stages portraying the first year of operation of the International Space Station. The show depicts day-to-day working life in outer space and asks questions about work under the special conditions (and impairments) of a space station, about coming to terms with weightlessness and the dictatorship of the functional. The production features actor Jeff Ricketts. Creative Class Escort Service (Kreativlaufhaus) (2015): Monochrom offered an escort service for creative workers (like writers, sculptors, curators, art theorists, filmmakers, designers). The basic concept was to run a Laufhaus, a specific form of German/Austrian brothel where sex workers rent a room and offer services. Monochrom also transported creative workers to clients off-site. Monochrom wanted to start a public debate about working conditions in the art world and in sex work. Occupy East India Trading Company (2015): At the annual TEDxVienna conference, members of Monochrom entered the Volkstheater in 17th-century costumes, carrying a sign and pamphlets protesting the East India Company. The group wanted to address the history of global corporations, especially at a corporate-sponsored event such as TEDx: "The East India Company – the first great multinational corporation, and the first to run amok – was the ultimate model for many of today's joint-stock corporations." Shingal, where are you? (2016): Set in an abandoned coal mine at the Turkish border, the documentary Shingal, where are you? weaves together the stories of Yezidi refugees following ISIS attacks and the kidnapping of more than 3,000 women and children. The story is told in raw cinematography from the parallel perspective of three generations of Yezidis. The film was directed by Angelos Rallis and Hans Ulrich Goessl. Monochrom functioned as the co-production company. Traceroute (2016): A documentary about the history, politics, and impact of nerd culture. It was written and directed by Johannes Grenzfurthner. Anima Ex Machina (2020): The novel Anima Ex Machina is a good example of Monochrom's history as a publisher. The German science fiction and fantasy writer Michael Marrak was invited to Vienna as an artist-in-residence in September and October 2020.
He created the sci-fi novel Anima Ex Machina, which was then published by Monochrom. The novel was nominated for the Kurd Laßwitz Award, possibly the best-known science fiction award from Germany. Glossary of Broken Dreams (2018): An essayistic feature film by Johannes Grenzfurthner that tries to present an overview of political concepts such as freedom, privacy, identity, resistance, etc. The film features performances by Amber Benson, Max Grodenchik, Jason Scott, Maschek, Jeff Ricketts and others. Masking Threshold (2021): A horror drama film directed by Johannes Grenzfurthner, written by Grenzfurthner and Samantha Lienhard. The synopsis: "Conducting a series of experiments in his makeshift home-lab, a skeptical IT worker tries to cure his harrowing hearing impairment. But where will his research lead him? Masking Threshold combines a chamber play, a scientific procedural, an unpacking video and a DIY YouTube channel while suggesting endless vistas of existential pain and decay." Je Suis Auto (to be released 2022): Chase Masterson and Johannes Grenzfurthner portray the main characters in Monochrom's science fiction comedy film "Je Suis Auto", directed by Grenzfurthner and Juliana Neuhuber. Masterson voices the title character "Auto", a self-driving taxi, and Grenzfurthner plays Herbie Fuchsel, an unemployed nerd critical of artificial intelligence. The film is a farcical comedy that deals with issues such as artificial intelligence, politics of labor, and tech culture. Publications (incomplete) Als die Welt noch unterging (Frank Apunkt Schneider, 2007) Das Wesen der Tonalität (Othmar Steinbauer; edited by Guenther Friesinger, Helmut Neumann, Ursula Petrik, Dominik Sedivy, 2006) Die Leiden der Neuen Musik (Ursula Petrik; edited by Guenther Friesinger, Helmut Neumann, Ursula Petrik, Dominik Sedivy, 2009) Do Androids Sleep with Electric Sheep? (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry and Thomas Ballhausen, 2009) Leutezeichnungen (edited together with Elffriede, 2003) monochrom / magazine and yearbook series. Published in 1993, 1994, 1995, 1996, 1997, 1998, 2000, 2004, 2006, 2007, 2010 Of Intercourse and Intracourse – Sexuality, Biomodification and the Techno-Social Sphere (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry, 2011) pr0nnovation? Pornography and Technological Innovation (edited by Johannes Grenzfurthner, Günther Friesinger and Daniel Fabry, 2008) Quo Vadis, Logo?! (edited by Günther Friesinger and Johannes Grenzfurthner, 2006) Roboexotica (edited by Günther Friesinger, Magnus Wurzer, Johannes Grenzfurthner, Franz Ablinger and Chris Veigl, 2008) Screw the System (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry) Sonne Busen Hammer 16 (edited by Johannes Grenzfurthner, Günther Friesinger and Franz Ablinger, 2006) Sonne Busen Hammer 17 (edited by Johannes Grenzfurthner, Günther Friesinger and Franz Ablinger, 2007) Spektakel - Kunst - Gesellschaft (edited by Stephan Grigat, Johannes Grenzfurthner and Günther Friesinger, 2006) Stadt der Klage (Michael Marrak, 1997) Subvert Subversion. Politischer Widerstand als kulturelle Praxis (edited by Johannes Grenzfurthner, Günther Friesinger, 2020) The Wonderful World of Absence (edited by Günther Friesinger, Johannes Grenzfurthner, Daniel Fabry, 2011) VIPA (edited by Orhan Kipcak, 2007) Weg der Engel (Michael Marrak and Agus Chuadar, 1998) Who shot Immanence?
(edited together with Thomas Edlinger and Fritz Ostermayer, 2002) Filmography (feature-length films) Je Suis Auto (to be released 2022) – directed by Johannes Grenzfurthner and Juliana Neuhuber Masking Threshold (2021) – directed by Johannes Grenzfurthner Glossary of Broken Dreams (2018) – directed by Johannes Grenzfurthner Traceroute (2016) – directed by Johannes Grenzfurthner Die Gstettensaga: The Rise of Echsenfriedl (2014) – directed by Johannes Grenzfurthner Kiki and Bubu: Rated R Us (2011) - directed by Johannes Grenzfurthner Exhibitions and festivals (examples) Arad-II, Art Basel Miami Beach / USA (2005) Die waren früher auch mal besser: monochrom (1993-2013) / Austria (2013) Dilettanten. Forum Stadtpark, Graz / Austria - Steiermärkisches Landesmuseum Joanneum, Graz / Austria - Steirischer Herbst 2002, Graz / Austria (2002) Junge Szene 98. Vereinigung Bildender Künstler, Wiener Secession, Vienna / Austria (1998) MEDIA FORUM/Moscow International Film Festival / Moscow / Russia (2008) Neoist World Congress. Kunsthalle Exnergasse, Vienna / Austria (1997) Roboexotica (Festival for Cocktail Robotics, Vienna, 1999-) Robotronika. Public Netbase t0 Media~Space!, Institut für neue Kulturtechnologien, Vienna / Austria (1998) Seriell Produziertes. Diagonale (Austrian Film Festival), Graz / Austria (2000) techno(sexual) bodies / videotage / Hong Kong / China (2010) The Influencers, Center for Contemporary Culture / Barcelona / Spain (2008) The Thomann Project. São Paulo Art Biennial, São Paulo / Brazil (2002) Unterspiel, Contemporary Art Gallery, Vancouver / Canada (2005) world-information.org. Museum of Contemporary Art, Brussels / Belgium (2000) and Belgrad / Serbia (2003) Awards (examples) 1st prize of 'E55' (Vienna/Berlin) 1999. aniMotion Award Honorary Mention (Sibiu, Romania) for Interactive Tales for Monochrom's "Soviet Unterzoegersdorf/Sector 1/The Adventure Game" (2007). Art Award of FWF Austrian Science Fund (2013). Coca-Cola Light Art Edition (2006). Media Forum/Moscow International Film Festival, Jury Special Mention (Moscow, Russia) for Monochrom's "The Void's Foaming Ebb", (2008). Nestroy Theatre Prize (Vienna) 2005 (together with 'The Great Television Swindle' by maschek and 'Freundschaft' by Steinhauer and Henning) for Udo 77 (2004). Official Honoree for NetArt and Personal Blog/Culture in The 2009 Webby Awards, International Academy of Digital Arts and Sciences (2009). Videomedeja Awards Special Mention, Novi Sad, Serbia for Net/Software Category for Monochrom's "Soviet Unterzoegersdorf/Sector 1/The Adventure Game" (2006). See also Notes References External links Detailed Interview with Johannes Grenzfurthner of Monochrom (by Marc Da Costa/Furtherfield): part 1, part 2 and part 3 Monochrom in English Monochrom in German (different blogs, information, etc. 
available than on the English site) TEDx talk by Johannes Grenzfurthner on Monochrom, art and subversion Organizations established in 1993 1993 establishments in Austria Cultural organisations based in Austria Austrian artist groups and collectives Austrian activists Austrian bloggers Austrian contemporary artists Postmodern artists Culture jamming Hoaxes in Austria Hoaxes in Germany Hoaxes in the United States Political art Politics and technology Pranksters Underground publishers Anti-consumerist groups Anti-corporate activism Internet-based activism Robotic art Performance artist collectives Culture jamming techniques Impostors Hacker culture Nerd culture Net.artists Film production companies of Austria Webby Award winners Creative Commons-licensed authors Artist residencies
25082318
https://en.wikipedia.org/wiki/SUSE%20Studio
SUSE Studio
SUSE Studio was an online Linux software creation tool by SUSE. Users could develop their own Linux distro, software appliance, or virtual appliance, mainly choosing which applications and packages they wanted on their "custom" Linux and how it looked. Users could choose between openSUSE and SUSE Linux Enterprise as a base and pick from a variety of pre-configured images including jeOS, minimal server, GNOME, and KDE desktops. The SUSE Studio service was shut down on February 15, 2018. Image formats and booting options SUSE Studio supported the following image formats and booting options: Live CD/DVD / ISO image VMDK (VMware disk image) VirtualBox VHD (Virtual Hard Disk) [Hard] disk image USB image Xen KVM (Kernel-based Virtual Machine) OVF (Open Virtualization Format) AMI (Amazon Machine Image) for Amazon Elastic Compute Cloud Preboot Execution Environment (onsite version only) SUSE Studio in use On SUSE Gallery one could find a catalog of the images created in SUSE Studio. These were available for download as well as for immediate deployment on the supported cloud platforms. Upon logging in, cloning and test-driving images was possible. A number of projects, both related to the openSUSE Project and independent, used SUSE Gallery as the preferred way to get virtual and disk images to their users. SUSE Studio powered the fan-made Chrome OS, which was a semi-stripped-down system loaded with the developers' version of Google Chrome, Google web application links, and OpenOffice.org (not to be confused with Google's "Chrome OS"). Supported desktop environments included (but were not limited to): JeOS Server Qt only LXQt GTK+ only GNOME Cinnamon MATE XFCE Enlightenment Qt and GTK+ integrated KDE Shutdown On November 9, 2017, Novell announced that they would be shutting down SUSE Studio Online on February 15, 2018. The service was replaced by SUSE Studio Express, the result of merging SUSE Studio Online with the Open Build Service. See also Open Build Service (formerly openSUSE Build Service) openSUSE Project SUSE Linux SUSE Studio ImageWriter YaST ZYpp References Linux emulation software Software companies disestablished in 2018 SUSE Linux
274816
https://en.wikipedia.org/wiki/Distributed%20control%20system
Distributed control system
A distributed control system (DCS) is a computerised control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is no central operator supervisory control. This is in contrast to systems that use centralized controllers; either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localising control functions near the process plant, with remote monitoring and supervision. Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of SCADA and DCS systems is very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not geographically remote. Structure The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates the effect of a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collect information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets Level 4 is the production scheduling level. Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment is part of an integrated system from a single manufacturer. Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling take place. Technical points The processor nodes and operator graphical displays are connected over proprietary or industry standard networks, and network reliability is increased by dual redundancy cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant. The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals e.g. 4–20 mA DC current loop or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch. DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant.
A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. The DCS sends the setpoint required by the process to the controller which instructs a valve to operate so that the process reaches and stays at the desired setpoint (see 4–20 mA schematic for example). Large oil refineries and chemical plants have several thousand I/O points and employ very large DCSs. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others. DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system. Although 4–20 mA has been the main field signalling standard, modern DCS systems can also support fieldbus digital protocols, such as Foundation Fieldbus, Profibus, HART, Modbus, PC Link, etc. Modern DCSs also support neural networks and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimize a certain H-infinity or H2 control criterion. Typical applications Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented. Processes where a DCS might be used include: Chemical plants Petrochemical (oil) and refineries Pulp and paper mills (see also: quality control system QCS) Boiler controls and power plant systems Nuclear power plants Environmental control systems Water management systems Water treatment plants Sewage treatment plants Food and food processing Agrochemical and fertilizer Metal and mines Automobile manufacturing Metallurgical process plants Pharmaceutical manufacturing Sugar refining plants Agriculture applications History Evolution of process control operations Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However, this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-manned central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to the plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant, and communicate with the graphic display in the control room or rooms. The distributed control system was born. The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems.
It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to the plant to reduce cabling runs, and provided high level overviews of plant status and production levels. Origins Early minicomputers had been used in the control of industrial processes since the beginning of the 1960s. The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain. The first industrial control computer system was built in 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company. In 1975, both Honeywell and Japanese electrical engineering firm Yokogawa introduced their own independently produced DCSs – the TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978 Valmet introduced their own DCS system called Damatic (latest generation named Valmet DNA). In 1980, Bailey (now part of ABB) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, and Fischer & Porter Company (now also part of ABB) introduced DCI-4000 (DCI stands for Distributed Control Instrumentation). The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP etc.) and connected to proprietary Input/Output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. Availability of a fully functional graphical user interface was still some way away. Development Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "Table Driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus today. Midac Systems, of Sydney, Australia, developed an object-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers, each running two Z80s. The system was installed at the University of Melbourne.
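As a rough illustration of the function-block concept described above, the following Python sketch shows a minimal PID-style block executed on a fixed scan interval, reading a process value and writing a clamped output as an output module would. This is only a sketch: real DCS function blocks are configured in vendor engineering tools and executed by proprietary runtimes, and all names and tuning values here are hypothetical.

```python
# Minimal sketch of a PID-style "function block" of the kind a DCS runtime
# executes cyclically. Illustrative only; names and tunings are hypothetical.

class PIDFunctionBlock:
    def __init__(self, kp, ki, kd, setpoint, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint              # desired process value (e.g. flow)
        self.out_min, self.out_max = out_min, out_max
        self._integral = 0.0
        self._prev_error = 0.0

    def execute(self, process_value, dt):
        """One scan: read the input, compute the output (e.g. valve position %)."""
        error = self.setpoint - process_value
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        output = (self.kp * error
                  + self.ki * self._integral
                  + self.kd * derivative)
        # Clamp to the actuator's range, as the output module would.
        return max(self.out_min, min(self.out_max, output))


# Example scan loop: a controller node executing the block every 100 ms
# against a crude simulated process, just to show the calling pattern.
if __name__ == "__main__":
    block = PIDFunctionBlock(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0)
    flow = 30.0                               # simulated flow-meter reading
    for _ in range(20):
        valve = block.execute(flow, dt=0.1)
        flow += (valve - 40.0) * 0.05         # stand-in for the real process
    print(f"final flow = {flow:.1f}, valve = {valve:.1f}%")
```

In a real system the scan loop, I/O reads and writes, and alarm limits would all be handled by the controller firmware; only the block configuration (gains, setpoint, ranges) is supplied by the engineer.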
Digital communication between distributed controllers, workstations and other computing elements (peer to peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN. The network-centric era of the 1980s In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a Direct Digital Control DCS was completed by the Australian business Midac in 1981–82 using R-Tec Australian-designed hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks, and could run up to 20,000 concurrent control objects. It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise, even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP/IP had been developed with openness in mind – TCP/IP under US Department of Defense sponsorship – which was precisely the issue the process industries were looking to resolve. As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, which introduced the I/A Series system in 1987. The application-centric era of the 1990s The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment. While the realm of the real-time operating system (RTOS) for control applications remains dominated by real-time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows. The introduction of Microsoft technologies at the desktop and server layers resulted in the development of technologies such as OLE for process control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation and the world, with most DCS HMIs supporting Internet connectivity. The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation instead of 4–20 milliamp analog communications.
The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidated around EtherNet/IP, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell with the PlantPAx System, Honeywell with Experion & Plantscape SCADA systems, ABB with System 800xA, Emerson Process Management with the DeltaV control system, Siemens with the SPPA-T3000 or Simatic PCS 7, Forbes Marshall with the Microcon+ control system and Azbil Corporation with the Harmonas-DEO system. Fieldbus techniques have been used to integrate machine, drive, quality and condition monitoring applications into one DCS with the Valmet DNA system. The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers. The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware. As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also steadily decreasing prices for the end users, who were also becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost-effective offerings, even as the stability, scalability, reliability and functionality of these emerging systems were still improving. The traditional DCS suppliers introduced new-generation DCS systems based on the latest communication and IEC standards, resulting in a trend of combining the traditional concepts and functionalities of PLCs and DCSs into a single all-in-one solution—named the "Process Automation System" (PAS). The gaps among the various systems remain in areas such as database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While the cost ratio is expected to stay relatively the same (the more powerful the systems are, the more expensive they will be), in reality the automation business often operates strategically, case by case. The next evolutionary step is called Collaborative Process Automation Systems. To compound the issue, suppliers were also realizing that the hardware market was becoming saturated. The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life.
Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster-growing, regions such as China, Latin America, and Eastern Europe. Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), real-time performance management (RPM) tools, alarm management, and many others. Obtaining the true value from these applications, however, often requires a considerable service content, which the suppliers also provide. Modern systems (2010 onwards) The latest developments in DCS include the following new technologies: Wireless systems and protocols Remote transmission, logging and data historian Mobile interfaces and controls Embedded web-servers Increasingly, and ironically, DCSs are becoming centralised at the plant level, with the ability to log into remote equipment. This enables operators to control both at the enterprise level (macro) and at the equipment level (micro), both within and outside the plant, because the importance of physical location drops with interconnectivity, primarily thanks to wireless and remote access. The more wireless protocols are developed and refined, the more they are included in DCSs. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether DCSs will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen. Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches and possible damage to plant and process is now very real. See also Annunciator panel Building automation EPICS Industrial control system Industrial safety system Safety instrumented system (SIS) TANGO References Control engineering Applications of distributed computing Industrial automation
67672068
https://en.wikipedia.org/wiki/1927%E2%80%9328%20UCLA%20Grizzlies%20men%27s%20ice%20hockey%20season
1927–28 UCLA Grizzlies men's ice hockey season
The 1927–28 UCLA Grizzlies men's ice hockey season was the 2nd season of play for the program. Season Fresh off their first official season, and an undefeated one no less, UCLA was slated to play its first game against USC. The game was delayed, however, and the team played a familiar opponent to begin their season, Occidental. The team lost its first game in two years and followed that up with a second consecutive defeat, putting them in a poor position right at the start. With so few games on their schedule, UCLA could not afford to lose any more, and the team recovered with a hard-fought win in game 3. The two early losses, and USC's dominance over Occidental, meant that UCLA could only hope for a tie with the Trojans for the crown, but that could only happen if the Grizzlies won each of their remaining 3 games. In the pivotal game with Southern California, the Trojans got out to a 2-goal lead but UCLA was able to tie the game and, for a time, make it look like they had a chance. Unfortunately, the attack from USC was too strong and the Grizzlies surrendered 6 goals, handing the city championship over to the Trojans. After a second loss to Occidental, the team's lineup was changed with Tafe dropping back to defense (where he had played the year before) and Al Johnson jumping up to the forward position. The game against USC was delayed when the Trojans got the date wrong and did not show up. The Grizzlies could have claimed a win by forfeit but the team refused to do so. It was rescheduled for a week later but eventually cancelled. Artemus Lane served as team manager. Note: UCLA used the same colors as UC Berkeley until 1949. Roster Standings Schedule and Results Regular Season References UCLA Bruins men's ice hockey seasons UCLA Grizzlies UCLA Grizzlies 1928 in sports in California 1927 in sports in California
17145280
https://en.wikipedia.org/wiki/1985%20Rose%20Bowl
1985 Rose Bowl
The 1985 Rose Bowl Game was a postseason college football bowl game between the USC Trojans of the Pacific-10 Conference and the Ohio State Buckeyes of the Big Ten Conference, held on New Year's Day in the Rose Bowl in Pasadena, California. The game resulted in a 20–17 victory for the underdog USC Trojans. Scoring summary First quarter Ohio State - Spangler 21-yard field goal, 12:08 - OSU 3, USC 0 USC - Jordan 51-yard field goal, 6:52 - OSU 3, USC 3 USC - Cormier 3-yard pass from Green (Jordan kick) 1:54 - USC 10, OSU 3 Second quarter USC - Ware 19-yard pass from Green (Jordan kick) - USC 17, OSU 3 Ohio State - Spangler 46-yard field goal, 0:00 - USC 17, OSU 6 Third quarter Ohio State - Spangler 52-yard field goal, 6:37 - USC 17, OSU 9 USC - Jordan 51-yard field goal, 4:05 - USC 20, OSU 9 Fourth quarter Ohio State - Carter 18-yard pass from Tomczak (Tomczak run), 7:34 - USC 20, OSU 17 Tim Green and Jack Del Rio earned the Rose Bowl MVP awards. Highlights With its fourth straight win, the Pac-10 took its first lead in the series with the Big Ten, 20–19 The Pac-10 has won ten of eleven, and fourteen of the last sixteen meetings Using a swarming defense, USC caused three Mike Tomczak interceptions Tomczak threw 37 passes in the game Ohio State kicker Rich Spangler scored a Rose Bowl-record 52-yard field goal USC kicker Steve Jordan had two 51-yard field goals Big contributions from Trojans Tim Green, Timmie Ware, Joe Cormier, Fred Crutcher, and Kennedy Pola Statistics {| class=wikitable style="text-align:center" ! Statistics !! USC !! Ohio State |- |align=left|First Downs ||16 ||19 |- |align=left|Rushes–Yards||42–133||34–113 |- |align=left|Passing Yards||128 ||290 |- |align=left|Passes||13–25–0||24–37–3 |- |align=left|Total Yards ||261 ||403 |- |align=left|Punts–Average||7–42||4–48 |- |align=left|Fumbles–Lost ||2–1 ||4–1 |- |align=left|Turnovers by ||1 ||4 |- |align=left|Penalties-Yards||4–38 ||4–46 |- |align=left|Time of possession||31:11 ||28:47 |} References Rose Bowl Rose Bowl Game Ohio State Buckeyes football bowl games USC Trojans football bowl games Rose Bowl Rose Bowl
2800885
https://en.wikipedia.org/wiki/OpenDocument%20software
OpenDocument software
This is an overview of software support for the OpenDocument format, an open document file format for saving and exchanging editable office documents. Current support A number of applications support the OASIS Open Document Format for Office Applications; listed alphabetically they include: Text documents (.odt) Word processors AbiWord 2.4+ (import from 2.4.0, export from 2.4.2; used to require separate download and installation of plugins – up to version 2.6.8). Adobe Buzzword beta, a web-based word processor, has limited ODF support owing to its beta status. Atlantis Word Processor 1.6.5+ can import ODT documents. Calligra Words uses ODT as its native file format. Collabora Office Writer for Mobile and Desktop apps uses ODT as its native file format. Collabora Online Writer uses ODT as its native file format. eyeOS Cloud computing operating system with eyeDocs Word Processor has basic support for ODF text documents. EasiWriter (for RISC OS) Version 9.1 of EasiWriter can import/save ODT files on RISC OS. FileApp allows viewing OpenDocument files on iPhone and iPad. FocusWriter, a distraction-free word processor. Google Docs, a web-based word processor and spreadsheet application derived from the application Writely. Gwennel, a WYSIWYG word processor written in assembly language, under 200 KB. IBM Lotus Notes 8.0+ includes an office suite for creating text, spreadsheet and presentation files. IBM Lotus Symphony Viewer allows viewing OpenDocument texts, spreadsheets and presentations on iPad and iPhone. JustSystems Ichitaro (Japanese), read/write support via plug-in from version 2006, full built-in support from 2007. LibreOffice Writer (an OpenOffice.org fork) uses ODT as its native file format. Go-oo, an OpenOffice.org fork which was later merged with LibreOffice (development discontinued). Microsoft Word native support since Office 2007 SP2 (support for previous versions is available through several plugins). Sun ODF Plugin for Microsoft Office. Microsoft OpenXML/ODF Translator Add-in for Office. Currently no ODF support on the Mac OS X version of Microsoft Office. Mobile Office, an office package for Symbian mobile phones. Microsoft WordPad included with Windows 7 has limited support for opening and saving in the ODT format. Nisus Writer Pro 1.2+ for Mac OS X. OnlyOffice online and desktop editors, where both online and offline suites support ODF for opening, editing and exporting. OpenDocument Viewer, a free Android app for reading ODF, released under GPLv3 (also available from F-Droid). OpenOffice Writer – full support from 2.0, import-only in 1.1.5. IBM Lotus Symphony Documents 1.0+ (OpenOffice.org 1.0 derivative; Development discontinued). NeoOffice Writer – full support from 2.0 (OpenOffice.org 2.0.3 derivative), import only in 1.2.2 (OpenOffice.org 1.1.5 derivative). StarOffice 8+ Writer (OpenOffice.org 2.0 derivative; Development discontinued). RomanianOffice, a proprietary word processor based on OpenOffice.org. Open Word Processor allows editing OpenDocument text files (.odt) on iPad. ownCloud Documents, a plugin for ownCloud, allows creation and collaborative editing of ODT files stored in ownCloud. TechWriter (for RISC OS) Version 9.1 of TechWriter can import/save ODT files on RISC OS. TextEdit (in Mac OS X 10.5 Leopard) can read/write ODT format but does not retain all formatting. Bean 1.1.0+, basic word processor with limited ODF support implemented in Mac OS X. TextMaker starting with version 2008. Visioo Writer 0.6.1 (in development) — document viewer, incomplete support.
WordPerfect Office (import-only in X4). Zoho Writer, an online word processor, can read and write ODT format. Other applications Apple Inc.'s Quick Look, the built-in quick preview feature of Mac OS X, supports OpenDocument format files starting with Mac OS X v10.5. Support is limited to basic ODF implementation in Mac OS X. Oxygen XML Editor 9.3+ allows users to extract, validate, edit, transform (using XSLT or XQuery) to other file formats, compare and process the XML data stored in OpenDocument files. Validation uses the latest ODF Documents version 1.1 Relax NG Schemas. IBM WebSphere Portal 6.0.1+ can preview texts from ODT files as HTML documents. IBM Lotus Domino 8.0+ KeyView (10.4.0.0) filter supports ODT, ODS, ODP for viewing files. ODT Viewer freeware viewer and simple ODT to HTML converter for Windows systems. Data management phpMyAdmin 2.9.0+ – database manager, exports to ODT. Text management Dokuwiki — wiki software, exports to ODT with the odt plugin. Drupal ODF Import – a Drupal module allows one to import ODT files into the CMS nodes. eZ publish — content management system, supports import and export of writer documents via extension. Scribus 1.2.2+ — desktop publishing suite, imports ODT. Translation support OmegaT — OmegaT is a free translation memory application written in Java. Translate Toolkit — converts OpenDocument into XLIFF 1.2 for localisation in any XLIFF aware CAT tool. Bibliographic RefWorks – Web-based commercial citation manager, supports uploading ODT files for citation formatting. Spreadsheet documents (.ods) Spreadsheets Calligra Sheets uses ODS as default file format. Collabora Office Calc for Mobile and Desktop apps uses ODS as its native file format. Collabora Online Calc uses ODS as its native file format. EditGrid, a web-based (online) spreadsheet service – full support. FileApp allows viewing OpenDocument files on iPhone and iPad. Gnumeric can both open and save files in this format and plans to continue to support this format in the future. Google Docs, a web-based word processor and spreadsheet application which can read and save OpenDocument files. IBM Lotus Notes 8.0+ includes an office suite for creating text, spreadsheet and presentation files. IBM Lotus Symphony Spreadsheets 1.0+ (OpenOffice.org 1.0 derivative; Development discontinued). IBM Lotus Symphony Viewer allows viewing OpenDocument texts, spreadsheets and presentations on iPad and iPhone. JustSystems JUST Suite 2009 Sanshiro (Japanese). LibreOffice Calc (an OpenOffice.org fork) uses ODS as its native file format. Microsoft Excel has native support for ODF since Excel 2007 Service Pack 2. When writing formulas Excel uses the spreadsheet formula language specified in ISO/IEC 29500 (Office Open XML) which differs from the draft OpenFormula format used in other ODF implementations. Earlier versions of Microsoft Excel support OpenDocument with Sun ODF Plugin for Microsoft Office. Partial support also with Microsoft OpenXML/ODF Translator Add-in for Office. OnlyOffice online and desktop editors, where both online and offline suites support ODF for opening, editing and exporting. OpenOffice Calc – full support from 2.0, import-only in 1.1.5. NeoOffice – native support from 2.0 (OpenOffice.org 2.0.3 derivative), import only in 1.2.2 (OpenOffice.org 1.1.5 derivative). StarOffice 8+ Calc (OpenOffice 2.0 derivative; Development discontinued). WPS Office WPS Spreadsheet support et, ett, xls, xlsx, xlt, xltx, csv, xlsm, xltm, xlsb, ets. 
Zoho Sheet, an online spreadsheet application, can import and export ODS format. Other applications Oxygen XML Editor 9.3+ allows users to extract, validate, edit, transform (using XSLT or XQuery) to other file formats, compare and process the XML data stored in OpenDocument files. Validation uses the latest ODF Documents version 1.1 Relax NG Schemas. IBM WebSphere Portal 6.0.1+ can preview texts from ODS files as HTML documents. odsgenerator v1.4.5+ generates an OpenDocument Format .ods file from a JSON or YAML file. Data management phpMyAdmin 2.9.0+ – database manager, exports to ODS. Knowledge management Knomos 1.0 – Law office management application. EndNote X 1.0.1 – Reference management software. Statistics gretl 1.7.0 – Statistical analysis software (import only). Translation support OmegaT — OmegaT is a free translation memory application written in Java. Presentation documents (.odp) Presentation Calligra Stage uses ODP as default file format. Collabora Office Impress for Mobile and Desktop apps uses ODP as its native file format. Collabora Online Impress uses ODP as its native file format. FileApp allows viewing OpenDocument files on iPhone and iPad. IBM Lotus Notes 8.0+ includes an office suite for creating text, spreadsheet and presentation files. IBM Lotus Symphony Presentations 1.0+ (OpenOffice.org 1.0 derivative; Development discontinued). IBM Lotus Symphony Viewer allows viewing OpenDocument texts, spreadsheets and presentations on iPad and iPhone. JustSystems JUST Suite 2009 Agree (Japanese). LibreOffice Impress uses ODP as its native file format. LibreOffice Online Impress uses ODP as its native file format. Microsoft PowerPoint native support since Office 2007 SP2 (support for previous versions is available through several plugins). OnlyOffice online and desktop editors, where both online and offline suites support ODF for opening, editing and exporting. OpenOffice Impress – native support from 2.0, import-only in 1.1.5. NeoOffice 1.2 Impress (OpenOffice 1.1.5 derivative). NeoOffice 2.0 Impress (OpenOffice 2.0.3 derivative). StarOffice 8 Impress (OpenOffice 2.0 derivative; Development discontinued). WPS Office WPS Presentation supports ppt, pot, pps, dps, dpt, pptx, potx, ppsx, pptm, potm, ppsm, dpss. Zoho Show, an online presentation program, can import/export ODP format files. Other applications Oxygen XML Editor 9.3+ allows users to extract, validate, edit, transform (using XSLT or XQuery) to other file formats, compare and process the XML data stored in OpenDocument files. Validation uses the latest ODF Documents version 1.1 Relax NG Schemas. IBM WebSphere Portal 6.0.1+ can preview texts from ODP files as HTML documents. Database documents (.odb) Database LibreOffice Base (an OpenOffice.org fork) Graphics documents (.odg) Calligra Flow uses ODG as its native file format. Collabora Office Draw for Mobile and Desktop apps uses ODG as its native file format. Collabora Online Draw uses ODG as its native file format. Karbon, vector graphics editor which is part of Calligra Suite. OpenDocument support since 1.5+ (import and export). LibreOffice Draw uses ODG as its native file format. JustSystems JUST Suite 2008+ Hanako (Japanese). OpenOffice Draw – full support from 2.0, import-only in 1.1.5. NeoOffice Draw – full support from 2.0 (OpenOffice.org 2.0.3 derivative), import only in 1.2.2 (OpenOffice.org 1.1.5 derivative). StarOffice 8 Draw (OpenOffice 2.0 derivative; Development discontinued).
Scribus 1.2.2+ (import only) — Desktop publishing application. Inkscape 0.44+ (export only) — vector graphics editor. Other applications IBM WebSphere Portal 6.0.1+ can preview texts from ODG files as HTML documents. Formula documents (.odf) KFormula 1.5+ (full native support). LibreOffice Math uses ODF as its native file format. Collabora Office Math for Desktop apps uses ODF as its native file format. Collabora Online Math uses ODF as its native file format. OpenOffice Math (full support from 2.0). NeoOffice 2.0 Math (OpenOffice 2.0.3 derivative). Search tools Google supports searching in the content of ODT, ODS, and ODP files and also searching for these filetypes. Found files can be viewed directly in a converted HTML view. Beagle, a Linux desktop search engine, indexes and searches multiple file formats, including OpenDocument files. Google Desktop Search has an unofficial OpenDocument plug-in available, supporting ODT, OTT, ODG, OTG, ODP, OTP, ODS, OTS, and ODF OpenDocument formats. The plug-in does not correctly handle Unicode characters. Apple Spotlight (built into OS X 10.4 and later) supports indexed searching of OpenDocument files using a third-party plug-in from the NeoOffice team. Copernic Desktop Search (Windows). Other planned support Ability Office developers declared planned ODF support for the next major version of their office suite. Evermore Integrated Office – EIOffice 2009 will support ODF in the update. As stated on the Evermore Software website: "Work is underway to both read and write to this new format as well as *.pdf and *.odf file formats in the update." The last version of EIOffice 2009 (5.0.1272.101EN.L1) cannot open or save ODF files. Haansoft's Hangul Word Processor will support OpenDocument format documents in its next version for Windows, which is planned for the end of 2009. An extension for Mozilla Firefox has been proposed by a developer named Talin, according to Mozilla hacker Gervase Markham; it has since been further modified by Alex Hudson and was hosted in the official Firefox extension repository. Wikipedia announced that it will use ODF for printing wikis. BlackBerry smartphones are going to support ODF in their embedded office suites, starting mid-2009. The WordPad editor in Windows 7 already includes support for ODF. Programmatic support, filters, converters There are OpenDocument-oriented libraries available for languages such as Java, Python, Ruby, C++ and C#. OpenDoc Society maintains an extensive list of ODF software libraries for OpenDocument Format. OpenDocument packages are ordinary zip files. There is an OpenDocument format which is just a single XML file, but most applications use the package format. Thus, any of the vast number of tools for handling zip files and XML data can be used to handle OpenDocument. Nearly all programming languages have libraries (built-in or available) for processing XML files and zip files (see the example sketch below). Microsoft Microsoft has been offering native support for ODF since Office 2007 Service Pack 2. Microsoft hosted the 8th ODF Plugfest in Brussels in 2012. In October 2005, one year before the Microsoft Office 2007 suite was released, Microsoft declared that there was not sufficient demand from Microsoft customers for international standard OpenDocument format support and that it would therefore not be included in Microsoft Office 2007. This statement was repeated in the following months. As an answer, on 20 October 2005 an online petition was created to demand ODF support from Microsoft. The petition was signed by around 12,000 people.
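To illustrate the programmatic support described above — an ODF package being an ordinary zip archive whose document body is stored in content.xml — the following Python sketch extracts the plain text of the paragraphs in an .odt file using only the standard library. It is a minimal sketch: the file name is hypothetical, error handling is omitted, and the single-XML (non-package) ODF variant would be read with a plain XML parser instead.

```python
# Minimal sketch: read the text content of an OpenDocument text file (.odt)
# using only Python's standard library. The file name is hypothetical.
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def odt_paragraphs(path):
    """Yield the plain text of each paragraph in an ODT package."""
    with zipfile.ZipFile(path) as odt:
        # The document body of an ODF package lives in content.xml.
        root = ET.fromstring(odt.read("content.xml"))
    for para in root.iter(f"{{{TEXT_NS}}}p"):
        # itertext() flattens nested spans, links, etc. into plain text.
        yield "".join(para.itertext())

if __name__ == "__main__":
    for line in odt_paragraphs("example.odt"):
        print(line)
```

The same unzip-and-parse approach works for .ods and .odp packages, since they share the package layout; only the elements found inside content.xml differ.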
In May 2006, an ODF plugin for Microsoft Office was released by the OpenDocument Foundation. Microsoft declared that the company had not worked with the developers of the plug-in. In July 2006 Microsoft announced the creation of the Open XML Translator project—tools to build a technical bridge between the Microsoft Office Open XML Formats and the OpenDocument Format (ODF). This work was started in response to government requests for interoperability with ODF. The goal of the project was not to implement ODF directly in Microsoft Office, but only to create a plugin and external tools. In February 2007, this project released the first version of its ODF plug-in for Microsoft Word. In February 2007 Sun released the initial version of the Sun ODF Plugin for Microsoft Office. Version 1.0 was released in July 2007. Microsoft Office 2007 Service Pack 2 was released on 28 April 2009. It added native support of OpenDocument 1.1 as well as other formats like XPS and PDF. In April 2012, Microsoft announced support for ODF 1.2 in Microsoft Office 2013. Microsoft has financed the creation of an Open XML translator, to enable the conversion of documents between Office Open XML and OpenDocument. The project, hosted on SourceForge, is an effort by several of Microsoft's partners to create a plugin for Microsoft Office that would be freely available under a BSD license. By December 2007, plugins had been released for Microsoft Word, Microsoft Excel and Microsoft PowerPoint. Independent analysis has, however, reported several concerns with these plugins, including lack of support for Office 2007. Third party support: Two ODF plug-ins for Microsoft Office There are currently two third-party plug-ins: Sun Microsystems' ODF Plugin for Microsoft Office users (download link no longer available as of 30 March 2013) — gives users of Microsoft Office Word, Excel and PowerPoint the ability to read, edit and save to the ISO-standard OpenDocument Format (ODF). It works with Microsoft Office 2007 (with Service Pack 1 or higher), Microsoft Office 2003, Microsoft Office XP, and even Microsoft Office 2000. ooo-word-filter — enables users of Microsoft Word 2003 to open OpenDocument files. A third plug-in, OpenOpenOffice (O3), is apparently inactive. OpenOpenOffice, developed by Phase-n, was a free and open source software plug-in to enable Microsoft Office to read and write OpenDocument files (and any other formats supported by OpenOffice.org). Instead of installing a complete office application or even a large plug-in, O3 intended to install a tiny plug-in to the Microsoft Office system. This tiny plug-in was intended to automatically send the file to a server, which would then do the conversion and return the converted file. The server could be local to an organization (so private information doesn't go over the Internet) or accessed via the Internet (for those who do not want to set up a server). A beta of the server half has been completed, and further expected announcements have not occurred. Phase-n argued that the main advantage of their approach is simplicity. Their website announces that O3 "requires no new concepts to be explored, no significant development, and leverages the huge existing body of work already created by the OpenOffice developers, the CPAN module authors, and the Microsoft .NET and Office teams." They also argue that this approach significantly simplifies maintenance; when a new version of OpenOffice is released, only the server needs to be upgraded.
A fourth plug-in was announced by the OpenDocument Foundation in May 2006 but development was stopped in October 2007. Microsoft Office 2007 SP2 support controversy Microsoft supports OpenDocument format in Office 2007 SP2. The current implementation faces criticism for not supporting encrypted documents and formula format in the same way as other OpenDocument-compatible software, as well as for stripping out formulas in imported spreadsheets created by other OpenDocument-compatible software. Critics say that with this conflict of standards Microsoft actually managed to reduce interoperability between office productivity software. The company had previously reportedly stated that "where ODF 1.1 is ambiguous or incomplete, the Office implementation can be guided by current practice in OpenOffice.org, mainly, and other implementations including KOffice and AbiWord. Peter Amstein and the Microsoft Office team are reluctant to make liberal use of extension mechanisms, even though provided in ODF 1.1. They want to avoid all appearance of an embrace-extend attempt." However, according to the ODF Alliance, "ODF spreadsheets created in Excel 2007 SP2 do not in fact conform to ODF 1.1 because Excel 2007 incorrectly encodes formulas with cell addresses. Section 8.3.1 of ODF 1.1 says that addresses in formulas "start with a "[" and end with a "]"." In Excel 2007 cell addresses were not enclosed with the necessary square brackets, which could be easily corrected." This however has been contested as the ISO/IEC 26300 specification states that the semantics and the syntax is dependent on the used namespace which is implementation dependent leaving the syntax implementation defined as well. Before SP2, Microsoft had sponsored the creation of the Open XML translator project to allow the conversion of documents between OOXML and OpenDocument. As a result of this project, Microsoft financed the ODF add-in for Word project on SourceForge. This project is an effort by several of Microsoft's partners to create a plugin for Microsoft Office that will be freely available under a BSD license. The project released version 1.0 for Microsoft Word of this software in January 2007 followed by versions for Microsoft Excel and Microsoft PowerPoint in December of the same year. Sun Microsystems has created the competing OpenDocument plugin for Microsoft Office 2007 (Service Pack 1 or higher), 2000, XP, and 2003 that supports Word, Excel, and PowerPoint documents. The ODF Alliance has claimed that third-party plug-ins "provide better support for ODF than the recently released Microsoft Office 2007 SP2". Dynamic languages Some open source application programming interfaces, designed for OpenDocument handling, are available in various dynamic programming languages such as Perl and Python. The Lpod project is an example. Accessibility One important issue raised in the discussion of OpenDocument is whether the format is accessible to those with disabilities. There are two issues: does the specification support accessibility, and are implementations accessible? Specification While the specification of OpenDocument is going through an extensive accessibility review, many of the components it is built on (such as SMIL for audio and multimedia and SVG for vector graphics) have already gone through the World Wide Web Consortium (W3C)'s Web Accessibility Initiative processes. 
There are already applications that currently read/write OpenDocument that export Tagged PDF files (to support PDF accessibility); this suggests that much or all of the necessary data for accessibility is already included in the OpenDocument format. The OASIS OpenDocument technical committee released a draft of OpenDocument 1.1 on 2006-07-27, for public comment through 2006-09-25. This is a very minor update to the specification to add accessibility information, mainly soft page break markings, table header markings, presentation navigation markings, alternative text and captions, and specifically stating that spreadsheets may be embedded in presentations. Peter Korn (an accessibility expert) reviewed version 1.1 "to satisfy myself that all of our accessibility concerns have been addressed", and declared "I am so satisfied." Implementations Peter Korn gave an in-depth report on OpenDocument accessibility. He noted that there are many kinds of impairments, including visual (minor, major, or blind), physical (minor, major with vocal control, major without vocal control), auditory, and cognitive. He then noted that the situation varies, depending on the specific disability. For a vast number of disabilities, there are no known problems, though. OpenOffice is expected to work well with existing solutions in MS Windows' on-screen keyboards (etc.) when driven by single-switch access, head-mouse, and eye-gaze systems. On Unix-like systems, GNOME's "On-screen Keyboard" can be used. Also available on both Linux and Windows systems is Dasher, a text-entry alternative released under the GPL for head-mouse and eye-gaze users (35+ word-per-minute typing speeds using nothing but eye movement are possible). If those with disabilities are already using Microsoft Office, then a plug-in enabling them to load and save OpenDocument files using Microsoft Office may give them the same capabilities they already have (assuming the opening/saving cycle is accessible). So from that perspective, OpenDocument is at least as accessible as Microsoft Office. For users using alternatives to Microsoft Office there may be problems, not necessarily due to the ODF file format but rather due to the lower investment to date by assistive technology vendors on these platforms, though there is ongoing work. For example, IBM has stated that its "Workplace productivity tools available through Workplace Managed Client including word processing, spreadsheet and presentation editors are currently planned to be fully accessible on a Windows platform by 2007. Additionally, these productivity tools are currently planned to be fully accessible on a Linux platform by 2008" (Sutor, 10 November 2005). It is important to note that since OpenDocument is an Open Standard file format, there is no need for everyone to use the same program to read and write OpenDocument files; someone with a disability is free to use whatever program works best for them. See also Comparison of OpenDocument software Network effect Open format Office suite Office Open XML References External links Application support for ODF (OpenDocument Fellowship). Groklaw's ODF Resources. lpOD-Perl, OpenDocument Connector Perl programming interfaces for ODF. lpOD-Python, ODFpy Python programming interfaces for ODF. OpenDocument OpenDocument Office suites
2419552
https://en.wikipedia.org/wiki/Newbear%2077-68
Newbear 77-68
The Newbear 77-68 was a kit of parts from which a purchaser could construct a first generation home computer based around a Motorola 6800 microprocessor. Because it was designed to be assembled by its owner at home, it was also a homebuilt computer. The 77-68 was designed by Tim Moore and was offered for sale by Bear Microcomputer Systems of Newbury, Berkshire, England from June 1977. It was among the first, if not the first, of British home computers and was featured in the launch edition of Personal Computer World magazine in February 1978. The Newbear 77-68 was both a home computer and a homebuilt computer, since it was designed to not only be used at home (hence a home computer), but also be assembled at home by its owner (hence a homebuilt computer). Description The basic 77-68 comprised an 8-inch square printed circuit board accommodating the microprocessor, Static RAM of 256 8 bit words and the bare essentials in terms of input/output and timing logic to make a working computer. The processor ran with an instruction cycle time of around 1.25 microseconds with most instructions executing in 3 to 7 microseconds. In the short time for which the 77-68 represented an economic and reasonably current technology for home computing, an active user group distributed designs for additional components such as memory cards, video display cards and teletype interfaces which enthusiasts could, and did, construct themselves. It was even possible to run BASIC. All the components to build the basic machine could be bought for around £50 with additional elements added later. This was a sensible approach at a time when, for example, 16K x 1 bit dynamic memory chips cost £7 each and 8 chips plus a significant amount of support logic were required to build a memory card. Operation The 77-68 was programmed in its most basic form with toggle switches and LEDs. With the microprocessor's operation suspended in "HALT" mode, memory words could be accessed and their contents observed in binary. The word could then be modified directly using an additional 8 binary toggle switches to specify the data required. Once a complete program had been "toggled in" using this method, the "HALT" condition could be removed using another switch and the microprocessor would look for an address at which to start executing the program in the last two words of the address space. This technique, called Direct Memory Access was typical for many early computers using volatile memory that did not retain its contents when the power was switched off. Even early mainframe computers required their operators to "toggle" or "dial" in a bootstrap program by hand to get things going on power-up. Capability Although 256 words of memory seems extraordinarily small by contemporary standards, when "toggling in" programs by hand it seemed quite adequate. There was ample space to create programs that played music, sent and received morse code, operated data storage to media such as a cassette player and even offered game experiences (though these required significant imagination by the user). Expanded with additional memory, the 77-68 was quite capable of running software such as the TSC BASIC interpreter and users wrote software that offered a wide range of applications at a time when even word processors were a novelty and spreadsheets were largely unknown. 
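To make the front-panel procedure described under "Operation" more concrete, the following is a minimal Python model of entering a program with the processor halted and then releasing HALT so that the start address is fetched from the last two words of the address space. It is purely illustrative: the memory size matches the basic 77-68, but the helper names, the example bytes and the simplified start-vector handling are assumptions for this sketch, not details taken from the source.

# Illustrative model of switch-and-LED program entry on a machine such as
# the 77-68. This is a simplified sketch, not a 6800 emulator.

MEM_SIZE = 256                      # the basic 77-68 had 256 8-bit words
memory = [0] * MEM_SIZE

def deposit(address, value):
    """With the CPU halted, store one 8-bit word chosen on the data switches."""
    memory[address] = value & 0xFF

def toggle_in(program, start_address=0):
    """Enter a program one word at a time, as an operator would via switches."""
    for offset, word in enumerate(program):
        deposit(start_address + offset, word)

def set_start_vector(start_address):
    # In this model the last two words of the store hold the address at which
    # execution begins once HALT is released (high byte first, then low byte).
    memory[MEM_SIZE - 2] = (start_address >> 8) & 0xFF
    memory[MEM_SIZE - 1] = start_address & 0xFF

def release_halt():
    """Model of releasing HALT: the processor fetches its start address."""
    start = (memory[MEM_SIZE - 2] << 8) | memory[MEM_SIZE - 1]
    print(f"CPU would begin executing at address {start:#06x}")

toggle_in([0x86, 0x2A, 0x97, 0x10], start_address=0)   # arbitrary example bytes
set_start_vector(0)
release_halt()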
User experience and legacy For many home computer pioneers, primitive machines like the 77-68 offered a thrill that is hard to describe to a generation that has grown up with technology many times more powerful all around. The sense of being able to construct something from inert basic components, write a program and see a set of components that had been separate "come to life" in concert to do something small but useful was very exciting. This was a time when it was quite possible for a non-specialist to understand every aspect of the computer they had built and machines like the 77-68 offered a generation the chance to own and experiment with one for the first time. While the number of kits sold and constructed and the number of systems still in operation is unknown, one of the systems illustrated in this article is now in the Museum of Computing, Swindon, England. References Early microcomputers
33883790
https://en.wikipedia.org/wiki/JModelica.org
JModelica.org
JModelica.org is a commercial software platform based on the Modelica modeling language for modeling, simulating, optimizing and analyzing complex dynamic systems. The platform is maintained and developed by Modelon AB in collaboration with academic and industrial institutions, notably Lund University and the Lund Center for Control of Complex Systems (LCCC). The platform has been used in industrial projects with applications in robotics, vehicle systems, energy systems, CO2 separation and polyethylene production. The key components of the platform are: A Modelica compiler for translating Modelica source code into C or XML code. The compiler also generates models compliant with the Functional Mock-up Interface standard. A Python package for simulation of dynamic models, Assimulo. Assimulo provides interfaces to several state of the art integrators and is used as a simulation engine in JModelica.org. Algorithms for solving large scale dynamic optimization problems implementing local collocation methods on finite elements and pseudospectral collocation methods. A Python package for user interaction. All parts of the platform are accessed from Python, including compiling and loading models, simulating and optimizing. JModelica.org supports the Modelica modeling language for modeling of physical systems. Modelica provides high-level descriptions of hybrid dynamic systems, which are used as a basis for different kinds of computations in JModelica.org including simulation, sensitivity analysis and optimization. Dynamic optimization problems, including optimal control, trajectory optimization, parameter optimization and model calibration can be formulated and solved using JModelica.org. The Optimica extension enables high-level formulation of dynamic optimization problems based on Modelica models. The mintOC project provides a number of benchmark problems encoded in Optimica. The platform promotes open interfaces for integration with numerical packages. The Sundials ODE/DAE integrator suite, the NLP solver IPOPT and the AD package CasADi are examples of packages that are integrated into the JModelica.org platform. JModelica.org is compliant with the Functional Mock-up Interface (FMI) standard and Functional Mock-up Units (FMUs), generated by JModelica.org or by another FMI-compliant tool, can be simulated in the Python environment. An independent comparison between JModelica.org and the optimization systems ACADO Toolkit, IPOPT, and CppAD, is provided in the report Open-Source Software for Nonlinear Constrained Optimization of Dynamic Systems. The Eclipse plug-in for editing of Modelica source code has been discontinued. On December 18th 2019, Modelon decided to move the JModelica.org source code from open to closed source. The last open-source release is available for download on request. See also AMESim AMPL APMonitor ASCEND Dymola General Algebraic Modeling System (GAMS) MapleSim Wolfram SystemModeler Openmodelica SimulationX PROPT References Simulation software Simulation programming languages Mathematical optimization software Free simulation software Declarative programming languages Object-oriented programming Free software programmed in Python
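As an illustration of the Python-based workflow described above (compiling a Modelica model to an FMU and then simulating it through PyFMI/Assimulo), the sketch below uses the compile_fmu and load_fmu entry points commonly shown in JModelica.org documentation. Treat the module names, signatures and the model/variable names ("MyPackage.MyModel", "MyModel.mo", "x") as assumptions for this example; exact details varied between releases.

# Sketch of the historical JModelica.org workflow: compile a Modelica model
# to a Functional Mock-up Unit (FMU), then load and simulate it from Python.
# Assumes a JModelica.org installation; names below are placeholders.

from pymodelica import compile_fmu   # Modelica/Optimica compiler front end
from pyfmi import load_fmu           # FMU import and simulation (uses Assimulo)

# Translate the Modelica source into an FMU.
fmu_path = compile_fmu("MyPackage.MyModel", "MyModel.mo")

# Load the FMU and run a time-domain simulation.
model = load_fmu(fmu_path)
result = model.simulate(final_time=10.0)

# Results are returned as trajectories keyed by variable name.
print(result["time"][-1], result["x"][-1])   # "x" is a placeholder variable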
946004
https://en.wikipedia.org/wiki/Asterisk%20%28PBX%29
Asterisk (PBX)
Asterisk is a software implementation of a private branch exchange (PBX). In conjunction with suitable telephony hardware interfaces and network applications, Asterisk is used to establish and control telephone calls between telecommunication endpoints, such as customary telephone sets, destinations on the public switched telephone network (PSTN), and devices or services on voice over Internet Protocol (VoIP) networks. Its name comes from the asterisk (*) symbol for a signal used in dual-tone multi-frequency (DTMF) dialing. Asterisk was created in 1999 by Mark Spencer of Digium, which since 2018 is a division of Sangoma Technologies Corporation. Originally designed for Linux, Asterisk runs on a variety of operating systems, including NetBSD, OpenBSD, FreeBSD, macOS, and Solaris, and can be installed in embedded systems based on OpenWrt. Features The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response (phone menus), and automatic call distribution. Users can create new functionality by writing dial plan scripts in several of Asterisk's own extensions languages, by adding custom loadable modules written in PHP or C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets. Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323. Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent. It can serve as a gateway between IP phones and the PSTN via T- or E-carrier interfaces or analog FXO cards. The Inter-Asterisk eXchange (IAX) protocol, RFC 5456, native to Asterisk, provides efficient trunking of calls between Asterisk PBX systems, in addition to distributing some configuration logic. Many VoIP service providers support it for call completion into the PSTN, often because they themselves have deployed Asterisk or offer it as a hosted application. Some telephones also support the IAX protocol. By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies. Some sites are using Asterisk to replace proprietary PBXes, others provide additional features, such as voice mail or voice response menus, or virtual call shops, or to reduce cost by carrying both local and long-distance calls over the Internet. In addition to VoIP protocols, Asterisk supports traditional circuit-switching protocols such as ISDN and SS7. This requires appropriate hardware interface cards, marketed by third-party vendors. Each protocol requires the installation of software modules. In Asterisk release 14 the Opus audio codec is supported. Internationalization While initially developed in the United States, Asterisk has become a popular VoIP PBX worldwide. It allows having multiple sets of voice prompts identified by language (and even multiple sets of prompts for each language) as well as support for time formats in different languages. Several sets of prompts for the interactive voice response and voice mail features are included with Asterisk: American, British, and Australian English, Canadian French, Japanese, Russian, Mexican Spanish and Swedish. 
A few novelty prompts are offered, such as jokes and a themed "zombie apocalypse" message for Halloween. Additionally, voice sets are offered for commercial sale in various languages, dialects, and genders. The default set of English-language Asterisk prompts are recorded by professional telephone voice Allison Smith. Derived products Asterisk is a core component in many commercial products and open-source projects. Some of the commercial products are hardware and software bundles, for which the manufacturer supports and releases the software with an open-source distribution model. AskoziaPBX, a fork of the m0n0wall project, uses Asterisk PBX software to realize all telephony functions. AstLinux is a "Network Appliance for Communications" open-source software distribution. FreePBX, an open-source graphical user interface, bundles Asterisk as the core of its FreePBX Distro LinuxMCE bundles Asterisk to provide telephony; there is also an embedded version of Asterisk for OpenWrt routers. PBX in a Flash/Incredible PBX and trixbox are software PBXes based on Asterisk. Elastix previously used Asterisk, HylaFAX, Openfire and Postfix to offer PBX, fax, instant messaging and email functions, respectively, before switching to 3CX. Issabel is an open-source Unified Communications software which uses Asterisk for telephony functions. It was forked from the open-source versions of Elastix when 3CX acquired it. *astTECS uses Asterisk in its VoIP and mobile gateways. Various add-on products, often commercial, are available that extend Asterisk features and capabilities. The standard voice prompts included with the system are free. A business can purchase matching voice announcements of its company name, IVR menu options and employee or department names (as a library of live recordings of common names or a set of fully customised prompts recorded by the same professional voice talent) at additional cost for seamless integration into the system. Other add-ons provide fax support, text-to-speech, additional codecs and new features. Some third-party add-ons are free; a few even support embedded platforms such as the Raspberry Pi. See also Comparison of VoIP software DUNDi FreeSWITCH IPBX GateKeeper H.323 GNU SIP Witch List of free and open-source software packages List of SIP software OpenBTS SIP Express Router References External links Free VoIP software Free business software Free software programmed in C Free communication software Telephone exchange equipment Videotelephony Lua (programming language)-scriptable software 1999 software
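To make the Asterisk Gateway Interface (AGI) mechanism mentioned under Features more concrete: an AGI program receives a block of agi_* variables on standard input, terminated by a blank line, and then exchanges one-line text commands and "200 result=..." replies over standard output and input. The following minimal Python script is an illustrative sketch only; error handling is omitted and the dialplan invocation is an assumption.

#!/usr/bin/env python3
# Minimal Asterisk Gateway Interface (AGI) script.
# Asterisk sends "agi_*" variables on stdin (ended by a blank line); the
# script writes commands on stdout and reads "200 result=..." replies.
# Illustrative sketch; e.g. invoked from the dialplan with AGI(hello.py).

import sys

def read_environment():
    env = {}
    for line in sys.stdin:
        line = line.strip()
        if not line:                      # blank line ends the variable block
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def command(cmd):
    sys.stdout.write(cmd + "\n")
    sys.stdout.flush()
    return sys.stdin.readline().strip()   # e.g. "200 result=0"

env = read_environment()
command("ANSWER")
command('SAY NUMBER 42 ""')               # read a number back to the caller
command("HANGUP")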
46688284
https://en.wikipedia.org/wiki/Giraffic
Giraffic
Giraffic was a Tel Aviv-based company that developed "Adaptive Video Acceleration" (AVA) software to improve the performance of streaming video. It sold primarily to OTT video application providers and to consumer electronics device manufacturers such as LG, ZTE and Samsung. Giraffic's AVA technology was acquired in 2019 by Roku, Inc. While the company claimed to be continuing development of its Distributed Adaptive Storage & Streaming (DASS) technology for distributed storage systems, it appears to have made no updates to its homepage since the announcement of the 2019 sale of its AVA technology to a "Fortune 500 company: one of the leading global Over-the-Top (OTT) providers".
History
The company was founded in 2008 by Boris Malamud, Gil Gat and Yoel Zanger. Prior to co-founding Giraffic, Yoel Zanger co-founded New-Tone Technologies, a value-added services platform for mobile applications based on advanced voice and data technologies. Giraffic was headquartered in Tel Aviv, Israel. Prior to its 2019 acquisition by Roku it had sales offices in the United States, Korea, and Hong Kong. Key personnel included Menashe Rothschild, Jeffrey Parkinson, Bhupen Shah, Anton Monk, Gregg Bernard, Mitch Singer, and Levy Gerzberg. Investors included KEC Ventures, Samsung Ventures and Previz Ventures.
Technologies
Giraffic's Adaptive Video Acceleration (AVA) software debuted at CES 2015, where the company demonstrated its video product on Samsung Smart TVs, Broadcom IP set-top boxes, and the Intel Puma Home Gateway. Giraffic's Distributed Adaptive Storage and Streaming (DASS) software was deployed from 2010 to 2012 as a peer-to-peer content delivery network (P2P-CDN) by video websites including Veoh.com, Mako.co.il and Craze Digital.
Patents
US Patent 14/287,276, IL 231685: System and method for predictive buffering and network shaping
US Patent 9306860: A congestion control method for dynamically maximizing communication link throughput. Filing date October 14, 2013
US Patent 8473610: Proactive Storage. Filing date June 22, 2011
Asynchronous data streaming in a distributed network. Issued March 1, 2012
Partners
Samsung and LG used Giraffic's Adaptive Video Acceleration (AVA) technology.
Competition
Companies that competed with Giraffic included Akamai and Google.
References
Software companies of Israel Software companies established in 2008 Israeli companies established in 2008 Software companies disestablished in 2019 2019 disestablishments in Israel 2019 mergers and acquisitions
6109308
https://en.wikipedia.org/wiki/Prefix%20sum
Prefix sum
In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers x0, x1, x2, ... is a second sequence of numbers y0, y1, y2, ..., the sums of prefixes (running totals) of the input sequence:
y0 = x0
y1 = x0 + x1
y2 = x0 + x1 + x2
...
For instance, the prefix sums of the natural numbers are the triangular numbers:
{| class="wikitable"
|-
!input numbers
|  1 ||  2 ||  3 ||  4 ||  5 ||  6 || ...
|-
!prefix sums
|  1 ||  3 ||  6 || 10 || 15 || 21 || ...
|}
Prefix sums are trivial to compute in sequential models of computation, by using the formula yi = yi−1 + xi to compute each output value in sequence order. However, despite their ease of computation, prefix sums are a useful primitive in certain algorithms such as counting sort, and they form the basis of the scan higher-order function in functional programming languages. Prefix sums have also been much studied in parallel algorithms, both as a test problem to be solved and as a useful primitive to be used as a subroutine in other parallel algorithms.
Abstractly, a prefix sum requires only a binary associative operator ⊕, making it useful for many applications from calculating well-separated pair decompositions of points to string processing.
Mathematically, the operation of taking prefix sums can be generalized from finite to infinite sequences; in that context, a prefix sum is known as a partial sum of a series. Prefix summation or partial summation form linear operators on the vector spaces of finite or infinite sequences; their inverses are finite difference operators.
Scan higher order function
In functional programming terms, the prefix sum may be generalized to any binary operation (not just the addition operation); the higher order function resulting from this generalization is called a scan, and it is closely related to the fold operation. Both the scan and the fold operations apply the given binary operation to the same sequence of values, but differ in that the scan returns the whole sequence of results from the binary operation, whereas the fold returns only the final result. For instance, the sequence of factorial numbers may be generated by a scan of the natural numbers using multiplication instead of addition:
{| class="wikitable"
|-
!input numbers
| 1 || 2 || 3 || 4 || 5 || 6 || ...
|-
!prefix products
| 1 || 2 || 6 || 24 || 120 || 720 || ...
|}
Inclusive and exclusive scans
Programming language and library implementations of scan may be either inclusive or exclusive. An inclusive scan includes input xi when computing output yi (that is, yi = x0 ⊕ x1 ⊕ ... ⊕ xi), while an exclusive scan does not (that is, yi = x0 ⊕ x1 ⊕ ... ⊕ xi−1). In the latter case, implementations either leave y0 undefined or accept a separate "x−1" value with which to seed the scan. Either type of scan can be transformed into the other: an inclusive scan can be transformed into an exclusive scan by shifting the array produced by the scan right by one element and inserting the identity value at the left of the array. Conversely, an exclusive scan can be transformed into an inclusive scan by shifting the array produced by the scan left and inserting the sum of the last element of the scan and the last element of the input array at the right of the array. 
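As a concrete illustration of the two conventions and the conversions just described, here is a short Python sketch using the standard library's itertools.accumulate for the inclusive form. It is a sequential illustration only, not a parallel implementation.

from itertools import accumulate

x = [1, 2, 3, 4, 5, 6]

# Inclusive scan: y[i] = x[0] + ... + x[i]
inclusive = list(accumulate(x))                  # [1, 3, 6, 10, 15, 21]

# Exclusive scan: y[i] = x[0] + ... + x[i-1], seeded with the identity 0
exclusive = [0] + inclusive[:-1]                 # [0, 1, 3, 6, 10, 15]

# Converting back: shift left and append the last exclusive value plus
# the last input value (the overall total)
inclusive_again = exclusive[1:] + [exclusive[-1] + x[-1]]
assert inclusive_again == inclusive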
The following table lists examples of the inclusive and exclusive scan functions provided by a few programming languages and libraries:
{| class="wikitable" style="text-align: left;"
|-
!Language/library
!Inclusive scan
!Exclusive scan
|-
|Haskell
|scanl1
|scanl
|-
|MPI
|MPI_Scan
|MPI_Exscan
|-
|C++
|std::inclusive_scan
|std::exclusive_scan
|-
|Scala
|scan
|
|-
|Rust
|scan
|
|-
|}
Parallel algorithms
There are two key algorithms for computing a prefix sum in parallel. The first offers a shorter span and more parallelism but is not work-efficient. The second is work-efficient but requires double the span and offers less parallelism. These are presented in turn below.
Algorithm 1: Shorter span, more parallel
Hillis and Steele present the following parallel prefix sum algorithm:
for i <- 0 to ⌈log2 n⌉ − 1 do
  for j <- 0 to n − 1 do in parallel
    if j < 2^i then
      x[i+1][j] <- x[i][j]
    else
      x[i+1][j] <- x[i][j] + x[i][j − 2^i]
In the above, the notation x[i][j] means the value of the jth element of array x in timestep i. With a single processor this algorithm would run in O(n log n) time. However if the machine has at least n processors to perform the inner loop in parallel, the algorithm as a whole runs in O(log n) time, the number of iterations of the outer loop.
Algorithm 2: Work-efficient
A work-efficient parallel prefix sum can be computed by the following steps.
Compute the sums of consecutive pairs of items in which the first item of the pair has an even index: z0 = x0 + x1, z1 = x2 + x3, etc.
Recursively compute the prefix sum w0, w1, w2, ... of the sequence z0, z1, z2, ...
Express each term of the final sequence y0, y1, y2, ... as the sum of up to two terms of these intermediate sequences: y0 = x0, y1 = z0, y2 = z0 + x2, y3 = w1, etc. After the first value, each successive number yi is either copied from a position half as far through the w sequence, or is the previous value added to one value in the x sequence.
If the input sequence has n steps, then the recursion continues to a depth of O(log n), which is also the bound on the parallel running time of this algorithm. The number of steps of the algorithm is O(n), and it can be implemented on a parallel random access machine with O(n/log n) processors without any asymptotic slowdown by assigning multiple indices to each processor in rounds of the algorithm for which there are more elements than processors.
Discussion
Each of the preceding algorithms runs in O(log n) time. However, the former takes exactly log2 n steps, while the latter requires 2 log2 n − 2 steps. For the 16-input examples illustrated, Algorithm 1 is 12-way parallel (49 units of work divided by a span of 4) while Algorithm 2 is only 4-way parallel (26 units of work divided by a span of 6). However, Algorithm 2 is work-efficient—it performs only a constant factor (2) of the amount of work required by the sequential algorithm—while Algorithm 1 is work-inefficient—it performs asymptotically more work (a logarithmic factor) than is required sequentially. Consequently, Algorithm 1 is likely to perform better when abundant parallelism is available, but Algorithm 2 is likely to perform better when parallelism is more limited.
Parallel algorithms for prefix sums can often be generalized to other scan operations on associative binary operations, and they can also be computed efficiently on modern parallel hardware such as a GPU. The idea of building in hardware a functional unit dedicated to computing multi-parameter prefix-sum was patented by Uzi Vishkin.
Many parallel implementations follow a two pass procedure where partial prefix sums are calculated in the first pass on each processing unit; the prefix sum of these partial sums is then calculated and broadcast back to the processing units for a second pass using the now known prefix as the initial value. 
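The sketch below walks through this two-pass blocked scheme sequentially; in a real implementation the first and third loops would run in parallel, one block per processing unit. The function and variable names are illustrative and not taken from any particular library.

from itertools import accumulate

def blocked_prefix_sum(data, num_blocks):
    """Two-pass blocked inclusive prefix sum (sequential simulation)."""
    n = len(data)
    bounds = [n * b // num_blocks for b in range(num_blocks + 1)]

    # Pass 1 (parallel in a real implementation): local scan of each block.
    local = [list(accumulate(data[bounds[b]:bounds[b + 1]]))
             for b in range(num_blocks)]

    # Exclusive scan of the block totals gives each block's offset.
    totals = [blk[-1] if blk else 0 for blk in local]
    offsets = [0] + list(accumulate(totals))[:-1]

    # Pass 2 (also parallel): add the offset of all preceding blocks.
    result = []
    for b in range(num_blocks):
        result.extend(v + offsets[b] for v in local[b])
    return result

data = list(range(1, 17))
assert blocked_prefix_sum(data, 4) == list(accumulate(data))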
Asymptotically this method takes approximately two read operations and one write operation per item. Concrete implementations of prefix sum algorithms An implementation of a parallel prefix sum algorithm, like other parallel algorithms, has to take the parallelization architecture of the platform into account. More specifically, multiple algorithms exist which are adapted for platforms working on shared memory as well as algorithms which are well suited for platforms using distributed memory, relying on message passing as the only form of interprocess communication. Shared memory: Two-level algorithm The following algorithm assumes a shared memory machine model; all processing elements (PEs) have access to the same memory. A version of this algorithm is implemented in the Multi-Core Standard Template Library (MCSTL), a parallel implementation of the C++ standard template library which provides adapted versions for parallel computing of various algorithms. In order to concurrently calculate the prefix sum over data elements with processing elements, the data is divided into blocks, each containing elements (for simplicity we assume that divides ). Note, that although the algorithm divides the data into blocks, only processing elements run in parallel at a time. In a first sweep, each PE calculates a local prefix sum for its block. The last block does not need to be calculated, since these prefix sums are only calculated as offsets to the prefix sums of succeeding blocks and the last block is by definition not succeeded. The offsets which are stored in the last position of each block are accumulated in a prefix sum of their own and stored in their succeeding positions. For being a small number, it is faster to do this sequentially, for a large , this step could be done in parallel as well. A second sweep is performed. This time the first block does not have to be processed, since it does not need to account for the offset of a preceding block. However, in this sweep the last block is included instead and the prefix sums for each block are calculated taking the prefix sum block offsets calculated in the previous sweep into account. function prefix_sum(elements) { n := size(elements) p := number of processing elements prefix_sum := [0...0] of size n do parallel i = 0 to p-1 { // i := index of current PE from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do { // This only stores the prefix sum of the local blocks store_prefix_sum_with_offset_in(elements, 0, prefix_sum) } } x = 0 for i = 1 to p { // Serial accumulation of total sum of blocks x += prefix_sum[i * n / (p+1) - 1] // Build the prefix sum over the first p blocks prefix_sum[i * n / (p+1)] = x // Save the results to be used as offsets in second sweep } do parallel i = 1 to p { // i := index of current PE from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do { offset := prefix_sum[i * n / (p+1)] // Calculate the prefix sum taking the sum of preceding blocks as offset store_prefix_sum_with_offset_in(elements, offset, prefix_sum) } } return prefix_sum } Improvement: In case that the number of blocks are too much that makes the serial step time-consuming by deploying a single processor, the Hillis and Steele algorithm can be used to accelerate the second phase. Distributed memory: Hypercube algorithm The Hypercube Prefix Sum Algorithm is well adapted for distributed memory platforms and works with the exchange of messages between the processing elements. 
It assumes to have processor elements (PEs) participating in the algorithm equal to the number of corners in a -dimensional hypercube. Throughout the algorithm, each PE is seen as a corner in a hypothetical hyper cube with knowledge of the total prefix sum as well as the prefix sum of all elements up to itself (according to the ordered indices among the PEs), both in its own hypercube. The algorithm starts by assuming every PE is the single corner of a zero dimensional hyper cube and therefore and are equal to the local prefix sum of its own elements. The algorithm goes on by unifying hypercubes which are adjacent along one dimension. During each unification, is exchanged and aggregated between the two hyper cubes which keeps the invariant that all PEs at corners of this new hyper cube store the total prefix sum of this newly unified hyper cube in their variable . However, only the hyper cube containing the PEs with higher index also adds this to their local variable , keeping the invariant that only stores the value of the prefix sum of all elements at PEs with indices smaller or equal to their own index. In a -dimensional hyper cube with PEs at the corners, the algorithm has to be repeated times to have the zero-dimensional hyper cubes be unified into one -dimensional hyper cube. Assuming a duplex communication model where the of two adjacent PEs in different hyper cubes can be exchanged in both directions in one communication step, this means communication startups. i := Index of own processor element (PE) m := prefix sum of local elements of this PE d := number of dimensions of the hyper cube x = m; // Invariant: The prefix sum up to this PE in the current sub cube σ = m; // Invariant: The prefix sum of all elements in the current sub cube for (k=0; k <= d-1; k++) { y = σ @ PE(i xor 2^k) // Get the total prefix sum of the opposing sub cube along dimension k σ = σ + y // Aggregate the prefix sum of both sub cubes if (i & 2^k) { x = x + y // Only aggregate the prefix sum from the other sub cube, if this PE is the higher index one. } } Large message sizes: pipelined binary tree The Pipelined Binary Tree Algorithm is another algorithm for distributed memory platforms which is specifically well suited for large message sizes. Like the hypercube algorithm, it assumes a special communication structure. The processing elements (PEs) are hypothetically arranged in a binary tree (e.g. a Fibonacci Tree) with infix numeration according to their index within the PEs. Communication on such a tree always occurs between parent and child nodes. The infix numeration ensures that for any given PEj, the indices of all nodes reachable by its left subtree are less than and the indices of all nodes in the right subtree are greater than . The parent's index is greater than any of the indices in PEj's subtree if PEj is a left child and smaller if PEj is a right child. This allows for the following reasoning: The local prefix sum of the left subtree has to be aggregated to calculate PEj's local prefix sum . The local prefix sum of the right subtree has to be aggregated to calculate the local prefix sum of higher level PEh which are reached on a path containing a left children connection (which means ). The total prefix sum of PEj is necessary to calculate the total prefix sums in the right subtree (e.g. for the highest index node in the subtree). 
PEj needs to include the total prefix sum of the first higher order node which is reached via an upward path including a right children connection to calculate its total prefix sum. Note the distinction between subtree-local and total prefix sums. The points two, three and four can lead to believe they would form a circular dependency, but this is not the case. Lower level PEs might require the total prefix sum of higher level PEs to calculate their total prefix sum, but higher level PEs only require subtree local prefix sums to calculate their total prefix sum. The root node as highest level node only requires the local prefix sum of its left subtree to calculate its own prefix sum. Each PE on the path from PE0 to the root PE only requires the local prefix sum of its left subtree to calculate its own prefix sum, whereas every node on the path from PEp-1 (last PE) to the PEroot requires the total prefix sum of its parent to calculate its own total prefix sum. This leads to a two-phase algorithm: Upward PhasePropagate the subtree local prefix sum to its parent for each PEj. Downward phasePropagate the exclusive (exclusive PEj as well as the PEs in its left subtree) total prefix sum of all lower index PEs which are not included in the addressed subtree of PEj to lower level PEs in the left child subtree of PEj. Propagate the inclusive prefix sum to the right child subtree of PEj. Note that the algorithm is run in parallel at each PE and the PEs will block upon receive until their children/parents provide them with packets. k := number of packets in a message m of a PE m @ {left, right, parent, this} := // Messages at the different PEs x = m @ this // Upward phase - Calculate subtree local prefix sums for j=0 to k-1: // Pipelining: For each packet of a message if hasLeftChild: blocking receive m[j] @ left // This replaces the local m[j] with the received m[j] // Aggregate inclusive local prefix sum from lower index PEs x[j] = m[j] ⨁ x[j] if hasRightChild: blocking receive m[j] @ right // We do not aggregate m[j] into the local prefix sum, since the right children are higher index PEs send x[j] ⨁ m[j] to parent else: send x[j] to parent // Downward phase for j=0 to k-1: m[j] @ this = 0 if hasParent: blocking receive m[j] @ parent // For a left child m[j] is the parents exclusive prefix sum, for a right child the inclusive prefix sum x[j] = m[j] ⨁ x[j] send m[j] to left // The total prefix sum of all PE's smaller than this or any PE in the left subtree send x[j] to right // The total prefix sum of all PE's smaller or equal than this PE Pipelining If the message of length can be divided into packets and the operator ⨁ can be used on each of the corresponding message packets separately, pipelining is possible. If the algorithm is used without pipelining, there are always only two levels (the sending PEs and the receiving PEs) of the binary tree at work while all other PEs are waiting. If there are processing elements and a balanced binary tree is used, the tree has levels, the length of the path from to is therefore which represents the maximum number of non parallel communication operations during the upward phase, likewise, the communication on the downward path is also limited to startups. Assuming a communication startup time of and a bytewise transmission time of , upward and downward phase are limited to in a non pipelined scenario. 
Upon division into k packets, each of size and sending them separately, the first packet still needs to be propagated to as part of a local prefix sum and this will occur again for the last packet if . However, in between, all the PEs along the path can work in parallel and each third communication operation (receive left, receive right, send to parent) sends a packet to the next level, so that one phase can be completed in communication operations and both phases together need which is favourable for large message sizes . The algorithm can further be optimised by making use of full-duplex or telephone model communication and overlapping the upward and the downward phase. Data structures When a data set may be updated dynamically, it may be stored in a Fenwick tree data structure. This structure allows both the lookup of any individual prefix sum value and the modification of any array value in logarithmic time per operation. However, an earlier 1982 paper presents a data structure called Partial Sums Tree (see Section 5.1) that appears to overlap Fenwick trees; in 1982 the term prefix-sum was not yet as common as it is today. For higher-dimensional arrays, the summed area table provides a data structure based on prefix sums for computing sums of arbitrary rectangular subarrays. This can be a helpful primitive in image convolution operations. Applications Counting sort is an integer sorting algorithm that uses the prefix sum of a histogram of key frequencies to calculate the position of each key in the sorted output array. It runs in linear time for integer keys that are smaller than the number of items, and is frequently used as part of radix sort, a fast algorithm for sorting integers that are less restricted in magnitude. List ranking, the problem of transforming a linked list into an array that represents the same sequence of items, can be viewed as computing a prefix sum on the sequence 1, 1, 1, ... and then mapping each item to the array position given by its prefix sum value; by combining list ranking, prefix sums, and Euler tours, many important problems on trees may be solved by efficient parallel algorithms. An early application of parallel prefix sum algorithms was in the design of binary adders, Boolean circuits that can add two -bit binary numbers. In this application, the sequence of carry bits of the addition can be represented as a scan operation on the sequence of pairs of input bits, using the majority function to combine the previous carry with these two bits. Each bit of the output number can then be found as the exclusive or of two input bits with the corresponding carry bit. By using a circuit that performs the operations of the parallel prefix sum algorithm, it is possible to design an adder that uses logic gates and time steps. In the parallel random access machine model of computing, prefix sums can be used to simulate parallel algorithms that assume the ability for multiple processors to access the same memory cell at the same time, on parallel machines that forbid simultaneous access. By means of a sorting network, a set of parallel memory access requests can be ordered into a sequence such that accesses to the same cell are contiguous within the sequence; scan operations can then be used to determine which of the accesses succeed in writing to their requested cells, and to distribute the results of memory read operations to multiple processors that request the same result. In Guy Blelloch's Ph.D. 
thesis, parallel prefix operations form part of the formalization of the Data parallelism model provided by machines such as the Connection Machine. The Connection Machine CM-1 and CM-2 provided a hypercubic network on which the Algorithm 1 above could be implemented, whereas the CM-5 provided a dedicated network to implement Algorithm 2. In the construction of Gray codes, sequences of binary values with the property that consecutive sequence values differ from each other in a single bit position, a number can be converted into the Gray code value at position of the sequence simply by taking the exclusive or of and (the number formed by shifting right by a single bit position). The reverse operation, decoding a Gray-coded value into a binary number, is more complicated, but can be expressed as the prefix sum of the bits of , where each summation operation within the prefix sum is performed modulo two. A prefix sum of this type may be performed efficiently using the bitwise Boolean operations available on modern computers, by computing the exclusive or of with each of the numbers formed by shifting to the left by a number of bits that is a power of two. Parallel prefix (using multiplication as the underlying associative operation) can also be used to build fast algorithms for parallel polynomial interpolation. In particular, it can be used to compute the divided difference coefficients of the Newton form of the interpolation polynomial. This prefix based approach can also be used to obtain the generalized divided differences for (confluent) Hermite interpolation as well as for parallel algorithms for Vandermonde systems. See also General-purpose computing on graphics processing units Summed-area table References External links Concurrent algorithms Higher-order functions
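The counting sort application mentioned above can be illustrated directly: the histogram of key frequencies is turned into output positions by an exclusive prefix sum. The following is a minimal Python sketch (the function name and example data are chosen for illustration).

def counting_sort(keys, key_range):
    """Sort small non-negative integer keys using an exclusive prefix sum
    of their histogram to compute each key's first position in the output."""
    counts = [0] * key_range
    for k in keys:                       # build the histogram
        counts[k] += 1

    positions = [0] * key_range          # exclusive prefix sum of counts
    running = 0
    for k in range(key_range):
        positions[k] = running
        running += counts[k]

    out = [None] * len(keys)
    for k in keys:                       # place each key at its position
        out[positions[k]] = k
        positions[k] += 1
    return out

print(counting_sort([4, 1, 3, 1, 0, 4, 2], key_range=5))  # [0, 1, 1, 2, 3, 4, 4]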
62269228
https://en.wikipedia.org/wiki/X.1205
X.1205
X.1205 is a technical standard developed by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T). It provides an overview of cybersecurity as well as a taxonomy of cybersecurity threats. References ITU-T recommendations ITU-T X Series Recommendations
33571
https://en.wikipedia.org/wiki/Williams%20tube
Williams tube
The Williams tube, or the Williams–Kilburn tube after inventors Freddie Williams and Tom Kilburn, is an early form of computer memory. It was the first random-access digital storage device, and was used successfully in several early computers. The Williams tube works by displaying a grid of dots on a cathode ray tube (CRT). Due to the way CRTs work, this creates a small charge of static electricity over each dot. The charge at the location of each of the dots is read by a thin metal sheet just in front of the display. Since the display faded over time, it was periodically refreshed. It cycles faster than earlier acoustic delay-line memory, at the speed of the electrons inside the vacuum tube, rather than at the speed of sound. The system was adversely affected by nearby electrical fields, and required constant alignment to keep operational. Williams–Kilburn tubes were used primarily on high-speed computer designs. Williams and Kilburn applied for British patents on 11 December 1946, and 2 October 1947, followed by United States patent applications on 10 December 1947, and 16 May 1949. Working principle The Williams tube depends on an effect called secondary emission that occurs on cathode ray tubes (CRTs). When the electron beam strikes the phosphor that forms the display surface, it normally causes it to illuminate. If the beam energy is above a given threshold (depending on the phosphor mix) it also causes electrons to be struck out of the phosphor. These electrons travel a short distance before being attracted back to the CRT surface and falling on it a short distance away. The overall effect is to cause a slight positive charge in the immediate region of the beam where there is a deficit of electrons, and a slight negative charge around the dot where those electrons land. The resulting charge well remains on the surface of the tube for a fraction of a second while the electrons flow back to their original locations. The lifetime depends on the electrical resistance of the phosphor and the size of the well. The process of creating the charge well is used as the write operation in a computer memory, storing a single binary digit, or bit. A positively charged dot is erased (filling the charge well) by drawing a second dot immediately adjacent to the one to be erased (most systems did this by drawing a short dash starting at the dot position, the extension of the dash erased the charge initially stored at the starting point). This worked because the negative halo around the second dot would fill in the positive center of the first dot. A collection of dots or spaces, often one horizontal row on the display, represents a computer word. Increasing beam energy made the dots bigger and last longer, but required them to be further apart, since nearby dots would erase each other. The beam energy had to be large enough to produce dots with a usable lifetime. This places an upper limit on the memory density, and each Williams tube could typically store about 256 to 2560 bits of data. Because the electron beam is essentially inertia-free and can be moved anywhere on the display, the computer can access any location, making it a random access memory. Typically, the computer would load the address as an X and Y pair into the driver circuitry and then trigger a time base generator that would sweep the selected locations, reading from or writing to the internal registers, normally implemented as flip-flops. Reading the memory took place via a secondary effect caused by the writing operation. 
During the short period when the write takes place, the redistribution of charges in the phosphor creates an electrical current that induces voltage in any nearby conductors. This is read by placing a thin metal sheet just in front of the display side of the CRT. During a read operation, the beam writes to the selected bit locations on the display. Those locations that were previously written to are already depleted of electrons, so no current flows, and no voltage appears on the plate. This allows the computer to determine there was a "1" in that location. If the location had not been written to previously, the write process will create a well and a pulse will be read on the plate, indicating a "0".
Reading a memory location creates a charge well whether or not one was previously there, destroying the original contents of that location, and so any read has to be followed by a rewrite to reinstate the original data. In some systems this was accomplished using a second electron gun inside the CRT that could write to one location while the other was reading the next. Since the display would fade over time, the entire display had to be periodically refreshed using the same basic method. As the data is read and then immediately rewritten, this operation can be carried out by external circuitry while the central processing unit (CPU) is busy carrying out other operations. This refresh operation is similar to the memory refresh cycles of DRAM in modern systems.
Since the refresh process caused the same pattern to continually reappear on the display, there was a need to be able to erase previously written values. This was normally accomplished by writing to the display just beside the original location. The electrons released by this new write would fall into the previously written well, filling it. The original systems produced this effect by writing a small dash, which was easy to accomplish without changing the master timers, simply by producing the write current for a slightly longer period. The resulting pattern was a series of dots and dashes. There was a considerable amount of research on more effective erasing systems, with some systems using out-of-focus beams or complex patterns.
Some Williams tubes were made from radar-type cathode ray tubes with a phosphor coating that made the data visible, while other tubes were purpose-built without such a coating. The presence or absence of this coating had no effect on the operation of the tube, and was of no importance to the operators, since the face of the tube was covered by the pickup plate. If a visible output was needed, a second tube with a phosphor coating but without a pickup plate, connected in parallel with the storage tube, was used as a display device.
Development
Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented, in the Manchester Baby computer, which first successfully ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Tom Kilburn wrote a 17-line program to calculate the highest proper factor of 2^18. Tradition at the university has it that this was the only program Kilburn ever wrote.
Williams tubes tended to become unreliable with age, and most working installations had to be "tuned" by hand. 
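The destructive-read-then-rewrite and periodic-refresh behaviour described above can be modelled in a few lines of code. The following Python sketch is purely illustrative: it represents charge wells as booleans, ignores the analogue physics and the sensing polarity discussed earlier, and all names are invented for the example.

# Toy model of Williams-tube storage: writing a dot leaves a charge well,
# reading is destructive and must be followed by a rewrite, and the whole
# store is periodically refreshed by reading and rewriting every location.

class WilliamsTubeModel:
    def __init__(self, size=256):
        self.wells = [False] * size      # True = charge well present ("1")

    def write(self, addr, bit):
        self.wells[addr] = bool(bit)

    def read(self, addr):
        bit = self.wells[addr]
        self.wells[addr] = False         # reading destroys the stored charge
        self.write(addr, bit)            # so every read is followed by a rewrite
        return bit

    def refresh(self):
        for addr in range(len(self.wells)):
            self.read(addr)              # read + rewrite restores fading dots

store = WilliamsTubeModel()
store.write(10, 1)
store.refresh()
assert store.read(10) == 1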
By contrast, mercury delay-line memory was slower and not truly random access, as the bits were presented serially, which complicated programming. Delay lines also needed hand tuning, but did not age as badly and enjoyed some success in early digital electronic computing despite their data rate, weight, cost, thermal and toxicity problems. The Manchester Mark 1, which used Williams tubes, was successfully commercialised as the Ferranti Mark 1. Some early computers in the United States also used Williams tubes, including the IAS machine (originally designed for Selectron tube memory), the UNIVAC 1103, IBM 701, IBM 702 and the Standards Western Automatic Computer (SWAC). Williams tubes were also used in the Soviet Strela-1 and in the Japan TAC (Tokyo Automatic Computer). See also Atanasoff–Berry computer – Used a type of memory called regenerative capacitor memory Mellon optical memory References Notes Bibliography Further reading External links The Williams Tube Manchester Baby and the birth of Computer Memory RCA 6571 Computer storage tube data sheet Cathode ray tube History of computing hardware History of computing in the United Kingdom Department of Computer Science, University of Manchester Types of RAM Vacuum tubes
67510848
https://en.wikipedia.org/wiki/OpenSearch%20%28software%29
OpenSearch (software)
OpenSearch is a family of software consisting of a search engine (also named OpenSearch), and OpenSearch Dashboards, a data visualization dashboard for that search engine. The software started in 2021 as a fork of Elasticsearch and Kibana, with development led by Amazon Web Services. History The project was created after Elastic NV changed the license of new versions of this software away from the open-source Apache License in favour of the Server Side Public License (SSPL). Amazon intends to build an open community with many stakeholders. (Currently only Amazon Web Services has maintainership status and write access to the source code repositories, though they invite pull requests from anyone.) Other companies such as Logz.io, CrateDB, Red Hat and others have also announced an interest in building or joining a community to continue using and maintaining this open-source software. OpenSearch OpenSearch is a Lucene-based search engine that started as a fork of version 7.10.2 of the Elasticsearch service. It has Elastic NV Intellectual property and telemetry removed, and is licensed under the Apache License, version 2. The maintainers have made a commitment to remain completely compatible with Elasticsearch in its initial versions. OpenSearch Dashboards OpenSearch Dashboards started as a fork of version 7.10.2 of Elastic's Kibana software, and is also under the Apache License, version 2. See also Elasticsearch#Licensing changes References Search engine software Software forks
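Because OpenSearch set out to remain compatible with the Elasticsearch REST API, documents can be indexed and searched over plain HTTP. The sketch below uses Python's requests library against a local node; the host, port, index name and the assumption that security is disabled are placeholders for illustration, not a definitive setup.

# Index a document and run a match query against a local OpenSearch node
# over its Elasticsearch-compatible REST API. Host, port, index name and
# security settings are placeholders; adjust for a real cluster.

import requests

BASE = "http://localhost:9200"

# Index (or overwrite) document 1 in the "articles" index.
doc = {"title": "OpenSearch",
       "body": "A community-driven fork of Elasticsearch 7.10.2"}
requests.put(f"{BASE}/articles/_doc/1", json=doc).raise_for_status()

# Full-text search for documents mentioning "fork".
query = {"query": {"match": {"body": "fork"}}}
resp = requests.post(f"{BASE}/articles/_search", json=query)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["title"])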
6250485
https://en.wikipedia.org/wiki/RATS%20%28software%29
RATS (software)
RATS, an abbreviation of Regression Analysis of Time Series, is a statistical package for time series analysis and econometrics. RATS is developed and sold by Estima, Inc., located in Evanston, IL. History The forerunner of RATS was a FORTRAN program called SPECTRE, written by economist Christopher A. Sims. SPECTRE was designed to overcome some limitations of existing software that affected Sims' research in the 1970s, by providing spectral analysis and also the ability to run long unrestricted distributed lags. The program was then expanded by Tom Doan, then of the Federal Reserve Bank of Minneapolis, who added ARIMA and VAR capabilities and went on to found the consulting firm that owns and distributes RATS software. In its early incarnations, RATS was designed primarily for time series analysis, but as it evolved, it acquired other capabilities. With the advent of personal computers in 1984, RATS went from being a specialty mainframe program to an econometrics package sold to a much broader market. Features RATS is a powerful program, which can perform a range of econometric and statistical operations. The following is a list of the major procedures in econometrics and time series analysis that can be implemented in RATS. All these methods can be used in order to forecast, as well as to conduct data analysis. In addition, RATS can handle cross-sectional and panel data: Linear regression, including stepwise. Regressions with heteroscedasticity and serial-correlation correction. Non-linear least squares. Two-stage least squares, three-stage least squares, and seemingly unrelated regressions. Non-linear systems estimation. Generalized Method of Moments. Maximum likelihood estimation. Simultaneous equation systems, large econometric models. ARIMA (autoregressive, integrated moving average) and transfer function models. Spectral analysis. Kalman filter and State Space models. Neural networks. Regressions with discrete dependent variables, such as logistic regressions. ARCH and GARCH models. Vector autoregressions. RATS can read data from a variety of file formats and database sources, including Excel files, text files, Stata files, and most databases that support SQL and ODBC. It can handle virtually any data frequency, including daily, weekly, intra-day, and panel data. RATS has extensive graphics capabilities. It can generate high-resolution time series graphs, high-resolution X-Y scatter plots, dual-scale graphs, and can export graphs to many formats, including PostScript and Windows Metafile. Mode of operation RATS can be run interactively, or in batch mode. In the interactive mode, the user can run existing programs, or perform new tasks either by using menu-driven "wizards" or by typing in commands directly (or a combination of both approaches). The menu-driven wizards automatically generate the corresponding commands, allowing users to interactively construct complete programs that can be saved and re-run later. New users often prefer the interactive mode, while experienced users will often prefer to run batch jobs. After an interactive session, the code can be saved, and converted to a batch format. One advantage of RATS, as opposed to automated forecasting software, is that it is an actual programming language, which enables the user to design custom models, and change specifications. Recent versions have added report-generation tools designed to facilitate accurate exporting of results for use in papers and other documents. 
See also Comparison of statistical packages – includes information on RATS features References Further reading External links Estima RATS Discussion Forum Econometrics software Regression and curve fitting software Time series software Proprietary commercial software for Linux
972237
https://en.wikipedia.org/wiki/Capgemini
Capgemini
Capgemini SE is a French multinational information technology (IT) services and consulting company. It is headquartered in Paris, France. History Capgemini was founded by Serge Kampf in 1967 as an enterprise management and data processing company. The company was founded as the Société pour la Gestion de l'Entreprise et le Traitement de l'Information (Sogeti). In 1974 Sogeti acquired Gemini Computer Systems, a US company based in New York. In 1975, having made two major acquisitions of CAP (Centre d'Analyse et de Programmation) and Gemini Computer Systems, and following resolution of a dispute with the similarly named CAP UK over the international use of the name 'CAP', Sogeti renamed itself CAP Gemini Sogeti. Cap Gemini Sogeti launched US operations in 1981, following the acquisition of Milwaukee-based DASD Corporation, specializing in data conversion and employing 500 people in 20 branches throughout the US. Following this acquisition, the US operation was known as Cap Gemini DASD. In 1996, the name was simplified to Cap Gemini with a new group logo. All operating companies worldwide were re-branded to operate as Cap Gemini. Ernst & Young Consulting was acquired by Cap Gemini in 2000. It simultaneously integrated Gemini Consulting to form Cap Gemini Ernst & Young. In 2017, Cap Gemini S.A. became Capgemini SE, and its Euronext ticker name similarly changed from CAP GEMINI to CAPGEMINI. In July 2020, Capgemini reported that it had been named a pioneer in Everest Group's assessment of Guidewire IT services, "Guidewire Services PEAK Matrix Assessment 2020 – Setting the Key Phase on Cloud." In June 2021, Capgemini partnered with Sanofi, Orange and Generali to launch Future4care, a European start-up accelerator focused on digital healthcare. Capgemini has over 300,000 employees in over 50 countries, of whom nearly 125,000 are in India. Acquisitions Capgemini has acquired numerous companies. In 2018, it acquired the Philadelphia-based digital customer engagement company LiquidHub to assist Capgemini's digital and cloud growth in North America. Its large backend team is based in India. Earlier Capgemini acquisitions included Kanbay for $1.2 billion and iGate for $4 billion. In 2019, Capgemini acquired Altran, bringing the total employee count to over 250,000. This is the largest acquisition in the company's history. As part of the acquisition, Capgemini acquired frog design, which was integrated into Capgemini Invent; several other recent acquisitions in the design and digital space, including staff from Fahrenheit 212, Idean, and June21, have been merged into frog under this structure. In 2021, Capgemini acquired the Australian firms RXP Services and Acclimation to expand its operations in Australia. Services Capgemini Invent Capgemini Invent was launched in September 2018 as the design and consulting brand of the Capgemini Group. Located in more than 37 offices globally, Capgemini Invent has more than 10,000 employees. Sogeti Sogeti is a wholly owned subsidiary of the Capgemini Group. It is an information technology consulting company specialising in technology and engineering professional services. Capgemini Q-Lab Capgemini Q-Lab is a quantum computing laboratory set up in 2022 in collaboration with IBM, and will be an authorised IBM Quantum Hub. The facility will be available in the UK, Portugal and India, and will work as a research facility to help build quantum applications. The lab will feature IBM's latest 127-qubit quantum processor, Eagle.
Management The Capgemini Group Executive Committee consists of 27 members. On 20 May 2020, Aiman Ezzat was appointed as the new CEO. He has been associated with Capgemini for more than 20 years. From 2005 to 2007, Ezzat was Capgemini's Deputy Director of Strategy. In November 2007, he was appointed COO of the Financial Services Global Business Unit, and he served as its Global Head from December 2008 to 2012. From January 2018 to May 2020, he served as Chief Operating Officer, and prior to that as Chief Financial Officer from December 2012 to 2018. From 2012 to 2020, Paul Hermelin served as the Group Chairman and CEO. He joined Capgemini in 1993 and was appointed as its CEO in 2002. In May 2012, Hermelin became chairman and CEO of the Capgemini Group. He succeeded Serge Kampf, who served as the Vice Chairman of the Board until his death on 15 March 2016. In May 2020, Aiman Ezzat succeeded Paul Hermelin as CEO, while Paul Hermelin continues as Chairman of the Board of Directors. References External links French companies established in 1967 Consulting firms established in 1967 Technology companies established in 1967 Information technology consulting firms of France Management consulting firms of France International information technology consulting firms International management consulting firms Companies based in Paris Multinational companies headquartered in France CAC 40 Companies listed on Euronext Paris Societates Europaeae
11039984
https://en.wikipedia.org/wiki/H.264/MPEG-4%20AVC%20products%20and%20implementations
H.264/MPEG-4 AVC products and implementations
The following is a list of H.264/MPEG-4 AVC products and implementations. Prominent software implementations Adobe Systems supports the playback of H.264 in Adobe Flash 9.x. In Adobe Premiere Elements 7 and Premiere Pro CS4 (both shipped in 2008), both source video and video export (to Blu-ray Disc) support H.264. Apple integrated H.264 support into Mac OS X v10.4 "Tiger" and QuickTime 7. The encoder conforms to Main Profile and the decoder supports Constrained Baseline and most of Main Profile. Additionally, iChat and FaceTime use H.264, as do many other Apple applications, such as Compressor. BT Group offers a modular implementation of H.264. Written in C++, it has been ported to various platforms from PCs to mobile phones. All 4:2:0 profiles (Baseline/Main/High) are supported. Elecard Group develops software codecs as well as DSP codecs and various applications for both online and offline decoding and encoding. Intel provides various licensing options on its implementation of an H.264 (among others) encoder/decoder as part of its Integrated Performance Primitives package, which includes an evaluation source code download. MainConcept H.264/AVC SDK offers encoding and decoding in all profiles and levels supported by the standard. MainConcept also offers a stand-alone encoding app. Microsoft Windows 7, in the Home Premium and higher editions, includes a Media Foundation-based H.264 encoder with Baseline profile level 3 and Main profile support. Transcoding (encoding) support is not exposed through any built-in Windows application, but the encoder is included as a Media Foundation Transform (MFT). Windows 7 also includes a Media Foundation-based H.264 decoder with Baseline, Main, and High profile support up to level 5.1, a DirectShow filter for H.264 decoding, an MPEG-4 file source to read MP4, M4A, M4V, MP4V, MOV and 3GP container formats, and an MPEG-4 file sink to output to MP4 format. On2 Technologies provides software implementations of an H.264 Baseline encoder and decoder in its embedded (Hantro) product family. The codec is available optimized for ARM9, ARM11 and Cortex A8. Kulabyte provides live video encoding and streaming software for x86 that supports up to 1080p resolution full motion H.264/AVC video using the MainConcept "High" profile. Sorenson Media offers several implementations of H.264 for Sorenson Squeeze users to choose from. These versions include Sorenson Media's legacy H.264 codec, Apple's implementation, MainConcept's H.264, and the first commercial release of x264. x264 is a GPL-licensed H.264 encoder that is used in the free VideoLAN and MEncoder transcoding applications and, as of December 2005, remained the only reasonably complete open source and free software implementation of the standard, with support for Main Profile and High Profile. A Video for Windows build is still available. x264 won an independent video codec comparison organized by Doom9.org in December 2005. The LGPL-licensed libavcodec by FFmpeg includes an H.264 decoder. It can decode Main Profile and High Profile video. It is used in many programs, such as the free VLC media player and MPlayer multimedia players. FFmpeg can also optionally (set at build time) link to the x264 library to encode H.264. CoreAVC by CoreCodec is a highly optimized commercial H.264 decoder. According to independent tests by people on the Doom9.org forums, it was the fastest software decoder as of June 2006.
The standard version supports Baseline Profile, Main Profile and High Profile, except interlaced video. The professional edition supports both PAFF and MBAFF interlaced video beginning from version 1.1. The professional edition also supports speedups on SMP-capable systems, and GPU acceleration using Nvidia's CUDA architecture. Nero Digital, co-developed by Nero AG and Ateme, includes an H.264 encoder and decoder (as of September 2005, corresponding to Main Profile, except interlaced video support), along with other MPEG-4 compatible technologies. It was updated in 2006 to support High Profile. XBMC Media Center and its derivatives, like Boxee and Plex. Blu-code is a professional H.264 encoder provided by Sony, aimed at Blu-ray-compliant HD production. OpenH264 is an open-source H.264 encoder and decoder implementation by Cisco, made available in December 2013. Prominent hardware implementations Decoding Several companies are mass-producing custom chips capable of decoding H.264/AVC video. Chips and cores capable of real-time decoding at HDTV picture resolutions include these: Broadcom BCM7411, BCM7401, BCM7400, BCM7403, BCM7405, BCM7325, BCM7335, BCM7043, BCM7412 Horizon Semiconductors provides a family (Hz3120, Hz3220, Hz4120, Hz4220, Hz7220) of multi-standard HD decoder SoC solutions for Cable, Satellite & IPTV set-top boxes, HD DVD/Blu-ray boxes and DTV. Conexant CX2418X NXP Semiconductors PNX1702, PNX1005, PNX1004, PNX8950, PNX8935 based on TriMedia Technology On2 Technologies provides multi-format hardware decoder IP cores that support up to 1080p resolution full motion H.264/AVC video. Sigma Designs SMP8654, SMP8634, EM8622L, and EM8624L Realtek RTD1073, RTD1283 STMicroelectronics STB7100, STB7109, NOMADIK (STn 8800/8810/8815/8820 series) Texas Instruments TMS320DM365, TMS320DM642, TMS320DM643x, and TMS320DM644x DSPs based on DaVinci Technology (except for 1080i/p) Imagination Technologies Ltd. offers licensable IP cores for SoC development; its VXD-370 HD decoder supports H.264 with Baseline, Main and High Profile support up to Level 4.1 (50 Mbit/s), and also decodes VC-1 (WMV9), MPEG-4, MPEG-2 and JPEG. Chips&Media has developed multi-standard video silicon IP covering the full lineup of video standards up to Full HD (1920×1080) resolution; Allegro proved Chips&Media's hardwired H.264 / MPEG-4 AVC codec IP complete (December 27, 2005). Such chips will allow widespread deployment of low-cost devices capable of playing H.264/AVC video at standard-definition and high-definition television resolutions. Many other hardware implementations are deployed in various markets, ranging from inexpensive consumer electronics to real-time FPGA-based encoders for broadcast. A few of the more familiar hardware product offerings for H.264/AVC include these: ATI Technologies' graphics processing units (GPUs), beginning with the Radeon X1000 series, feature hardware acceleration of H.264 decoding starting with the Catalyst 5.13 drivers; see ATI Avivo. Beyonwiz has products offering the full advanced functionality of a dual HD PVR. Google's Android platform for mobile devices natively supports H.264 (based on PacketVideo's OpenCORE). On the T-Mobile G1 (HTC Dream), a Qualcomm MSM7200 CPU provides hardware decoding. Since about 2012, Android devices have supported High profile. NVIDIA has released drivers for hardware H.264 decoding on its GeForce 8 Series, its GeForce 7 Series and some GeForce 6 Series GPUs; see Nvidia PureVideo.
Apple's 5th Generation iPod can play H.264 Baseline Profile up to Level 3 with support for bit rates up to 1.5 Mbit/s, image resolutions up to 640×480, and frame rates up to 30 frames per second. This device also plays MPEG-4 Part 2 Simple Profile video, up to 2.5 Mbit/s, 640×480 pixels, 30 frames per second. Additionally, video of up to 720×480 (NTSC DVD) encoded in the iPod-compliant H.264 profile may be viewed on the device if transferred with an iTunes alternative. Playback at full DVD resolution does not require any firmware modification to the iPod. All iOS devices can play H.264 videos. iOS devices released in 2010 added support for Main Profile, while those released a year later added support for High Profile. The Sony PlayStation Portable features hardware decoding of H.264 video from UMD discs and Memory Stick Pro Duo flash cards. The device supports Main Profile up to Level 3 with bit rates up to 10 Mbit/s from the Memory Stick and, as of firmware version 3.30, supports video files up to a resolution of 720x480. The Microsoft Xbox 360 features a separate HD DVD drive that plugs into the console via USB and can play back HD DVDs, including HD DVDs using the H.264 codec. The Microsoft Xbox 360 received stand-alone H.264 decoding in the Spring Dashboard Update released on May 7, 2007. The Xbox 360 will play H.264 video files up to 10 Mbit/s peak in 1080p (H.264 Level 4.1) High profile and audio up to 2-channel AAC LC. The Symbian S60 OS supports H.264. Certain models of LG, Motorola, Nokia, Samsung and Sony Ericsson mobile phones can play back H.264.
Imagination Technologies provides multi-format, multi-stream IP cores that will support 1080P60 H.264 [email protected] encoding, also at high frame rates to 1000 frame/s and beyond Samsung Semiconductor produces C110 SoC used, among the others, on Samsung Galaxy S series of smartphones. Integrated Multi Format Codec (MFC) provides encoding and decoding of MPEG-4/H.263/H.264 up to 1080p@30fps and decoding of MPEG-2/VC1/MPEG-4 video up to 1080p@30fps. Alma Technologies provides ultra low latency H.264 Encoder IP cores since 2011 capable of encoding Full HD video even on low cost FPGA devices. Standalone hardware implementations without the need of CPU. Cradle Technologies products MDSP provides encoding of MPEG-4/H.264 up to 4D1@30fps. Matrox provides hardware using its MAX chip for encoding MPEG-4/H.264 up to 1080p60 [email protected]. The MAX chip is in a rack-mountable video interface as well as on a PCIe card for situations not requiring the video deck interface. VISENGI launched on mid-2014 the highest throughput H.264 hardware encoder IP core, at 5.3 pixels encoded per clock cycle, allowing 4K UltraHD at 60fps on most low-cost FPGAs, and 8K UHDTV on mid-level ones. It features two versions: a High 4:4:4 Predictive Profile capable encoder and a CAVLC 4:4:4 Intra Profile one. Blackmagic Design launched, in 2011, a standalone H.264 hardware encoder that can encode in real time various bit rates and profiles up to 1080p60. Sources include SDI/HDSDI, YUV Component video and HDMI. It can handle up to two channels of analog or digital audio. Transcoding Some modern video chips, GPUs, and motherboards from AMD (Avivo, UVD, VCE), Intel (Quick Sync) and nVidia (NVENC) support transcoding. Ambarella offers a single-chip 1080p60 transcoder (A6) for broadcast head-ends and high-density transcoding applications. Horizon Semiconductors provides a multi-standard native 1080/60p Transcoder (Hz4010) for the triple-play/quad-play Cable, Satellite and IPTV Set-Top Box, Digital Video Recorder and Home Media Center, Blu-ray/HD DVD player and recorder, iVDRs, place shifting boxes and location-free TV. Magnum Semiconductor provides single-chip xcoders for the consumer market with multiple codec (e.g. AVC/VC1/MPEG2 to AVC/VC1/MPEG2), resolution and bitrate support. The company also provides professional multichip xcoders for distribution and contribution markets. Telestream provides software transcoding solutions including their products FlipFactory and Episode, which includes bi-directional transcoding support for H.264/AVC, to and from over 120 different video compression formats and video file formats. ViXS Systems has developed several transcoders capable of H.264 to MPEG-2 transcoding. These transcoders are implemented in embedded PVR TV, PC boards, Network Attached Storage (NAS), remote TV (video over the internet) and other storage devices (such as transcoding to increase storage in DVD-R and HD-DVD solutions). See also H.264/MPEG-4 AVC References MPEG-4 Video codecs
35810951
https://en.wikipedia.org/wiki/INK%20%28operating%20system%29
INK (operating system)
INK (for I/O Node Kernel) is the operating system that runs on the input/output (I/O) nodes of the IBM Blue Gene supercomputer. INK is a Linux derivative. See also Compute Node Linux Timeline of operating systems Rocks Cluster Distribution Cray Linux Environment References Linux kernel variant Supercomputer operating systems
13951061
https://en.wikipedia.org/wiki/Nintendo%20Software%20Planning%20%26%20Development
Nintendo Software Planning & Development
Nintendo Software Planning & Development Division, commonly abbreviated as Nintendo SPD, was a Japanese research, planning and development division housed inside the Nintendo Development Center in Kyoto, Japan. The division had two departments: the Software Planning & Development Department, which primarily co-produced games with external developers; and the Software Development & Design Department, which primarily developed experimental and system software. The division was created during a corporate restructuring in September 2003, with the abolition of the Nintendo R&D1 and Nintendo R&D2 departments. The group had the task of independently developing innovative games, assisting other development teams on projects, and managing overseas production of first-party franchises. Both the SPD and SDD departments were divided into separate groups, which worked concurrently on different projects. In September 2015, Nintendo SPD merged with Nintendo's other software development division, Entertainment Analysis & Development (EAD), becoming Nintendo Entertainment Planning & Development. History In 2004, then-Nintendo president Satoru Iwata created the Software Planning & Development division, appointing himself as its general manager. The goal of the newly created division was to focus on co-producing and supervising external second-party video game development, freeing the Entertainment Analysis & Development (EAD) division and its general manager, Shigeru Miyamoto, to focus on internal development. Although that was the division's primary focus, it also went on to develop some video game titles internally. On June 27, 2013, deputy general manager Shinya Takahashi replaced Satoru Iwata as general manager of the division, gaining a seat on Nintendo's board of directors in the process. A year later, on June 18, 2014, all of Nintendo's internal research and development divisions, including the SPD division, were moved from Nintendo's headquarters in Kyoto to the newly built Nintendo Development Center, just 300 meters from the old building. By centralizing all of its developers in the new building, Nintendo hoped they would interact more deeply with each other, regardless of which division and field they were working in, creating a synergy between hardware and software development. On September 16, 2015, the division was merged with Nintendo's internal software development division, Entertainment Analysis & Development, becoming Nintendo Entertainment Planning & Development (EPD). As Shigeru Miyamoto retired as general manager of the EAD division and went on to become a Creative Fellow, former SPD general manager Shinya Takahashi took his place as general manager of the newly created EPD division, thus supervising all video games developed at Nintendo. The new division took on all of its predecessors' roles, both developing video games internally and co-producing them with external developers. Structure The General Manager of the Nintendo Software Planning & Development Division was Shinya Takahashi, assisted by Keizo Kato, the Assistant Manager, and Kensuke Tanabe, the Executive Officer. The division was split into two departments: the Software Planning & Development Department, split into four separate groups and supervised by Deputy Manager Yoshio Sakamoto; and the Software Development & Design Department, split into three separate groups and supervised by Deputy Manager Masaru Nishita. All of the groups worked concurrently on different projects.
Software Planning & Development Department Production Group No. 1 The Production Group No. 1's primary focus was the development and production of video game software and software applications for Nintendo home and handheld consoles, as well as software for peripherals developed for those consoles, both internally and in cooperation with second-party developers. The group manager and main producer was Nintendo veteran Yoshio Sakamoto. The group was responsible for developing and producing games in the WarioWare, Rhythm Heaven, Card Hero, Tomodachi and mainline Metroid series. Notes Production Group No. 2 The Production Group No. 2 was led by manager and video game producer Hitoshi Yamagami. The group was primarily responsible for co-producing and supervising video games published by Nintendo and developed by third-party developers from Japan. It produced and supervised games in the Pokémon, F-Zero, The Legendary Starfy, Fire Emblem, Dr. Mario, Endless Ocean, Fossil Fighters, Style Savvy and Xenoblade Chronicles series. In addition to co-producing games, the group also supervised the development of Drill Dozer, developed by Game Freak. List of video games co-produced by the Nintendo SPD Production Group No. 2, by year (title, with series, genre and platform where given):
2004: Densetsu no Stafy 3 (The Legendary Starfy, platform, Game Boy Advance); Pokémon Emerald (Pokémon, role-playing, Game Boy Advance); Fire Emblem: The Sacred Stones (Fire Emblem, tactical role-playing, Game Boy Advance); F-Zero: Climax (F-Zero, racing, Game Boy Advance); Pokémon Dash (Pokémon, racing, Nintendo DS)
2005: Yakuman DS (Yakuman, puzzle, Nintendo DS); Fire Emblem: Path of Radiance (Fire Emblem, tactical role-playing, GameCube); Nonono Puzzle Chalien (puzzle, Game Boy Advance); Advance Wars: Dual Strike (Wars, turn-based tactics, Nintendo DS); Dance Dance Revolution: Mario Mix (music/exergaming, GameCube); Jump Super Stars (fighting, Nintendo DS); Dr. Mario & Puzzle League (puzzle, Game Boy Advance); Pokémon Trozei! (Pokémon, puzzle, Nintendo DS); Super Princess Peach (Mario, platform, Nintendo DS); Pokémon Mystery Dungeon: Blue Rescue Team and Red Rescue Team (Pokémon, roguelike, Nintendo DS)
2006: Tetris DS (Tetris, puzzle, Nintendo DS); Densetsu no Stafy 4 (The Legendary Starfy, platform, Nintendo DS); Mawashite Tsunageru Touch Panic (puzzle, Nintendo DS); Project Hacker: Kakusei (graphic adventure, Nintendo DS); Chōsōjū Mecha MG (fighting, Nintendo DS); Wi-Fi Taiō Yakuman DS (Yakuman, puzzle, Nintendo DS); Pokémon Diamond and Pearl (Pokémon, role-playing, Nintendo DS); Jump Ultimate Stars (fighting, Nintendo DS)
2007: Wario: Master of Disguise (Wario, platform, Nintendo DS); Picross DS (Picross, puzzle, Nintendo DS); Fire Emblem: Radiant Dawn (Fire Emblem, tactical role-playing, Wii); Planet Puzzle League (Puzzle League, puzzle, Nintendo DS); Kurikin Nano Island Story (role-playing, Nintendo DS); Brain Age 2: More Training in Minutes a Day (Brain Age, edutainment, Nintendo DS); Ganbaru Watashi no Kakei Diary (digital diary, Nintendo DS); Endless Ocean (Endless Ocean, adventure/simulation, Wii); Zekkyō Senshi Sakeburein (beat 'em up, Nintendo DS); Pokémon Mystery Dungeon: Explorers of Time and Explorers of Darkness (Pokémon, roguelike, Nintendo DS); ASH: Archaic Sealed Heat (tactical role-playing, Nintendo DS); DS Bungaku Zenshuu (e-reader, Nintendo DS)
2008: Wii Chess (Wii, chess, Wii); Advance Wars: Days of Ruin (Wars, turn-based tactics, Nintendo DS); Dr. Mario Online Rx (Dr. Mario, puzzle, Wii); Fossil Fighters (Fossil Fighters, role-playing, Nintendo DS); Yakuman Wii: Ide Yosuke no Kenkou Mahjong (Yakuman, puzzle, Wii); The Legendary Starfy (The Legendary Starfy, platform, Nintendo DS); Tsuushin Taikyoku: Hayazashi Shogi Sandan (puzzle, Wii); Tsuushin Taikyoku: Igo Dojo 2700-Mon (puzzle, Wii); Fire Emblem: Shadow Dragon (Fire Emblem, tactical role-playing, Nintendo DS); Pokémon Platinum (Pokémon, role-playing, Nintendo DS); Disaster: Day of Crisis (action-adventure, Wii); Style Savvy (Style Savvy, simulation, Nintendo DS); Dr. Mario Express / A Little Bit of... Dr. Mario (Dr. Mario, puzzle, Nintendo DSi); 100 Classic Book Collection (e-reader, Nintendo DS)
2009: Puzzle League Express (Puzzle League, puzzle, Nintendo DSi); Yōsuke Ide no Kenkō Mahjong DSi (puzzle, Nintendo DSi); Pokémon Mystery Dungeon: Explorers of Sky (Pokémon, Nintendo DS); Sparkle Snapshots; Pokémon Rumble (Pokémon, Wii); Pokémon Mystery Dungeon: Adventure Team (Pokémon, Wii); Ganbaru Watashi no Osaifu Ouendan (Nintendo DSi); Metal Torrent (shooter, Nintendo DSi); Pokémon HeartGold and SoulSilver (Pokémon, role-playing, Nintendo DS); Endless Ocean: Blue World (Endless Ocean, adventure/simulation, Wii); Sin and Punishment: Star Successor (Sin and Punishment, shoot 'em up, Wii); PokéPark Wii: Pikachu's Adventure (Pokémon, Wii)
2010: Zangeki no Reginleiv (Wii); Xenoblade Chronicles (Xenoblade Chronicles, action role-playing, Wii); Fire Emblem: New Mystery of the Emblem (Fire Emblem, tactical role-playing, Nintendo DS); ThruSpace; Pokémon Black and Pokémon White (Pokémon, role-playing, Nintendo DS); Fossil Fighters: Champions (Fossil Fighters, role-playing, Nintendo DS)
2011: The Last Story (action role-playing, Wii); Learn with Pokémon: Typing Adventure (Pokémon, Nintendo DS); Pandora's Tower (action role-playing, Wii); Pokédex 3D (Pokémon, Nintendo 3DS); Ketzal's Corridors (Nintendo 3DS); Kirby's Return to Dream Land (Kirby, Wii); PokéPark 2: Wonders Beyond (Pokémon, Wii); 3D Classics: Excitebike (3D Classics, racing, Nintendo 3DS); 3D Classics: Xevious (3D Classics, shoot 'em up, Nintendo 3DS); 3D Classics: Urban Champion (3D Classics, fighting, Nintendo 3DS); 3D Classics: Twinbee (3D Classics, shoot 'em up, Nintendo 3DS); 3D Classics: Kirby's Adventure (3D Classics, platform/action, Nintendo 3DS); 3D Classics: Kid Icarus (3D Classics, action/platform, Nintendo 3DS)
2012: Fire Emblem: Awakening (Fire Emblem, tactical role-playing, Nintendo 3DS); Pokémon Black 2 and Pokémon White 2 (Pokémon, role-playing, Nintendo DS); Pokémon Dream Radar (Pokémon, Nintendo 3DS); Pokédex 3D Pro (Pokémon, Nintendo 3DS); HarmoKnight (rhythm, Nintendo 3DS); Style Savvy: Trendsetters (Style Savvy, simulation, Nintendo 3DS); Wii Karaoke U (Wii U); Pokémon Mystery Dungeon: Gates to Infinity (Pokémon, Nintendo 3DS)
2013: Pokémon Rumble U (Pokémon, action role-playing, Wii U); The Wonderful 101 (action, Wii U); Pokémon X and Pokémon Y (Pokémon, role-playing, Nintendo 3DS); Dr. Luigi (puzzle, Wii U)
2014: Kirby: Triple Deluxe (Kirby, Nintendo 3DS); Fossil Fighters: Frontier (role-playing, Nintendo 3DS); Pokémon Battle Trozei / Pokémon Link: Battle! (Pokémon, Nintendo 3DS); Pokémon Art Academy (Nintendo 3DS); Dedede's Drum Dash Deluxe (Kirby, Nintendo 3DS); Kirby Fighters Deluxe (Kirby, Nintendo 3DS); Bayonetta (Bayonetta, Wii U); Bayonetta 2 (Bayonetta, Wii U); Pokémon Omega Ruby and Alpha Sapphire (Pokémon, role-playing, Nintendo 3DS)
2015: Pokémon Shuffle (Pokémon, puzzle, Nintendo 3DS); Code Name: S.T.E.A.M. (turn-based strategy, Nintendo 3DS); Pokémon Rumble World (Pokémon, action role-playing, Nintendo 3DS); Style Savvy: Fashion Forward (Style Savvy, simulation, Nintendo 3DS); Xenoblade Chronicles X (Xenoblade Chronicles, action role-playing, Wii U); Fire Emblem Fates (Fire Emblem, tactical role-playing, Nintendo 3DS); Devil's Third (action-adventure/hack and slash/shooter, Wii U); Real Dasshutsu Game x Nintendo 3DS (Nintendo 3DS)
Notes Production Group No. 3 The Production Group No. 3 was led by producer Kensuke Tanabe and was responsible for overseeing the development of titles from the Metroid Prime, Battalion Wars, Super Mario Strikers, Mario vs. Donkey Kong, Excite, Paper Mario, Fluidity, and Donkey Kong Country series. Notes Production Group No. 4 Group Manager: Hiroshi Sato Production Group No. 4 was responsible for overseeing the development of titles from the Mario Party, Donkey Kong, and Wii Party series. Notes Co-production with Eighting. Co-production with NDcube. Co-production with Cing. Co-production with Hudson. Co-production with INiS. Co-production with Camelot. Co-production with Paon. Co-production with AlphaDream. Co-production with Project Sora and Sora Ltd. Co-production with Bandai Namco Studios and Sora Ltd. Co-production with Good-Feel. Co-production with Arzest. Software Development & Design Department Deputy Manager: Masaru Nishita Nintendo Software Development & Design was an experimental software development team assembled by Nintendo Co., Ltd. president Satoru Iwata. The team was originally assembled as a System Service Task Force that would develop all the unique internal system software for the Nintendo DS and Nintendo Wii. The team was responsible for all the additional Wii Channels, the Nintendo DSi system software and, more recently, the Nintendo 3DS system software. Nintendo SDD also went on to develop several innovative retail games. The philosophy behind development was to think outside the box and create unique software in a timely manner with smaller development resources. The development staff was composed of Koichi Kawamoto, who was the original programmer of WarioWare, and Shinya Takahashi, who was a longtime designer at Nintendo EAD. The department was also responsible for developing several subsequent WiiWare and DSiWare titles. Software Development Group Manager/producer: Kiyoshi Mizuki The Software Development Group was responsible for developing software from the Jam with the Band and Brain Age series, among additional Touch! Generations titles with partner developers. Co-production with Namco Bandai Games. Notes References Nintendo divisions and subsidiaries Japanese companies established in 2003 Video game companies established in 2003 Video game companies disestablished in 2015 Defunct video game companies of Japan Japanese companies disestablished in 2015
5128033
https://en.wikipedia.org/wiki/FreeBSD%20Ports
FreeBSD Ports
The FreeBSD Ports collection is a package management system for the FreeBSD operating system, providing an easy and consistent way of installing software packages. As of February 2020, there are over 38,487 ports available in the collection. It has also been adopted by NetBSD as the basis of its pkgsrc system. Installing from source The ports collection uses Makefiles arranged in a directory hierarchy so that software can be built, installed and uninstalled with the make command. When installing an application, very little (if any) user intervention is required after issuing an initial command such as make install or make install clean in the ports directory of the desired application. In most cases the software is automatically downloaded from the Internet, patched and configured if necessary, then compiled, installed and registered in the package database. If the new port depends on other applications or libraries, these are installed automatically beforehand. Most ports are already configured with default options which have been deemed generally appropriate for most users. However, these configuration options (called knobs) can be changed before installation using the make config command, which brings up a text-based interface that allows the user to select the desired options. Historically, each port (or software package) has been maintained by an individual port maintainer who is responsible for keeping the port up to date and providing general support. Today, many ports are maintained by special task forces or sub-projects, each with a dedicated mailing list (e.g. [email protected], [email protected], etc.), while unmaintained ports are assigned to the generic group [email protected]. In general, anyone may become a port maintainer by contributing their favorite software to the collection. One may also choose to maintain an existing port with no active maintainer. Packages Precompiled (binary) ports are called packages. A package can be created from the corresponding port with the make package command; pre-built packages are also available for download from FreeBSD-hosted package repositories. A user can install a package by passing the package name to the pkg install command. This downloads the appropriate package for the installed FreeBSD release version, then installs the application, including any software dependencies it may have. By default, packages are downloaded from the main FreeBSD package repository (pkg.freebsd.org). If problems appear after updating packages, however, older package versions cannot simply be reinstalled, because the repository does not serve indexes of the older package subdirectories; in that case a user must upgrade the operating system to the latest release and install the latest packages. FreeBSD maintains a build farm called the pointyhat cluster in which all packages for all supported architectures and major releases are built. The build logs and known errors for all ports built into packages through the pointyhat cluster are available in a database, and weekly build logs are also available through mailing list archives. These pre-compiled packages are separated into categories by the architectures for which they are available. Packages are further separated into several "release" directories, one for each current production release built from the ports collection and shipped with the release. These production release directories are never updated. There are also stable and current directories for several major release branches.
These are updated more or less weekly. In most cases a package created for an older version of FreeBSD can be installed and used on a newer system without difficulty, since binary backward compatibility across major releases is enabled by default. pkg, a packaging system for binary packages, replaced the older package management tools in FreeBSD 10. History Jordan Hubbard committed his port make macros to the FreeBSD CVS repository on August 21, 1994. His package install suite Makefile had been committed a year earlier (August 26, 1993). The core ports framework was at first maintained by Hubbard along with Satoshi Asami for several years. The Ports Management Team was later formed to handle this task. NetBSD's pkgsrc and OpenBSD's ports collection trace their roots to FreeBSD. DPorts Since release 3.6, the DragonFly BSD project has used FreeBSD Ports as the base for its own DPorts ports collection. John Marino of the DragonFly BSD project created the DeltaPorts repository – a collection of patches and files that overlay and modify FreeBSD Ports in order to generate DPorts. See also MacPorts References External links Official FreeBSD Ports web page FreshPorts - website that tracks port updates Port-Tags - Project to add tags to the ports collection Installing Applications: Packages and Ports from the FreeBSD Handbook (Chapter 4) Free package management systems FreeBSD
24750268
https://en.wikipedia.org/wiki/Polymake
Polymake
Polymake is software for the algorithmic treatment of convex polyhedra. Although primarily a tool to study the combinatorics and the geometry of convex polytopes and polyhedra, it is by now also capable of dealing with simplicial complexes, matroids, polyhedral fans, graphs, tropical objects, toric varieties and other objects. Polymake has been cited in over 100 recent articles indexed by Zentralblatt MATH, as can be seen from its entry in the swMATH database. Special Features modular Polymake was originally designed as a research tool for studying aspects of polytopes. As such, polymake uses many third-party software packages for specialized computations, thereby providing a common interface and bridge between different tools. A user can easily (and unknowingly) switch between using different software packages in the process of computing properties of a polytope. rule based computation Polymake internally uses a server-client model where the server holds information about each object (e.g., a polytope) and the client sends requests to compute properties. The server has the job of determining how to complete each request from information already known about each object, using a rule-based system. For example, there are many rules on how to compute the facets of a polytope. Facets can be computed from a vertex description of the polytope, or from a (possibly redundant) inequality description. Polymake builds a dependency graph outlining the steps to process each request and selects the best path via a Dijkstra-type algorithm, as illustrated by the sketch below. scripting Polymake can be used within a Perl script. Moreover, users can extend polymake and define new objects, properties, rules for computing properties, and algorithms. Polymake applications Polymake divides its collection of functions and objects into 10 different groups called applications. They behave like C++ namespaces. The polytope application was the first one developed and it is the largest. Common application This application contains many "helper" functions used in other applications. Fan application The Fan application contains functions for polyhedral complexes (which generalize simplicial complexes), planar drawings of 3-polytopes, polyhedral fans, and subdivisions of points or vectors. Fulton application This application deals with normal toric varieties. The name of this application comes from the book "Introduction to Toric Varieties" by William Fulton. Graph application The graph application is for manipulating directed and undirected graphs. Some of the standard graph functions exist (such as adjacency and cliques), together with combinatorial functions like computing the lattice represented by a directed acyclic graph. Group application The group application focuses on finite permutation groups. Basic properties of a group can be calculated, such as characters and conjugacy classes. Combined with a polytope, this application can compute properties associated with a group acting on a polytope by permuting the polytope's vertices, facets, or coordinates. Ideal application The ideal application computes a few properties of polynomial ideals: Gröbner bases, Hilbert polynomials, and radicals. Matroid application The matroid class can compute all the standard properties of a matroid, such as bases and circuits. This application can also compute more advanced properties, such as the Tutte polynomial of a matroid and realizations of the matroid by a polytope.
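The rule-based scheduling described under Special Features can be pictured with a small toy model. The following Python sketch is purely illustrative: it is not polymake's actual API or rule format (polymake's rules live in Perl rule files), and the rule costs here are hypothetical. It only shows the general idea of deriving a target property from known ones by picking the cheapest chain of rules with a Dijkstra-style search:

```python
import heapq
from itertools import count

# Toy rule base: each rule derives one property from a set of known properties,
# with a hypothetical cost modelling how expensive that computation is.
RULES = [
    ({"VERTICES"}, "FACETS", 10),        # e.g. a convex-hull computation
    ({"INEQUALITIES"}, "FACETS", 4),     # e.g. redundancy elimination
    ({"FACETS"}, "F_VECTOR", 2),
    ({"VERTICES"}, "F_VECTOR", 20),      # direct but expensive alternative
]

def resolve(known, target):
    """Find the cheapest chain of rules that derives `target` from `known`,
    using a Dijkstra-style search over sets of already-known properties."""
    start = frozenset(known)
    tie = count()                        # tie-breaker keeps heap entries comparable
    queue = [(0, next(tie), start, [])]
    best = {start: 0}
    while queue:
        cost, _, props, path = heapq.heappop(queue)
        if target in props:
            return cost, path
        for inputs, output, rule_cost in RULES:
            if inputs <= props and output not in props:
                nxt = props | {output}
                new_cost = cost + rule_cost
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(queue, (new_cost, next(tie), nxt, path + [output]))
    raise ValueError(f"cannot derive {target} from {sorted(known)}")

# Deriving the f-vector from a vertex description goes via the facets here,
# because the two-step chain (10 + 2) is cheaper than the direct rule (20).
print(resolve({"VERTICES"}, "F_VECTOR"))   # -> (12, ['FACETS', 'F_VECTOR'])
```

polymake's real scheduler works in this spirit, with the actual rules and their costs coming from its rule files rather than a hard-coded list.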
Polytope application Within the polytope application, there are over 230 functions or calculations that can be done with a polytope. These functions range in complexity from simply calculating basic information about a polytope (e.g., number of vertices, number of facets, tests for simplicial polytopes, and converting a vertex description to an inequality description) to combinatorial or algebraic properties (e.g., H-vector, Ehrhart polynomial, Hilbert basis, and Schlegel diagrams). There are also many visualization options. Topaz application The Topaz application contains all the functions relating to abstract simplicial complexes. Many advanced topological calculations over simplicial complexes can be performed, such as homology groups, orientation, and the fundamental group. There is also a collection of combinatorial properties that can be computed, such as shellings and Hasse diagrams. Tropical application The tropical application contains functions for exploring tropical geometry; in particular, tropical hypersurfaces and tropical cones. Development History Polymake version 1.0 first appeared in the proceedings of the DMV-Seminar "Polytopes and Optimization" held in Oberwolfach, November 1997. Version 1.0 only contained the polytope application, though the system of "applications" was not yet developed. Version 2.0 was released sometime in 2003, and version 3.0 was released in 2016. Software packages Used within polymake Below is a list of third-party software packages that polymake can interface with as of version 3.0. Users are also able to write new rule files for interfacing with any software package. Note that there is some redundancy in this list (e.g., a few different packages can be used for finding the convex hull of a polytope). Because polymake uses rule files and a dependency graph for computing properties, most of these software packages are optional. However, some become necessary for specialized computations.
4ti2: software package for algebraic, geometric and combinatorial problems on linear spaces a-tint: tropical intersection theory azove: enumeration of 0/1 vertices cdd: double description method for converting between an inequality and vertex description of a polytope Geomview: interactive 3D viewing program Gfan: Gröbner fans and tropical varieties GraphViz: graph visualization software LattE (Lattice point Enumeration): counting lattice points inside polytopes and integration over polytopes libnormaliz: affine monoids, vector configurations, lattice polytopes, and rational cones lrs: implementation of the reverse search algorithm for vertex enumeration and convex hull problems nauty: automorphism groups of graphs permlib: set stabilizer and in-orbit computations PORTA: enumerate lattice points of a polytope ppl: Parma Polyhedra Library qhull: Quickhull algorithm for convex hulls singular: computer algebra system for polynomial computations, with special emphasis on commutative and non-commutative algebra, algebraic geometry, and singularity theory sketch: for making line drawings of two- or three-dimensional solid objects SplitsTree4: phylogenetic networks sympol: tool to work with symmetric polyhedra threejs: JavaScript library for animated 3D computer graphics tikz: TeX packages for creating graphics programmatically TOPCOM: triangulations of point configurations and matroids TropLi: for computing tropical linear spaces of matroids tosimplex: Dual simplex algorithm implemented by Thomas Opfer Vinci: volumes of polytopes Used in conjunction with polymake jupyter-polymake: allows polymake within Jupyter notebooks. PolymakeInterface: package for using polymake in GAP. PolyViewer: GUI viewer for polymake files. References Mathematical software Polyhedra Computational geometry
1076662
https://en.wikipedia.org/wiki/Wireless%20sensor%20network
Wireless sensor network
Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind. These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and spontaneous formation of networks so that sensor data can be transported wirelessly. WSNs monitor physical or environmental conditions, such as temperature, sound, and pressure. Modern networks are bi-directional, both collecting data and enabling control of sensor activity. The development of these networks was motivated by military applications such as battlefield surveillance. Such networks are used in industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring. A WSN is built of "nodes" – from a few to hundreds or thousands – where each node is connected to other sensors. Each such node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from a shoebox to (theoretically) a grain of dust, although microscopic dimensions have yet to be realized. Sensor node cost is similarly variable, ranging from a few to hundreds of dollars, depending on node sophistication. Size and cost constraints limit resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. Propagation can employ routing or flooding. In computer science and telecommunications, wireless sensor networks are an active research area supporting many workshops and conferences, including the International Workshop on Embedded Networked Sensors (EmNetS), IPSN, SenSys, MobiCom and EWSN. As of 2010, approximately 120 million remote wireless sensor network units had been deployed worldwide. Application Area monitoring Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines. Health care monitoring There are several types of sensor networks for medical applications: implanted, wearable, and environment-embedded. Implantable medical devices are those that are inserted inside the human body. Wearable devices are used on the body surface of a human or in close proximity to the user. Environment-embedded systems employ sensors contained in the environment. Possible applications include body position measurement, location of persons, and overall monitoring of ill patients in hospitals and at home. Devices embedded in the environment track the physical state of a person for continuous health diagnosis, using as input the data from a network of depth cameras, a sensing floor, or other similar devices. Body-area networks can collect information about an individual's health, fitness, and energy expenditure. In health care applications the privacy and authenticity of user data are of prime importance.
Especially due to the integration of sensor networks with IoT, user authentication becomes more challenging; however, a solution is presented in recent work. Habitat Monitoring Wireless sensor networks have been used to monitor various species and habitats, beginning with the Great Duck Island deployment, including marmots, cane toads in Australia and zebras in Kenya. Environmental/Earth sensing There are many applications in monitoring environmental parameters, examples of which are given below. They share the extra challenges of harsh environments and reduced power supply. Air quality monitoring Experiments have shown that personal exposure to air pollution in cities can vary considerably. Therefore, it is of interest to have higher temporal and spatial resolution of pollutants and particulates. For research purposes, wireless sensor networks have been deployed to monitor the concentration of gases dangerous to citizens (e.g., in London). However, sensors for gases and particulate matter suffer from high unit-to-unit variability, cross-sensitivities, and (concept) drift. Moreover, the quality of data is currently insufficient for trustworthy decision-making, as field calibration leads to unreliable measurement results, and frequent recalibration might be required. A possible solution could be blind calibration or the usage of mobile references. Forest fire detection A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes can be equipped with sensors to measure temperature, humidity and gases which are produced by fire in the trees or vegetation. Early detection is crucial for successful action by the firefighters; thanks to wireless sensor networks, the fire brigade will be able to know when a fire has started and how it is spreading. Landslide detection A landslide detection system makes use of a wireless sensor network to detect the slight movements of soil and changes in various parameters that may occur before or during a landslide. Through the data gathered, it may be possible to detect the impending occurrence of a landslide long before it actually happens. Water quality monitoring Water quality monitoring involves analyzing water properties in dams, rivers, lakes and oceans, as well as underground water reserves. The use of many wireless distributed sensors enables the creation of a more accurate map of the water status, and allows the permanent deployment of monitoring stations in locations of difficult access, without the need of manual data retrieval. Natural disaster prevention Wireless sensor networks can be effective in preventing adverse consequences of natural disasters, like floods. Wireless nodes have been deployed successfully in rivers, where changes in water levels must be monitored in real time. Industrial monitoring Machine health monitoring Wireless sensor networks have been developed for machinery condition-based maintenance (CBM) as they offer significant cost savings and enable new functionality. Wireless sensors can be placed in locations difficult or impossible to reach with a wired system, such as rotating machinery and untethered vehicles. Data logging Wireless sensor networks are also used for the collection of data for monitoring of environmental information. This can be as simple as monitoring the temperature in a fridge or the level of water in overflow tanks in nuclear power plants. The statistical information can then be used to show how systems have been working.
The advantage of WSNs over conventional loggers is the "live" data feed that is possible. Water/waste water monitoring Monitoring the quality and level of water includes many activities, such as checking the quality of underground or surface water and monitoring a country's water infrastructure for the benefit of both humans and animals. It may also be used to prevent the wastage of water. Structural health monitoring Wireless sensor networks can be used to monitor the condition of civil infrastructure and related geo-physical processes close to real time, and over long periods through data logging, using appropriately interfaced sensors. Wine production Wireless sensor networks are used to monitor wine production, both in the field and the cellar. Threat detection The Wide Area Tracking System (WATS) is a prototype network for detecting a ground-based nuclear device such as a nuclear "briefcase bomb." WATS is being developed at the Lawrence Livermore National Laboratory (LLNL). WATS would be made up of wireless gamma and neutron sensors connected through a communications network. Data picked up by the sensors undergoes "data fusion", which converts the information into easily interpreted forms; this data fusion is the most important aspect of the system. The data fusion process occurs within the sensor network rather than at a centralized computer and is performed by a specially developed algorithm based on Bayesian statistics. WATS would not use a centralized computer for analysis because researchers found that factors such as latency and available bandwidth tended to create significant bottlenecks. Data processed in the field by the network itself (by transferring small amounts of data between neighboring sensors) is faster and makes the network more scalable. An important factor in WATS development is ease of deployment, since more sensors both improve the detection rate and reduce false alarms. WATS sensors could be deployed in permanent positions or mounted in vehicles for mobile protection of specific locations. One barrier to the implementation of WATS is the size, weight, energy requirements and cost of currently available wireless sensors. The development of improved sensors is a major component of current research at the Nonproliferation, Arms Control, and International Security (NAI) Directorate at LLNL. WATS was profiled to the U.S. House of Representatives' Military Research and Development Subcommittee on October 1, 1997 during a hearing on nuclear terrorism and countermeasures. On August 4, 1998, in a subsequent meeting of that subcommittee, Chairman Curt Weldon stated that research funding for WATS had been cut by the Clinton administration to a subsistence level and that the program had been poorly re-organized. Incident monitoring Studies show that using sensors for incident monitoring greatly improves the response of firefighters and police to unexpected situations. For early detection of incidents, acoustic sensors can be used to detect a spike in city noise caused by a possible accident, and thermal sensors can be used to detect a possible fire. Characteristics The main characteristics of a WSN include: Power consumption constraints for nodes using batteries or energy harvesting.
Characteristics The main characteristics of a WSN include:
Power consumption constraints for nodes using batteries or energy harvesting (examples of suppliers are ReVibe Energy and Perpetuum)
Ability to cope with node failures (resilience)
Some mobility of nodes (for highly mobile nodes see MWSNs)
Heterogeneity of nodes
Homogeneity of nodes
Scalability to large scale of deployment
Ability to withstand harsh environmental conditions
Ease of use
Cross-layer optimization Cross-layer design is becoming an important area of study for wireless communications. The traditional layered approach presents three main problems:
The traditional layered approach cannot share information among different layers, so each layer does not have complete information.
The traditional layered approach cannot guarantee the optimization of the entire network.
The traditional layered approach does not have the ability to adapt to environmental change.
Because of interference between different users, access conflicts, fading, and the changing environment in wireless sensor networks, the traditional layered approach for wired networks is not applicable to wireless networks. Cross-layer design can therefore be used to choose the optimal modulation and improve transmission performance in terms of data rate, energy efficiency, quality of service (QoS), etc. Sensor nodes can be imagined as small computers which are extremely basic in terms of their interfaces and their components. They usually consist of a processing unit with limited computational power and limited memory, sensors or MEMS (including specific conditioning circuitry), a communication device (usually a radio transceiver or, alternatively, an optical one), and a power source, usually in the form of a battery. Other possible inclusions are energy harvesting modules, secondary ASICs, and possibly a secondary communication interface (e.g. RS-232 or USB). The base stations are one or more components of the WSN with much more computational, energy and communication resources. They act as a gateway between sensor nodes and the end user, as they typically forward data from the WSN on to a server. Other special components in routing-based networks are routers, designed to compute and distribute the routing tables. Platforms Hardware One major challenge in a WSN is to produce low-cost and tiny sensor nodes. There is an increasing number of small companies producing WSN hardware, and the commercial situation can be compared to home computing in the 1970s. Many of the nodes are still in the research and development stage, particularly their software. Also inherent to sensor network adoption is the use of very low power methods for radio communication and data acquisition. In many applications, a WSN communicates with a local area network or wide area network through a gateway. The gateway acts as a bridge between the WSN and the other network. This enables data to be stored and processed by devices with more resources, for example, in a remotely located server. A wireless wide area network used primarily for low-power devices is known as a Low-Power Wide-Area Network (LPWAN).
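As a rough illustration of how a node's firmware is typically organized around the power constraint and the gateway described above, here is a hypothetical duty-cycling sketch: the node wakes periodically, samples its sensor, powers the radio only long enough to forward the reading toward the gateway, and sleeps the rest of the time. The class names, intervals and the `send`/`power_on` methods are invented for illustration and do not correspond to any particular platform.

```python
import random
import time

class Radio:
    """Stand-in for a low-power radio transceiver (hypothetical API)."""
    def power_on(self): print("radio on")
    def power_off(self): print("radio off")
    def send(self, payload): print("sent to gateway:", payload)

def read_temperature():
    """Stand-in for an ADC read of a temperature sensor."""
    return round(20.0 + random.uniform(-0.5, 0.5), 2)

def node_loop(node_id, cycles=3, sample_period_s=1.0):
    """Duty-cycled main loop: sample, transmit, then sleep with the radio off."""
    radio = Radio()
    for seq in range(cycles):
        reading = read_temperature()
        radio.power_on()                 # radio is only on while transmitting
        radio.send({"node": node_id, "seq": seq, "temp_c": reading})
        radio.power_off()
        time.sleep(sample_period_s)      # deep sleep in real firmware

node_loop(node_id=7)
```

Keeping the transceiver powered off between transmissions is the main lever for the power-consumption constraint listed above, since the radio often dominates a node's energy budget.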
Wireless There are several wireless standards and solutions for sensor node connectivity. Thread and ZigBee can connect sensors operating at 2.4 GHz with a data rate of 250 kbit/s. Many use a lower frequency to increase radio range (typically 1 km); for example, Z-Wave operates at 915 MHz, and in the EU 868 MHz has been widely used, but these have a lower data rate (typically 50 kbit/s). The IEEE 802.15.4 working group provides a standard for low-power device connectivity, and sensors and smart meters commonly use one of these standards for connectivity. With the emergence of the Internet of Things, many other proposals have been made to provide sensor connectivity. LoRa is a form of LPWAN which provides long-range, low-power wireless connectivity for devices, and it has been used in smart meters and other long-range sensor applications. Wi-SUN connects devices at home. NarrowBand IoT and LTE-M can connect up to millions of sensors and devices using cellular technology. Software Energy is the scarcest resource of WSN nodes, and it determines the lifetime of WSNs. WSNs may be deployed in large numbers in various environments, including remote and hostile regions, where ad hoc communications are a key component. For this reason, algorithms and protocols need to address the following issues:
Increased lifespan
Robustness and fault tolerance
Self-configuration
Lifetime maximization: energy/power consumption of the sensing device should be minimized, and sensor nodes should be energy efficient, since their limited energy resource determines their lifetime. To conserve power, wireless sensor nodes normally power off both the radio transmitter and the radio receiver when not in use. Routing protocols Wireless sensor networks are composed of low-energy, small-size, and low-range unattended sensor nodes. Recently, it has been observed that by periodically turning on and off the sensing and communication capabilities of sensor nodes, the active time can be significantly reduced and the network lifetime prolonged. However, this duty cycling may result in high network latency, routing overhead, and neighbor discovery delays due to asynchronous sleep and wake-up scheduling. These limitations call for a countermeasure for duty-cycled wireless sensor networks which should minimize routing information, routing traffic load, and energy consumption. Researchers from Sungkyunkwan University have proposed a lightweight non-increasing delivery-latency interval routing scheme referred to as LNDIR. This scheme can discover minimum-latency routes at each non-increasing delivery-latency interval instead of at each time slot. Simulation experiments demonstrated the validity of this novel approach in minimizing the routing information stored at each sensor. Furthermore, this routing scheme can also guarantee the minimum delivery latency from each source to the sink. Performance improvements of up to 12-fold and 11-fold are observed in terms of routing traffic load reduction and energy efficiency, respectively, as compared to existing schemes. Operating systems Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems. They more strongly resemble embedded systems, for two reasons. First, wireless sensor networks are typically deployed with a particular application in mind, rather than as a general platform. Second, the need for low cost and low power leads most wireless sensor nodes to have low-power microcontrollers, meaning that mechanisms such as virtual memory are either unnecessary or too expensive to implement. It is therefore possible to use embedded operating systems such as eCos or uC/OS for sensor networks. However, such operating systems are often designed with real-time properties. TinyOS, developed by David Culler, is perhaps the first operating system specifically designed for wireless sensor networks.
TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS signals the appropriate event handler to handle the event. Event handlers can post tasks that are scheduled by the TinyOS kernel some time later. LiteOS is a newly developed OS for wireless sensor networks which provides UNIX-like abstractions and support for the C programming language. Contiki, developed by Adam Dunkels, is an OS which uses a simpler programming style in C while providing advances such as 6LoWPAN and Protothreads. RIOT (operating system) is a more recent real-time OS providing similar functionality to Contiki. PreonVM is an OS for wireless sensor networks which provides 6LoWPAN based on Contiki and support for the Java programming language. Online collaborative sensor data management platforms Online collaborative sensor data management platforms are online database services that allow sensor owners to register and connect their devices to feed data into an online database for storage, and also allow developers to connect to the database and build their own applications based on that data. Examples include Xively and the Wikisensing platform. Such platforms simplify online collaboration between users over diverse data sets ranging from energy and environment data to that collected from transport services. Other services include allowing developers to embed real-time graphs and widgets in websites, analyse and process historical data pulled from the data feeds, and send real-time alerts from any datastream to control scripts, devices and environments. The architecture of the Wikisensing system describes the key components of such systems as including APIs and interfaces for online collaborators, a middleware containing the business logic needed for sensor data management and processing, and a storage model suitable for the efficient storage and retrieval of large volumes of data. Simulation At present, agent-based modeling and simulation is the only paradigm which allows the simulation of complex behavior in the environments of wireless sensors (such as flocking). Agent-based simulation of wireless sensor and ad hoc networks is a relatively new paradigm. Agent-based modelling was originally based on social simulation. Network simulators like Opnet, Tetcos NetSim and NS can be used to simulate a wireless sensor network. Other concepts Localization Network localization refers to the problem of estimating the location of wireless sensor nodes during deployments and in dynamic settings. For ultra-low-power sensors, size, cost and environment preclude the use of Global Positioning System receivers on sensors. In 2000, Nirupama Bulusu, John Heidemann and Deborah Estrin first motivated and proposed a radio-connectivity-based system for the localization of wireless sensor networks. Such localization systems have subsequently been referred to as range-free localization systems, and many localization systems for wireless sensor networks have since been proposed, including AHLoS, APS, and Stardust. Sensor Data Calibration and Fault Tolerance Sensors and devices used in wireless sensor networks are state-of-the-art technology offered at the lowest possible price. The sensor measurements we get from these devices are therefore often noisy, incomplete and inaccurate. Researchers studying wireless sensor networks hypothesize that much more information can be extracted from hundreds of unreliable measurements spread across a field of interest than from a smaller number of high-quality, high-reliability instruments with the same total cost.
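The intuition behind this hypothesis can be illustrated with a small simulation, shown below under simplifying assumptions (independent, zero-mean Gaussian errors and equal per-measurement cost); the error figures are illustrative, not drawn from any cited study. Averaging many noisy, cheap sensors drives the error of the combined estimate down roughly with the square root of the number of sensors.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 25.0          # the field quantity being measured, e.g. temperature in C
CHEAP_SENSOR_SIGMA = 2.0   # assumed per-unit error of a low-cost sensor
GOOD_SENSOR_SIGMA = 0.3    # assumed error of one expensive reference instrument
N_CHEAP = 100              # many cheap sensors for the same total cost (assumption)

def measure(sigma):
    """One noisy reading: true value plus zero-mean Gaussian error."""
    return random.gauss(TRUE_VALUE, sigma)

def trial():
    """Return (error of averaged cheap sensors, error of single good sensor)."""
    cheap_avg = statistics.fmean(measure(CHEAP_SENSOR_SIGMA) for _ in range(N_CHEAP))
    good = measure(GOOD_SENSOR_SIGMA)
    return abs(cheap_avg - TRUE_VALUE), abs(good - TRUE_VALUE)

errors = [trial() for _ in range(2000)]
print("mean error, 100 cheap sensors averaged:", round(statistics.fmean(e[0] for e in errors), 3))
print("mean error, 1 high-quality sensor:     ", round(statistics.fmean(e[1] for e in errors), 3))
```

With these assumed numbers, the averaged network of cheap sensors ends up slightly more accurate than the single high-grade instrument, which is the effect the hypothesis relies on; real sensors also suffer correlated errors and drift, so the gain in practice is smaller.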
Macroprogramming Macro-programming is a term coined by Matt Welsh. It refers to programming the entire sensor network as an ensemble, rather than individual sensor nodes. Another way to macro-program a network is to view the sensor network as a database, an approach popularized by the TinyDB system developed by Sam Madden. Reprogramming Reprogramming is the process of updating the code on the sensor nodes. The most feasible form of reprogramming is remote reprogramming, whereby the code is disseminated wirelessly while the nodes are deployed. Different reprogramming protocols exist that provide different levels of speed of operation, reliability, energy expenditure, requirements for code resident on the nodes, suitability to different wireless environments, resistance to DoS, etc. Popular reprogramming protocols are Deluge (2004), Trickle (2004), MNP (2005), Synapse (2008), and Zephyr (2009). Security The infrastructure-less architecture (i.e. no gateways are included, etc.) and inherent requirements (i.e. unattended working environment, etc.) of WSNs might pose several weak points that attract adversaries. Therefore, security is a big concern when WSNs are deployed for special applications such as military and healthcare. Owing to their unique characteristics, traditional security methods of computer networks would be useless (or less effective) for WSNs, and a lack of security mechanisms would allow intrusions into those networks. These intrusions need to be detected, and mitigation methods should be applied. There have been important innovations in securing wireless sensor networks. Most wireless embedded networks use omni-directional antennas, and therefore neighbors can overhear communication in and out of nodes. This property was used to develop a primitive called "local monitoring", which was used for the detection of sophisticated attacks, like blackhole or wormhole attacks, which degrade the throughput of large networks to close to zero. This primitive has since been used by many researchers and in commercial wireless packet sniffers. It was subsequently refined for more sophisticated attacks, such as those involving collusion, mobility, and multi-antenna, multi-channel devices. Distributed sensor network If a centralized architecture is used in a sensor network and the central node fails, then the entire network will collapse; however, the reliability of the sensor network can be increased by using a distributed control architecture. Distributed control is used in WSNs for the following reasons:
Sensor nodes are prone to failure,
For better collection of data,
To provide nodes with backup in case of failure of the central node.
There is also no centralised body to allocate resources, so the nodes have to be self-organized. As for distributed filtering over a distributed sensor network, the general setup is to observe the underlying process through a group of sensors organized according to a given network topology, in which each individual observer estimates the system state based not only on its own measurement but also on its neighbors' measurements.
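As an illustration of this setup, the following is a minimal, hypothetical sketch of distributed consensus estimation: each node repeatedly replaces its estimate with the average of its own estimate and its neighbors' estimates, so all nodes converge toward a common value without any central coordinator. The topology, initial readings and iteration count are invented for illustration; real distributed filters use more elaborate update rules.

```python
# Each node fuses its own estimate with its neighbors' current estimates.
# A simple average-consensus iteration; topology and values are illustrative.
neighbors = {           # ring topology: node -> list of neighboring nodes
    0: [1, 3],
    1: [0, 2],
    2: [1, 3],
    3: [2, 0],
}
measurements = {0: 24.1, 1: 26.3, 2: 25.2, 3: 24.9}  # noisy local readings
estimates = dict(measurements)                        # initial estimate = own reading

for step in range(30):
    new_estimates = {}
    for node, nbrs in neighbors.items():
        vals = [estimates[node]] + [estimates[j] for j in nbrs]
        new_estimates[node] = sum(vals) / len(vals)   # average of self and neighbors
    estimates = new_estimates

print({n: round(v, 2) for n, v in estimates.items()})
# All nodes end up close to the average of the four readings (about 25.1),
# even though no node ever saw all of the measurements directly.
```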
Data integration and sensor web The data gathered from wireless sensor networks is usually saved in the form of numerical data in a central base station. Additionally, the Open Geospatial Consortium (OGC) is specifying standards for interoperability interfaces and metadata encodings that enable real-time integration of heterogeneous sensor webs into the Internet, allowing any individual to monitor or control wireless sensor networks through a web browser. In-network processing To reduce communication costs, some algorithms remove or reduce nodes' redundant sensor information and avoid forwarding data that is of no use. This technique has been used, for instance, for distributed anomaly detection or distributed optimization. As nodes can inspect the data they forward, they can, for example, measure averages or the directionality of readings from other nodes. For example, in sensing and monitoring applications, it is generally the case that neighboring sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy, due to the spatial correlation between sensor observations, inspires techniques for in-network data aggregation and mining. Aggregation reduces the amount of network traffic, which helps to reduce energy consumption on sensor nodes. Recently, it has been found that network gateways also play an important role in improving the energy efficiency of sensor nodes by scheduling more resources for the nodes with more critical energy needs; advanced energy-efficient scheduling algorithms therefore need to be implemented at network gateways to improve the overall network energy efficiency. Secure data aggregation This is a form of in-network processing where sensor nodes are assumed to be unsecured with limited available energy, while the base station is assumed to be secure with unlimited available energy. Aggregation complicates the already existing security challenges for wireless sensor networks and requires new security techniques tailored specifically for this scenario. Providing security to aggregate data in wireless sensor networks is known as secure data aggregation in WSN. Several early works discussed techniques for secure data aggregation in wireless sensor networks. The two main security challenges in secure data aggregation are the confidentiality and integrity of data. While encryption is traditionally used to provide end-to-end confidentiality in wireless sensor networks, the aggregators in a secure data aggregation scenario need to decrypt the encrypted data to perform aggregation. This exposes the plaintext at the aggregators, making the data vulnerable to attacks from an adversary. Similarly, an aggregator can inject false data into the aggregate and make the base station accept false data. Thus, while data aggregation improves the energy efficiency of a network, it complicates the existing security challenges. See also
Autonomous system
Bluetooth mesh networking
Center for Embedded Network Sensing
List of ad hoc routing protocols
Meteorological instrumentation
Mobile wireless sensor networks
OpenWSN
Optical wireless communications
Robotic mapping
Smart, connected products
Unattended ground sensor
Virtual sensor network
Wireless ad hoc networks
References Further reading Kiran Maraiya, Kamal Kant, Nitin Gupta, "Wireless Sensor Network: A Review on Data Aggregation", International Journal of Scientific & Engineering Research, Volume 2, Issue 4, April 2011. Chalermek Intanagonwiwat, Deborah Estrin, Ramesh Govindan, John Heidemann, "Impact of Network Density on Data Aggregation in Wireless Sensor Networks", November 4, 2001.
External links
IEEE 802.15.4 Standardization Committee
Secure Data Aggregation in Wireless Sensor Networks: A Survey
A list of secure aggregation proposals for WSN
2519015
https://en.wikipedia.org/wiki/Level%20Seven%20%28hacker%20group%29
Level Seven (hacker group)
The Level Seven Crew, also known as Level Seven, Level 7 or L7, was a hacking group active during the mid to late 1990s. It is rumoured to have dispersed in early 2000, when nominal head 'vent' was raided by the FBI on February 25, 2000. Origins The group's name is thought to derive from Dante Alighieri's Inferno: the group called themselves Level Seven after the seventh level of hell, that of the violent. Contained in some of the group's web defacements was the quote "il livello sette posidare la vostra famiglia", which loosely translated from Italian means "Level Seven owns your family". The group spent most of their time on IRC, in the EFnet channel #LevelSeven, discussing security, or the lack thereof. The group was also associated with other high-profile hacking groups such as Global Hell and Hacking For Girliez. Notability The hacking group was noted in Attrition's Top 20 most active groups of all time, claiming responsibility for over 60 unauthorized penetrations of computer systems in 1999 alone, including those of the First American National Bank, the Federal Geographic Data Committee, NASA and Sheraton Hotels. Level Seven is also credited with being the first group to hack a .ma domain and server located in Morocco; the server was owned by the Faculté des Sciences Semlalia, Marrakech. However, the group is most widely known for the September 7, 1999 defacement of the website of the US Embassy in China, in reference to the 1998 U.S. embassy bombings. Level Seven typified a group of hackers who exploit or attack computers and networks for more than just the thrill and challenge, and for reasons other than money. During their era, they were activists, and they used their computer skills to make political statements and protest actions by government and industry. Thus, they bridged the realms of hacking and activism, operating in a domain that is now called "hacktivism". Quotations "I would be inclined to think that normal hackers would not be able to break into something like the US embassy. The security measures they use are very, very different to those protecting a commercial Web server." - Ian Jonsten-Bryden (British government security expert of Oceanus Security in Suffolk) "We embrace technology, we learn from it, we use it, and we exploit it. Technology is a very powerful tool, as is knowledge, but some people go beyond these boundaries, testing limits, finding new ways and ideas... we call these people hackers, and we are one of many.." - vent, September 1999 References External links Level Seven "Going Down" Level Seven Interview Level Seven Responds Hacker groups
476836
https://en.wikipedia.org/wiki/Noise%20reduction
Noise reduction
Noise reduction is the process of removing noise from a signal. Noise reduction techniques exist for audio and images. Noise reduction algorithms may distort the signal to some degree. All signal processing devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random, with an even frequency distribution (white noise), or frequency-dependent noise introduced by a device's mechanism or signal processing algorithms. In electronic recording devices, a major type of noise is hiss created by random electron motion due to thermal agitation. These agitated electrons rapidly add to and subtract from the voltage of the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger-sized grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level. In general Noise reduction algorithms tend to alter signals to a greater or lesser degree. The local signal-and-noise orthogonalization algorithm can be used to avoid changes to the signals. In seismic exploration Boosting signals in seismic data is especially crucial for seismic imaging, inversion, and interpretation, thereby greatly improving the success rate in oil and gas exploration. The useful signal that is smeared in the ambient random noise is often neglected, which may cause false discontinuities of seismic events and artifacts in the final migrated image. Enhancing the useful signal while preserving the edge properties of the seismic profiles by attenuating random noise can help reduce interpretation difficulties and misleading risks for oil and gas detection. In audio Analog tape recordings may exhibit a type of noise known as tape hiss. This is related to the particle size and texture used in the magnetic emulsion that is sprayed on the recording media, and also to the relative tape velocity across the tape heads. Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and codec or dual-ended systems. Single-ended pre-recording systems (such as Dolby HX and HX Pro, or Tandberg's Actilinear and Dyneq) work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNL or DNR) work to reduce noise as it occurs, including both before and after the recording process as well as for live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A, Burwen TNE 7000, and Packburn 101/323/323A/323AA and 325) is applied to the playback of phonograph records to attenuate the sound of scratches, pops, and surface non-linearities. Single-ended dynamic range expanders like the Phase Linear Autocorrelator Noise Reduction and Dynamic Range Recovery System (Models 1000 and 4000) can reduce various types of noise from old recordings. Dual-ended systems have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback.
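To make the dual-ended idea concrete, the following is a minimal sketch of a 2:1 compander: the encoder compresses the signal's dynamic range before it reaches the noisy channel (the tape), and the decoder expands it back afterwards, pushing the channel's constant hiss further down relative to quiet passages. The square-root/square transfer curves and the hiss level are illustrative assumptions; real systems such as Dolby B or dbx are band-limited and far more sophisticated.

```python
import math
import random

def compress(x):
    """Encoder: 2:1 compression of dynamic range (halve the level in dB)."""
    return math.copysign(abs(x) ** 0.5, x)

def expand(y):
    """Decoder: complementary 2:1 expansion (square the magnitude)."""
    return math.copysign(abs(y) ** 2, y)

def through_tape(samples, hiss=0.01):
    """Model the noisy medium as the signal plus constant low-level hiss."""
    return [s + random.uniform(-hiss, hiss) for s in samples]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# A quiet 440 Hz passage at very low level, where hiss is most audible.
quiet_passage = [0.01 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]

plain = through_tape(quiet_passage)  # no noise reduction
companded = [expand(y) for y in through_tape([compress(s) for s in quiet_passage])]

print("residual noise, no NR:    ", round(rms([a - b for a, b in zip(plain, quiet_passage)]), 5))
print("residual noise, compander:", round(rms([a - b for a, b in zip(companded, quiet_passage)]), 5))
```

Because the compressed signal sits well above the tape hiss even in quiet passages, the expansion step pushes the hiss down by roughly the compression ratio; this is the principle behind the compander systems listed below.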
Compander-based noise reduction systems Dual-ended compander noise reduction systems include the professional systems Dolby A and Dolby SR by Dolby Laboratories, dbx Professional and dbx Type I by dbx, Donald Aldous' EMT NoiseBX, Burwen Laboratories' , Telefunken's and MXR Innovations' MXR as well as the consumer systems Dolby NR, Dolby B, Dolby C and Dolby S, dbx Type II, Telefunken's High Com and Nakamichi's High-Com II, Toshiba's (Aurex AD-4) , JVC's and Super ANRS, Fisher/Sanyo's Super D, SNRS, and the Hungarian/East-German Ex-Ko system. These systems have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback. In some compander systems the compression is applied during professional media production and only the expansion is applied by the listener; for example, systems like dbx disc, High-Com II, CX 20 and UC were used for vinyl recordings whereas Dolby FM, High Com FM and FMX were used in FM radio broadcasting. The first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). The Dolby B system (developed in conjunction with Henry Kloss) was a single band system designed for consumer products. In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal to noise ratio on tape up to 10 dB depending on the initial signal volume. When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder. The Telefunken High Com integrated circuit U401BR could be utilized to work as a mostly Dolby B–compatible compander as well. In various late-generation High Com tape decks the Dolby-B emulating "D NR Expander" functionality worked not only for playback, but undocumentedly also during recording. dbx was a competing analog noise reduction system developed by David E. Blackmer, founder of dbx laboratories. It used a root-mean-squared (RMS) encode/decode algorithm with the noise-prone high frequencies boosted, and the entire signal fed through a 2:1 compander. dbx operated across the entire audible bandwidth and unlike Dolby B was unusable as an open ended system. However it could achieve up to 30 dB of noise reduction. Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct colour systems), which keeps the tape at saturation level, audio style noise reduction is unnecessary. Dynamic noise limiter and dynamic noise reduction Dynamic noise limiter (DNL) is an audio noise reduction system originally introduced by Philips in 1971 for use on cassette decks. Its circuitry is also based on a single chip. It was further developed into dynamic noise reduction (DNR) by National Semiconductor to reduce noise levels on long-distance telephony. First sold in 1981, DNR is frequently confused with the far more common Dolby noise-reduction system. However, unlike Dolby and dbx Type I & Type II noise reduction systems, DNL and DNR are playback-only signal processing systems that do not require the source material to first be encoded, and they can be used together with other forms of noise reduction. 
Because DNL and DNR are non-complementary, meaning they do not require encoded source material, they can be used to remove background noise from any audio signal, including magnetic tape recordings and FM radio broadcasts, reducing noise by as much as 10 dB. They can be used in conjunction with other noise reduction systems, provided that they are applied prior to DNR to prevent DNR from causing the other noise reduction system to mistrack. One of DNR's first widespread applications was in the GM Delco car stereo systems in U.S. GM cars introduced in 1984. It was also used in factory car stereos in Jeep vehicles in the 1980s, such as the Cherokee XJ. Today, DNR, DNL, and similar systems are most commonly encountered as a noise reduction system in microphone systems. Other approaches A second class of algorithms works in the time-frequency domain, using linear or non-linear filters that have local characteristics; these are often called time-frequency filters. Noise can therefore also be removed by the use of spectral editing tools, which work in this time-frequency domain, allowing local modifications without affecting nearby signal energy. This can be done manually by using the mouse with a pen tool that has a defined time-frequency shape, much as one draws pictures in a paint program. Another way is to define a dynamic threshold for filtering noise that is derived from the local signal, again with respect to a local time-frequency region. Everything below the threshold will be filtered out, while everything above the threshold, like the partials of a voice or "wanted noise", will be left untouched. The region is typically defined by the location of the signal's instantaneous frequency, as most of the signal energy to be preserved is concentrated about it. Modern digital sound (and picture) recordings are not affected by tape hiss, so analog-style noise reduction systems are not necessary. However, an interesting twist is that dither systems actually add noise to a signal to improve its quality. Software programs Most DAWs (digital audio workstations) and audio software in general have one or more noise reduction functions. Notable special-purpose noise reduction software programs include Gnome Wave Cleaner. In images Images taken with both digital cameras and conventional film cameras will pick up noise from a variety of sources. Further use of these images will often require that the noise be (partially) removed – for aesthetic purposes, as in artistic work or marketing, or for practical purposes such as computer vision. Types In salt-and-pepper noise (sparse light and dark disturbances), pixels in the image are very different in color or intensity from their surrounding pixels; the defining characteristic is that the value of a noisy pixel bears no relation to the color of surrounding pixels. Generally, this type of noise affects only a small number of image pixels. When viewed, the image contains dark and white dots, hence the term salt-and-pepper noise. Typical sources include flecks of dust inside the camera and overheated or faulty CCD elements. In Gaussian noise, each pixel in the image will be changed from its original value by a (usually) small amount. A histogram, a plot of the amount of distortion of a pixel value against the frequency with which it occurs, shows a normal distribution of noise. While other distributions are possible, the Gaussian (normal) distribution is usually a good model, due to the central limit theorem, which says that the sum of different noises tends to approach a Gaussian distribution. In either case, the noise at different pixels can be either correlated or uncorrelated; in many cases, noise values at different pixels are modeled as being independent and identically distributed, and hence uncorrelated.
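The two noise models just described are easy to simulate, which is how denoising algorithms are usually benchmarked against a known clean image. Below is a small sketch using NumPy; the noise levels and the flat test image are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)          # a flat grey test image, values in [0, 1]

# Gaussian noise: every pixel is perturbed by a small normally distributed amount.
gaussian_noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)

# Salt-and-pepper noise: a small fraction of pixels is forced to pure black or white.
salt_pepper_noisy = clean.copy()
mask = rng.random(clean.shape)
salt_pepper_noisy[mask < 0.02] = 0.0    # "pepper"
salt_pepper_noisy[mask > 0.98] = 1.0    # "salt"

print("fraction of pixels altered (Gaussian):     ", np.mean(gaussian_noisy != clean).round(2))
print("fraction of pixels altered (salt & pepper):", np.mean(salt_pepper_noisy != clean).round(2))
```

This matches the distinction above: Gaussian noise perturbs essentially every pixel slightly, while salt-and-pepper noise corrupts only a small fraction of pixels, but completely.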
Removal Tradeoffs There are many noise reduction algorithms in image processing. In selecting a noise reduction algorithm, one must weigh several factors:
the available computer power and time: a digital camera must apply noise reduction in a fraction of a second using a tiny onboard CPU, while a desktop computer has much more power and time
whether sacrificing some real detail is acceptable if it allows more noise to be removed (how aggressively to decide whether variations in the image are noise or not)
the characteristics of the noise and the detail in the image, to better make those decisions
Chroma and luminance noise separation In real-world photographs, the highest spatial-frequency detail consists mostly of variations in brightness ("luminance detail") rather than variations in hue ("chroma detail"). Since any noise reduction algorithm should attempt to remove noise without sacrificing real detail from the scene photographed, one risks a greater loss of detail from luminance noise reduction than from chroma noise reduction simply because most scenes have little high-frequency chroma detail to begin with. In addition, most people find chroma noise in images more objectionable than luminance noise; the colored blobs are considered "digital-looking" and unnatural, compared to the grainy appearance of luminance noise, which some compare to film grain. For these two reasons, most photographic noise reduction algorithms split the image detail into chroma and luminance components and apply more noise reduction to the former. Most dedicated noise-reduction computer software allows the user to control chroma and luminance noise reduction separately. Linear smoothing filters One method to remove noise is by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters. Anisotropic diffusion Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, which is called anisotropic diffusion. With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image. Non-local means Another approach for removing noise is based on non-local averaging of all the pixels in an image.
In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered on that pixel and the small patch centered on the pixel being de-noised. Nonlinear filters A median filter is an example of a non-linear filter and, if properly designed, is very good at preserving image detail. To run a median filter:
consider each pixel in the image
sort the neighbouring pixels into order based upon their intensities
replace the original value of the pixel with the median value from the list
A median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters; a much milder member of that family, for example one that selects the closest of the neighboring values when a pixel's value is external in its neighborhood, and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications. Median and other RCRS filters are good at removing salt-and-pepper noise from an image, and also cause relatively little blurring of edges, and hence are often used in computer vision applications. Wavelet transform The main aim of an image denoising algorithm is to achieve both noise reduction and feature preservation using wavelet filter banks. In this context, wavelet-based methods are of particular interest. In the wavelet domain, the noise is uniformly spread throughout the coefficients, while most of the image information is concentrated in a few large ones. Therefore, the first wavelet-based denoising methods were based on the thresholding of detail subband coefficients. However, most wavelet thresholding methods suffer from the drawback that the chosen threshold may not match the specific distribution of signal and noise components at different scales and orientations. To address these disadvantages, non-linear estimators based on Bayesian theory have been developed. In the Bayesian framework, it has been recognized that a successful denoising algorithm can achieve both noise reduction and feature preservation if it employs an accurate statistical description of the signal and noise components. Statistical methods Statistical methods for image denoising exist as well, though they are infrequently used as they are computationally demanding. For Gaussian noise, one can model the pixels in a greyscale image as auto-normally distributed, where each pixel's "true" greyscale value is normally distributed with mean equal to the average greyscale value of its neighboring pixels and a given variance. Let δ_i denote the set of pixels adjacent to the i-th pixel. Then the conditional distribution of the greyscale intensity x_i (on a [0, 1] scale) at the i-th node is p(x_i | x_{δ_i}) ∝ exp(−β(x_i − x̄_{δ_i})² / (2λ)) for a chosen parameter β and variance λ, where x̄_{δ_i} is the average greyscale value of the pixels in δ_i. One method of denoising that uses the auto-normal model uses the image data as a Bayesian prior and the auto-normal density as a likelihood function, with the resulting posterior distribution offering a mean or mode as a denoised image. Block-matching algorithms A block-matching algorithm can be applied to group similar image fragments into overlapping macroblocks of identical size; stacks of similar macroblocks are then filtered together in the transform domain, and each image fragment is finally restored to its original location using a weighted average of the overlapping pixels.
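As a concrete illustration of the simplest of the filters above, here is a hedged sketch of the three-step median filter procedure described earlier (consider each pixel, sort its neighbourhood, take the median), written with plain NumPy; the 3×3 window and edge handling are illustrative choices.

```python
import numpy as np

def median_filter(image, radius=1):
    """Naive median filter: replace each pixel with the median of its neighbourhood."""
    padded = np.pad(image, radius, mode="edge")     # repeat border pixels at the edges
    out = np.empty_like(image)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            out[r, c] = np.median(window)           # sort neighbours, take middle value
    return out

# A flat grey image corrupted by a single salt-and-pepper outlier.
img = np.full((5, 5), 0.5)
img[2, 2] = 1.0                                      # "salt" pixel
print(median_filter(img)[2, 2])                      # -> 0.5: the outlier is removed
```

Because the outlier never becomes the middle value of its sorted neighbourhood, it is simply discarded, which is why median filtering handles salt-and-pepper noise so well while blurring edges far less than a linear average would.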
Random field Shrinkage fields is a random field-based machine learning technique that brings performance comparable to that of Block-matching and 3D filtering, yet requires much lower computational overhead (such that it could be performed directly within embedded systems). Deep learning Various deep learning approaches have been proposed to solve noise reduction and other image restoration tasks. Deep Image Prior is one such technique, which makes use of a convolutional neural network and is distinct in that it requires no prior training data. Software Most general-purpose image and photo editing software has one or more noise-reduction functions (median, blur, despeckle, etc.). See also General noise issues
Filter (signal processing)
Signal processing
Signal subspace
Audio
Architectural acoustics
Codec listening test
Noise-cancelling headphones
Noise print
Sound masking
Images and video
Dark-frame subtraction
Digital image processing
Total variation denoising
Video denoising
Similar problems
Deblurring
References External links
Recent trends in denoising tutorial
Noise Reduction in photography
Matlab software and Photoshop plug-in for image denoising (Pointwise SA-DCT filter)
Matlab software for image and video denoising (Non-local transform-domain filter)
Non-local image denoising, with code and online demonstration
Audio engineering Image noise reduction techniques Sound recording
41131841
https://en.wikipedia.org/wiki/National%20Recording%20Preservation%20Plan
National Recording Preservation Plan
The National Recording Preservation Plan is a strategic guide for the preservation of sound recordings in the United States. It was published in December 2012 by the Council on Library and Information Resources (CLIR) and the National Recording Preservation Board of the Library of Congress. The plan was written by a community of specialists, but is prominently credited to Brenda Nelson-Strauss, Alan Gevinson and Sam Brylawski Background In 2000, Congress passed the "National Recording Preservation Act", which established the National Recording Preservation Foundation and the National Recording Preservation Board. In 2010, this board published a document titled "The State of Recorded Sound Preservation in the United States", which identified legal and technical factors that contribute to the loss of sound recordings. The National Recording Preservation Plan can be considered a response to the challenges identified in this study. Content After a foreword by the Librarian of Congress and an executive summary, the plan is divided into four major sections: Building the National Sound Recording Preservation infrastructure, Blueprint for implementing preservation strategies, Promoting broad public access for educational purposes and Long-term national strategies. Building the National Sound Recording Preservation infrastructure This section calls for increased infrastructure to support audio preservation, including more and better storage facilities, education programs, a directory of resources, and a formal agenda for further research. Physical and digital infrastructure Sound recordings on physical media require specific conditions to slow the inevitable degradation of the carriers. Facilities that can accomplish these conditions are expensive to create and maintain, so the plan recommends fundraising for the creation of such facilities and consortial funding and use between smaller institutions. The need for professional preservation reformatting (digitization) facilities is similar, and the plan suggests similar strategies for increased reformatting in the following section - increased in-house services where feasible, collaboration where necessary, and, possibly, expansion of the National Audio-Visual Conservation Center to allow nonprofit third parties to use their facilities or their services. Best practices for the archival storage of digital information call for sophisticated networks of storage and curation to ensure the data persist and remain accessible. Few archives currently meet these standards, and there is little research toward the specific needs of audio data. In this section, the plan challenges readers to devise innovative strategies to create and share repositories, and to continue to improve digital storage through research. Education and professional training Audio preservation is not a new discipline, but has traditionally been carried out by personnel with various disparate professional backgrounds. Audio engineers, archivists and librarians, computer scientists, and subject specialists in music, history and folklore (to name a few) have all made important contributions to the practice and literature. There has not, however, traditionally been uniform training in audio preservation. This part of the plan calls for the development of professional educational programs in audio preservation that balance the knowledge of the fields above and would prepare individuals to perform, or at least understand, the many skills required to accomplish this work. 
This education will require the identification of and participation by those institutions with the requisite human and technical resources, or the development of these faculties at interested institutions. With these programs in place, funding and internships and fellowships will further allow prospective students to seamlessly enter the field. In addition to primary education in audio preservation, the next sections call for continuing education for working professionals, and a resource directory that compiles information about educational programs, working professionals and organizations, best practices, general literature, funding institutions, and more. This document, called the Audio Preservation Resource Directory, is frequently referred to throughout the remainder of the plan. National technology research agenda Much research has been done on the attributes and failures of sound media, and best practices in archival transfer, but much more research, such as new methods of transfer, new treatments to deteriorating audio carriers, and new methods of digital storage, is expected to yield promising results in the near future. This research will require collaboration between government, academia and industry in order to leverage existing competencies and avoid duplication of effort. Much of the information about legacy equipment and skills is held only by senior archivists and engineers who will eventually retire. Recommendation 1.8 suggests readers practice 'knowledge management' by conducting video interviews with senior staff. Any other information held in schematics or equipment manuals should be similarly documented and archived. Blueprint for implementing preservation strategies This section emphasizes the need for a consensus of best practices in the variety of tasks that constitute the audio preservation practice, as well as the need for tools and guidelines for the management of digital audio. Audio preservation management Acknowledging the imminence of many audio preservation projects, the insufficiency of funding for this type of work, and the lack of formally trained professionals in many smaller institutions, recommendation 2.1 calls for the development of a handbook that presents best practices in audio preservation in simple, practical terms. Such a guide would act as an approachable lesson in best practices for personal collectors or smaller institutions. The next recommendation demonstrates the need for assessment of collections in prioritizing work and generating funding. Assessment can be performed by third-party consultants, or by existing staff with the help of assessment tools, some of which exist, and some that still need to be developed. Somewhat in alignment with the handbook described above, recommendation 2.3 encourages partnerships between institutions of different types (private vs. public), sizes, finances and goals (etc.). An example of this might be a non-profit community radio station working with a university to store or transfer an important collection of radio program transcriptions that the station would not be able to handle on its own. New tools and guidelines for preserving digital audio files Although digital audio is often perceived as the 'destination' of audio preservation activity, it is more accurately a halfway point. Digital transfers of legacy audio and born-digital audio must be actively maintained to ensure continued preservation and accessibility. 
Some foresight in the production of digital audio can facilitate these goals moving forward. Recommendation 2.4 encourages successful institutions to share the tools and strategies that allowed them to create useful digital audio objects. While these will certainly vary project to project and institution to institution, some strategies may be widely or universally better than others, and these should be identified and communicated. One of the universally beneficial features of digital audio is metadata. While specific schema will vary widely, all digital audio should include technical metadata about the production of the audio, descriptive metadata about the content of the audio, and structural metadata about the relationships between audio files and related objects. Standardization of these metadata schemas encourages interoperability between organizations and facilitates access and future data curation and migration. One of the best ways to ensure interoperability, in metadata and beyond, is to use and promote software and tools that adhere to established standards. Some commercial software and tools are built with this functionality, and some can be modified or extended to permit it. Open source software is sometimes more flexible than commercial. This recommendation may seem to apply only to software engineers, but managers and archivists must be aware of these considerations and choose tools with them in mind. Not all digital audio is transferred from analog carriers. In recent years, digital recording has mostly replaced analog technologies, and some digital audio is produced entirely in the digital realm (electronic music, MIDI, etc.). These files are often considered to be at equal risk to those on deteriorating analog media because of the lack of standardization, and obsolescence of proprietary software used to produce the recordings. Awareness of these issues among archives and promotion of awareness among musicians and producers may help identify which formats will require attention, and prompt research into emulating legacy software or migrating proprietary data. Promoting broad public access for educational purposes This section begins to address the ways in which preservation is interdependent with access. Complicated intellectual property laws prevent audio archives from making most recordings publicly available, and funding agencies are reluctant to fund initiatives that don't improve access to materials. The following recommendations seek to improve public access to information that can be legally distributed, advocate for copyright reform, and streamline legal avenues for access to copyrighted works for educational use. Discovery and cataloging initiatives Widespread public access to recordings is an admirable goal for archives, but is unrealistic in most cases considering current copyright policy. There are many ways that archives can improve access to their collections, however, without risking legal action or offense to rights holders. Recommendation 3.1 suggests the development and consolidation of discographies into a comprehensive national discography that would simplify the tracking of rights ownership. Although many decades of research have produced satisfactory discographies of individual record labels and musical forms, consolidation could improve usefulness through standardization, community annotation, and interoperability with existing collection catalogs and finding aids. 
Inclusion of commercially published discographies will be licensed with their respective publishers. If a consolidated discography would bring together all published recordings into a single resource, a national directory of existing sound collections might compile information about existing collections at a fraction of the time, expense and effort of the discography. Audio holdings in existing collections, and even entire audio collections can be completely invisible to scholars without representation or community involvement. Recommendation 3.2 requests a directory of existing collections of all types, contact information for representatives, and descriptions of subject or format strengths. This directory may help facilitate partnerships as described in recommendation 2.3. Just as standardized metadata helps people discover digital audio objects (recommendation 2.5), cataloging libraries' and archives' holdings of physical audio objects (and related materials) helps their respective constituents discover their materials better. Copy (shared) cataloging between institutions requires a consensus on what information must be included in a catalog record. This discussion should be undertaken publicly, and the results shared and distributed as best practices. Copyright legislation reform "Copyright reform...remains the key solution to preserving America's recorded sound history, protecting ownership rights, and providing public access" American copyright laws about sound recordings are uniquely restrictive compared to American copyright laws for other formats and international copyright laws about sound recordings. In addition to preventing access, existing laws sometimes prohibit the preservation of deteriorating carrier objects until the object has audibly degraded. Recommendation 3.4 recognizes the need for federal copyright coverage of sound recordings committed before February 1972. The federal copyright code does not address sound recordings made before 15 Feb. 1972, which means that the legal status of their use and reproduction is in "limbo" - decided by a complex network of state laws and court decisions. Federalization of copyright for these recordings would clarify their status, and potentially initiate conversations about reformatting and academic use. The recommendation also suggests that copyright protection for recordings require reasonable market availability. If the copyright owner of a recording cannot be identified or located, recommendation 3.5 suggests that it be considered an 'orphan work', reducing the liability of institutions who choose to distribute it in good faith after a reasonable search. This recommendation works in tandem with 3.4 in that federalization of sound recording copyrights would allow the 'orphan work' designation to apply uniformly to such recordings without being subject to contradictory state laws and court decisions. Section 108 of the US copyright code allows the preservation reformatting of deteriorating sound recordings under specific conditions. The study group appointed by the Library of Congress and US Copyright Office to evaluate the section concluded that these conditions are outdated and overly restrictive, and recommendation 3.6 of this plan suggests ways to modernize the code to allow for better preservation and access in the academic realm. These recommendations include: Expand eligibility to pre-1972 recordings. 
Expand eligibility of protection beyond libraries and archives to include other nonprofit organizations and vendors working on behalf of nonprofits.
Allow more than three copies (as appropriate), and allow reformatting before a recording has already perceptibly deteriorated.
Expand a library's right to distribute reproductions of sound recordings beyond news content at a user's request, provided the recording is not reasonably commercially available.
Expand the definition of 'adjunct works', as described in subsection 108(i), to allow libraries to deliver related content along with requested audio.
Allow virtual (i.e. streaming) access copies to be made available, instead of requiring that a researcher physically visit the library or archive.
Increase the term of access to commercially unavailable works that remain under copyright protection to the last 45 years of their term if they were produced before 1961, where previously subsection 108(h) allowed this availability only in the last 20 years.
Improving legal public access This section addresses the ways in which innovative licensing agreements can expand scholarly access to copyrighted recordings. Recommendation 3.7 encourages archives to work with rights holders to license rare out-of-print recordings for streaming on the web. This would allow archives to share their holdings without risking a lawsuit, and record labels to monetize their properties without the cost and effort of producing a formal reissue. Recommendation 3.8 theorizes a shared network of digital audio files representing transfers of commercial recordings. Similar to copy cataloging, this model would allow archives to share the effort of digitizing collections. A similar model for paper holdings has been successful in recent years. The next recommendation suggests the need for a shared database of record label ownership information. This would clarify whether a recording could be considered an orphan work (and therefore shareable), and would avoid a situation described in recommendation 3.7 whereby archives self-impose restrictions on access to avoid infringing on rights holders' interests. Recommendation 3.10 challenges the Library of Congress, which holds the United States' largest collection of sound recordings, to find new ways to make its immense collection more accessible. One proposed solution involves building additional research centers to allow for in-person listening without the sometimes prohibitive travel to the Library of Congress. The final recommendation of this section challenges institutions interested in sharing audio holdings to work with representatives of rights holders to build a consensus on the application of the fair use doctrine, and to publish a guide of best practices based on their findings.
The next recommendation charges the board with the creation of an advisory committee composed of heads of archives and recording industry executives to work together to accomplish the goals of the plan and resolve conflicts between affected parties. Recommendation 4.3 requests the development of a national collections policy, and suggests a few formats and genres to be considered, including local radio programs, recordings published only in digital form, recordings published by small independent record labels, neglected and emerging formats, and corporate records on the production of sound recordings. The next recommendation addresses recordings whose licensing agreements could interfere with their archiving. This includes recordings that are only available to stream, or downloads that are licensed for personal use only. Suggestions for improvement include adding educational-use clauses to existing licenses to allow formal archiving, or coordinating with publishers to write separate licenses for archiving. Recommendation 4.5 recognizes that the goals prescribed by the plan are expensive to undertake, and that current funding is inadequate. The authors go on to suggest several ways in which funding could be expanded, including partnerships with music industry stakeholders, online audio sales merchants or recording artists, and new campaigns to make audio preservation more visible to traditional funding agencies. The final recommendation of the plan, 4.6, calls for the periodic assessment of the progress made by the board, the foundation, and the greater community of audio preservation professionals through a conference or meetings. References Preservation (library and archival science) Sound recording
62639473
https://en.wikipedia.org/wiki/Timeline%20of%20the%20American-led%20intervention%20in%20the%20Syrian%20civil%20war
Timeline of the American-led intervention in the Syrian civil war
{{Infobox military conflict
| partof = Operation Inherent Resolve, the military intervention against ISIL, and the Foreign involvement in the Syrian Civil War
| caption = Top: territorial map of the Syrian Civil War in September 2014. Bottom: current territorial map of the Syrian Civil War.
| date = 22 September 2014 – present
| place = Syria
| result = Ongoing operations:
* 19,786 U.S. and allied airstrikes, over 16,000 hitting ISIL positions
* Thousands of targets destroyed, thousands of militants killed
* ISIL loses most of its territory in Syria by December 2017
* ISIL suffers military defeat and loses almost all of its remaining territory in March 2019
* U.S. and allies supplying weapons and advisers to the Kurdish-led Syrian Democratic Forces
* U.S.-backed rebel training program, 2014–2016
* Occasional U.S.-led strikes against the Syrian government
* U.S. Marines and Special Operations forces deployed in Syria
* Planned withdrawal of most U.S. troops in late 2020
* Further decline of Russia–United States relations
* Death of ISIL leader Abu Bakr al-Baghdadi in October 2019
| combatant1 = CJTF–OIR (air war and ground forces; some members airstrikes only); local ground forces: Syrian Democratic Forces, YPG, YPJ, Syriac Military Council, Al-Sanadid Forces, Euphrates Volcano, Revolutionary Commando Army (limited involvement), Peshmerga
| combatant2 = al-Qaeda, al-Nusra Front, Khorasan group, Tahrir al-Sham, Jaysh al-Sunna, Rouse the Believers Operations Room, Guardians of Religion Organization, Jund al-Aqsa, Turkistan Islamic Party, Ahrar al-Sham
| combatant3 = Syrian Arab Republic
| commander1 = Donald Trump, Barack Obama, Chuck Hagel, Ashton Carter, James Mattis, Patrick M. Shanahan, Mark Esper, Lars Løkke Rasmussen, Helle Thorning-Schmidt, Mark Rutte, Boris Johnson, Theresa May, David Cameron, Stephen Hillier, Tony Abbott, Malcolm Turnbull, Trevor Jones, David Johnston, Emmanuel Macron, François Hollande, Jean-Yves Le Drian, Pierre de Villiers, Angela Merkel, Annegret Kramp-Karrenbauer, Volker Wieker, King Abdullah II, Abdullah Ensour, King Salman, King Abdullah Al Saud (died 2015), Mohammad bin Salman Al Saud, King Mohammed VI, Abdelilah Benkirane, Bouchaib Arroub, Khalifa Al Nahyan, Hamad bin Isa Al Khalifa, Tamim Al Thani, Hamad bin Ali Al Attiyah, Salih Muslim Muhammad, Masoud Barzani, Stephen Harper, Justin Trudeau, Thomas J. Lawson, Yvan Blondin
| commander2 = Abu Bakr al-Baghdadi, Abu Alaa Afri, Abu Jaber Shaykh, Abu Mohammad al-Julani, Abu Humam al-Shami, Abu Jaber Shaykh (2014–2015), Abu Yahia al-Hamawi
| commander3 = Bashar al-Assad
| strength1 = Coalition forces (air and ground); local forces
| strength2 = Islamic State of Iraq and the Levant: around 100,000 fighters, 3 MiG-21 or MiG-23 aircraft, at least a few hundred tanks, 2 drones; al-Qaeda: Tahrir al-Sham 31,000+, 7,000–12,000, Khorasan 50, Jund al-Aqsa 2,100; Ahrar al-Sham: 26,000–30,000+
| strength3 = Syrian Arab Republic: 180,000 soldiers
| casualties1 = United States: 8 servicemen killed, 2 government contractors killed, 1 F-16 crashed, 1 V-22 Osprey crashed, 2 drones lost; Jordan: 1 serviceman executed, 1 F-16 crashed; United Kingdom: 1 serviceman killed, 2 SAS operators wounded
| casualties2 = Islamic State of Iraq and the Levant: at least 9,158 killed; al-Qaeda: 349 killed, ~50 killed; Jaysh al-Sunna: 10 killed; Ahrar al-Sham: 3 killed
| casualties3 = Syrian Arab Republic: 169 soldiers and militiamen killed, 215 Russian mercenaries killed, 4 tanks destroyed, 11+ aircraft destroyed, 5 SAM batteries destroyed, 2 armed drones shot down
| casualties4 = 3,833 civilians killed by Coalition airstrikes in Syria; 5,900+ civilians killed by ISIL in Syria; over 420,000 civilians displaced or fled to other countries
| notes = The number of militants killed is possibly higher, as the groups cover up their losses.
}}
The American-led intervention in the Syrian Civil War is the United States-led support of the Syrian opposition and the Federation of Northern Syria during the course of the Syrian Civil War, and the active military involvement led by the United States and its allies (the militaries of the United Kingdom, France, Jordan, Turkey, Canada, Australia, and others) against the Islamic State of Iraq and the Levant (ISIL) and the al-Nusra Front since 2014. Since early 2017, the U.S. and other Coalition partners have also targeted the Syrian government and its allies via airstrikes and aircraft shoot-downs.
2014
September 2014: Airstrikes begin
On 22 September 2014, Pentagon Press Secretary Rear Admiral John Kirby confirmed that the United States and partner nations had undertaken airstrikes in Syria using fighters, bombers, and Tomahawk missiles in strikes authorized by President Barack Obama. The initial strikes were coordinated by United States Central Command (USCENTCOM), with Arab partner forces from Bahrain, Jordan, Qatar, Saudi Arabia, and the United Arab Emirates (UAE) conducting or supporting airstrikes. The overnight strikes targeted about 20 Islamic State of Iraq and the Levant (ISIL) targets, including headquarters buildings. Sources in Syria claimed that among the target locations was Brigade 93, a Syrian army base that ISIL militants had recently captured, and targets in the ISIL-held towns of Tabqa and Tell Abyad in Raqqa Governorate. The U.S. also targeted the al-Qaeda-affiliated al-Nusra Front and the Khorasan group in Syria's Aleppo and Idlib provinces. At least 70 Islamic State fighters, 50 fighters affiliated with al-Qaeda, and eight civilians were killed overnight by the airstrikes, according to the Syrian Observatory for Human Rights (SOHR) conflict monitor, while eight strikes were launched against the Khorasan group. U.S. F-22 Raptor stealth fighters were reportedly among the U.S.
aircraft striking targets in Syria on the first night of the campaign, carrying out their first ever combat missions since entering service in 2005. Syrian military radar was "passive" during the first air strikes, making no known attempt to counter coalition aircraft. The U.S. had deployed HARM missiles as a precaution, as it was uncertain how Syria's air-defense network would react. On 24 September, the U.S.-led coalition conducted a second round of airstrikes on Islamic State facilities in Syria, targeting ISIL-held oil production facilities the group was using to fund its activities. Some targets were apparently also mobile production facilities, which were most likely not refineries. In a third round of airstrikes on ISIL targets on 25 September, Arab partners took the lead in strikes against militant-held oil facilities in northeastern Syria. Saudi Arabia and the UAE dropped 80 percent of the bomb tonnage in the third round of strikes, compared to other strikes in which the U.S. led the way. On 26 September, the U.S. carried out a fourth round of airstrikes on ISIL targets in eastern Syria, targeting ISIL heavy equipment and destroying four battle tanks in Deir ez-Zor Governorate. In a fifth round of airstrikes on 27 September, the U.S. led strikes along with Saudi Arabia, Jordan and the UAE against ISIL forces in the Kobanî Canton of Syrian Kurdistan, destroying two armored vehicles and killing an unknown number of fighters. The area was under siege by ISIL militants, with the fighting recently forcing over 100,000 Syrian Kurds to flee across the border to Turkey. On 28 and 29 September, the U.S. carried out two rounds of strikes against ISIL positions across four provinces. Among the facilities targeted were the entrance to the largest gas plant in Syria, in Deir ez-Zor Governorate, and ISIL training camps and vehicles near an ISIL-controlled grain silo in Manbij.
October 2014
In an eighth round of airstrikes on 1 October, the U.S.-led coalition conducted daytime strikes against Islamic State forces besieging the primarily Kurdish town of Kobanî. The strikes were in support of the Free Syrian Army (FSA) and the Kurdish People's Protection Units (YPG), who were defending the city. On 2 October, the U.S. and the UAE led a ninth round of strikes against ISIL forces across Syria, destroying an ISIL checkpoint near Kobanî, damaging a tank north of the Sinjar Mountain area, destroying a tank west of Raqqa, and destroying several ISIL facilities east of Aleppo. In a tenth round of airstrikes on 3 October, the U.S., assisted by Saudi Arabia and the UAE, struck ISIL forces in northern and eastern Syria, destroying an ISIL garrison south of al-Hasakah, destroying two tanks southeast of Deir ez-Zor and two modular oil refineries and a training camp south of Raqqa, and striking an ISIL building northeast of Aleppo. The U.S. led an 11th round of airstrikes on 4 October, alongside Jordan, Saudi Arabia, and the UAE, against ISIL forces across Syria. The coalition carried out nine strikes, destroying an ISIL infantry unit, an armored personnel carrier (APC), and a vehicle south of Kobanî. They also destroyed a tank and a vehicle southeast of Deir ez-Zor, damaged the Tabqa airfield and destroyed an artillery piece near Raqqa, as well as an ISIL depot and logistics complex south of al-Hasakah. In a 12th round of strikes on 5 October, the U.S.
carried out three airstrikes against ISIL forces in central and eastern Syria, destroying an ISIL bulldozer, two ISIL tanks and another vehicle northwest of Mayadin, along with six firing positions and a large ISIL unit northwest of Raqqa. On 6 October, the U.S. carried out a 13th round of strikes, destroying an ISIL tank near Tabqa airfield west of Raqqa, two fighting positions south of Kobanî, and a tank southeast of Deir ez-Zor. On 7 October, in a 14th round of strikes, the U.S., Saudi Arabia, and the UAE carried out nine strikes, damaging multiple ISIL-controlled buildings west of al-Hasakah, damaging a staging area and an IED production facility northeast of Deir ez-Zor, destroying three armed vehicles, damaging one armed vehicle, destroying a vehicle carrying anti-aircraft artillery, destroying an ISIL tank and an ISIL unit in and around Kobanî, and killing a small group of fighters southwest of Rabiyah. On 8 October, the U.S. led a 15th round of nine airstrikes along with the UAE, destroying an armored personnel carrier, four armed vehicles, and an artillery piece and damaging another armed vehicle in and around Kobanî, striking an ISIL training camp and fighters northwest of Raqqa, and destroying a tank northwest of Deir ez-Zor. In a 16th round of airstrikes in Syria on 9 October, the U.S. carried out nine airstrikes in the areas in and around the besieged border town of Kobanî. The U.S. carried out six airstrikes south of Kobanî that destroyed two ISIL-held buildings, one tank, one heavy machine gun, and a fighting position, along with striking one large and two small ISIL units. North of Kobanî, the U.S. struck two small ISIL units and destroyed two ISIL-held buildings. On 10 October, the U.S. led a 17th round of airstrikes along with Saudi Arabia and the UAE, carrying out nine strikes that destroyed two ISIL training facilities and three vehicles, damaged a tank, and struck two ISIL units in and around Kobanî. The strikes also destroyed an armored vehicle staging facility east of Deir ez-Zor and struck a small ISIL unit northeast of Al-Hasakah. In an 18th round of airstrikes in Syria on 11 October, the U.S. carried out six airstrikes in and around Kobanî. The U.S. carried out four strikes north of Kobanî, striking a fighting position, damaging a command and control facility, destroying a staging building, and striking two small ISIL units. South of Kobanî, two airstrikes destroyed three trucks. On 12 October, the U.S. led a 19th round of airstrikes along with Saudi Arabia and the United Arab Emirates, carrying out four strikes: three in Kobanî, destroying a fighting position and a staging area, and one strike northwest of Raqqa, destroying an armored vehicle compound. Also on 12 October, the U.S. announced that the Turkish government had approved the use of Turkish military bases by Coalition forces fighting ISIL in Syria and Iraq. These installations included key bases near the Syrian border and important U.S. military bases in Turkey such as the Incirlik Air Base. Despite the announcement of Turkish government approval, on 13 October, Turkish officials publicly denied that any agreement had been made over Coalition use of Turkish airbases, including Incirlik. In a 20th round of airstrikes in Syria on 13 October, the U.S. and Saudi Arabia carried out eight airstrikes against ISIL forces.
Seven of the strikes were in and around Kobanî, striking a large ISIL unit and two small units, damaging one staging location and destroying another, destroying a heavy-machine-gun firing position, destroying three buildings, and damaging two others. One other strike northwest of Raqqa struck an ISIL garrison. On 14 October, the U.S. and Saudi Arabia carried out the 21st round and the largest set of strikes against ISIL in Syria since the beginning of the intervention, with 21 strikes against targets in and around Kobanî, and an additional strike near Deir ez-Zor. According to the Department of Defense, the strikes were designed to interdict ISIL reinforcements and resupply zones and prevent ISIL from massing combat power on the Kurdish-held portions of Kobanî. The strikes destroyed two staging locations and damaged another, destroyed one ISIL building and damaged two others, damaged three ISIL compounds, and destroyed one truck, one armed vehicle, and one other vehicle near Kobanî in support of Kurdish forces resisting the siege of the town. In addition to those targets, the airstrikes struck seven staging areas, two mortar positions, three ISIL-occupied buildings, and an artillery storage facility. An additional strike near Deir ez-Zor struck a modular oil refinery. In a 22nd round of airstrikes on 15 October, the U.S. carried out 18 strikes against ISIL targets in and around Kobanî. The strikes destroyed multiple fighting positions and struck sixteen ISIL-occupied buildings. On 16 October, the U.S. carried out a 23rd round of airstrikes with 14 airstrikes against ISIL targets in and around Kobanî, striking 19 ISIL-controlled buildings, two command posts, three fighting positions, three sniper positions, one staging location, and one heavy machine gun position. In a 24th round of airstrikes on 17 October, the U.S. carried out seven airstrikes against ISIL targets in and around Kobanî and in north-eastern Syria. Six airstrikes took place near Kobanî, striking three ISIL-controlled buildings; they also destroyed two fighting positions, suppressed three fighting positions, and destroyed two vehicles. One other airstrike near Al-Shaddadi struck ISIL-controlled oil collection equipment, including several petroleum, oil, and lubricants tanks, and a pump station. On 20 October, the U.S. carried out a 25th round of airstrikes, with six airstrikes against ISIL targets in and around Kobanî. The strikes destroyed ISIL fighting positions, ISIL mortar positions, a vehicle, and one stray supply bundle from a U.S. airdrop of supplies intended for Kurdish forces, in order to prevent the supplies from being captured. In a 26th round of airstrikes on 21 October, the U.S. carried out four airstrikes against ISIL targets in and around Kobanî. The strikes destroyed several ISIL fighting positions, an ISIL-controlled building, and a large ISIL unit. The British Royal Air Force began operating over Syria in a surveillance role on the same date, making the UK the first Western country other than the U.S. to operate in both Iraq and Syria simultaneously. On 22 October, the U.S. carried out a 27th round of airstrikes with six airstrikes against ISIL targets in and around Kobanî. The strikes destroyed several ISIL fighting positions, two ISIL vehicles, an ISIL-controlled building and an ISIL logistical center. In a 28th round of airstrikes on 23 October, the U.S. carried out six airstrikes in and around Kobanî and near Deir ez-Zor.
Four strikes destroyed several ISIL fighting positions, an ISIL vehicle, and an ISIL command and control center near Kobanî. Two strikes east of Deir ez-Zor destroyed several ISIL oil storage tanks. On 24 October, the U.S. carried out a 29th round of airstrikes with six airstrikes against ISIL targets in and around Kobanî. The strikes destroyed an ISIL vehicle and struck three ISIL units. In a 30th round of airstrikes on 25 October, the U.S. carried out one strike near Kobanî, destroying an ISIL artillery piece. On 26 October, the U.S. carried out its 31st round of airstrikes with five airstrikes against ISIL targets near Kobanî, destroying seven ISIL vehicles and an ISIL-controlled building. In a 32nd round of airstrikes on 27 October, the U.S. carried out four strikes near Kobanî, destroying five ISIL vehicles and an ISIL-occupied building. On 28 October, the U.S. carried out its 33rd round of airstrikes, with four airstrikes conducted against ISIL targets near Kobanî, destroying four ISIL fighting positions and a small ISIL unit. In a 34th round of airstrikes on 29 October, the U.S. carried out eight airstrikes in and around Kobanî. The strikes destroyed five ISIL fighting positions, a small ISIL unit, six ISIL vehicles, an ISIL-controlled building, and an ISIL command and control node. On 30 October, the U.S. carried out a 35th round of airstrikes, with 12 airstrikes against ISIL targets in and around Kobanî, and against targets near Deir ez-Zor and Raqqa. 10 strikes near Kobanî struck two small ISIL units, destroyed seven ISIL fighting positions, and five ISIL-controlled buildings. One strike near Deir ez-Zor damaged an ISIL headquarters building while another strike near Raqqa damaged an ISIL security building. In a 36th round of airstrikes on 31 October, the U.S. carried out four airstrikes in and around Kobanî, damaging four ISIL fighting positions and an ISIL controlled building. Naming of Operation Inherent Resolve Unlike previous U.S. combat operations, no name had been given to the American-led anti-ISIL intervention in Syria and Iraq until it was announced in mid-October 2014 that the operational name would be Inherent Resolve. The decision to keep the conflict nameless until then drew considerable media criticism. November 2014 On 1 November, the U.S. carried out a 37th round of airstrikes with five airstrikes against ISIL targets in and around Kobanî. The strikes suppressed or destroyed nine ISIL fighting positions, and struck one ISIL-controlled building. In a 38th round of airstrikes on 2 November, the U.S. carried out seven airstrikes in and around Kobanî and near Deir ez-Zor. Five airstrikes in and around Kobanî struck five small ISIL units and destroyed three ISIL vehicles. Two airstrikes southeast of Deir ez-Zor destroyed an ISIL tank and two vehicle shelters. On 3 November, the U.S. and coalition partners carried out a 39th round of airstrikes in and around Kobanî and near Deir ez-Zor. Four airstrikes in and around Kobanî struck an ISIL fighting position, a small ISIL unit, and destroyed two ISIL-controlled buildings. One airstrike near Deir ez-Zor damaged an ISIL-controlled building. In a 40th round of airstrikes on 4 and 5 November, the U.S. carried out six airstrikes in and around Kobanî and north of Sinjar just across the Iraq-Syria border. Three airstrikes in and around Kobanî struck a small ISIL unit, two ISIL fighting positions, and an ISIL dump truck that was used in the construction of fighting positions. 
One airstrike north of Sinjar destroyed an ISIL fighting position, used to launch mortar attacks, and struck a small ISIL unit manning the position. Two additional strikes north of Sinjar struck a small ISIL unit and destroyed an ISIL armored vehicle. On 6 and 7 November, the U.S. carried out a 41st round of airstrikes in and around Kobanî and near Tell Abyad. Seven strikes in and around Kobanî struck three small ISIL units, seven ISIL fighting positions, and destroyed an ISIL artillery piece. One airstrike near Tell Abyad destroyed an ISIL weapons stockpile. In a 42nd round of airstrikes between 8 and 10 November, the U.S. carried out 23 airstrikes in and around Kobanî and near Deir ez-Zor. 13 airstrikes conducted in and around Kobanî struck an ISIL vehicle and five small ISIL units, destroyed an ISIL-occupied building used as an ammunition stockpile, an ISIL command and control building, and seven ISIL fighting positions, as well as damaging two ISIL fighting positions. In addition, eight airstrikes southeast of Deir ez-Zor damaged several structures of an ISIL oil collection facility, which was used to trans-load oil for the black market, while two airstrikes east of Deir ez-Zor damaged an ISIL oil collection point. Between 11 and 12 November, the U.S. carried out a 43rd round of airstrikes with 16 airstrikes in and around Kobanî, near Deir ez-Zor, and near Al-Hasakah. 10 airstrikes conducted in and around Kobanî struck eight small ISIL units, damaged three ISIL fighting positions, and destroyed an ISIL logistics facility. Four airstrikes near Deir ez-Zor damaged an ISIL crude oil collection facility, struck a small ISIL unit, and damaged an ISIL vehicle. Two airstrikes near Al-Hasakah damaged a crude oil collection point. In a 44th round of airstrikes between 13 and 14 November, the U.S. carried out 20 airstrikes in and around Kobanî, east of Deir ez-Zor, west of Aleppo, and east of Raqqa. 17 airstrikes conducted in and around Kobanî struck ten ISIL units, destroyed 10 fighting positions, an ISIL controlled building, two ISIL vehicles, and an ISIL motorcycle. One airstrike east of Raqqa destroyed an ISIL training camp and another airstrike east of Deir ez-Zor destroyed an ISIL oil collection point. One other airstrike west of Aleppo struck militants associated with the Khorasan group. Between 15 and 17 November, the U.S. carried out a 45th round of airstrikes with 11 airstrikes in and around Kobanî and near Deir ez-Zor. Nine airstrikes in and around Kobanî destroyed seven ISIL fighting positions, suppressed an ISIL fighting position, destroyed four ISIL staging areas, and struck one tactical ISIL unit. Two airstrikes near Deir ez-Zor struck an ISIL crude oil collection facility and destroyed one ISIL tank. In a 46th round of airstrikes between 18 and 19 November, the U.S. carried out seven airstrikes in and around Kobanî, southeast of Al-Hasakah, and near Hazm. Five airstrikes in and around Kobanî destroyed an ISIL fighting position, an ISIL staging area and three ISIL controlled buildings, suppressed two ISIL fighting positions, struck two tactical ISIL units, and a large ISIL unit. One airstrike southeast of Al-Hasakah damaged a crude oil collection point operated by ISIL while another airstrike near Hazm struck and destroyed a storage facility associated with the Khorasan Group. Between 20 and 21 November, the U.S. and coalition partners carried out a 47th round of airstrikes with seven airstrikes in and around Kobanî and near Raqqa. 
Six airstrikes in and around Kobanî destroyed four ISIL staging areas, two ISIL-controlled buildings, and two ISIL tactical units, and suppressed an ISIL fighting position. One airstrike near Raqqa damaged an ISIL barracks building. In a 48th round of airstrikes between 22 and 24 November, the U.S. and coalition partners carried out nine airstrikes in and around Kobanî and near Raqqa. Seven airstrikes in and around Kobanî destroyed three ISIL fighting positions along with two ISIL staging areas, damaged an ISIL staging area, and suppressed four ISIL fighting positions. Two strikes near Raqqa struck an ISIL headquarters building. Between 25 and 26 November, the U.S. carried out a 49th round of airstrikes with 10 airstrikes in and around Kobanî, striking an ISIL fighting position, a large ISIL unit, and two tactical ISIL units, and destroying four ISIL staging areas and six ISIL fighting positions. In a 50th round of airstrikes between 27 and 28 November, the U.S. carried out two airstrikes near Kobanî and Aleppo. One airstrike near Kobanî struck an ISIL fighting position and an ISIL staging area while one airstrike near Aleppo struck a tactical ISIL unit. Between 29 November and 1 December, the U.S. carried out a 51st round of airstrikes with 27 airstrikes in and around Kobanî, near Raqqa, and near Aleppo. 17 airstrikes near Kobanî destroyed two ISIL-occupied buildings, three ISIL tanks, three ISIL fighting positions, an ISIL armored personnel carrier, three ISIL vehicles and two ISIL staging areas. They also struck seven tactical ISIL units, targeted six ISIL fighting positions and damaged an ISIL-controlled building. Nine airstrikes near Raqqa struck an ISIL electronic warfare garrison, an ISIL military garrison, an ISIL headquarters building, an ISIL jamming system, an ISIL tank and 14 ISIL vehicles, while one airstrike near Aleppo struck a target associated with the Khorasan Group.
December 2014
In a 52nd round of airstrikes between 1 and 3 December, the U.S. carried out 14 airstrikes in and around Kobanî, destroying an ISIL vehicle, 17 ISIL fighting positions, and an ISIL staging area, suppressing eight other fighting positions, and striking a large ISIL unit. Between 4 and 8 December, the U.S. and coalition partners carried out a 53rd round of airstrikes with 15 airstrikes in and around Kobanî and near Raqqa. 15 airstrikes in and around Kobanî destroyed four ISIL fighting positions, three ISIL-occupied buildings, two ISIL staging areas, two ISIL tanks, an ISIL motorcycle, and a mortar, and struck eight tactical ISIL units along with two ISIL fighting positions. One airstrike near Raqqa struck an ISIL electronic warfare garrison. In a 54th round of airstrikes between 9 and 10 December, the U.S. carried out seven airstrikes in and around Kobanî, destroying five ISIL fighting positions, striking three ISIL fighting positions, and striking a large ISIL unit. Between 11 and 12 December, the U.S. and coalition partners carried out a 55th round of airstrikes with seven airstrikes in and around Kobanî, near Aleppo, and near Al-Qa'im, Iraq. Five airstrikes in and around Kobanî destroyed five ISIL fighting positions and struck one ISIL fighting position. One airstrike near Aleppo struck five ISIL-occupied buildings while another airstrike near Al-Qa'im on the Syrian border destroyed two ISIL fortifications. In a 56th round of airstrikes between 13 and 15 December, the U.S. and coalition partners carried out nine airstrikes in and around Kobanî and near Abu Kamal.
Eight airstrikes in and around Kobanî destroyed nine ISIL fighting positions, two ISIL-controlled buildings, and two ISIL staging positions as well as striking one ISIL fighting position. One airstrike near Abu Kamal destroyed an ISIL vehicle. Between 16 and 17 December, the U.S. and coalition partners carried out a 57th round of airstrikes with six airstrikes in and around Kobanî and near Abu Kamal. Five airstrikes in and around Kobanî destroyed an ISIL controlled building, one ISIL staging area, one ISIL bunker, and an ISIL mortar, and struck two ISIL tactical units, two additional buildings, and two ISIL fighting positions. One airstrike near Abu Kamal destroyed an ISIL tactical vehicle. In a 58th round of airstrikes on 18 December, the U.S. and coalition partners carried out six airstrikes in and around Kobanî destroying seven ISIL fighting positions and an ISIL building, and struck a tactical unit. On 19 December, the U.S. and coalition partners carried out a 59th round of airstrikes with four strikes in and around Kobanî and near Raqqa. Three airstrikes in and around Kobanî destroyed two ISIL controlled buildings and an ISIL staging area as well as striking two ISIL tactical units. One airstrike near Raqqa damaged an ISIL training compound. In a 60th round of airstrikes on 20 December, the U.S. and coalition partners carried out five airstrikes in and around Kobanî destroying eight ISIL fighting positions. On 21 December, the Coalition carried out a 61st round of airstrikes with three strikes in and around Kobanî destroying an ISIL staging position and two ISIL fighting positions as well as striking two ISIL fighting positions. In a 62nd round of airstrikes on 22 December, the Coalition carried out 12 airstrikes in and around Kobanî, near Aleppo, near Al-Hasakah, and near Raqqa. Six airstrikes in and around Kobanî destroyed six ISIL fighting positions and struck four ISIL fighting positions and an ISIL tactical unit. Three airstrikes near Aleppo destroyed artillery equipment and struck 10 ISIL buildings; two airstrikes near Al-Hasakah destroyed an ISIL tactical vehicle, two ISIL trucks, a building, and two ISIL storage containers, and one airstrike near Raqqa destroyed an ISIL checkpoint complex. On 23 December, the Coalition carried out a 63rd round of airstrikes with seven airstrikes in and around Kobanî. Six airstrikes in and around Kobanî destroyed seven ISIL fighting positions, an ISIL building and struck several ISIL fighting positions and one airstrike near Barghooth struck ISIL oil collection equipment. In a 64th round of airstrikes on 24 December, the Coalition carried out ten airstrikes in and around Kobanî, near Deir ez-Zor, and near Raqqa. Eight airstrikes in and around Kobanî destroyed five ISIL fighting positions, an ISIL building, an ISIL staging position, and struck three ISIL tactical units, an ISIL tactical vehicle and an ISIL fighting position. One airstrike near Deir ez-Zor struck a crude oil collection point and another airstrike near Raqqa struck an ISIL weapons stockpile. On 25 December, the Coalition carried out a 65th round of airstrikes with 15 airstrikes in and around Kobanî, near Al-Hasakah, and near Raqqa. 13 airstrikes in and around Kobanî destroyed three ISIL buildings, one vehicle, 17 ISIL fighting positions, two ISIL staging positions as well as striking two ISIL fighting positions, three large ISIL units and four ISIL tactical units. 
One airstrike near Al-Hasakah struck an ISIL drilling tower and destroyed two support vehicles and another airstrike near Raqqa struck an ISIL assembly area. In a 66th round of airstrikes on 26 December, the Coalition carried out four airstrikes in and around Kobanî, destroying three ISIL buildings and two ISIL vehicles. On 29 December, the Coalition carried out a 67th round of airstrikes with 12 airstrikes in and around Kobanî, near Deir ez-Zor, and near Raqqa. 10 airstrikes in and around Kobanî destroyed 11 ISIL fighting positions, two ISIL buildings, and an ISIL storage container, and struck an ISIL tactical unit. One airstrike near Deir ez-Zor struck several ISIL-controlled buildings while another airstrike near Raqqa also struck several ISIL-controlled buildings. In a 68th round of airstrikes on 30 December, the Coalition carried out seven airstrikes in and around Kobanî and near Deir ez-Zor. Six airstrikes in and around Kobanî destroyed three ISIL buildings, damaged one ISIL building, and struck an ISIL tactical unit while one airstrike near Deir ez-Zor destroyed an ISIL shipping container. On 31 December, the U.S. and coalition partners carried out a 69th round of airstrikes with seven airstrikes in and around Kobanî and near Al-Hasakah. Five airstrikes in and around Kobanî destroyed five ISIL buildings and six ISIL fighting positions while two airstrikes near Al-Hasakah destroyed four oil derricks controlled by ISIL. 2015 January 2015 In a 70th round of airstrikes on 1 January, the Coalition carried out 17 airstrikes in and around Kobanî, near Deir ez-Zor, and near Raqqa. 13 airstrikes in and around Kobanî destroyed 12 ISIL controlled buildings, four ISIL fighting positions, one ISIL vehicle as well as striking two ISIL tactical units and two large ISIL units. Two airstrikes near Raqqa destroyed five ISIL checkpoints and struck an ISIL staging area, while two airstrikes near Deir ez-Zor destroyed an ISIL fighting position and struck an ISIL shipping container. February 2015: Al-Hasakah offensive On 5 February 2015, Jordan elevated its role in the U.S.-led coalition in Syria, launching one of the largest airstrike campaigns since early January 2015, targeting ISIL militants near Raqqa, the then-de facto ISIL capital, inflicting an unknown number of casualties and damaging ISIL facilities. This was done in retaliation against ISIL's brutal murder of Muath al-Kasasbeh. On 6 February, a continued round of Coalition airstrikes at Raqqa killed over 30 ISIL militants. On 21 February, Syrian Kurds launched an offensive to retake ISIL-held territories in the Al-Hasakah Governorate, specifically in the Tell Hamis area, with support from U.S. airstrikes. At least 20 villages were liberated, and 12 militants were killed in the clashes. In response, on 23 February, ISIL abducted 150 Assyrian Christians from villages near Tell Tamer in northeastern Syria, after launching a large offensive in the region. As a result of ISIL's massive offensive in the west Al-Hasakah Governorate, the U.S.-led Coalition increased the number of airstrikes in the region to 10, on 24 February, in order to halt the ISIL advance. The airstrikes struck nine ISIL tactical units and destroyed two ISIL vehicles. On 26 February, the number of Assyrian Christians abducted by ISIL from villages in northeastern Syria from 23 to 25 February rose to at least 220, according to the Syrian Observatory for Human Rights (SOHR), a monitoring group based in Britain. 
On 27 February, the Kurdish Democratic Union Party and the Syrian Observatory for Human Rights reported that Kurdish fighters had recaptured the town of Tell Hamis, along with most of the villages occupied by ISIL in the region. At least 127 ISIL militants were killed in the clashes, along with 30 YPG and allied fighters. One Australian volunteer, who was fighting for the YPG, was also killed. Many of the remaining ISIL militants retreated to Tell Brak, which quickly came under assault from the YPG and allied Arab fighters.
March–April 2015: Battle of Sarrin and expanded Canadian and UK efforts
On 1 March 2015, YPG fighters, aided by U.S. airstrikes, were able to drive ISIL militants out of Tell Brak, reducing the ISIL occupation in the eastern Jazira Canton to the villages between Tell Brak and Tell Hamis. On 6 March, it was reported that Abu Humam al-Shami, al-Nusra's military chief, was killed in a U.S. airstrike targeting a meeting of top al-Nusra leaders at the al-Nusra Front's new headquarters at Salqin. On 9 March, the U.S. carried out another airstrike on the al-Nusra Front, targeting a military camp near Atimah, close to the Turkish border in the Idlib Governorate. The airstrike left nine militants dead. On 24 March, Canadian Prime Minister Stephen Harper announced that Canada would be looking to expand Operation Impact to include airstrikes against ISIL in Syria as well. On 26 March, the United Kingdom Ministry of Defence announced the deployment of around 75 military trainers and headquarters staff to Turkey and other nearby countries in the anti-ISIL coalition, to assist with the U.S.-led training programme in Syria. The programme was set to provide small arms, infantry tactics and medical training to Syrian moderate opposition forces for over three years. On 30 March, the House of Commons of Canada authorized the extended deployment of its military for one year and to conduct operations related to the war in Syria. On 8 April, Canada initiated airstrikes in Syria, with two CF-18 fighters bombing a former military installation of the Syrian government that had been captured by ISIL, near its headquarters in Raqqa.
May 2015: Al-Amr special forces raid
On 15 May, after surveillance by British special forces confirmed the presence of a senior ISIL leader named Abu Sayyaf in al-Amr, 1st SFOD-Delta operators from the Joint Special Operations Command based in Iraq conducted an operation to capture him. The operation resulted in his death when he tried to engage U.S. forces in combat, and in the capture of his wife, Umm Sayyaf. The operation also led to the freeing of a Yazidi woman who was held as a slave. About a dozen ISIL fighters were also killed in the raid, two U.S. officials said. The SOHR reported that an additional 19 ISIL fighters were killed in the U.S. airstrikes that accompanied the raid. One official said that ISIL forces fired at the U.S. aircraft, and there was reportedly hand-to-hand combat during the raid. UH-60 Black Hawk helicopters and V-22 Osprey aircraft were used to conduct the raid, and Umm Sayyaf was held by U.S. forces in Iraq. CNN reported that a senior U.S. military official revealed that in May 2015, U.S. special operations forces came "tantalisingly close" to capturing or killing ISIL leader Abu Bakr al-Baghdadi in Raqqa, but failed to do so because classified information was leaked to the news media. Coalition air support was decisive in the YPG victory over ISIL in the May 2015 Western al-Hasakah offensive.
June–July 2015
U.S. air support, particularly from the 9th Bomb Squadron, was decisive in the YPG victory over ISIL in the Second Battle of Sarrin. Coalition air support was also decisive in the YPG/FSA victory over ISIL in the Tell Abyad offensive. Following a 20 July suicide bombing in the Şanlıurfa Province of Turkey, believed to have been carried out by ISIL militants, as well as an ISIL cross-border attack that killed a Turkish serviceman on 23 July, Turkish armour and aircraft struck ISIL targets in cross-border engagements in northern Syria. Turkey also agreed to let the United States use the USAF Incirlik Air Base for strikes against ISIL.
August–October 2015: UK drone strike and Canada ceases airstrikes
On 21 August, three Islamic State fighters, two of United Kingdom nationality, were targeted and killed in Raqqa by a British Royal Air Force MQ-9 Reaper strike. Prime Minister David Cameron gave a statement to Parliament that one of the British nationals targeted had been plotting attacks in the UK. Another British national was killed in a separate air strike by U.S. forces in Raqqa on 24 August. In October 2015, 50 U.S. special forces operators were deployed to northern Syria to help train and coordinate anti-ISIL forces in the region. The introduction of Russian aircraft and ship-based cruise missiles into Syrian airspace in support of the Syrian government created new threats to the U.S.-led coalition. Discussions were held to deconflict Syrian airspace. On 10 October, the state-run Syrian Arab News Agency reported claims that two U.S. F-16 jets had "violated Syrian airspace" and bombed two electricity power plants in al-Rudwaniya, east Aleppo, "in breach of international law". On 20 October, Canada's Prime Minister-elect Justin Trudeau informed Barack Obama by phone of Canada's intention to pull out of bombing raids in Syria. Canada would remain a coalition partner but would stop conducting airstrikes.
November–December 2015: French retaliation and the UK officially begins airstrikes
After deadly terror attacks in Paris conducted by jihadists, French President François Hollande sent France's only aircraft carrier, the Charles de Gaulle, with its 26 fighters to intensify air strikes. On 27 November, SANA claimed that the coalition targeted water pumping stations in the al-Khafsah area, east of Aleppo, causing them to go out of service. According to Bellingcat's investigation, however, it was a Russian MoD bombing. On 2 December, the UK parliament voted 397–223 in favour of airstrikes in Syria. Within hours, RAF Tornado jets carried out their first air strikes, targeting the al-Omar oil fields near Deir ez-Zor in eastern Syria, which were under ISIL control. On 6 December, a Syrian Arab Army base at Deir ez-Zor was struck, killing at least one Syrian Arab Army soldier, with reports circulating that as many as four were killed, 13 wounded and two tanks destroyed. Syria accused the U.S. of conducting the strike; however, U.S. officials denied this, stating instead that the bombing was a mistake by the Russians. After the airstrikes, the SAA reported that ISIL forces began to attack the base.
2016
March–April 2016: Continued special forces operations
On 4 March, a U.S.-led Coalition airstrike targeted Omar al-Shishani, ISIL's top field commander, who was travelling in a convoy near al-Shaddadi in northeastern Syria. The strike injured him, and there were reports that he died from his injuries. However, this proved to be incorrect and he was actually killed later in an airstrike in Iraq in July 2016.
Also on 4 March, 100 ISIL militants assaulted Peshmerga lines in Syria; U.S. Navy SEAL Charles Keating IV helped the Peshmerga to repel the attack. As ISIL fighters sent a car bomb towards him, Keating led a team to counterattack with sniper and rocket fire. For his actions during the battle, he was posthumously awarded the Silver Star. On 24 March, U.S. special operations forces conducted an operation with the intent of capturing Abd al-Rahman Mustafa al-Qaduli in Syria. Al-Qaduli, then the 6th most wanted terrorist in the world and, according to analysts, the then-second-in-command of ISIL, acted as the group's finance minister and was involved in external plots; he also temporarily commanded ISIL after a commander was injured. U.S. special forces inserted by helicopter and lay in wait to intercept his vehicle; the operators attempted to capture him, but the situation escalated and, at the last moment, they decided to fire on the vehicle instead, killing al-Qaduli and three other militants. On 25 April, it was reported that U.S. President Barack Obama had authorized the deployment of an additional 250 special operations soldiers to Syria. In the following weeks, they were to join the 50 already in the country; their main aim was to advise, assist and expand the ongoing effort to bring more Syrian Arab fighters into units the U.S. supported in northern Syria to combat ISIL.
May 2016
In late May 2016, more than a dozen U.S. special forces troops were seen in the village of Fatisah, a short distance north of Raqqa. They were fighting near the front lines with the YPG and wearing both YPG and U.S. insignia on their military uniforms; the operators were helping call in fire support for local SDF forces and coordinating airstrikes from behind the front lines in their advance toward Raqqa. However, the Pentagon and White House insisted that the troops were not fighting ISIL on the front lines and were still participating in a non-combat mission known as "train, advise and assist." Also in late May, a U.S. special forces operator was wounded north of Raqqa by indirect ISIL rocket or mortar fire. The Telegraph reported that British special forces had been operating on the frontline in Syria, particularly in May, when they frequently crossed the border from Jordan to defend a New Syrian Army (NSA) rebel unit composed of former Syrian special forces as it defended the village of al-Tanf against ISIL attacks. They mostly helped the unit with logistics such as building defenses and making bunkers safe. The NSA captured the village that month and faced regular ISIL attacks; an ISIL SVBIED drove into the base, killing 11 members of the NSA and injuring 17 others. The wounded were CASEVAC'd by U.S. helicopters to Jordan. The suicide attack damaged the structure of the al-Tanf base, and British troops crossed over from Jordan to help the unit rebuild its defences.
June 2016: Kurdish offensive to take Manbij
On 1 June, a senior U.S. defense official told Fox News that a "thousands"-strong SDF force consisting of Sunni Arab fighters and a small contingent of Kurdish fighters (mainly from the YPG), with assistance from U.S. special forces operators and fighter jets, launched an operation to recapture the strategically important ISIL-held city of Manbij in northern Syria, near the border with Turkey; ISIL used the town to move supplies and foreign fighters into Syria from Turkey. In the 24 hours after the start of the offensive, 18 U.S.
airstrikes destroyed ISIL headquarters buildings, weapons caches, training areas, and six bridges, and an unknown number of ISIL fighters were killed; 15 civilians were also reported killed. On 3 June, F/A-18 Hornets launched from a U.S. Navy aircraft carrier conducted air strikes against ISIS targets in Syria from the eastern Mediterranean. It was the first time the U.S. Navy had conducted strike missions in the Middle East from the Mediterranean Sea since flying operations against the Iraqi military in 2003. By 9 June, the U.S. Central Command said the Coalition had conducted more than 105 strikes in support of the SDF's advance; French special forces were offering training and advice to SDF fighters in the area, and on 15 June, British special forces were also reported to be operating in the area. Much of the SDF advance was made possible by Coalition air support, with airstrikes being directed by special forces personnel on the ground. On the same day, four U.S. special operations troops in northern Syria were "lightly" wounded by shrapnel when an Islamic State anti-tank missile fired at a nearby vehicle exploded, but they quickly returned to duty. On 16 June, supposedly as part of Russia's campaign to pressure the U.S. to agree to closer cooperation over Syria, Russian military aircraft bombed, with cluster bombs, a military outpost in al-Tanf in southeast Syria that was garrisoned by the New Syrian Army (NSA); U.S. and British special forces based in Jordan regularly worked with Syrian rebels at the al-Tanf outpost. The airstrike happened 24 hours after a detachment of 20 British special forces left the outpost. After the airstrike took place, U.S. commanders warned Russia that the garrison was part of the international coalition against ISIL and therefore should not be attacked, but 90 minutes later, nearby U.S. warplanes observed Russian jets dropping a second barrage of bombs on the outpost, killing four rebel soldiers. A U.S. spy plane overhead tried to contact the Russian pilots on emergency frequencies, but the Russians did not answer. U.S. officials demanded an explanation from Moscow, but they were told the Russian pilots struck the outpost because they "thought it was an ISIL base". Russian officials then said that Jordan had approved the strikes in advance, but Jordan denied this. Moscow also claimed its air command headquarters in Syria was unable to call off the strikes because the U.S. had not given it the precise position of the outpost. On 29 June, as part of the 2016 Abu Kamal offensive (an offensive by the Pentagon-trained New Syrian Army and several hundred other rebels from different factions that aimed to capture Abu Kamal and sever ISIL's transit link between Syria and Iraq), rebel forces entered the al-Hamdan air base, northwest of the border town of Abu Kamal, following intense clashes. This followed significant advances into ISIL-held territory near the Abu Kamal border crossing; the NSA said it had captured a number of ISIL positions on the outskirts of Abu Kamal, but a raid on the town at dawn was reported to have been repelled by militants. Fighting continued around the town, as coalition airstrikes were carried out on ISIL hideouts; the NSA also said it was coordinating the assault with Iraqi government forces, who were advancing on the border from the other side. The NSA issued a statement saying "the NSA maintains control of the desert, the approaches to Abu Kamal, and maintains freedom of manoeuvre".
Later that day, ISIL militants ambushed the rebels, inflicting heavy casualties and seizing weapons, according to a rebel source. ISIL retook the airbase from the NSA and continued to advance against the rebels, recapturing some of the outposts the NSA had captured south of the town; Coalition helicopters dropped in "foreign" airborne troops on the southern edge of Abu Kamal to help the rebels in their advance, and coalition jets also carried out eight airstrikes on ISIL targets in the Abu Kamal area. A contributing reason for the failure of the U.S.-backed rebel operation was the withdrawal of air support at a critical moment; the aircraft assigned to the operation were ordered in the middle of the operation to leave the area and instead fly to the outskirts of Fallujah, where a large convoy of ISIL fighters, which U.S. commanders considered a "strategic target", had been seen trying to escape across the desert after the city was recaptured by the Iraqi army. The convoy was eliminated by American and British planes along with gunships and aircraft from the Iraqi air force.
August 2016: Operation Euphrates Shield
On 7 August, as part of Operation Tidal Wave II, "multiple" coalition warplanes destroyed some 83 oil tankers used by the Islamic State near Abu Kamal. CNN reported that the Coalition carried out airstrikes in support of the Turkish intervention in Syria with Syrian opposition forces in August 2016, which seized the town of Jarabulus from ISIL and pushed south and west in an effort to clear the terror group from its border. U.S. special forces had initially intended to accompany the offensive, but the U.S. was still working on approving the proposal when Turkish units pushed across the border. On 30 August, the New York Times reported that Abu Mohammad al-Adnani had been killed while traveling in a vehicle by a U.S. drone strike in Al-Bab. CNN reported that al-Adnani was a key deputy to ISIL's leader; he acted as the principal architect of ISIL's external operations and as the group's spokesman, and coordinated the movements of their fighters, directly encouraging them to carry out lone-wolf attacks on civilians and military targets. The strike marked the highest-profile killing of an ISIL member thus far.
September–October 2016: Coalition air raid on Deir ez-Zor
On 8 September, an airstrike allegedly carried out by the United States killed Abu Hajer al-Homsi (nom de guerre Abu Omar Saraqib), the top military commander of the renamed al-Nusra Front (Jabhat Fateh al-Sham), in the countryside of the Aleppo Governorate. Abu Hajer al-Homsi was one of the founding members of the al-Nusra Front and had taken part in the Iraq War against the U.S. when he was part of the predecessor organization al-Qaeda in Iraq. The Pentagon denied carrying out the strike and instead claimed Russia was responsible. On 16 September, CNN reported that up to 40 U.S. special forces operators were accompanying Turkish troops and vetted Syrian opposition forces as they cleared ISIL from northern Syria. The mission, called Operation Noble Lance, was authorised that week and was now underway. Officially, the U.S. personnel were to conduct the same type of "advising, assisting and training" missions that the U.S. had been providing to moderate opposition and local anti-ISIL forces. The Washington Post reported that the contingent of Special Operations forces (SOF) assisting the Turkish and Syrian rebel forces around the cities of Jarabulus and al-Rai had been sent at the request of the Turkish government.
On 17 September, two U.S. A-10s, two Danish F-16s, and a UK Reaper drone mistakenly bombed a Syrian Army-controlled base in the ISIL-besieged city of Deir ez-Zor. More than 62 Syrian soldiers were killed and at least 100 were wounded in the airstrike. ISIL forces attacked immediately after the Coalition airstrike and took the strategically important elevation near Deir ez-Zor airbase: Tharda (Thurda) mountain. According to Russian and Syrian government sources, SAA forces, supported by Russian and Syrian airstrikes, counterattacked and recaptured Tharda mountain by the end of the day, suffering additional losses, including one Syrian jet fighter. The USAF immediately issued an official explanation: the strike had been a navigation and intelligence mistake, and the bombing was stopped after a Russian Air Force contact informed them of the SAA losses. The Danish Air Force confirmed that its two F-16 fighters had participated in the airstrike, insisting that operations stopped the moment the message from the Russians was received, describing the strike as a mistake, and expressing regret for the losses. Russian officials accused the U.S. of helping ISIL through the air raid. Russia also called for a meeting of the United Nations Security Council over the airstrike, and the U.S. temporarily ceased airstrikes in the area. In response to the errant airstrike, the Syrian Armed Forces called it a "serious and blatant attack on Syria and its military". On 3 October, Ahmad Salama Mabruk, a senior al-Nusra Front and previously Egyptian Islamic Jihad commander, was killed in a U.S. drone strike in Jisr al-Shughur.
November 2016
On 18 November, a U.S. airstrike killed an Afghan al-Nusra Front commander, Abu Afghan al-Masri, in the town of Sarmada. On 24 November, the Washington Post reported that Senior Chief Petty Officer Scott C. Dayton of Explosive Ordnance Disposal Mobile Unit 2 had been killed by an IED near Ayn Issa, roughly 35 miles northwest of ISIL's self-proclaimed capital of Raqqa. It was the first time a U.S. service member was killed in Syria since a contingent of SOF was deployed there in October 2015. CNN reported that on 26 November, a U.S. drone strike in Raqqa killed Boubaker Hakim, a senior ISIL terrorist suspected of enabling the Sousse terrorist attack, as he had connections to the Tunisian ISIL cell that carried out that attack and the Bardo National Museum attack. Pentagon spokesman Peter Cook said, "His removal degrades ISIL's ability to conduct further attacks in the West and denies ISIL a veteran extremist with extensive ties." Stars and Stripes reported that in November 2016, airmen from the 621st Contingency Response Wing, with a contingent of civil engineers, intelligence personnel, and security forces, were temporarily deployed to expand and modify the airstrip that the airmen had established earlier in 2016 at an airbase near Kobani to which they had deployed, so that it could be used to assist in the offensive to retake Raqqa. The airbase gave the U.S. an additional location for its aircraft to support the Coalition and other anti-ISIL forces, but it had seen only limited use by U.S. forces due to the condition of the runway, which restricted what types of aircraft could land there. General Carlton Everhart II, commander of U.S. Air Mobility Command, said that the base enabled aircraft to deliver critical supplies and equipment and help position forces; he added that airmen from the 621st group had supported anti-ISIL coalition forces on the ground in Syria.
December 2016
On 4 December, it was reported that a U.S.
airstrike in Raqqa killed three key ISIL leaders, two of whom (Salah Gourmat and Sammy Djedou) were involved in plotting the November 2015 Paris attacks. On 8 December, during the 4th Palmyra offensive, U.S.-led Coalition warplanes bombed an ISIL convoy near Palmyra in central Syria and destroyed 168 trucks carrying petroleum. On 10 December, it was reported that the U.S. was sending 200 more special operations personnel to Syria, joining the 300 U.S. special forces already in the country. Secretary of Defense Ash Carter said the troops would include special forces trainers, advisers and bomb disposal teams and that they would "continue organising, training, equipping, and otherwise enabling capable, motivated, local forces" to take the fight to ISIL. In particular, the troops would assist SDF forces in the ongoing Raqqa offensive; France also continued to have special operations units in the country. The New York Times reported that on 15 December, Coalition warplanes destroyed 14 Syrian Army T-72 battle tanks, three artillery systems, and a number of buildings and vehicles that ISIL militants were using at a military base in central Syria that they had seized the previous weekend from Syrian troops and their Russian advisers. On 31 December, a Coalition airstrike in Raqqa killed Mahmud al-Isawi, an ISIL member who had supported the organization's media and intelligence structure in Fallujah before relocating to Raqqa. His role in the group was controlling the flow of instructions and finances between ISIL-held areas and ISIL leaders and providing support to propaganda and intelligence outlets; he was also known to have facilitated trans-regional travel with other ISIL external operations coordinators and had a close working and personal relationship with Abd al-Basit al-Iraqi, the emir of ISIL's Middle East attack network, according to the U.S. defense department.
2017
January 2017
On 1 January 2017, a United States drone strike killed Abu Omar al-Turkistani, a Jabhat Fatah al-Sham and Turkistan Islamic Party military commander, and three other JFS members near the town of Sarmada in the northern Idlib Governorate. On 2 January, more than 25 JFS members were killed in an air raid by suspected U.S. warplanes. On 6 January, as part of the Raqqa offensive, SDF forces, supported by American special forces and international coalition aircraft, seized Qalaat Jaabar fortress after fierce fighting with ISIL jihadist fighters. On 8 January, coalition forces conducted a landing operation from four helicopters onto the road between the villages of Jazra and Kabr in the western Deir ez-Zor Governorate. The landing forces set up checkpoints on the road and raided a water plant in Kabr, where they killed and captured a number of ISIL fighters. After an hour and 15 minutes, the operation was complete and the forces withdrew. On 11 January, an air-to-surface missile launched from a suspected U.S. aircraft hit a Jabhat Fatah al-Sham (JFS) convoy consisting of five vehicles and killed 14 JFS members. On 17 January, separate U.S. airstrikes in the Idlib Governorate killed Mohammad Habib Boussaboun al-Tunisi and Abd al-Jalil al-Muslimi, two Tunisian al-Qaeda external operations leaders. Also that day, it was reported that U.S. warplanes and combat advisers were supporting Turkish military units battling ISIL fighters in northern Syria, particularly at the Battle of al-Bab. On 19 January, U.S.
airstrikes by B-52 strategic bombers struck the former Syrian Army Sheikh Suleiman military base near Darat Izza, in western Aleppo, which was used by Jabhat Fatah al-Sham and the Nour al-Din al-Zenki Movement. The airstrike killed at least 110 JFS fighters and some al-Zenki fighters, including Abu Hasan al-Taftanaz, an al-Qaeda senior leader. More than 150 al-Qaeda members had been killed by U.S. airstrikes since 1 January 2017. The Sheikh Suleiman base had been operated as a training camp by Jabhat Fateh al-Sham and al-Zenki since 2013. According to the Syrian Observatory for Human Rights (SOHR), between 22 September 2014 and 23 January 2017, U.S.-led Coalition airstrikes killed 7,043 people across Syria, of whom 5,768 were ISIL fighters, 304 were al-Nusra Front militants and other rebels, 90 were Syrian government soldiers and 881 were civilians.
February 2017
On 1 February, it was reported that the U.S.-led Coalition had conducted an airstrike on the Carlton Hotel in the city of Idlib, which local and NGO sources said was a Syrian Arab Red Crescent (SARC) facility and which pro-government media said was used by Hayat Tahrir al-Sham (HTS)'s former al-Nusra component for troop housing and for hosting meetings of prominent commanders. The Coalition denied responsibility, although an investigation of open source materials confirmed a strike had occurred and that a SARC facility was damaged. On 2 February, Sky News reported that Turkish aircraft killed 51 Islamic State fighters in the space of 24 hours in the areas of al-Bab, Tadef, Qabasin, and Bizaah. The airstrikes targeted buildings and vehicles, destroying 85 ISIL positions. According to the Turkish military command, since the beginning of Operation Euphrates Shield, at least 1,775 ISIL militants had been "neutralised," with more than 1,500 of those killed. On 3 February, U.S. airstrikes hit Jund al-Aqsa and Tahrir al-Sham (HTS) positions in Sarmin, near Idlib, and killed more than 12 militants. On the same day, the Royal Jordanian Air Force launched several airstrikes on ISIL outposts in southern Syria. On 4 February, a U.S. airstrike killed Abu Hani al-Masri, who was part of Ahrar al-Sham at the time of his death but was described by the Pentagon as a former al-Qaeda commander. There was reportedly speculation that he had been about to defect to Tahrir al-Sham before his death. On 26 February, in Al-Mastoumeh, Idlib, a U.S. drone strike killed Abu Khayr al-Masri, the deputy leader of al-Qaeda. He had been released and allowed into Syria as part of a prisoner swap between Iran and al-Qaeda in 2015. The U.S. airstrike also killed another Tahrir al-Sham militant, who was traveling in the same car. It was later revealed in May 2019 that the missile used in the airstrike was a Hellfire R9X, which has a kinetic warhead with pop-out blades, intended to reduce collateral damage.
March 2017: Regular U.S. forces arrive and the Battle of Tabqa
On 8 March, various news outlets reported that regular U.S. troops, part of an amphibious task force, had left their ships in the Middle East and deployed to Syria to establish an outpost from which they could provide artillery support for U.S.-backed local forces who were preparing to assault Raqqa in a battle to liberate the city from ISIL control. The deployment marked a new escalation in the U.S.'s role in Syria and put more conventional U.S. troops on the ground, a role that, thus far, had primarily been filled by Special Operations units.
The ground force was part of the 11th Marine Expeditionary Unit; 400 U.S. Marines from Battalion Landing Team 1st Battalion, 4th Marines were tasked to crew an artillery battery of M777 howitzers, while additional infantrymen from the unit would provide security. Resupplies were to be handled by a detachment of the expeditionary force's combat logistics element. A defense official with direct knowledge of the operation said the Marines were flown from Djibouti to Kuwait and then into Syria. By then, there were 900 U.S. soldiers and Marines deployed to Syria in total (500 special forces troops were already on the ground to train and support the SDF); under the existing limits put in place by the Obama administration, the formal troop cap for Syria was 503 personnel, but commanders had the authority to temporarily exceed that limit to meet military requirements. There were approximately 100 U.S. Army Rangers in Stryker vehicles and armored Humvees deployed in and around Manbij in northern Syria, U.S. officials said. Officially, they were deployed there to discourage Syrian, Russian, or Turkish troops from making any moves that could shift the focus away from an assault on ISIL militants, specifically preventing them from inadvertently coming under fire. The U.S. believed the pressure on ISIL in Raqqa was working – a U.S. official said that intelligence indicated some ISIL leadership and operatives were continuing to try to leave the city. He added that there was also U.S. intelligence indicating the city was laced with trenches, tunnels, roadside bombs and buildings wired to explode, which, if correct, indicated that the U.S. was able to gather intelligence from both overhead surveillance aircraft and people on the ground. However, the official also noted that "Raqqa will probably not be the final battle against ISIS" and added that the group still had some personnel dispersed in areas south and east of the city. According to the official, the U.S. estimated that ISIL could have as many as 4,000 fighters in Raqqa. An official told The Guardian that, in addition, the U.S. was preparing to send hundreds of troops, fewer than 1,000 in number, to Kuwait on stand-by to be ready to fight ISIL in Syria if needed. The Independent reported that Colonel John Dorrian, a spokesperson for Operation Inherent Resolve, said the artillery unit and the Army Rangers would not have a front-line role. On 16 March, a U.S. drone strike hit a mosque west of Aleppo and killed between 45 and 49 people, mostly civilians. The U.S. military assessed the location as a meeting place for al-Qaeda and claimed that the airstrike hit a target across from the mosque and was not aimed at the mosque itself. Stars and Stripes reported that on 28 March, an airman assigned to the 21st Space Wing died in a non-combat incident (possibly of natural causes) in northern Syria. On 22 March, hundreds of SDF fighters, with an undisclosed number of U.S. Special Operations troops operating as their advisers, launched a large-scale heliborne assault on ISIL around the area of the Tabqa Dam. They were inserted on the southern bank of the Euphrates river behind ISIL's defenses to take them by surprise; Colonel Joe Scrocca, an OIR spokesman, said that as a result of the air insertion behind ISIL lines, the SOF-SDF force did not come under fire. The following day, there was heavy fighting in the area; Col. Scrocca added that the ground forces were supported by helicopter gunships, U.S. Marine 155mm artillery and U.S.
airstrikes. Airwars reported that March 2017 saw the greatest number of munitions dropped during the war thus far – 3,878 munitions on ISIL targets in both Syria and Iraq, based on figures published by United States Air Forces Central Command – as well as the highest number of civilian deaths likely caused by Coalition strikes to date (between 477 and 1,216 non-combatants, 57% of them in Syria), exceeding casualties caused by Russian strikes for the third consecutive month. Significant incidents attributed to Coalition strikes occurred in Tabqa and Kasrat al-Faraj during the Battle of Tabqa. The deadliest incident occurred in al-Mansoura, where local witnesses said at least 33 civilians were killed in a former school used to house displaced persons, although this was denied by the Coalition.
April 2017: Shayrat missile strike
On 6 April, U.S. special forces conducted a landing operation against ISIL west of Deir ez-Zor. Two Coalition helicopters airdropped soldiers in the area, who then interdicted a car en route from Raqqa to Deir ez-Zor. During the operation, U.S. forces killed four ISIL commanders and extracted a Jordanian spy who had infiltrated ISIL and served as one of its leaders. CNN reported that the operation took place near Mayadin and that one of the ISIL commanders killed by U.S. forces was Abdurakhmon Uzbeki, a top facilitator and close associate of ISIL's leader Abu Bakr al-Baghdadi; he was also connected to the 2017 New Year's nightclub attack in Turkey. On 7 April, in response to chemical weapon attacks against Syrian civilians (most notably the Khan Shaykhun chemical attack) allegedly carried out by the Syrian government, the U.S. launched missile strikes on the airfield from which the attacks were allegedly launched. This incident marked the first deliberate direct attack by the U.S. on the Assad government. The Russian Foreign Ministry denounced the attack as being based on false intelligence and against international law, suspended the Memorandum of Understanding on Prevention of Flight Safety Incidents that had been signed with the U.S., and called an emergency meeting of the UN Security Council. On 8 April, ISIL militants attacked a U.S. garrison at al-Tanf in southern Syria: the garrison's main gate was blown up with a vehicle-borne improvised explosive device (VBIED), followed by a ground assault by about 20-30 ISIL militants, some of whom were wearing suicide vests. U.S. Central Command said that the "U.S. special operators" at the base, along with other coalition members and "U.S.-backed Syrian fighters", supported by multiple airstrikes, repelled the attack with no American casualties. The Telegraph reported that during the battle, ISIL militants also ambushed a convoy of reinforcements from an allied rebel group who were trying to relieve the base. CNN reported that on 11 April, a misdirected U.S. airstrike near Tabqa, during the ongoing Raqqa offensive, killed 18 SDF soldiers.
May 2017
The BBC reported that on 9 May, a Royal Air Force drone strike stopped an ISIL-staged public killing. The Hellfire missile killed an ISIL sniper positioned on a rooftop, set to shoot civilians attempting to walk away. No civilians were harmed and other ISIL fighters fled on motorbikes.
The Independent reported on 12 May that SDF forces had seized control of the Tabqa Dam after a deal struck between the SDF and around 70 ISIL militants; the deal included the dismantling of IEDs and booby traps, the surrender of heavy weaponry and the withdrawal of remaining ISIL fighters from Tabqa city. On 18 May, the U.S. conducted airstrikes on a convoy of a pro-government militia during the 2017 Baghdad–Damascus highway offensive. According to a U.S. defense official, before the strikes were conducted, government troops were warned they were getting too close to Coalition forces garrisoned at al-Tanf but did not respond. According to the U.S., four or five vehicles were destroyed, including a tank and two bulldozers. In contrast, the Syrian Army reported that two tanks were destroyed and a Shilka SPAAG was damaged. Eight soldiers were killed.
June 2017: Battle of Raqqa begins
On 6 June, SDF ground troops backed by Coalition airstrikes launched the battle for Raqqa. USCENTCOM reported that 4,400 munitions were fired in support of operations in Raqqa, a dramatic increase from previous months. Also on 6 June, U.S. aircraft conducted airstrikes on over 60 troops, a tank, artillery, antiaircraft weapons, and armed technical vehicles from pro-government forces that had entered what the Coalition called the al-Tanf "deconfliction zone". On 8 June, a U.S. F-15E Strike Eagle shot down a drone and other aircraft destroyed two armed pick-up trucks belonging to pro-government forces that had moved near U.S.-backed fighters at al-Tanf. On 18 June, a U.S. F/A-18E Super Hornet shot down a Syrian Su-22 after it allegedly bombed an SDF position in Ja'Din, south of Tabqa. A statement by the Syrian Army claimed that the plane was on a mission to bomb ISIL militants. The same day, pro-government forces captured the village of Ja'Din following an SDF withdrawal. On 20 June, a U.S. F-15E shot down a pro-government Shahed 129 drone near al-Tanf after it "displayed hostile intent" and allegedly advanced towards Coalition forces. Across Iraq and Syria, Airwars tracked 223 reported Coalition airstrikes with civilian casualties during June 2017, likely killing between 529 and 744 civilians (including at least 415 in Syria, mainly in Raqqa governorate), making it the second-deadliest month for civilians since the strikes began in 2014. Significant reported incidents included 3 June in Raqqa (20 civilians), 5 June (hitting civilians fleeing conflict), and 8 June in Raqqa (including reported white phosphorus use and a mosque hit).
August 2017
On 21 August, U.S. forces in northern Syria were fired on by Turkish-backed Free Syrian Army units near Manbij, and returned fire in a short firefight. On 29 August, following the Qalamoun offensive, ISIL militants were surrounded by Lebanese, Hezbollah and Syrian forces on both sides of the Lebanon–Syria border. They negotiated a safe-passage deal so that 670 ISIL fighters and their relatives would be taken from the border in vehicles to Abu Kamal. The U.S. military disapproved of the deal; Colonel Ryan Dillon, a spokesman for the U.S.-led coalition, said the deal undermined efforts to fight ISIL in Syria. U.S. aircraft carried out airstrikes, blocking the road the ISIL convoy was travelling on before it reached ISIL-occupied territory in Deir ez-Zor Governorate. Dillon added that other U.S. airstrikes hit militants apparently attempting to join the stranded militants in the convoy.
The Independent later reported that the convoy was trapped between the towns of Humayma and al-Sukhnah.
September 2017
On 3 September, the Independent reported that 400 ISIL militants and their families traveling in the convoy that was trapped by U.S. airstrikes in Syria in late August had abandoned their vehicles and begun travelling on foot to the Iraqi border.
December 2017
CNN reported that on 12 December, Maghawir Al-Thawra fighters accompanied by U.S. advisers intercepted a convoy of about ten vehicles that was passing through the 55 km "de-confliction" zone surrounding the coalition base at al-Tanf; a firefight ensued, resulting in 21 ISIL fighters killed and a further 17 captured. CNN reported that on 13 December, two U.S. F-22A fighters intercepted two Russian Su-25 jets that crossed the "de-confliction line" multiple times. An Air Forces Central Command spokesman said that "The F-22s conducted multiple maneuvers to persuade the Su-25s to depart our de-conflicted airspace, including the release of chaff and flares in close proximity to the Russian aircraft and placing multiple calls on the emergency channel to convey to the Russian pilots that they needed to depart the area." One U.S. defense official said that a Russian Su-35 fighter was also involved in the incident. On 22 December, Australian Defense Minister Marise Payne said that Australia would end its air strikes against the Islamic State and recall its six Super Hornet aircraft. Payne added that other Australian operations in the region would continue, with 80 personnel who were part of the Special Operations Task Group in Iraq, including Australian special forces, continuing their deployment.
2018
January 2018
Military Times reported on 12 January that Coalition aircraft carried out more than 90 airstrikes between 4 and 11 January near the Iraq-Syria border. Military Times also reported that on 20 January, U.S. airstrikes targeting an ISIL headquarters and command and control center in the Middle Euphrates River Valley (MERV) near Al-Shaafah killed nearly 150 ISIL militants. According to a press release, SDF fighters provided observation and intelligence on the target.
February–March 2018: The Khasham engagement
According to U.S. military officials, on 7 February, in deliberate air and artillery strikes, the U.S.-led coalition killed more than 100 pro-government fighters in the Euphrates River valley in Deir ez-Zor province after they launched an "unprovoked attack" against the Syrian Democratic Forces. Syrian state news corroborated the events, but insisted that the Kurdish forces were mixed in with ISIL forces; it also stated that ten Russian mercenaries were among those killed. CNN reported that on 30 March, Master Sergeant Jonathan J. Dunbar of Delta Force and Sergeant Matt Tonroe of the British Special Air Service were killed by an IED blast during a mission in Manbij, the objective of which, according to Pentagon spokesman Major Adrian Rankine-Galloway, was to "kill or capture a known ISIS member."
April–June 2018
On 14 April, U.S. President Donald Trump announced that the U.S., France, and the United Kingdom had decided to carry out a series of military strikes against the Syrian government. The strikes came in the wake of the Douma chemical attack. On 1 May, the SDF, in coordination with the Iraqi Armed Forces, announced the resumption of their Deir ez-Zor offensive to capture the final ISIL enclaves near the Iraqi border and along the Euphrates. By 3 May, the USS Harry S.
Truman carrier strike group had joined in support of the SDF's anti-ISIL operations.
November 2018
On 1 November, the Coalition began a series of joint patrols with the Turkish Armed Forces along the frontlines between the Kurdish-controlled Manbij region and the Turkish-backed Free Syrian Army's territory. The move was seen as part of a "roadmap" to ease tensions between the NATO ally and U.S.-backed Kurdish forces, and to reduce violence between Kurdish and Turkish-backed elements. On 21 November, U.S. Secretary of Defense Jim Mattis announced the U.S. would set up new observation posts along the Turkish border in northern Syria in order to reduce skirmishes between Turkish forces and armed Kurdish militants in the region, such as the border clashes in late October and early November. Mattis affirmed that it was a cooperative endeavor with Turkey and would not require additional U.S. troops to be deployed to Syria.
December 2018: Initial announcement of U.S. withdrawal
President Donald Trump, declaring "we have won against ISIS," unilaterally announced on 19 December 2018 that the remaining 2,000-2,500 U.S. troops in Syria would be withdrawn. Trump made the announcement on Twitter, overruling the recommendations of his military commanders and civilian advisors, with apparently no prior consultation with Congress. Although no timetable was provided, Press Secretary Sarah Sanders indicated that the withdrawal had already been ordered. Various sources indicated that Trump had directed that the withdrawal be completed within 30 days. However, Reuters was told by a U.S. official that the withdrawal was expected to take 60 to 100 days. Following Trump's surprise announcement, the Pentagon and State Department tried to change his mind, with several of his congressional and political allies expressing serious concerns about the sudden move, specifically that it would hand control of the region to Russia and Iran and abandon America's Kurdish allies. CNN reported on 24 December that during the weeks before Trump's withdrawal announcement, national security advisor John Bolton had told senior officials to meet directly with anti-ISIL coalition partners to assure them that America would remain in Syria until Iran had left. One senior administration official commented that Trump's decision was "a complete reversal," done "without deliberation," reportedly leaving allies and partners "bewildered." According to one CNN analysis, the announcement reportedly came as the Coalition had reason to believe ISIL leader Abu Bakr al-Baghdadi and his top commanders were possibly cornered in a small pocket of northern Syria, "in a Tora Bora situation" akin to the region from which al-Qaeda leader Osama bin Laden escaped American forces in 2001. On 27 December, administration officials stated that USCENTCOM's troop withdrawal plan entailed the withdrawal taking place over several months instead of weeks, falling in line with Trump's post-announcement comments that the pullout of U.S. troops would be "deliberate and orderly." By the end of the month, it remained unclear whether anti-ISIL air operations would continue post-withdrawal. By 31 December, after U.S. Senator Lindsey Graham and a group of generals held a luncheon with the president over the withdrawal, Graham tweeted that Trump would seek a more gradual withdrawal over the course of several months; a slowdown of the withdrawal was not officially confirmed by the administration at the time.
In December 2018, U.S. President Donald Trump announced that U.S. troops involved in the fight against the Islamic State (ISIS) in northeast Syria would be withdrawn imminently. Trump's surprise decision overturned Washington's policy in the Middle East and fueled the ambitions and anxieties of local and regional actors vying over the future shape of Syria. Some experts proposed that President Trump could mitigate the damage of his withdrawal of U.S. military forces from Syria by using the CIA's Special Activities Center. Some believed the president had chosen "to replace U.S. ground forces in Syria with personnel from the CIA's Special Activities Center" and that the process had been underway for months. Already experienced in operations in Syria, the CIA has numerous paramilitary officers with the skills to operate independently in harm's way. While the CIA lacked the numbers to replace all 2,000 U.S. military personnel then in Syria and work alongside the Syrian Democratic Forces (its paramilitary personnel are spread across the world), its model is based on fewer enablers and less support.
2019
January 2019
On 6 January 2019, U.S. National Security Advisor John Bolton, while on a trip to Israel and Turkey, said that the pullout of U.S. troops from Syria depended on certain conditions, including the assurance that the remnants of ISIL forces were defeated and that Kurds in northern Syria were safe from Turkish forces. However, Turkey's President Recep Tayyip Erdogan rejected the call to protect Kurdish troops, whom he regarded as terrorists. On 10 January, U.S. Secretary of State Mike Pompeo said that the U.S. would withdraw its troops from Syria while continuing the battle against ISIL. He also stated that there would be no U.S. reconstruction aid for areas controlled by Syrian President Bashar al-Assad until Iran and its "proxies" had left. On 11 January, Coalition spokesman Col. Sean Ryan confirmed the U.S. troop withdrawal process from Syria had begun. "Out of concern for operational security, we will not discuss specific timelines, locations or troop movements," he said. The SOHR observed that the Coalition had started scaling down its presence at Rmeilan airfield in al-Hasakah. U.S. defense officials said it had begun the removal of equipment, but not yet troops, and that the total number of U.S. soldiers in Syria might temporarily increase in order to provide security for the final pullout. French Foreign Minister Jean-Yves Le Drian welcomed what he believed was a slower, more effective withdrawal by the U.S. after pressure from its allies. On 15 January, the Coalition released fresh numbers regarding its ongoing operations in both Syria and Iraq. Between 30 December 2018 and 6 January 2019, the Coalition conducted 575 air and artillery strikes against ISIL in Syria; the strikes destroyed 105 ISIL mortar and rocket artillery units, 50 IED manufacturing sites, 26 vehicles, 19 weapons caches, and two UAV systems. Between 7 and 13 January, airstrikes in the MERV near the Iraqi border also killed around 200 militants, including four senior commanders. On 29 January, with ISIL cornered in its final redoubt by the Kurdish-led campaign against it in the Middle Euphrates River Valley, acting U.S. Defense Secretary Patrick Shanahan proclaimed at his first news conference as defense secretary that the Coalition would liberate all of the Islamic State's remaining self-proclaimed caliphate within "two weeks". "I'd say 99.5 percent plus of…the ISIS-controlled territory has been returned to the Syrians.
Within a couple of weeks it will be 100 percent," Shanahan said. He added that the U.S. was still in the early stages of what he called a "deliberate, coordinated, disciplined withdrawal" from Syria and that there were "very important dialogues going on in major capitals" about support to Syria once the U.S. left.
February 2019: Battle of Baghuz
President Donald Trump reiterated his support for withdrawing American ground troops from both Syria and Afghanistan in a series of tweets on 1 February, amid proliferating concerns among America's allies, politicians, analysts, and local activists over a feared power vacuum in Syria post-withdrawal. "I inherited a total mess in Syria and Afghanistan, the 'Endless Wars' of unlimited spending and death. During my campaign I said, very strongly, that these wars must finally end. We spend $50 Billion a year in Afghanistan and have hit them so hard that we are now talking peace after 18 long years," Trump tweeted. The day prior, the U.S. Senate had issued a rebuke of the president cautioning against the "precipitous withdrawal" of military forces; furthermore, the United States Intelligence Community contradicted the president's perception of the global threat ISIL continued to pose during a Senate committee hearing. A draft Pentagon report emerged on 1 February warning that ISIL could regain territory in Syria within a year of a U.S. disengagement. On 5 February, CENTCOM commander General Joseph Votel noted during Senate Armed Services Committee testimony that he had not been consulted prior to Trump's decision to withdraw American forces, reinforcing the notion that the U.S. withdrawal was ordered completely unilaterally from the White House without prior consultation with relevant military advisors and Defense Department personnel. On 6 February, President Trump, while at a summit of foreign ministers and officials from the 79 members of the global coalition against ISIL, predicted a formal announcement of a final victory against ISIL as early as the following week. "Remnants - that's all they have, remnants - but remnants can be very dangerous," Trump said of ISIL. "Rest assured, we'll do what it takes to defeat every ounce and every last person within the ISIS madness". The Wall Street Journal, citing State Department officials, reported on 8 February that the U.S. pullout was expected to be complete by April, with the majority of ground troops expected to be withdrawn by mid-March. A U.S. official confirmed to Reuters that the withdrawal included pulling troops from al-Tanf. An Operation Inherent Resolve summary of Coalition activity between 27 January and 9 February detailed air and artillery strikes conducted in Iraq and Syria. The Coalition conducted 176 strikes in Syria. Targets included 146 ISIL tactical units, 131 supply routes, 53 fighting positions, 31 staging areas, 14 VBIEDs, 13 pieces of engineering equipment, 11 explosive belts, nine tankers for petroleum oil and lubricants, eight tactical vehicles, five command and control nodes, four buildings, three aircraft operations areas, three tunnels, two petroleum oil and lubricant storage facilities, two manufacturing facilities for IEDs, two artillery pieces, two weapons caches, and one armored vehicle. After the SDF's assault on Baghuz Fawqani began on 9 February, CENTCOM commander Joseph Votel told CNN on 11 February that ISIL's loss of physical territory did not mean the end of the organization.
"Putting military pressure on [ISIL] is always better, it's always easier when you are there on the ground, but in this case our President has made a decision and we are going to execute that and so it's my responsibility as the CENTCOM commander working with my chain of command to look at how we do that," he said, adding that the completion of the U.S. pullout was "weeks away...but then again it will be driven by the situation on the ground". Trump tweeted late on 16 February urging European countries to repatriate the over 800 captured suspected ISIL members from Syria, warning that the U.S. might otherwise be forced to release them. Kurdish prisons could not hold the ISIL members and all their families, totaling around 2,000 people, indefinitely. The Kurds called the situation a "time bomb". The U.S.-Kurdish demand that European countries take responsibility received mixed responses. German foreign minister Heiko Maas said repatriation would be possible only if returning fighters could be immediately taken into custody, which would be "extremely difficult to achieve" without proper judicial information. France, whose citizens made up the majority of European ISIL recruits, said it would not act immediately on Trump's call but would take militants back "case by case," and not categorically. Britain said its fighters could return only if they sought consular help in Turkey, while acknowledging repatriation was a dilemma. Belgium's justice minister Koen Geens called for a "European solution," urging "calm reflection and a look at what would pose the least security risks." The Hungarian foreign minister, Péter Szijjártó, said the issue was "one of the greatest challenges ahead of us for the upcoming months." After the U.S. announced on 22 February that it would keep a "peacekeeping" force of around 200-400 troops in Syria, instead of the initially planned total withdrawal, senior Trump administration and defense officials stated the decision was an endorsement of a plan pressed by U.S. military leaders for some time, calling for an international force, preferably of NATO or regional Arab allies, of 800 to 1,500 troops that would monitor a safe zone along Syria's border with Turkey.
March–April 2019: Talon Anvil Baghuz strike and ISIL territorial collapse
On 10 March, John Bolton stated that he was "optimistic" France and the UK would commit personnel to the planned observer force. He also reiterated the U.S. commitment to keep troops in Iraq. On 18 March, during the ongoing Battle of Baghuz Fawqani, the U.S. Air Force's Talon Anvil special operations unit (a Delta Force unit within the larger Task Force 9) bombed a group of people in Baghuz, killing up to 80 (64 civilians and at least 16 ISIL militants). According to CENTCOM spokesman Captain William (Bill) Urban, the U.S.-backed SDF called in air support on a position following a recent ISIL counterattack that had almost overrun them, but the only UAV above the battlefield recorded standard-definition video and the only available offensive aircraft was an F-15E jet; however, a high-definition coalition UAV was operating in the area, unbeknownst to Talon Anvil. The SDF, the standard-definition UAV operator, and Talon Anvil all concluded there were no civilians in the area, with the latter approving the airstrike, in which the F-15E dropped three 500 lb. bombs on the gathering, reportedly to the surprise of drone activity analysts. According to Capt.
Urban, hours after the strike, the UAV operator reported possible civilians in the area, and a bomb damage assessment and investigation acknowledged there were civilian casualties but called the bombing "legitimate self-defense strikes" and "proportional due to the unavailability of smaller ordnance at the time of the request," citing video showing "multiple armed women and at least one armed child". The 64 civilians were likely exclusively women and children. The airstrike was covered up by the U.S. military: it was not included in the coalition's annual civilian casualty report, the site was reportedly bulldozed, and the strike was not revealed to the public until mid-November 2021, amid media reports of a possible war crime cover-up in Syria. Urban added that procedures were changed following the strike, requiring the "strike cell" on the ground to coordinate with coalition aircraft and to use only high-definition drone surveillance before ordering such strikes. On 20 March, in response to new developments in Baghuz, President Trump predicted that the remaining ISIL holdout would be cleared "by tonight" during a speech at the Lima Army Tank Plant in Lima, Ohio. "The caliphate is gone as of tonight," he said, as he used maps depicting ISIL's territorial collapse since November 2016; later, the purported November 2016 map was shown to actually be a map from 2014, when ISIL was at its peak territorial size, before the Coalition's anti-ISIL operations. On 23 March, the SDF announced victory in the battle for Baghuz, signifying the territorial collapse of ISIL in Syria, a critical milestone for the U.S.-led Syrian intervention. U.S. Deputy Assistant Secretary of Defense Michael Mulroy stated that the physical caliphate was defeated but ISIL was not, and that there were over 10,000 completely unrepentant fighters left in Syria and Iraq. He expected the U.S. to be in Syria for the long haul with a very capable partner in the Syrian Democratic Forces. He said that the U.S. partnership with the SDF was a model to follow, like the partnership with the Northern Alliance in Afghanistan to defeat the Taliban and with the Kurdish Peshmerga in Iraq as the northern front against Saddam Hussein. U.S.-Turkish negotiations over joint troop patrols in a designated safe zone along the northern Turkish-Syrian border continued into late April as the UK and France rejected a plan to provide troops to a buffer zone between Rojava and Turkey, stating that their missions in Syria were only to fight ISIL. With U.S. troop numbers set to be cut to 1,000 in the coming months, the U.S. reportedly preferred a narrower strip of land to patrol than the approximately 20 miles that Turkey had proposed. The Turks would send their own troops into the buffer zone while demanding only U.S. logistical help and air cover. The Turkish proposal reportedly saw pushback, as the Americans preferred to avoid a situation that effectively pushed the Turkish border 20 miles into Syria, increasing the chances of clashes with the Kurds instead of reducing them.
May 2019
The Syria Study Group, a U.S. congressionally appointed panel of experts tasked with assessing the situation in Syria, similar to the Iraq Study Group appointed in 2006, released an interim report on 1 May endorsing the view that, instead of a drawdown, the U.S.
should reassert its presence in Syria, citing the prospect of a potential ISIL resurgence, Russian "prestige" after successfully propping up the Assad government, perceived Iranian entrenchment in the country, and al-Qaeda retaining control in the form of Hayat Tahrir al-Sham's dominance in northwestern Syria, a region U.S. warplanes rarely ventured into due to the nearby presence of Russian air defenses deployed on behalf of the Syrian government. The report argued that the U.S. should step up attempts to isolate Assad and counter Iranian influence in the region; it also argued that the U.S. should take in more Syrian refugees, whose admittance the Trump administration had reduced from thousands to just a few dozen in recent years. The report further underlined the differing views between the president and a comparatively more hawkish Congress on what direction to take the U.S.'s commitments in the country.
June–July 2019
On 30 June 2019, in a rare operation against non-ISIL elements, the U.S. carried out a strike against an al-Qaeda in Syria (AQ-S) leadership meeting at a training facility west of Aleppo, which killed eight jihadists from the Guardians of Religion Organization, including six commanders: two Tunisians, two Algerians, an Egyptian and a Syrian. It was the first known coalition strike in western Syria since February 2017, as the U.S. and Russia had arranged an unofficial deconfliction boundary that largely barred any substantial U.S. forces from venturing into the region. The U.S. did not specify what assets were used in the strike. In July, U.S. special anti-ISIL envoy James Jeffrey continued to urge Britain, France and Germany to assist the U.S.'s ground mission in Syria. "We want ground troops from Germany to partly replace our soldiers" in the area as part of the anti-Islamic State coalition, Jeffrey told German media. During a Senate Foreign Relations Committee hearing, Deputy Assistant Secretary of Defense for the Middle East Michael Mulroy stated that the SDF had over 2,000 foreign terrorist fighters in custody from over 50 countries, on whom they spent a great deal of time, effort and resources, and that the U.S. had pushed those countries to take back their citizens. The number of American nationals who joined ISIL on the battlefield was small compared to that of countries like France and the UK, from which several hundred foreign fighters had traveled.
August 2019
On 7 August 2019, the U.S. and Turkey reached a framework deal to jointly implement a demilitarized buffer zone in the areas between the Tigris and Euphrates rivers, excluding the Manbij area, in northern Syria. Terms of the deal included joint U.S.-Turkish ground patrols, the relocation of some Syrian refugees into the area, and the withdrawal of heavily armed YPG and YPJ forces and fortifications from the Syria–Turkey border, leaving the areas under SDF military council rule instead. On 24 August, the SDF began dismantling border fortifications under the supervision of U.S. forces. On 27 August, YPG units began withdrawing from Tell Abyad and Ras al-Ayn. On 31 August, in the second attack against non-ISIL militants in western Syria since 30 June, the U.S. carried out a series of airstrikes on a Rouse the Believers Operations Room meeting between Kafriya and Maarrat Misrin, killing over 40 Guardians of Religion militants, including several leaders. Local human rights NGOs reported that 29 civilians were killed in the attack, naming 22, of whom six were children.
October 2019: U.S. drawdown from north Syria, return, and al-Baghdadi's death
Following a phone call between U.S. President Trump and Turkish president Recep Tayyip Erdoğan, the White House released a statement on 6 October that Turkey would "soon" be carrying out a planned military offensive into Kurdish-administered northern Syria, so U.S. troops in northern Syria began to withdraw from the area to avoid interfering with the offensive. The White House statement also passed responsibility for the area's captured ISIL fighters (held by Kurdish forces) to Turkey. This initial withdrawal involved around 50 troops from two towns along the Syrian border, Tal Abyad and Ras al-Ayn. The sudden withdrawal proved controversial as U.S. Congress members of both parties sharply denounced the move, including Republican allies of Trump such as Senators Lindsey Graham and Mitch McConnell. They argued that the move betrayed the American-allied Kurds and would only benefit ISIL, Turkey, Russia, Iran and Bashar al-Assad's Syrian regime. After the U.S. pullout, Turkey launched its ground offensive into Kurdish-controlled areas in northeast Syria on 9 October, spelling the collapse of the Turkey–U.S. Northern Syria Buffer Zone agreement established in August 2019. Secretary of State Mike Pompeo denied that the U.S. had given a "green light" for Turkey to attack the Kurds. However, Pompeo defended the Turkish military action, stating that Turkey had a "legitimate security concern" with "a terrorist threat to their south". On 13 October, Defense Secretary Mark Esper stated that the entire contingent of nearly 1,000 U.S. troops would be withdrawn from northern Syria, because the U.S. had learned that Turkey would "likely intend to extend their attack further south than originally planned, and to the west". Hours later, the Kurdish forces, concluding that it would help save Kurdish lives, announced an alliance with the Syrian government and its Russian allies in a united effort to repel the Turkish offensive. On 25 October, Mark Esper confirmed the U.S. had partially reversed its Syria pullout and that the U.S. had a new dedicated mission to guard and secure Syrian oil and gas fields and infrastructure, assisted by the deployment of mechanized infantry units. Though the deployment of mechanized units shifted the U.S.'s special forces-led Syria intervention to a more conventional military operation, thereby leaving a heavier U.S. footprint in Syria, Esper stressed that the U.S.'s "core mission" in Syria continued to be defeating ISIL.
Death of al-Baghdadi
On 26 October 2019, U.S. Joint Special Operations Command's (JSOC) 1st SFOD-D (Delta Force) conducted a high-profile raid into Idlib Governorate near the border with Turkey that resulted in the death of ISIL leader Abu Bakr al-Baghdadi.
November 2019
Observers raised concerns about the potential for deliberate or inadvertent altercations in north and east Syria, as U.S. forces were now operating in close proximity to Russian, Syrian, and Turkish-backed forces in accordance with a 22 October Russian-Turkish "safe zone" arrangement and a prior SDF-Syrian government deal. By November, there had been a steady flow of pictures and videos online showing U.S., Syrian, and Russian forces passing by each other in northern Syria. On 3 November, a U.S. convoy came within one kilometer of a Turkish-backed rebel artillery strike near Tell Tamer, with no U.S. personnel injured.
On 7 November, the Pentagon reiterated that the U.S.'s only mission in Syria was to "defeat ISIS", and that securing Syrian oil fields was "a subordinate task to that mission" aimed at denying ISIL any potential oil revenue. Though ISIL's territory and physical caliphate had been decimated, "We've not said that ISIS as an ideology and ISIS as an insurgency has been eliminated," a Pentagon spokesman stated. The Pentagon also insisted that the revenue from the Syrian oil fields the U.S. was protecting would go to the SDF, not the U.S., despite President Trump raising the possibility in late October of American oil companies taking over the oilfields. On 19 November, a report by the Pentagon Inspector General, citing Defense Intelligence Agency assessments, starkly concluded that the U.S. pullout from most of northern Syria on 6 October and the subsequent 9 October Turkish ground offensive gave ISIL the "time and space" to regroup as a potent transnational terror threat. The report assessed that Turkey's incursion impacted the U.S.'s relationship with the Syrian Kurds, greatly shifted the balance of power in north Syria, and disrupted CJTF-OIR and SDF counter-terrorism operations to the point of giving ISIL ample room to quickly resurge. The report, like the Syria Study Group's May 2019 report, further underscored the differing attitudes of the Trump White House and the intelligence community on the state of the intervention. On 10 November, in regard to U.S. troop levels in Syria, Chairman of the Joint Chiefs of Staff Mark Milley stated that there would be fewer than 1,000 troops remaining "for sure" and estimated that the number would be around 500 to 600; Defense Secretary Mark Esper echoed the 600 estimate three days later, adding that numbers could fluctuate depending on the situation on the ground and a possible increase of European presence in the country. On 23 November, General Kenneth McKenzie, USCENTCOM commander, stated that there were "about 500 U.S. personnel generally east of the Euphrates river east of Deir al Zor up to Hasaka" who were working closely with the SDF and were set to resume anti-ISIL operations in the coming days and weeks. SDF-U.S. counterinsurgency coordination reportedly recommenced that same day. Lieutenant Colonel David Olson, a spokesman for the U.S. Third Army, confirmed that the M2 Bradley Fighting Vehicles and many elements of the 30th Armored Brigade Combat Team, particularly elements of the South Carolina Army National Guard's 4th Battalion, 118th Infantry Regiment that had deployed to Syria on 31 October, had been quietly withdrawn and returned to Kuwait by the end of November, after less than two months in the country. Olson said approximately 100 members of the 30th ABCT remained in Syria conducting "logistical and support tasks" and that the Bradleys were pulled out because commanders deemed the additional firepower no longer necessary. Observers noted that the deployment of the armored units demonstrated the U.S.'s willingness and ability to deploy heavily armed forces to forward positions in Syria.
December 2019
On 3 December, a U.S. coalition drone strike on a minivan in Atme killed two men. Their affiliations were not immediately clear, but one man was reportedly Hay'at Tahrir al-Sham member Abu Ahmad al-Muhajir, a foreign trainer of Tahrir al-Sham's elite "Red Bands" unit. If confirmed, it would be the first reported U.S. strike specifically targeting the group since 2017.
The vehicle was relatively intact and had not exploded, and the men inside were "mashed" by the impact, leading some observers to suggest the munition deployed was probably the Hellfire R9X, a Hellfire missile variant that uses a kinetic warhead with sword-like blades to kill the target rather than an explosive. The missile is known for precision kills that reduce collateral damage. On 4 December, Mark Esper told Reuters that the U.S. had scaled down its presence in northeast Syria and consolidated its forces in the country to a flexible 600 troops, a 40 percent reduction in troop levels since Donald Trump's initial October order to withdraw 1,000; it was not clear if the 600 estimate included the ~200-250 troops at al-Tanf, which would raise the actual total estimate to around 800–850. Esper again suggested that U.S. troop numbers could further decline if more NATO allies volunteered personnel. Local sources reported that a team of 15 Egyptian and Saudi engineers and technicians arrived at the al-Omar oil field in Deir ez-Zor on 13 December, reportedly tasked by the U.S. with enhancing oil production at the field and training locals to monitor oil productivity in the area. The Syrian Observatory for Human Rights reported that Russian and U.S. soldiers got into a "brawl" in Tell Tamer on 25 December, where a spontaneous meeting between Russian and American troops devolved into a fistfight "due to their presence in the same area"; no one was reportedly hurt, as the incident did not involve weapons. The U.S. troops were reportedly in the area with an interpreter to gauge the opinions of local residents. The Russian Ministry of Defense denied the claims, calling the SOHR report a "hoax".
2020
January 2020
On 7 January 2020, during Russian president Vladimir Putin's visit to the Mariamite Cathedral of Damascus, Syrian president Bashar al-Assad mentioned the legend of Paul the Apostle, who became a Christian at the gate of Damascus, and added jokingly: "If Trump arrives along this road, everything will become normal with him too". Putin later told al-Assad to invite Trump to Damascus. On 18 January, U.S. troops blocked a Russian convoy from entering Rmelan, where the U.S. was protecting oil fields under SDF control. Tension arose between the two groups as U.S. soldiers asked the Russian soldiers to return to the Amuda district in the northwest of Al-Hasakah Governorate. Tensions between Russian and American forces continued to grow as U.S. troops again blocked a Russian convoy from using a main road between two Kurdish-held towns on 21 January. On 23 January, in regard to ISIL activity in Iraq and northeastern Syria, Ambassador James Jeffrey stated there was no uptick in violence following the U.S. drone strike in Baghdad on 3 January that killed Iranian general Qasem Soleimani. Jeffrey also acknowledged there had been "dustups" between American and Russian troops in Syria, but downplayed the apparent tensions. On 24 January, U.S. Army Reserve Specialist Antonio Moore of the 346th Engineer Company, 363rd Engineer Battalion (a subordinate unit of the 411th Engineer Brigade) died when his vehicle rolled over during mine sweeping/route clearance operations at an unspecified location in Deir ez-Zor Governorate. Spc. Moore was on his first deployment after enlisting in May 2017. The U.S. Army had deployed dedicated route clearance combat engineer units in Syria since at least 2017.
February–March 2020
On 12 February, a group of Assad government supporters, including civilians and militiamen, blocked a U.S. military convoy in the village of Khirbet Amo, near Qamishli, resulting in a clash that left one civilian dead and another injured. A coalition spokesman said the troops fired in self-defense and that the incident was under investigation. One U.S. soldier sustained a "minor superficial scratch while operating their equipment." According to the coalition, soldiers issued a series of warnings to de-escalate and then came under small arms fire before firing back. In early March 2020, NPR's Tom Bowman accompanied an American patrol tasked with protecting oil fields in northeast Syria. He reported on a "multiday" drone attack on two oil fields, the first reported attack of its kind since the U.S. launched its mission to protect them. On 4 March, a drone of an unspecified type dropped a mortar round near where West Virginia National Guard soldiers, operating in Syria as part of the 30th ABCT, were sleeping at an oil field, and on 6 March, drones returned and dropped multiple mortar rounds; no soldiers were injured. The report noted the attacks left noticeable impact craters and "sprayed" oil tanks and at least one military truck. The West Virginian guardsmen reportedly repelled one of the drones, but it was unclear what weapon was used. It also remained unclear who exactly was operating the drones, but Army investigators told Bowman that some of the mortar-like munitions were 3D printed.
May–August 2020: Caesar Act, Syrian checkpoint skirmish, and Russian collision incident
On 30 May, Special Operations Joint Task Force-Operation Inherent Resolve (SOJTF-OIR) released images of special operations forces personnel training at al-Tanf with an advanced optical sighting system, the Israeli-made Smart Shooter SMASH 2000 "smart sight", attached to their M4A1 carbines, suggesting that special operators in Syria were among the first American forces to deploy the system into a real combat zone. It remained unclear if any special operations units had adopted the computerized optic or if the training was part of field trials. On 14 June, a U.S. coalition drone strike killed Guardians of Religion Organization leaders Khalid al-Aruri and Bilal al-Sanaani, who were driving a vehicle in Idlib. There was reportedly no explosion and the target vehicle was relatively intact, with the roof and windshield impacted from above and one side shredded, leading observers to suggest the munition used was probably the kinetic Hellfire R9X missile, which uses blades to eviscerate its target rather than an explosive warhead. Ten days later, on 24 June, Abu Adnan al-Homsi, logistics and equipment commander of the Guardians of Religion Organization, was also killed by a U.S. drone strike. On 17 June, the U.S. imposed the Caesar Act sanctions on the Syrian government, regarded as the toughest American sanctions ever imposed on Syria, in a bid to pressure Assad to return to the UN- and Western-led peace process. The sanctions target Assad's inner circle, including his wife Asma, family members, and Russian and Iranian entities, and freeze the assets of any investors dealing with the country's energy, military, and intelligence agencies and infrastructure. The act was named after the Caesar Report. On 23 July, Maj. Gen. Christopher Donahue of the 82nd Airborne Division confirmed that a U.S. paratrooper who had died in Syria after suffering fatal injuries in a car crash was in fact on a "combat deployment." The paratrooper, Sgt.
Bryan "Cooper" Mount of St. George, Utah, was said to be on his "second" combat deployment as well. On 27 July, the Congressional Research Service issued a report in which it outlined the American strategy in Syria, which aimed to achieve: (1) the enduring defeat of the Islamic State; (2) a political settlement to the Syrian civil war; and (3) the withdrawal of Iranian-commanded forces. In early August 2020, Syria's foreign ministry accused the U.S. and the SDF of stealing Syria's oil, in reference to a recent deal apparently signed by the SDF and an unidentified American company. U.S. Secretary of State Mike Pompeo and Senator Lindsey Graham both publicly acknowledged that the SDF had made an agreement with a U.S. firm to "modernize the oil fields in northeastern Syria". Damascus condemned the agreement as U.S. government-sponsored theft of the country's crude and deemed it "null and void and has no legal basis." There was no immediate response from the SDF or U.S. officials regarding the ministry's statements. On 17 August, in the first direct clash since February 2018, Syrian personnel clashed with U.S. troops at a government checkpoint near Tal al-Zahab, in the southern Qamishli countryside where Russian, Syrian and American forces often patrol. According to the SOHR, the Syrian troops refused to allow a U.S. patrol to pass the checkpoint near an air base with a deployed Syrian "combat formation", resulting in an exchange of gunfire after the patrol continued through. The coalition said that its troops returned fire after coming under fire from the checkpoint and then "returned to base" after the gunfight. SOHR and state media said, however, that the patrol was initially repelled but returned 30 minutes later with two attack helicopters that attacked the checkpoint with "heavy machine guns", killing one Syrian soldier and injuring two others. The coalition denied that it had called in air support and said it launched an investigation into the fatal clash. On 25-26 August, four U.S. troops were injured in a "chaotic" incident with Russian troops. According to U.S. officials, "at approximately 10 a.m. (Syria Time), Aug. 25," a coalition patrol encountered a Russian patrol near Dayrick, Syria, during which an armored Russian patrol vehicle side-swiped a coalition M-ATV, causing four of its crew to suffer concussive-type injuries described as minor in nature. The coalition patrol then departed the area. Video of the incident posted online showed a tense scene, with Russian military helicopters briefly seen hovering above the American vehicles, as well as the armored vehicle collision. The U.S. condemned the "reckless move" that breached "de-confliction protocols, committed to by the United States and Russia in December 2019" and said the Russians had previously agreed to stay out of the area. Chairman of the Joint Chiefs of Staff Mark Milley contacted Chief of the Russian General Staff Valery Gerasimov about the incident, with specific details of the telephone call remaining private.
September–November 2020
In September 2020, the U.S. deployed 100 additional troops and six M2A2 Bradley IFVs from the 2nd Brigade Combat Team, 1st Armored Division, based in Kuwait, to north-eastern Syria. Additionally, CENTCOM said an AN/MPQ-64 Sentinel radar was to be deployed and "the frequency of U.S. fighter patrols over U.S. forces" would increase. Some U.S.
officials said the reinforcements were in response to recent skirmishes and increasingly belligerent interactions with Russian soldiers in north-eastern Syria, with one official saying the deployment was a "clear signal to Russia" to "avoid unprofessional, unsafe and provocative actions in north-east Syria". "The United States does not seek conflict with any other nation in Syria, but will defend Coalition forces if necessary," said a CENTCOM spokesman. The move was seen specifically as a response to the 26 August incident. On 15 October, two commanders from the Guardians of Religion Organization, Abu Dhar al-Masri and Abu Yusuf al-Maghribi, were killed by a U.S. drone strike. On 22 October, at least 17 people with reported connections to al-Qaeda were killed by a U.S. airstrike during a dinner gathering in Jakarah, Salqin Subdistrict, Idlib. It was reported on 30 November that an airstrike near the Iraq–Syria border killed an unidentified Islamic Revolutionary Guard Corps commander and three other men traveling with him from Iraq into Syria. The vehicle was struck after it entered Syrian territory. Iraqi security and local militia officials said the commander's vehicle had weapons in it and that pro-Iran paramilitary groups helped retrieve the bodies. Sources did not identify the commander or elaborate on the exact time of the incident. It was not immediately known who conducted the strike, and Reuters could not independently verify the reports.
2021
February–May 2021
On 21 February 2021, images emerged online of Avenger air defense systems purportedly being transported on a highway between the Iraqi city of Ramadi and the Syrian border. Though it was unclear which military units the Avengers belonged to, when and where the images were taken, and what the destination of the reported convoy was, The War Zone online warfare magazine speculated that the Avengers were being transported to U.S. bases in Syria, where, according to officials, small drone attacks had remained a persistent threat to U.S. forces since March 2020. Observers suspected the drone attacks, carried out by small commercial drones such as quadcopters, were being conducted either by Iran-backed proxy elements or by the Islamic State, which is known for its use of drone warfare. In March 2021, soldiers of the Louisiana National Guard stationed at Mission Support Site Conoco (MSS Conoco), a makeshift U.S. military outpost established near the Conoco gas field, and at another outpost named "Green Village", a base near Mayadin housing U.S. troops in dilapidated apartments once used by oil field workers, gave the media some details of their situation on the ground. At Green Village, troops fired a long-range salvo from an M777 howitzer every two weeks into remote areas where ISIL insurgents were believed to be present. At MSS Conoco, a tattered U.S. flag was strung between 40-foot-tall gas processing towers in a deliberate display of American presence. "We want them to know we are committed to this region," said one U.S. Army lieutenant. On 22 April 2021, Politico reported that the Pentagon had been investigating suspected "directed-energy attacks" against U.S. troops in Syria since 2020, and that U.S. officials had identified Russia as a likely culprit. According to officials, in one incident in autumn 2020, several troops developed "flu-like symptoms", and the U.S. Congress's Gang of Eight and Senate Armed Services Committee were briefed on the development.
The investigation was reportedly part of a larger Pentagon probe into suspected directed-energy attacks on U.S. personnel around the world. Following the Politico report, General Frank McKenzie, head of U.S. Central Command, testified to the Senate that he had seen "no evidence" of purported directed-energy attacks against U.S. forces in the Middle East. General Frank McKenzie and the deputy commander of CJTF–OIR, British Brig. Gen. Richard Bell, visited four outposts in eastern Syria on 21 May 2021. During the trip, McKenzie met with coalition troops and commanders, along with SDF commander Mazloum Abdi, where, according to Abdi, they discussed security and economic challenges. McKenzie and Bell reiterated the coalition's mission to assist the SDF in combating the ISIL insurgency, but McKenzie said the issue of how long U.S. troops would remain in Syria was up to President Joe Biden. McKenzie also expressed support for Iraq's announced repatriation of 100 families from the Al-Hawl refugee camp, which he said remained a breeding ground for radicalization and a target of ISIL recruitment, despite recent security improvements. In March 2021, the SDF conducted a five-day sweeping operation in the camp, arresting 125 suspects. June–November 2021 On 10 July, a mortar shell landed near MSS Conoco, with no injuries reported. It was reportedly the fourth attack on or near U.S. troops or diplomats within a week, including one in which two service members were injured. No group claimed responsibility, but U.S. forces suspected Iran-backed proxy militias of carrying out such attacks. On 20 October, five drones equipped with explosive charges attacked the al-Tanf garrison. There were no reports of injuries. U.S. officials reportedly held Iran and its proxy forces in the region responsible for the attack, with Pentagon Press Secretary John Kirby calling the attack "complex, coordinated and deliberate" and saying it resembled previous attacks by Iran-backed Shia militias. According to an Associated Press report, Iran "resourced and encouraged the attack," but the drones were not launched directly from Iranian territory. It remained unclear if the U.S. was considering retaliation, with Kirby stating "if there is to be a response, it will be at a time and a place and a manner of our choosing, and we certainly won't get ahead of those kinds of decisions." Pro-Iranian media outlets suggested the drone attack was retaliation for a previous attack near Palmyra, which the Axis of Resistance reportedly accused Israel of conducting; U.S. officials denied American forces were involved. Israeli defense sources suggested drone attacks on coalition forces in Syria were part of a larger Iranian-led "expansion campaign". In mid-November 2021, a New York Times investigation revealed that operations carried out by Talon Anvil, an Air Force special operations unit that was part of Task Force 9, resulted in substantial civilian casualties between 2014 and 2019. After the 18 March 2019 Baghuz strike that killed up to 64 civilians was revealed to the public, Defense Secretary Lloyd Austin asked Gen. Frank McKenzie for a briefing on the strike, which occurred days before McKenzie became CENTCOM commander. Pentagon spokesman John Kirby said the Pentagon was working on two studies on civilian harm, one of which focuses specifically on Syria. 2022 On 12 January 2022, the cabinet of German Chancellor Olaf Scholz decided to end the Bundeswehr's anti-ISIL mission in Syria, pending parliamentary approval. 
German military jets had been flying reconnaissance missions in Syria until March 2020. See also Foreign interventions by the United States Military intervention against the Islamic State of Iraq and the Levant American-led intervention in Iraq (2014–present) Military of ISIL List of wars and battles involving ISIL Foreign involvement in the Syrian Civil War Cities and towns during the Syrian Civil War Opposition–ISIL conflict during the Syrian Civil War Iraqi insurgency (2011–present) 2015 Egyptian military intervention in Libya Syria–United States relations Russian military intervention in the Syrian Civil War Timeline of the Syrian Civil War (August 2014–present) List of United States attacks on the Syrian government during the Syrian Civil War List of United States special forces raids during the Syrian Civil War References External links Operation Inherent Resolve ISIL frontline maps (Syria) 2014 in the Syrian civil war 2015 in the Syrian civil war 2016 in the Syrian civil war 2017 in the Syrian civil war 2018 in the Syrian civil war Military operations of the Syrian civil war involving the Syrian Democratic Forces Military operations of the Syrian civil war involving the People's Protection Units Military operations of the Syrian civil war involving the Peshmerga Military operations of the Syrian civil war involving the Islamic State of Iraq and the Levant Military operations of the Syrian civil war involving the al-Nusra Front US intervention in the Syrian Civil War Military operations of the Syrian civil war involving the Free Syrian Army Syria 2014 Syria 2017 Invasions by the United States Islamic State of Iraq and the Levant and the United States Wars involving the United Kingdom Invasions of Syria Major phases of the Syrian civil war
738649
https://en.wikipedia.org/wiki/Art%20forgery
Art forgery
Art forgery is the creation and sale of works of art that are falsely credited to other, usually more famous, artists. Art forgery can be extremely lucrative, but modern dating and analysis techniques have made the identification of forged artwork much simpler. History Art forgery dates back more than two thousand years. Roman sculptors produced copies of Greek sculptures. The contemporary buyers likely knew that they were not genuine. During the classical period, art was generally created for historical reference, religious inspiration, or simply aesthetic enjoyment. The identity of the artist was often of little importance to the buyer. During the Renaissance, many painters took on apprentices who studied painting techniques by copying the works and style of the master. As a payment for the training, the master would then sell these works. This practice was generally considered a tribute, not forgery, although some of these copies have later erroneously been attributed to the master. Following the Renaissance, the increasing prosperity of the bourgeoisie created a fierce demand for art. Near the end of the 14th century, Roman statues were unearthed in Italy, intensifying the populace's interest in antiquities, and leading to a sharp increase in the value of these objects. This upsurge soon extended to contemporary and recently deceased artists. Art had become a commercial commodity, and the monetary value of the artwork came to depend on the identity of the artist. To identify their works, painters began to mark them. These marks later evolved into signatures. As the demand for certain artwork began to exceed the supply, fraudulent marks and signatures began to appear on the open market. During the 16th century, imitators of Albrecht Dürer's style of printmaking added signatures to their prints to increase their value. In his engraving of the Virgin, Dürer added the inscription "Be cursed, plunderers and imitators of the work and talent of others". Even extremely famous artists created forgeries. In 1496, Michelangelo created a sleeping Cupid figure and treated it with acidic earth to cause it to appear ancient. He then sold it to a dealer, Baldassare del Milanese, who in turn sold it to Cardinal Riario of San Giorgio, who later learned of the fraud and demanded his money back. However, Michelangelo was permitted to keep his share of the money. The 20th-century art market has favored artists such as Salvador Dalí, Pablo Picasso, Paul Klee and Henri Matisse, and works by these artists have commonly been targets of forgery. These forgeries are typically sold to art galleries and auction houses that cater to the tastes of art and antiquities collectors; at the time of the occupation of France by German forces during World War II, the painting which fetched the highest price at Drouot, the main French auction house, was a fake Cézanne. Forgers There are essentially three varieties of art forger: the person who actually creates the fraudulent piece; the person who discovers a piece and attempts to pass it off as something it is not, in order to increase its value; and the person who discovers that a work is a fake but sells it as an original anyway. Copies, replicas, reproductions and pastiches are often legitimate works, and the distinction between a legitimate reproduction and a deliberate forgery is blurred. For example, Guy Hain used original molds to reproduce several of Auguste Rodin's sculptures. 
However, when Hain then signed the reproductions with the name of Rodin's original foundry, the works became deliberate forgeries. Artists An art forger must be at least somewhat proficient in the type of art he is trying to imitate. Many forgers were once fledgling artists who tried, unsuccessfully, to break into the market, eventually resorting to forgery. Sometimes, an original item is borrowed or stolen from the owner in order to create a copy. The forger then returns the copy to the owner, keeping the original. In 1799, a self-portrait by Albrecht Dürer, which had hung in the Nuremberg Town Hall since the 16th century, was loaned to a painter, who made a copy of the original and returned the copy in place of the original. The forgery was discovered in 1805, when the original came up for auction and was purchased for the royal collection. Although many art forgers reproduce works solely for money, some have claimed that they have created forgeries to expose the credulity and snobbishness of the art world. Essentially the artists claim, usually after they have been caught, that they have performed only "hoaxes of exposure". Some exposed forgers have later sold their reproductions honestly, by attributing them as copies, and some have actually gained enough notoriety to become famous in their own right. Forgeries painted by the late Elmyr de Hory, featured in the film F for Fake directed by Orson Welles, have become so valuable that forged de Horys have appeared on the market. A peculiar case was that of the artist Han van Meegeren, who became famous by creating "the finest Vermeer ever" and exposing that feat eight years later in 1945. His own work became valuable as well, which in turn attracted other forgers. One of these forgers was his son Jacques van Meegeren, who was in the unique position of being able to write certificates stating that a particular piece of art that he was offering "was created by his father, Han van Meegeren". Forgers usually copy works by deceased artists, but a small number imitate living artists. In May 2004, Norwegian painter Kjell Nupen noticed that the Kristianstad gallery was selling unauthorized, signed copies of his work. American art forger Ken Perenyi published a memoir in 2012 in which he detailed decades of his activities creating thousands of authentic-looking replicas of masters such as James Buttersworth, Martin Johnson Heade, and Charles Bird King, and selling the forgeries to famous auction houses such as Christie's and Sotheby's and to wealthy private collectors. Dealers Certain art dealers and auction houses have been alleged to be overly eager to accept forgeries as genuine and sell them quickly to turn a profit. If a dealer finds the work is a forgery, they may quietly withdraw the piece and return it to its previous owner, giving the forger an opportunity to sell it elsewhere. For example, New York art gallery M. Knoedler & Co. sold $80 million of fake artworks claimed to be by Abstract Expressionist artists between 1994 and 2008. During this time, Glafira Rosales brought in about 40 paintings she claimed were genuine and sold them to gallery president Ann Freedman. Claimed to be by the likes of Mark Rothko and Jackson Pollock, the paintings were all in fact forgeries by Pei-Shen Qian, an unknown Chinese artist and mathematician living in Queens. In 2013, Rosales entered a guilty plea to charges of wire fraud, money laundering, and tax evasion. In July 2017, Rosales was ordered by a federal judge to pay US$81 million to victims of the fraud. 
Pei-Shen Qian was indicted but fled to China and was not prosecuted. The final lawsuit connected with the case was settled in 2019. The case became the subject of a Netflix documentary Made You Look: The True Story About Fake Art, released in 2020. Some forgers have created false paper trails relating to a piece in order to make the work appear genuine. British art dealer John Drewe created false documents of provenance for works forged by his partner John Myatt, and even inserted pictures of forgeries into the archives of prominent art institutions. In 2016, Eric Spoutz pleaded guilty to one count of wire fraud related to the sale of hundreds of artworks falsely attributed to American masters, accompanied by forged provenance documents. Spoutz was sentenced to 41 months in federal prison and ordered to forfeit the $1.45 million he made from the scheme and pay $154,100 in restitution. Experts and institutions may also be reluctant to admit their own fallibility. Art historian Thomas Hoving estimates that various types of forged art comprise up to 40% of the art market, though others find this estimate to be absurdly high. Genuine fakes Since his conviction, John Myatt has continued to paint and sell his work as what he terms "Genuine Fakes." This allows Myatt to create and sell legitimate copies of well-known works of art, or to paint new works in the style of an artist. His Genuine Fakes copy artists such as Vincent van Gogh, Claude Monet, Leonardo da Vinci and Gustav Klimt, and can be bought as originals or limited edition prints. They are popular among collectors, and can sell for tens of thousands of pounds (GBP). British businessman James Stunt has allegedly commissioned a number of "genuine fakes" by Los Angeles artist and convicted forger Tony Tetro. However, some of these works were loaned by Stunt to the Prince's Foundation, which is one of Prince Charles's many charities, and displayed at historic Dumfries House, with the understanding that they were genuine. When Tetro claimed the works as his own, they were quietly removed from Dumfries House and returned to Stunt. Methods of detection The most obvious forgeries are revealed as clumsy copies of previous art. A forger may try to create a "new" work by combining the elements of more than one work. The forger may omit details typical of the artist they are trying to imitate, or add anachronisms, in an attempt to claim that the forged work is a slightly different copy, or a previous version of a more famous work. To detect the work of a skilled forger, investigators must rely on other methods. Technique of examination Often a thorough examination (sometimes referred to as Morellian analysis) of the piece is enough to determine authenticity. For example, a sculpture may have been created with obviously modern methods and tools. Some forgers have used artistic methods inconsistent with those of the original artists, such as incorrect characteristic brushwork, perspective, preferred themes or techniques, or have used colors that were not available during the artist's lifetime to create the painting. Some forgers have dipped pieces in chemicals to "age" them and some have even tried to imitate worm marks by drilling holes into objects. While attempting to authenticate artwork, experts will also determine the piece's provenance. If the item has no paper trail, it is more likely to be a forgery. 
Other techniques forgers use which might indicate that a painting is not authentic include: Frames, either new or old, that have been altered in order to make forged paintings look more genuine. To hide inconsistencies or manipulations, forgers will sometimes glue paper, either new or old, to a painting's back, or cut a forged painting from its original size. Recently added labels or artist listings on unsigned works of art, unless these labels are as old as the art itself, should cause suspicion. While art restorers legitimately use new stretcher bars when the old bars have worn, new stretcher bars on old canvases might be an indication that a forger is attempting to alter the painting's identity. Old nail holes or mounting marks on the back of a piece might indicate that a painting has been removed from its frame, doctored and then replaced into either its original frame or a different frame. Signatures on paintings or graphics that look inconsistent with the art itself (fresher, bolder, etc.). Unsigned work that a dealer has "heard" is by a particular artist. More recently, magnetic signatures, such as those used in the ink of bank notes, are becoming popular for authentication of artworks. Forensic authentication If examination of a piece fails to reveal whether it is authentic or forged, investigators may attempt to authenticate the object using some, or all, of the forensic methods below: Carbon dating is used to measure the age of an object up to 10,000 years old (a brief worked example of the age calculation is given after this section). "White Lead" dating is used to pinpoint the age of an object up to 1,600 years old. Conventional x-ray can be used to detect earlier work present under the surface of a painting. Sometimes artists will legitimately re-use their own canvases, but if the painting on top is supposed to be from the 17th century and the one underneath shows people in 19th-century dress, the scientist will assume the top painting is not authentic. X-rays can also be used to view inside an object to determine if the object has been altered or repaired. X-ray diffraction (the object bends x-rays) is used to analyze the components that make up the paint an artist used, and to detect pentimenti. X-ray fluorescence (bathing the object with radiation causes it to emit X-rays) can reveal if the metals in a metal sculpture or the composition of pigments are too pure, or newer than their supposed age. This technique can also reveal the artist's (or forger's) fingerprints. Ultraviolet fluorescence and infrared analysis are used to detect repairs or earlier painting present on canvases. Atomic Absorption Spectrophotometry (AAS) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) are used to detect anomalies in paintings and materials. If an element is present that the investigators know was not used historically in objects of this type, then the object is not authentic. Pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS) can be used to analyze the paint-binding medium. Similar to AAS and ICP-MS, if there are elements detected that were not used in the period, or not available in the region where the art is from, then the object is not authentic. Stable isotope analysis can be used to determine where the marble used in a sculpture was quarried. Thermoluminescence (TL) is used to date pottery. TL is the light produced by heat; older pottery produces more TL when heated than a newer piece. A feature of genuine paintings sometimes used to detect forgery is craquelure. 
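As a brief worked illustration of how one of the dating methods above turns a measurement into an age, the standard radiocarbon decay relationship can be used. This formula is general textbook physics rather than something taken from this article, and the 88% figure below is an invented example:

t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right), \qquad t_{1/2} \approx 5{,}730 \text{ years}

If, for instance, a wooden panel were measured to retain about 88% of its original carbon-14, its estimated age would be t ≈ (5,730 / 0.693) × ln(1/0.88) ≈ 8,267 × 0.128 ≈ 1,060 years, which would be inconsistent with a claimed 20th-century origin for the support.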
Digital authentication Statistical analysis of digital images of paintings is a new method that has recently been used to detect forgeries. Using a technique called wavelet decomposition, a picture is broken down into a collection of more basic images called sub-bands. These sub-bands are analyzed to determine textures, assigning a frequency to each sub-band. The broad strokes of a surface such as a blue sky would show up as mostly low-frequency sub-bands, whereas the fine strokes in blades of grass would produce high-frequency sub-bands. A group of 13 drawings attributed to Pieter Brueghel the Elder was tested using the wavelet decomposition method. Five of the drawings were known to be imitations. The analysis was able to correctly identify the five imitations. The method was also used on the painting Virgin and Child with Saints, created in the studios of Pietro Perugino. Historians have long suspected that Perugino painted only a portion of the work. The wavelet decomposition method indicated that at least four different artists had worked on the painting. Problems with authentication Art specialists with expertise in art authentication began to surface in the art world during the late 1850s. At that time they were usually historians or museum curators, writing books about paintings, sculpture, and other art forms. Communication among the different specialties was poor, and they often made mistakes when authenticating pieces. While many books and art catalogues were published prior to 1900, many were not widely circulated, and often did not contain information about contemporary artwork. In addition, specialists prior to the 1900s lacked many of the important technological means that experts use to authenticate art today. Traditionally, a work in an artist's "catalogue raisonné" has been key to confirming the authenticity, and thus the value, of a work. Omission from an artist's catalogue raisonné can indeed prove fatal to any potential resale of a work, notwithstanding any proof the owner may offer to support authenticity. The fact that experts do not always agree on the authenticity of a particular item makes the matter of provenance more complex. Some artists have even accepted copies as their own work - Picasso once said that he "would sign a very good forgery". Camille Corot painted more than 700 works, but also signed copies made by others in his name, because he felt honored to be copied. Occasionally work that has previously been declared a forgery is later accepted as genuine; Vermeer's Young Woman Seated at the Virginals had been regarded as a forgery from 1947 until March 2004, when it was finally declared genuine, although some experts still disagree. At times restoration of a piece is so extensive that the original is essentially replaced when new materials are used to supplement older ones. An art restorer may also add or remove details on a painting, in an attempt to make the painting more saleable on the contemporary art market. This, however, is not a modern phenomenon - historical painters often "retouched" other artists' works by repainting some of the background or details. Many forgeries still escape detection; Han van Meegeren, possibly the most famous forger of the 20th century, used historical canvases for his Vermeer forgeries and created his own pigments to ensure that they would appear authentic. He confessed to creating the forgeries only after he was charged with treason, an offense which carried the death penalty. 
So masterful were his forgeries that van Meegeren was forced to create another "Vermeer" while under police guard, to prove himself innocent of the treason charges. A recent instance of potential art forgery involves the Getty kouros, the authenticity of which has not been resolved. The Getty kouros was offered, along with seven other pieces, to the J. Paul Getty Museum in Malibu, California, in the spring of 1983. For the next 12 years, art historians, conservators, and archaeologists studied the kouros, and scientific tests were performed that showed the surface could not have been created artificially. However, when several of the other pieces offered with the kouros were shown to be forgeries, its authenticity was again questioned. In May 1992, the kouros was displayed in Athens, Greece, at an international conference called to determine its authenticity. The conference failed to solve the problem; while most art historians and archeologists denounced it, the scientists present believed the statue to be authentic. To this day, the Getty kouros' authenticity remains a mystery and the statue is displayed with the date: "Greek, 530 B.C. or modern forgery". To combat these problems, some initiatives are being developed. The Authentication in Art Foundation, established in 2012 by experts from different fields involved with the authenticity of art, aims to bring together experts from different specialities to combat art forgery. Among its members are noted experts such as David Bomford, Martin Kemp, and Mauricio Seracini. The Cultural Heritage Science Open Source (CHSOS), founded by Antonino Cosentino, "provide[s] practical methods for the scientific examination of fine arts, historical and archaeological objects". The International Foundation for Art Research (IFAR), established in 1969, is a "not-for-profit educational and research organization dedicated to integrity in the visual arts. IFAR offers impartial and authoritative information on authenticity, ownership, theft, and other artistic, legal, and ethical issues concerning art objects. IFAR serves as a bridge between the public, and the scholarly and commercial art communities". The Institute of Appraisal and Authentication of works of Art (i3A) is a not-for-profit organization that gathers professionals of different fields, providing equipment and preparing procedure manuals aligned with international techniques, in the search for further knowledge on the production of Brazilian artists. Photographic forgery Recently, photographs have become a target of forgers, and as the market value of these works increases, so does the incentive to forge them. Following their deaths, works by Man Ray and Ansel Adams became frequent targets of forgery. The detection of forged photography is particularly difficult, as experts must be able to tell the difference between originals and reprints. In the case of photographer Man Ray, print production was often poorly managed during his lifetime, and many of his negatives were stolen by people who had access to his studio. The possession of the photo-negatives would allow a forger to print an unlimited number of fake prints, which he could then pass off as original. Fake prints would be nearly indistinguishable from originals, if the same photographic paper was used. Since unused photographic paper has a short (2–5 years) useful life, and the composition of photographic paper was frequently changed, the fakes would have had to be produced not long after the originals. 
Further complicating matters, following Man Ray's death, control of printing copyrights fell to his widow, Juliet Man Ray, and her brother, who approved production of a large number of prints that Man Ray himself had earlier rejected. While these reprints are of limited value, the originals, printed during Man Ray's lifetime, have skyrocketed in value, leading many forgers to alter the reprints so that they appear to be original. US legal issues In the United States, criminal prosecutions of art forgers are possible under federal, state and/or local laws. For example, federal prosecutions have been successful using generalized criminal statutes, including the Racketeer Influenced and Corrupt Organizations Act (RICO). A successful RICO charge was brought against a family which had sold counterfeit prints purportedly by Chagall, Miró, and Dalí. The defendants were also found guilty of other federal crimes including conspiracy to defraud, money laundering, and postal fraud. Federal prosecutors are also able to prosecute forgers using the federal wire fraud or mail fraud statutes where the defendants used such communications. However, federal criminal prosecutions against art forgers are seldom brought, due in part to high evidentiary burdens and competing law enforcement priorities. For example, internet art frauds now appear in federal court rulings that can be studied in the PACER court records. Some frauds are carried out on popular internet auction websites, and traces showing the full extent of the frauds are readily available to anyone conducting forensic analysis or even basic professional due diligence, using sources such as PACER, enforcement-authority records, and the internet. Prosecution is also possible under state criminal laws, such as prohibitions against criminal fraud, or against the simulation of personal signatures. However, in order to trigger criminal liability under states' laws, the government must prove that the defendant had intent to defraud. The evidentiary burden, as in all criminal prosecutions, is high; proof "beyond a reasonable doubt" is required. Art forgery may also be subject to civil sanctions. The Federal Trade Commission, for example, has used the FTC Act to combat an array of unfair trade practices in the art market. An FTC Act case was successfully brought against a purveyor of fake Dalí prints in FTC v. Magui Publishers, Inc., who was permanently enjoined from fraudulent activity and ordered to disgorge their illegal profits. In that case, the defendant had collected millions of dollars from his sale of forged prints. At the state level, art forgery may constitute a species of fraud, material misrepresentation, or breach of contract. The Uniform Commercial Code provides contractually based relief to duped buyers based on warranties of authenticity. The predominant civil theory to address art forgery remains civil fraud. When substantiating a civil fraud claim, the plaintiff is generally required to prove that the defendant falsely represented a material fact, that this representation was made with intent to deceive, that the plaintiff reasonably relied on the representation, and that the representation resulted in damages to the plaintiff. Some legal experts have recommended strengthening existing intellectual property laws to address the growing problem of art forgeries proliferating in the mass market. They argue that the existing legal regime is ineffective in combating this growing trend. 
UK legal issues In the United Kingdom, if a piece of art is found to be a forgery, the owner will have different legal remedies according to how the work was obtained. If it was bought at an auction house, there may be a contractual guarantee which enables the buyer to be reimbursed for the piece, if returned within a set period. Further contractual warranties may be applicable to the purchase, meaning that terms such as fitness for purpose could be implied (ss. 13–14 Sale of Goods Act 1979 or ss. 9–11 Consumer Rights Act 2015). Detecting forgeries is difficult for a multitude of reasons: a lack of resources to identify forgeries, a general reluctance to identify forgeries because of the negative economic implications for both owner and dealer, and the burden-of-proof requirement all make it problematic to criminally charge forgers. Further, the international nature of the art market creates difficulties due to contrasting laws from different jurisdictions. Art crime education In summer 2009, ARCA - the Association for Research into Crimes against Art - began offering the first postgraduate program dedicated to the study of art crime. The Postgraduate Certificate Program in Art Crime and Cultural Heritage Protection includes coursework that discusses art fakes and forgery. Education on art crime also requires research efforts from the scholarly community through analysis of fake and forged artworks. Fictional art forgery Film Orson Welles directed a film about art forgery called F for Fake. How to Steal a Million (1966, directed by William Wyler) stars Audrey Hepburn joining a burglar (Peter O'Toole) to prevent technical examinations on Cellini's sculpture, Venus, that would expose both her grandfather and father as art forgers (the latter working on more forgeries by Cézanne and van Gogh). In Incognito (1998, directed by John Badham and starring Jason Patric), an expert in forging famous "third tier" artists' paintings is hired to paint a Rembrandt, but is framed for murder after meeting a beautiful Rembrandt expert. In the 1999 remake of The Thomas Crown Affair, Pierce Brosnan's millionaire character plays cat-and-mouse about a stolen (and then, on his initiative, forged) Monet painting with an insurance investigator (Rene Russo). Monet's San Giorgio Maggiore at Dusk is overlaid with a painting by Camille Pissarro, The Artist's Garden at Eragny. In the film The Moderns, the lead character, artist Nick Hart, forges several paintings, including a Cézanne, for his art dealer. These are sold to a wealthy collector who, upon being informed that they are fakes, destroys them in the presence of company. The 2001 documentary film about international art forgery, The Forgery, consists of interviews with the well-known artist Corneille (Guillaume Cornelis van Beverloo) and Dutch art forger Geert Jan Jansen. In the Polish comedy Vinci, two thieves are commissioned to steal Leonardo da Vinci's Lady with an Ermine. One of them does not want the precious painting to disappear from the Czartoryski Museum and orders a forgery of it. In the 2007 film St Trinian's, the main characters steal and frame Vermeer's Girl with a Pearl Earring. In the 2014 film The Forger, the title character, played by John Travolta, attempts to forge a well-known piece of art. In the 2012 remake of Gambit, Colin Firth's character plans to sell a forged painting from the Haystacks series by Monet to a British billionaire (Alan Rickman). 
Already the owner of the genuine Haystacks Dawn, he has long been searching for Haystacks Dusk to complete his set. Although Haystacks is a genuine series of paintings by Monet, the two paintings in this film are fictional. TV series White Collar is a series about Neal Caffrey (played by Matt Bomer), a convicted art forger who starts working with the FBI to solve cases for the White Collar Crime Division. Lovejoy, is about a roguish art dealer with a reputation for being able to spot forgeries Literature Tom Ripley is involved in an artwork forgery scheme in several of Patricia Highsmith's crime novels, most notably Ripley Under Ground (1970), in which he is confronted by a collector who correctly suspects that the paintings sold by Tom are forgeries. The novel was adapted to film in 2005, and the 1977 film The American Friend is also partially based on the novel. In Robertson Davies' 1985 novel What's Bred in the Bone, protagonist Francis Cornish studies with an accomplished art forger and is inspired to produce two paintings which are subsequently accepted by experts as original 16th-century artworks. In Russell H. Greenan's novel It Happened in Boston?, the protagonist is a madman, a serial killer, and an astonishingly good artist in the Old Master style, fooled into creating a painting that becomes accepted as a da Vinci. The Art Thief, an international best-selling novel by professor of art history Noah Charney, features a series of forgeries and art heists. In Clive Barker's 1991 novel Imajica, the protagonist, John Furie Zacharias, known as "Gentle," makes his living as a master art forger. William Gaddis' acclaimed 1955 novel The Recognitions centers on the life of an art forger and prodigal Calvinist named Wyatt Gwyon and his struggle to find meaning within art. The novel itself discusses the process and history of forgery in depth as well as the possible artistic merit of forged paintings. David Mitchell's novel Ghostwritten features a section set in the State Hermitage Museum in Russia, and follows a crime syndicate that steals artwork from the museum to sell on the black market, replacing the originals with high quality forgeries. The plot of Dominic Smith's novel The Last Painting of Sara de Vos revolves around a forged work by the fictional 17th century Dutch painter. See also Archaeological forgery Authenticity in art Forgery Museum of Art Fakes, Vienna Works of Art with Contested Provenance Notable forgeries Etruscan terracotta warriors Flower portrait Michelangelo's Sleeping Cupid Rospigliosi Cup sometimes referred to as the Cellini Cup Samson Ceramics forgeries/reproductions Known art forgers and dealers of forged art Giovanni Bastianini (1838–1868), Italian forger of renaissance sculptures Wolfgang Beltracchi (born 1951), German forger William Blundell (born 1947), forged Australian painters Yves Chaudron, France - forged Mona Lisa (1911) Zhang Daqian (1899–1993), forged Chinese art Alceo Dossena (1878–1937), Italian sculptor John Drewe (born 1948), sold the work of John Myatt Shaun Greenhalgh (born 1960), British forger Guy Hain (living), forged Rodin bronzes Eric Hebborn (1934–1996), British-born forger of old master drawings Elmyr de Hory (1906–1976), Hungarian-born painter of Picassos Geert Jan Jansen (born 1943), Dutch painter Karel Appel recognized one of Jansen's forgeries as his own work. Tom Keating (1917–1984), British art restorer and forger who claimed to have faked more than 2,000 paintings by over 100 different artists Mark A. 
Landis (born 1955), American forger who donated his works to many American museums Fernand Legros (1919–1983), purveyor of forged art Han van Meegeren (1889–1947), Dutchman who painted Vermeers John Myatt (born 1945), British painter, created forgeries for John Drewe Ken Perenyi (born 1947), American, forged works of American masters Ely Sakhai (born 1952), who twice sold Gauguin's Vase de Fleurs Jean-Pierre Schecroun (active 1950s), forged Picasso Émile Schuffenecker (1851–1934), French forger with Otto Wacker David Stein (1935–1999), U.S. art dealer and painter Tony Tetro (born 1950), prolific U.S. forger The Spanish Forger (early 20th century), French forger of medieval miniatures William J. Toye (1931–2018), forged and sold the work of Clementine Hunter Eduardo de Valfierno (ca. 1850–ca. 1931), art dealer who worked with forger Yves Chaudron Otto Wacker (1898–1970), German purveyor of fake Van Goghs Kenneth Walton (living), prosecuted for selling forged paintings on eBay Earl Washington (born 1962), forger of prints that he attributed to a grandfather, allegedly named "E[arl] M[ack] Washington". References Further reading A History of Art Forgery. Judging the Authenticity of Prints by The Masters, by art historian David Rudd Cycleback. Museum Security Network Careful Collecting: Fakes and Forgeries Can science help solve art crime? Famous Forgers Art Signature Dictionary, one of the largest collections of counterfeit art, with more than 4000 pictures of forged paintings and signatures from over 300 renowned artists. External links The Association for Research into Crimes Against Art Postgraduate Certificate Program
37802534
https://en.wikipedia.org/wiki/2012%20Sun%20Bowl
2012 Sun Bowl
The 2012 Hyundai Sun Bowl, the 79th edition of the game, was a post-season American college football bowl game held on December 31, 2012, at Sun Bowl Stadium in El Paso, Texas, as part of the 2012–13 NCAA bowl season. The game was televised in the United States on CBS. The game featured the USC Trojans from the Pac-12 Conference (Pac-12) against the Georgia Tech Yellow Jackets from the Atlantic Coast Conference (ACC). The Trojans accepted their invitation to the game after attaining a 7–5 regular-season record, while the Yellow Jackets entered the game with a 6–7 record (5–3 ACC), after losing to Florida State in the 2012 ACC Championship Game. Georgia Tech had to request a postseason waiver, which was granted, in order to participate, because it had reached the conference championship game under extenuating circumstances caused by sanctions on the two teams ahead of it in the division standings. Georgia Tech won the game in a 21–7 upset, thanks to a strong performance by the Yellow Jackets' defense. USC, which came into the game averaging more than 30 points per game, was limited to 7 points and 205 total yards. Georgia Tech cornerback Rod Sweeting was named the game's most valuable player. Teams The 2012 Sun Bowl marked the 4th meeting between USC and Georgia Tech, with USC holding a 2–1 advantage coming into the bowl game. The previous meeting was on September 22, 1973, in a 29–18 win for the Trojans. USC Coming off a two-year postseason ban, the Trojans had high hopes to return to their former glory in 2012, which were undoubtedly heightened by their preseason #1 ranking. USC started the season 6–1 and appeared likely to at least make it back to the Rose Bowl. However, these plans were quickly dampened by the Trojans losing four of their last five games to finish at 7–5 and in a second-place tie in the Pac-12 South Division. In addition, the Trojans lost their star quarterback Matt Barkley to injuries suffered in a game against rivals UCLA, leading redshirt freshman Max Wittek to start the remaining USC games, including the 2012 Sun Bowl. The Trojans came into the bowl game averaging more than 30 points per game. The USC offense was led by wide receivers Robert Woods and Fred Biletnikoff Award winner Marqise Lee, with the latter coming into the game with 2,588 all-purpose yards. USC's offense was prone to turning the ball over, however, coming into the game with 31 turnovers. On defense, USC was led by defensive end Morgan Breslin, who came into the game with 12 sacks. The Trojans' rushing defense was average, however, as USC ranked 59th nationally against the run. This was the Trojans' third Sun Bowl; they had previously appeared in the 1990 game against the Michigan State Spartans and again in the 1998 game against the TCU Horned Frogs, losing both games by scores of 17–16 and 28–19, respectively. The Sun Bowl also marked USC's first bowl game since the 2009 Emerald Bowl. Georgia Tech Georgia Tech started the season poorly, going 2–4 in their first six games. However, the Yellow Jackets won four of their last six games to end the regular season with a 5–3 conference record and a 6–6 record overall, making them bowl-eligible initially. The Yellow Jackets finished in a three-way tie for first place in the ACC's Coastal Division. 
However, both teams they were tied with (the North Carolina Tar Heels and Miami Hurricanes) were banned from the postseason that year (one by the NCAA and another self-imposed), leaving the Yellow Jackets as the only such team eligible for the 2012 ACC Championship Game. The ACC and the school filed a bowl waiver with the NCAA, which was promptly granted, to assure that the Yellow Jackets would be able to play in a bowl game in case they were to drop to a 6–7 record after a loss in the championship game. They lost to the Florida State Seminoles by a score of 21–15, but still were in solid position for the Sun Bowl's invite. Like with USC, Georgia Tech came into the game with an offense that averaged more than 30 points per game. With its triple option offense, Georgia Tech averaged 312.5 rushing yards per game, which was fourth in the FBS. Leading the offense was quarterback Tevin Washington, who led the team with 19 rushing touchdowns. The Yellow Jackets' defensive unit came into the game allowing an average of 30 points per game. Defensive coordinator Al Groh was fired midway through the season after poor defensive performances against Miami Hurricanes, MTSU and Clemson. Due to Georgia Tech's porous defense, the Yellow Jackets came into the game as underdogs. This was the Yellow Jackets' third Sun Bowl; they had previously won the 1970 game over the Texas Tech Red Raiders by a score of 17–9, and they would later lose the 2011 game to the Utah Utes by a score of 30–27 in overtime. The 2012 Sun Bowl also marked Georgia Tech's 16th consecutive bowl game. Georgia Tech was coming into the game with a seven-game bowl losing streak. Game summary First half In the first quarter, USC appeared to have scored first on a 38-yard field goal. However, the play was reviewed and the field goal was overturned after replay showed the kick sailing wide left. The rest of the first quarter remained scoreless. In the second quarter, the Yellow Jackets went up 7-0 on a 3-yard touchdown pass from quarterback Vad Lee to David Sims. Georgia Tech's defense was then able to force USC to punt. However, on the ensuing drive, quarterback Vad Lee threw an interception to USC's Lamar Dawson. USC was unable to take advantage of the interception, however, and quickly gave the ball back to Georgia Tech after Georgia Tech's Rod Sweeting intercepted Max Wittek. Georgia Tech's next possession was also short lived, after USC's Morgan Breslin forced Tech's Vad Lee to fumble. Taking advantage of the fumble, USC tied the game 7-7 on a 9-yard touchdown pass from Max Wittek to Silas Redd. The score remained tied 7-7 going into the half. Second half Georgia Tech's defense forced USC to punt on the Trojans' first possession of the second half. On the ensuing punt return, Georgia Tech's Jamal Golden returned the punt 56 yards to the USC 1-yard line. Two plays later, Georgia Tech took the lead, 14–7, on a 1-yard touchdown run from quarterback Tevin Washington. Near the end of the third quarter, USC drove to Georgia Tech's 38-yard line. However, the Trojans were unable to convert a fourth and 4 play, turning the ball on downs. Taking advantage of the turnover on downs, Georgia Tech further extended their lead to 21–7 on a 17-yard touchdown pass from Tevin Washington to Orwin Smith. After being forced to a three-and-out, Trojan punter Kyle Negrete pinned Georgia Tech at the Yellow Jackets' 5-yard line. Georgia Tech drove from their own 5-yard line to USC's 26-yard line. 
However, the Yellow Jackets gave the ball back to the Trojans after being unable to convert a fourth and 4 play. On the ensuing USC drive, the Trojans drove to the Yellow Jackets' 4-yard line. The Trojans were unable to score, however, after Georgia Tech's Quayshawn Nealy intercepted USC's Max Wittek's pass in the end zone. USC had one more chance to cut into the lead after forcing Georgia Tech to punt with 1:41 left in regulation. Helped by several Tech penalties, USC once again drove deep into Georgia Tech territory, this time to the 14-yard line. The Trojans were still unable to score, however, and Georgia Tech sealed its first bowl victory since 2004 after Max Wittek threw an interception to Georgia Tech's Jamal Golden with 1:04 remaining in regulation. Scoring summary Statistical summary Georgia Tech's defense dominated the game, as Georgia Tech allowed only 205 yards of total offense and held USC to a season-low 7 points. USC quarterback Max Wittek completed only 14 of his 37 passes for 107 yards and threw 3 interceptions. In addition, wide receivers Marqise Lee and Robert Woods were held to only 41 yards and 33 yards receiving, respectively. The Yellow Jackets also did not gain many passing yards, with a combined 75 passing yards from quarterbacks Tevin Washington and Vad Lee. Georgia Tech's leading rusher was David Sims, who rushed the ball 17 times for 99 yards. Zach Laskey was Georgia Tech's no. 2 rusher, rushing the ball 6 times for 60 yards. Overall, Georgia Tech rushed the ball for 294 yards. USC's leading rusher was Silas Redd, who rushed for 88 yards on 17 carries. Curtis McNeal was USC's no. 2 rusher, rushing for only 5 yards on 3 carries. References Sun Bowl Georgia Tech Yellow Jackets football bowl games USC Trojans football bowl games December 2012 sports events in the United States
6512767
https://en.wikipedia.org/wiki/Students%20for%20Free%20Culture
Students for Free Culture
Students for Free Culture, formerly known as FreeCulture.org, is an international student organization working to promote free culture ideals, such as cultural participation and access to information. It was inspired by the work of former Stanford, now Harvard, law professor Lawrence Lessig, who wrote the book Free Culture, and it frequently collaborates with other prominent free culture NGOs, including Creative Commons, the Electronic Frontier Foundation, and Public Knowledge. Students for Free Culture has over 30 chapters on college campuses around the world, and a history of grassroots activism. Students for Free Culture is sometimes referred to as "FreeCulture", "the Free Culture Movement", and other variations on the "free culture" theme, but none of those are its official name. It is officially Students for Free Culture, as set forth in the new bylaws that were ratified by its chapters on October 1, 2007, which changed its name from FreeCulture.org to Students for Free Culture. Goals Students for Free Culture has stated its goals in a "manifesto": The mission of the Free Culture movement is to build a bottom-up, participatory structure to society and culture, rather than a top-down, closed, proprietary structure. Through the democratizing power of digital technology and the Internet, we can place the tools of creation and distribution, communication and collaboration, teaching and learning into the hands of the common person -- and with a truly active, connected, informed citizenry, injustice and oppression will slowly but surely vanish from the earth. It has yet to publish a more "official" mission statement, but some of its goals are:
decentralization of creativity—getting ordinary people and communities involved with art, science, journalism and other creative industries, especially through new technologies
reforming copyright, patent, and trademark law in the public interest, ensuring that new creators are not stifled by old creators
making important information available to the public
Purpose According to its website, Students for Free Culture has four main functions within the free culture movement:
Creating and providing resources for its chapters and for the general public
Outreach to youth and students
Networking with other people, companies and organizations in the free culture movement
Issue advocacy on behalf of its members
History Initial stirrings at Swarthmore College Students for Free Culture had its origins in the Swarthmore Coalition for the Digital Commons (SCDC), a student group at Swarthmore College. The SCDC was founded in 2003 by students Luke Smith and Nelson Pavlosky, and was originally focused on issues related to free software, digital restrictions management, and treacherous computing, inspired largely by the Free Software Foundation. After watching Lawrence Lessig's OSCON 2002 speech entitled "free culture", however, they expanded the club's scope to cover cultural participation in general (rather than just in the world of software and computers), and began tackling issues such as copyright reform. In September 2004, SCDC was renamed Free Culture Swarthmore, laying the groundwork for Students for Free Culture and making it the first existing chapter. OPG v. Diebold case Within a couple of months of founding the SCDC, Smith and Pavlosky became embroiled in the controversy surrounding Diebold Election Systems (now Premier Election Solutions), a voting machine manufacturer accused of making bug-ridden and insecure electronic voting machines. 
The SCDC had been concerned about electronic voting machines using proprietary software rather than open source software, and kept an eye on the situation. Their alarm grew when a copy of Diebold's internal e-mail archives leaked onto the Internet, revealing questionable practices at Diebold and possible flaws with Diebold's machines, and they were spurred into action when Diebold began sending legal threats to voting activists who posted the e-mails on their websites. Diebold was claiming that the e-mails were their copyrighted material, and that anyone who posted these e-mails online was infringing upon their intellectual property. The SCDC posted the e-mail archive on its website and prepared for the inevitable legal threats. Diebold sent takedown notices under the DMCA to the SCDC's ISP, Swarthmore College. Swarthmore took down the SCDC website, and the SCDC co-founders sought legal representation. They contacted the Electronic Frontier Foundation for help, and discovered that they had an opportunity to sign on to an existing lawsuit against Diebold, OPG v. Diebold, with co-plaintiffs from a non-profit ISP called the Online Policy Group who had also received legal threats from Diebold. With pro bono legal representation from EFF and the Stanford Cyberlaw Clinic, they sued Diebold for abusing copyright law to suppress freedom of speech online. After a year of legal battles, the judge ruled that posting the e-mails online was a fair use, and that Diebold had violated the DMCA by misrepresenting their copyright claims over the e-mails. The network of contacts that Smith and Pavlosky built during the lawsuit, including dozens of students around the country who had also hosted the Diebold memos on their websites, gave them momentum they needed to found an international student movement based on the same free culture principles as the SCDC. They purchased the domain name Freeculture.org and began building a website, while contacting student activists at other schools who could help them start the organization. FreeCulture.org launching at Swarthmore On April 23, 2004, Smith and Pavlosky announced the official launch of FreeCulture.org, in an event at Swarthmore College featuring Lawrence Lessig as the keynote speaker (Lessig had released his book Free Culture less than a month beforehand.) The SCDC became the first Freeculture.org chapter (beginning the process of changing its name to Free Culture Swarthmore), and students from other schools in the area who attended the launch went on to found chapters on their campuses, including Bryn Mawr College and Franklin and Marshall. Internet campaigns FreeCulture.org began by launching a number of internet campaigns, in an attempt to raise its profile and bring itself to the attention of college students. These have covered issues ranging from defending artistic freedom (Barbie in a Blender) to fighting the Induce Act (Save The iPod), from celebrating Creative Commons licenses and the public domain (Undead Art) to opposing business method patents (Cereal Solidarity). While these one-shot websites succeeded in attracting attention from the press and encouraged students to get involved, they didn't directly help the local chapters, and the organization now concentrates less on web campaigns than it did in the past. However, their recent Down With DRM video contest was a successful "viral video" campaign against DRM, and internet campaigns remain an important tool in free culture activism. 
Increased emphasis on local chapters Today the organization focuses on providing services to its local campus chapters, including web services such as mailing lists and wikis, pamphlets and materials for tabling, and organizing conferences where chapter members can meet up. Active chapters are located at schools such as New York University (NYU), Harvard, MIT, Fordham Law, Dartmouth, University of Florida, Swarthmore, USC, Emory, Reed, and Yale. The NYU chapter made headlines when it began protesting outside of record stores against DRM on CDs during the Sony rootkit scandal, resulting in similar protests around New York and Philadelphia. In 2008, the MIT chapter developed and released YouTomb, a website to track videos removed by DMCA takedown from YouTube. Other activities at local chapters include: art shows featuring Creative Commons-licensed art, mix CD-exchanging flash mobs, film-remixing contests, iPod liberating parties, where the organizers help people replace the proprietary DRM-encumbered operating system on their iPods with a free software system like Rockbox, Antenna Alliance, a project that provides free recording space to bands, releases their music online under Creative Commons licenses, and distributes the music to college radio stations, a campaign to promote open access on university campuses. Structure Students for Free Culture began as a loose confederation of student groups on different campuses, but it has been moving towards becoming an official tax-exempt non-profit. With the passage of official bylaws, Students for Free Culture now has a clear governance structure which makes it accountable to its chapters. The supreme decision-making body is the Board of Directors, which is elected once a year by the chapters, using a Schulze method for voting. It is meant to make long-term, high-level decisions, and should not meddle excessively in lower-level decisions. Practical everyday decisions will be made by the Core team, composed of any students who are members of chapters and meet the attendance requirements. Really low-level decisions and minutiae will be handled by a coordinator, who ideally will be a paid employee of the organization, and other volunteers and assistants. A new board of directors was elected in February 2008, and a new Core Team was assembled shortly thereafter. There is no coordinator yet. References External links Official homepage Blog posts about Students for Free Culture/FreeCulture.org in the media 2007 establishments in the United States Copyright law organizations Free content Intellectual property activism Political advocacy groups in the United States Student political organizations in the United States
562904
https://en.wikipedia.org/wiki/Dc%20%28computer%20program%29
Dc (computer program)
dc (desk calculator) is a cross-platform reverse-Polish calculator which supports arbitrary-precision arithmetic. Written by Lorinda Cherry and Robert Morris at Bell Labs, it is one of the oldest Unix utilities, preceding even the invention of the C programming language. Like other utilities of that vintage, it has a powerful set of features but terse syntax. Traditionally, the bc calculator program (with infix notation) was implemented on top of dc. This article provides some examples in an attempt to give a general flavour of the language; for a complete list of commands and syntax, one should consult the man page for one's specific implementation.

History

dc is the oldest surviving Unix language program. When its home Bell Labs received a PDP-11, dc, written in B, was the first language to run on the new computer, even before an assembler. Ken Thompson has opined that dc was the very first program written on the machine.

Basic operations

To multiply four and five in dc (note that most of the whitespace is optional):

$ cat << EOF > cal.txt
4 5 * p
EOF
$ dc cal.txt
20
$

The results are also available from the commands:

$ echo "4 5 * p" | dc

or

$ dc -
4 5*pq
20
$ dc
4 5 * p
20
q
$ dc -e '4 5 * p'

This translates into "push four and five onto the stack, then, with the multiplication operator, pop two elements from the stack, multiply them and push the result onto the stack." Then the p command is used to examine (print out to the screen) the top element on the stack. The q command quits the invoked instance of dc. Note that numbers must be spaced from each other, even though some operators need not be.

The arithmetic precision is changed with the command k, which sets the number of fractional digits (the number of digits following the point) to be used for arithmetic operations. Since the default precision is zero, this sequence of commands produces 0 as a result:

2 3 / p

By adjusting the precision with k, an arbitrary number of decimal places can be produced. This command sequence outputs .66666:

5 k
2 3 / p

To evaluate sqrt((12 + (-3)^4) / 11) - 22 (v computes the square root of the top of the stack and _ is used to input a negative number):

12 _3 4 ^ + 11 / v 22 - p

To swap the top two elements of the stack, use the r command. To duplicate the top element, use the d command.

Input/output

To read a line from stdin, use the ? command. This evaluates the line as if it were a dc command, and so it is necessary that it be syntactically correct and presents a potential security problem because the ! dc command enables arbitrary command execution. As mentioned above, p prints the top of the stack with a newline after it. n pops the top of the stack and prints it without a trailing newline. f prints the entire stack with one entry per line. dc also supports arbitrary input and output radices. The i command pops the top of the stack and uses it for the input base. Hex digits must be in upper case to avoid collisions with dc commands and are limited to A-F. The o command does the same for the output base, but keep in mind that the input base affects the parsing of every numeric value afterwards, so it is usually advisable to set the output base first. Therefore 10o sets the output radix to the current input radix, but generally not to 10 (ten). Nevertheless Ao resets the output base to 10 (ten), regardless of the input base. To read the values, the K, I and O commands push the current precision, input radix and output radix on to the top of the stack.
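For instance, one way to inspect the current settings from the command line (a minimal sketch against GNU dc; the exact output formatting may vary between implementations) shows the defaults of zero precision and base ten for both radices:

$ echo 'K p I p O p' | dc
0
10
10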
As an example, to convert from hex to binary:

$ echo 16i2o DEADBEEFp | dc
11011110101011011011111011101111

Language features

Registers

In addition to these basic arithmetic and stack operations, dc includes support for macros, conditionals and storing of results for later retrieval. The mechanism underlying macros and conditionals is the register, which in dc is a storage location with a single character name which can be stored to and retrieved from: sc pops the top of the stack and stores it in register c, and lc pushes the value of register c onto the stack. For example:

3 sc 4 lc * p

Registers can also be treated as secondary stacks, so values can be pushed and popped between them and the main stack using the S and L commands.

Strings

String values are enclosed in [ and ] characters and may be pushed onto the stack and stored in registers. The a command converts the low order byte of the numeric value into an ASCII character, or if the top of the stack is a string it replaces it with the first character of the string. There are no ways to build up strings or perform string manipulation other than executing it with the x command, or printing it with the P command. The # character begins a comment to the end of the line.

Macros

Macros are then implemented by allowing registers and stack entries to be strings as well as numbers. A string can be printed, but it can also be executed (i.e. processed as a sequence of dc commands). So for instance we can store a macro to add one and then multiply by 2 into register m:

[1 + 2 *] sm

and then (using the x command which executes the top of the stack) we can use it like this:

3 lm x p

Conditionals

Finally, we can use this macro mechanism to provide conditionals. The command =r pops two values from the stack, and executes the macro stored in register r only if they are equal. So this prints the string equal only if the top of the stack is equal to 5:

[[equal]p] sm 5 =m

Other conditionals are >, !>, <, !<, !=, which execute the specified macro if the top two values on the stack are greater, less than or equal to ("not greater"), less than, greater than or equal to ("not less than"), and not equals, respectively. Note that the order of the operands in inequality comparisons is the opposite of the order for arithmetic: for example, 5 3 - evaluates to 5 - 3 = 2, but 5 3 <x runs the contents of register x, because the test performed is 3 < 5.

Loops

Looping is then possible by defining a macro which (conditionally) reinvokes itself. A simple factorial of the top of the stack might be implemented as:

# F(x): return x!
#   if x-1 > 1
#     return x * F(x-1)
#   otherwise
#     return x
[d1-d1<F*]dsFxp

The 1Q command exits from a macro, allowing an early return. q quits from two levels of macros (and dc itself if there are fewer than two levels on the call stack). z pushes the current stack depth (the depth before the z operation itself).

Examples

Summing the entire stack

This is implemented with a macro stored in register a which conditionally calls itself, performing an addition each time, until only one value remains on the stack. The z operator is used to push the number of entries in the stack onto the stack. The comparison operator > pops two values off the stack in making the comparison.

dc -e "1 2 4 8 16 100 0d[+2z>a]salaxp"

And the result is 131.

Summing all dc expressions as lines from file

A bare number is a valid dc expression, so this can be used to sum a file where each line contains a single number.
This is again implemented with a macro stored in register a which conditionally calls itself, performing an addition each time, until only one value remains on the stack.

cat file | dc -e "0d[?+2z>a]salaxp"

The ? operator reads another command from the input stream. If the input line contains a decimal number, that value is added to the stack. When the input file reaches end of file, the command is null, and no value is added to the stack.

{ echo "5"; echo "7"; } | dc -e "0d[?+2z>a]salaxp"

And the result is 12.

The input lines can also be complex dc commands.

{ echo "3 5 *"; echo "4 3 *"; echo "5dd++"; } | dc -e "0d[?+2z>a]salaxp"

And the result is 42.

Note that since dc supports arbitrary precision, there is no concern about numeric overflow or loss of precision, no matter how many lines the input stream contains, unlike a similarly concise solution in AWK. Downsides of this solution are: the loop stops on encountering a blank line in the input stream (technically, any input line which does not add at least one numeric value to the stack); and, for handling negative numbers, leading instances of '-' to denote a negative sign must be changed to '_' in the input stream, because of dc's nonstandard negative sign. The ? operator in dc does not provide a clean way to discern reading a blank line from reading end of file.

Unit conversion

As an example of a relatively simple program in dc, this command (in 1 line):

dc -e '[[Enter a number (metres), or 0 to exit]psj]sh[q]sz[lhx?d0=z10k39.370079*.5+0k12~1/rn[ feet ]Pn[ inches]P10Pdx]dx'

converts distances from metres to feet and inches; the bulk of it is concerned with prompting for input, printing output in a suitable format and looping around to convert another number.

Greatest common divisor

As an example, here is an implementation of the Euclidean algorithm to find the GCD:

dc -e '??[dSarLa%d0<a]dsax+p'                       # shortest
dc -e '[a=]P?[b=]P?[dSarLa%d0<a]dsax+[GCD:]Pp'      # easier-to-read version

Factorial

Computing the factorial of an input value,

dc -e '?[q]sQ[d1=Qd1-lFx*]dsFxp'

Quines in dc

There also exist quines in the programming language dc: programs that produce their own source code as output.

dc -e '[91Pn[dx]93Pn]dx'
dc -e '[91PP93P[dx]P]dx'

Printing all prime numbers

echo '2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x' | dc

This program was written by Michel Charpentier. It outputs the sequence of prime numbers. Note that it can be shortened by one symbol, which appears to be the minimal solution.

echo '2p3p[dl!d2+s!%0=@l!l^!<#]s#[0*ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x' | dc

Integer factorization

dc -e '[n=]P?[p]s2[lip/dli%0=1dvsr]s12sid2%0=13sidvsr[dli%0=1lrli2+dsi!>.]ds.xd1<2'

This program was also written by Michel Charpentier. There is a shorter solution:

dc -e "[n=]P?[lfp/dlf%0=Fdvsr]sF[dsf]sJdvsr2sf[dlf%0=Flfdd2%+1+sflr<Jd1<M]dsMx"

and a faster one (try it with the 200-bit number 2^200-1, input as 2 200^1-):

dc -e "[n=]P?[lfp/dlf% 0=Fdvsr]sFdvsr2sfd2%0=F3sfd3%0=F5sf[dlf%0=Flfd4+sflr>M]sN[dlf%0=Flfd2+sflr>N]dsMx[p]sMd1<M"

Note that the latter can be sped up even more, if the access to a constant is replaced by a register access.

dc -e "[n=]P?[lfp/dlf%l0=Fdvsr]sF2s2dvsr2sf4s4d2%0=F3sfd3%0=F5sf[dlf%l0=Flfdl4+sflr>M]sN[dlf%l0=Flfdl2+sflr>N]dsMx[p]sMd1<M"

Diffie–Hellman key exchange

A more complex example of dc use embedded in a Perl script performs a Diffie–Hellman key exchange.
This was popular as a signature block among cypherpunks during the ITAR debates, where the short script could be run with only Perl and dc, ubiquitous programs on Unix-like operating systems:

#!/usr/bin/perl -- -export-a-crypto-system-sig Diffie-Hellman-2-lines
($g, $e, $m) = @ARGV, $m || die "$0 gen exp mod\n";
print `echo "16dio1[d2%Sa2/d0<X+d*La1=z\U$m%0]SX$e"[$g*]\EszlXx+p | dc`

A commented version is slightly easier to understand and shows how to use loops, conditionals, and the q command to return from a macro. With the GNU version of dc, the | command can be used to do arbitrary precision modular exponentiation without needing to write the X function.

#!/usr/bin/perl
my ($g, $e, $m) = map { "\U$_" } @ARGV;
die "$0 gen exp mod\n" unless $m;
print `echo $g $e $m | dc -e '
# Hex input and output
16dio
# Read m, e and g from stdin on one line
?SmSeSg
# Function z: return g * top of stack
[lg*]sz
# Function Q: remove the top of the stack and return 1
[sb1q]sQ
# Function X(e): recursively compute g^e % m
# It is the same as Sm^Lm%, but handles arbitrarily large exponents.
# Stack at entry: e
# Stack at exit: g^e % m
# Since e may be very large, this uses the property that g^e % m ==
#   if( e == 0 )
#     return 1
#   x = (g^(e/2)) ^ 2
#   if( e % 2 == 1 )
#     x *= g
#   return x %
[
  d 0=Q     # return 1 if e==0 (otherwise, stack: e)
  d 2% Sa   # Store e%2 in a (stack: e)
  2/        # compute e/2
  lXx       # call X(e/2)
  d*        # compute X(e/2)^2
  La1=z     # multiply by g if e%2==1
  lm %      # compute (g^e) % m
] SX
le          # Load e from the register
lXx         # compute g^e % m
p           # Print the result
'`;

See also

bc (programming language)
Calculator input methods
HP calculators
Stack machine

References

External links

Package dc in Debian GNU/Linux repositories
Native Windows port of bc, which includes dc.
dc embedded in a webpage

Cross-platform software
Unix software
Software calculators
Free mathematics software
Numerical programming languages
Stack-oriented programming languages
Plan 9 commands
22210
https://en.wikipedia.org/wiki/One-time%20pad
One-time pad
In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is no smaller than the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition. The resulting ciphertext will be impossible to decrypt or break if the following four conditions are met: The key must be at least as long as the plaintext. The key must be random (uniformly distributed in the set of all possible keys and independent of the plaintext), entirely sampled from a non-algorithmic, chaotic source such as a hardware random number generator. It is not sufficient for OTP keys to pass statistical randomness tests as such tests cannot measure entropy, and the number of bits of entropy must be at least equal to the number of bits in the plaintext. For example, using cryptographic hashes or mathematical functions (such as logarithm or square root) to generate keys from fewer bits of entropy would break the uniform distribution requirement, and therefore would not provide perfect secrecy. The key must never be reused in whole or in part. The key must be kept completely secret by the communicating parties. It has also been mathematically proven that any cipher with the property of perfect secrecy must use keys with effectively the same requirements as OTP keys. Digital versions of one-time pad ciphers have been used by nations for critical diplomatic and military communication, but the problems of secure key distribution make them impractical for most applications. First described by Frank Miller in 1882, the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad. Derived from his Vernam cipher, the system was a cipher that combined a message with a key read from a punched tape. In its original form, Vernam's system was vulnerable because the key tape was a loop, which was reused whenever the loop made a full cycle. One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible. The "pad" part of the name comes from early implementations where the key material was distributed as a pad of paper, allowing the current top sheet to be torn off and destroyed after use. For concealment the pad was sometimes so small that a powerful magnifying glass was required to use it. The KGB used pads of such size that they could fit in the palm of a hand, or in a walnut shell. To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could easily be burned after use. There is some ambiguity to the term "Vernam cipher" because some sources use "Vernam cipher" and "one-time pad" synonymously, while others refer to any additive stream cipher as a "Vernam cipher", including those based on a cryptographically secure pseudorandom number generator (CSPRNG). History Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy. The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented and later patented in 1919 () a cipher based on teleprinter technology. 
Each character in a message was electrically combined with a character on a punched paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system. The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. For added security, secret numbers could be combined with (usually modular addition) each code group before transmission, with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923. A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park. The final discovery was made by information theorist Claude Shannon in the 1940s who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949. At the same time, Soviet information theorist Vladimir Kotelnikov had independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified. Example Suppose Alice wishes to send the message hello to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message". The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., a is 0, b is 1, and so on.) In this example, the technique is to combine the key and the message using modular addition (essentially the standard Vigenère cipher). The numerical values of corresponding message and key letters are added together, modulo 26. 
So, if key material begins with XMCKL and the message is hello, then the coding would be done as follows:

       h        e        l        l        o      message
   7 (h)    4 (e)   11 (l)   11 (l)   14 (o)      message
+ 23 (X)   12 (M)    2 (C)   10 (K)   11 (L)      key
= 30       16       13       21       25          message + key
=  4 (E)   16 (Q)   13 (N)   21 (V)   25 (Z)      (message + key) mod 26
       E        Q        N        V        Z      → ciphertext

If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A. The ciphertext to be sent to Bob is thus EQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:

        E        Q        N        V        Z     ciphertext
    4 (E)   16 (Q)   13 (N)   21 (V)   25 (Z)     ciphertext
− 23 (X)   12 (M)    2 (C)   10 (K)   11 (L)      key
= −19       4       11       11       14          ciphertext – key
=   7 (h)   4 (e)   11 (l)   11 (l)   14 (o)      ciphertext – key (mod 26)
        h        e        l        l        o     → message

Similar to the above, if a number is negative, then 26 is added to make the number zero or higher. Thus Bob recovers Alice's plaintext, the message hello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of flash paper, paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash. The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The exclusive or (XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key.

Attempt at cryptanalysis

To continue the example from above, suppose Eve intercepts Alice's ciphertext: EQNVZ. If Eve had infinite time, she would find that the key XMCKL would produce the plaintext hello, but she would also find that the key TQURI would produce the plaintext later, an equally plausible message:

    4 (E)   16 (Q)   13 (N)   21 (V)   25 (Z)     ciphertext
− 19 (T)   16 (Q)   20 (U)   17 (R)    8 (I)      possible key
= −15       0       −7        4       17          ciphertext − key
= 11 (l)    0 (a)   19 (t)    4 (e)   17 (r)      ciphertext − key (mod 26)

In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext. If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some random incorrect key also producing two sensible plaintexts are very slim).
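The letter arithmetic above is easy to carry out in software. The following Python sketch (an illustration only, not taken from any source used by this article; the function name is invented) reproduces the hello/XMCKL example, adding the key modulo 26 to encrypt and subtracting it modulo 26 to decrypt:

def otp_letters(text, key, decrypt=False):
    # Combine each letter of text with the corresponding key letter, modulo 26.
    # Letters are mapped to 0-25; the result is returned in upper case.
    sign = -1 if decrypt else 1
    out = []
    for t, k in zip(text.upper(), key.upper()):
        value = (ord(t) - ord('A') + sign * (ord(k) - ord('A'))) % 26
        out.append(chr(value + ord('A')))
    return ''.join(out)

print(otp_letters('hello', 'XMCKL'))                 # EQNVZ
print(otp_letters('EQNVZ', 'XMCKL', decrypt=True))   # HELLO
print(otp_letters('EQNVZ', 'TQURI', decrypt=True))   # LATER

The last line illustrates Eve's dilemma from the cryptanalysis attempt: the same ciphertext decrypts to a different, equally plausible message under a different candidate key.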
Perfect secrecy

One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell System Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power. Claude Shannon proved, using information theoretic considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext. Mathematically, this is expressed as Η(M) = Η(M|C), where Η(M) is the information entropy of the plaintext and Η(M|C) is the conditional entropy of the plaintext given the ciphertext C. (Here, Η is the capital Greek letter eta.) This implies that for every message M and corresponding ciphertext C, there must be at least one key K that binds them as a one-time pad. Mathematically speaking, this means |K| ≥ |C| ≥ |M| must hold, where |K|, |C| and |M| denote the quantities of possible keys, ciphers and messages, respectively. In other words, to be able to go from any plaintext in the message space M to any cipher in the cipher space C (via encryption) and from any cipher in cipher-space C to a plain text in message space M (decryption), it would require at least |M| keys (with all keys used with equal probability of 1/|M| to ensure perfect secrecy). Another way of stating perfect secrecy is that for all messages m1, m2 in message space M, and for all ciphers c in cipher space C, we have Pr[E_k(m1) = c] = Pr[E_k(m2) = c], where Pr represents the probabilities, taken over a choice of the key k in key space K over the coin tosses of a probabilistic encryption algorithm, E.

Perfect secrecy is a strong notion of cryptanalytic difficulty. Conventional symmetric encryption algorithms use complex patterns of substitution and transpositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or even partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization or the discrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack. Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message.
The parts of the plaintext that are known will reveal only the parts of the key corresponding to them, and they correspond on a strictly one-to-one basis; a uniformly random key's bits will be independent. Quantum computers have been shown by Peter Shor and others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer. One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker. Quantum cryptography and post-quantum cryptography involve studying the impact of quantum computers on information security.

Problems

Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires:

Truly random, as opposed to pseudorandom, one-time pad values, which is a non-trivial requirement. Random number generation in computers is often difficult, and pseudorandom number generators are often used for their speed and usefulness for most applications. True random number generators exist, but are typically slower and more specialized.
Secure generation and exchange of the one-time pad values, which must be at least as long as the message. This is important because the security of the one-time pad depends on the security of the one-time pad exchange. If an attacker is able to intercept the one-time pad value, they can decrypt messages sent using the one-time pad.
Careful treatment to make sure that the one-time pad values continue to remain secret and are disposed of correctly, preventing any reuse (partially or entirely) — hence "one-time". Problems with data remanence can make it difficult to completely erase computer media.

One-time pads solve few current practical problems in cryptography. High quality ciphers are widely available and their security is not currently considered a major worry. Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller. Additionally, public key cryptography overcomes the problem of key distribution.

True randomness

High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuring radioactive emissions.

In particular, one-time use is absolutely necessary. If a one-time pad is used just twice, simple mathematical operations can reduce it to a running key cipher. For example, if p1 and p2 represent two distinct plaintext messages and they are each encrypted by a common key k, then the respective ciphertexts are given by:

c1 = p1 ⊕ k
c2 = p2 ⊕ k

where ⊕ means XOR. If an attacker were to have both ciphertexts c1 and c2, then simply taking the XOR of c1 and c2 yields the XOR of the two plaintexts, p1 ⊕ p2. (This is because taking the XOR of the common key k with itself yields a constant bitstream of zeros.) p1 ⊕ p2 is then the equivalent of a running key cipher.
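A few lines of Python (a sketch for illustration only; the messages and variable names are invented) make the cancellation concrete when a pad is reused:

import os

def xor_bytes(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn"
p2 = b"retreat at two"
k = os.urandom(len(p1))   # stand-in pad; a real pad needs true randomness

c1 = xor_bytes(p1, k)
c2 = xor_bytes(p2, k)     # reusing k is the fatal mistake

assert xor_bytes(c1, c2) == xor_bytes(p1, p2)   # the key has dropped out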
If both plaintexts are in a natural language (e.g., English or Russian), each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project.

Key distribution

Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using one-time padding, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely). However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the messages' sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem, assuming fault-tolerant quantum computers.

Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk. The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm² in size, leaves over 4 megabits of data on each particle. In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem.

The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent. Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (because of possible data remanence).

Authentication

As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is canceled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different from malleability where the plaintext is not necessarily known.
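This substitution attack takes only a few lines of code. The Python fragment below is a sketch (the messages are invented, and a random pad is generated only so the example runs) of how an attacker who knows the plaintext of an intercepted XOR-based one-time-pad message can swap in a forgery of the same length without ever recovering the other pad bytes:

import os

def xor_bytes(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

known_plain  = b"meet jane and me tomorrow at three thirty pm"
forged_plain = b"three thirty meeting is canceled, stay home!"   # same length

pad = os.urandom(len(known_plain))          # unknown to the attacker
intercepted = xor_bytes(known_plain, pad)   # what the attacker sees

# XOR out the known plaintext and XOR in the forgery; the pad never has to be learned.
forged = xor_bytes(intercepted, xor_bytes(known_plain, forged_plain))
assert xor_bytes(forged, pad) == forged_plain   # the recipient decrypts the forgery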
Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar to attacks on stream ciphers. Standard techniques to prevent this, such as the use of a message authentication code, can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer.

Common implementation errors

Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information theoretical security in one or more ways:

The pad is generated via some algorithm that expands one or more small values into a longer "one-time-pad". This applies equally to all algorithms, from insecure basic mathematical operations like square root decimal expansions, to complex, cryptographically secure pseudorandom number generators (CSPRNGs). None of these implementations are one-time-pads, but stream ciphers by definition. All one-time pads must be generated by a non-algorithmic process, e.g. by a hardware random number generator.
The pad is exchanged using non-information-theoretically secure methods. If the one-time-pad is encrypted with a non-information theoretically secure algorithm for delivery, the security of the cryptosystem is only as secure as the insecure delivery mechanism. A common flawed delivery mechanism for one-time-pad is a standard hybrid cryptosystem that relies on symmetric key cryptography for pad encryption, and asymmetric cryptography for symmetric key delivery. Common secure methods for one-time pad delivery are quantum key distribution, a sneakernet or courier service, or a dead drop.
The implementation does not feature an unconditionally secure authentication mechanism such as a One-time MAC.
The pad is reused (exploited during the Venona project, for example).
The pad is not destroyed immediately after use.

Uses

Applicability

Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded in mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion. The one-time-pad is the optimum cryptosystem with theoretically perfect secrecy. The one-time-pad is one of the most practical methods of encryption where one or both parties must do all work by hand, without the aid of a computer.
This made it important in the pre-computer era, and it could conceivably still be useful in situations where possession of a computer is illegal or incriminating or where trustworthy computers are not available. One-time pads are practical in situations where two parties in a secure environment must be able to depart from one another and communicate from two separate secure environments with perfect secrecy. The one-time-pad can be used in superencryption. The algorithm most commonly associated with quantum key distribution is the one-time pad. The one-time pad is mimicked by stream ciphers. Numbers stations often send messages encrypted with a one-time pad. Historical uses One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment. The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appear to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil and paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession. A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war. A few British one-time tape cipher machines include the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one time tape that East Germany, Russia, and even Cuba used to send encrypted messages to their agents. The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used. The hotline between Moscow and Washington D.C., established in 1963 after the 1962 Cuban Missile Crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other. U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications. During the 1983 Invasion of Grenada, U.S. forces found a supply of pairs of one-time pad books in a Cuban warehouse. 
Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives as part of Operation Vula, a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian airline stewardess acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem. A related notion is the one-time code—a signal, used only once; e.g., "Alpha" for "mission completed", "Bravo" for "mission failed" or even "Torch" for "Allied invasion of French Northern Africa" cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches) are not a cryptographic one-time pad in any significant sense. NSA At least into the 1970s, the U.S. National Security Agency (NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what NSA called "pro forma" systems, where “the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message.” Examples included nuclear launch messages and radio direction finding reports (COMUS). General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece of carbon paper with the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (See Commons:Category:NSA one-time pads for illustrations.) The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field". During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting against traffic analysis. In 1955, NSA produced some 1,660,000 rolls of one time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. 
By 1972, only 55,000 rolls were produced, as one-time tapes were replaced by rotor machines such as SIGTOT, and later by electronic devices based on shift registers. The NSA describes one-time tape systems like 5-UCO and SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher based KW-26 in 1957. Exploits While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis: In 1944–1945, the U.S. Army's Signals Intelligence Service was able to solve a one-time pad system used by the German Foreign Office for its high-level traffic, codenamed GEE. GEE was insecure because the pads were not sufficiently random—the machine used to generate the pads produced predictable output. In 1945, the US discovered that Canberra–Moscow messages were being encrypted first using a code-book and then using a one-time pad. However, the one-time pad used was the same one used by Moscow for Washington, D.C.–Moscow messages. Combined with the fact that some of the Canberra–Moscow messages included known British government documents, this allowed some of the encrypted messages to be broken. One-time pads were employed by Soviet espionage agencies for covert communications with agents and agent controllers. Analysis has shown that these pads were generated by typists using actual typewriters. This method is not truly random, as it makes the pads more likely to contain certain convenient key sequences more frequently. This proved to be generally effective because the pads were still somewhat unpredictable because the typists were not following rules, and different typists produced different patterns of pads. Without copies of the key material used, only some defect in the generation method or reuse of keys offered much hope of cryptanalysis. Beginning in the late 1940s, US and UK intelligence agencies were able to break some of the Soviet one-time pad traffic to Moscow during WWII as a result of errors made in generating and distributing the key material. One suggestion is that Moscow Centre personnel were somewhat rushed by the presence of German troops just outside Moscow in late 1941 and early 1942, and they produced more than one copy of the same key material during that period. This decades-long effort was finally codenamed VENONA (BRIDE had been an earlier name); it produced a considerable amount of information. Even so, only a small percentage of the intercepted messages were either fully or partially decrypted (a few thousand out of several hundred thousand). The one-time tape systems used by the U.S. employed electromechanical mixers to combine bits from the message and the one-time tape. These mixers radiated considerable electromagnetic energy that could be picked up by an adversary at some distance from the encryption equipment. This effect, first noticed by Bell Labs during World War II, could allow interception and recovery of the plaintext of messages being transmitted, a vulnerability code-named Tempest. See also Agrippa (A Book of the Dead) Information theoretic security Numbers station One-time password Session key Steganography Tradecraft Unicity distance Notes References Further reading External links Detailed description and history of One-time Pad with examples and images on Cipher Machines and Cryptology The FreeS/WAN glossary entry with a discussion of OTP weaknesses Information-theoretically secure algorithms Stream ciphers Cryptography 1882 introductions
12251814
https://en.wikipedia.org/wiki/ISO/IEC%2027000-series
ISO/IEC 27000-series
The ISO/IEC 27000-series (also known as the 'ISMS Family of Standards' or 'ISO27K' for short) comprises information security standards published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The series provides best practice recommendations on information security management—the management of information risks through information security controls—within the context of an overall Information security management system (ISMS), similar in design to management systems for quality assurance (the ISO 9000 series), environmental protection (the ISO 14000 series) and other management systems. The series is deliberately broad in scope, covering more than just privacy, confidentiality and IT/technical/cybersecurity issues. It is applicable to organizations of all shapes and sizes. All organizations are encouraged to assess their information risks, then treat them (typically using information security controls) according to their needs, using the guidance and suggestions where relevant. Given the dynamic nature of information risk and security, the ISMS concept incorporates continuous feedback and improvement activities to respond to changes in the threats, vulnerabilities or impacts of incidents. The standards are the product of ISO/IEC JTC1 (Joint Technical Committee 1) SC27 (Subcommittee 27), an international body that meets in person twice a year. The ISO/IEC standards are sold directly by ISO, mostly in English, French and Chinese. Sales outlets associated with various national standards bodies also sell directly translated versions in other languages.

Early history

Many people and organisations are involved in the development and maintenance of the ISO27K standards. The first standard in this series was ISO/IEC 17799:2000; this was a fast-tracking of the existing British standard BS 7799 part 1:1999. The initial release of BS 7799 was based, in part, on an information security policy manual developed by the Royal Dutch/Shell Group in the late 1980s and early 1990s. In 1993, what was then the Department of Trade and Industry (United Kingdom) convened a team to review existing practice in information security, with the goal of producing a standards document. In 1995, the BSI Group published the first version of BS 7799. One of the principal authors of BS 7799 recalls that, at the beginning of 1993, "The DTI decided to quickly assemble a group of industry representatives from seven different sectors: Shell ([David Lacey] and Les Riley), BOC Group (Neil Twist), BT (Dennis Willets), Marks & Spencer (Steve Jones), Midland Bank (Richard Hackworth), Nationwide (John Bowles) and Unilever (Rolf Moulton)." David Lacey credits Donn B. Parker as having the "original idea of establishing a set of information security controls", and with producing a document containing a "collection of around a hundred baseline controls" by the late 1980s for the I-4 Information Security circle, which he conceived and founded.

Published standards

The published ISO27K standards related to "information technology - security techniques" are:

ISO/IEC 27000 — Information security management systems — Overview and vocabulary
ISO/IEC 27001 — Information technology — Security Techniques — Information security management systems — Requirements. The 2013 release of the standard specifies an information security management system in the same formalized, structured and succinct manner as other ISO standards specify other kinds of management systems.
ISO/IEC 27002 — Code of practice for information security controls (essentially a detailed catalog of information security controls that might be managed through the ISMS)
ISO/IEC 27003 — Information security management system implementation guidance
ISO/IEC 27004 — Information security management — Monitoring, measurement, analysis and evaluation
ISO/IEC 27005 — Information security risk management
ISO/IEC 27006 — Requirements for bodies providing audit and certification of information security management systems
ISO/IEC 27007 — Guidelines for information security management systems auditing (focused on auditing the management system)
ISO/IEC TR 27008 — Guidance for auditors on ISMS controls (focused on auditing the information security controls)
ISO/IEC 27009 — Essentially an internal document for the committee developing sector/industry-specific variants or implementation guidelines for the ISO27K standards
ISO/IEC 27010 — Information security management for inter-sector and inter-organizational communications
ISO/IEC 27011 — Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
ISO/IEC 27013 — Guideline on the integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1
ISO/IEC 27014 — Information security governance. (Mahncke assessed this standard in the context of Australian e-health.)
ISO/IEC TR 27015 — Information security management guidelines for financial services (now withdrawn)
ISO/IEC TR 27016 — information security economics
ISO/IEC 27017 — Code of practice for information security controls based on ISO/IEC 27002 for cloud services
ISO/IEC 27018 — Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors
ISO/IEC 27019 — Information security for process control in the energy industry
ISO/IEC 27021 — Competence requirements for information security management systems professionals
ISO/IEC TS 27022 — Guidance on information security management system processes – under development
ISO/IEC TR 27023 — Mapping the revised editions of ISO/IEC 27001 and ISO/IEC 27002
ISO/IEC 27031 — Guidelines for information and communication technology readiness for business continuity
ISO/IEC 27032 — Guideline for cybersecurity — IT network security
ISO/IEC 27033-1 — Network security – Part 1: Overview and concepts
ISO/IEC 27033-2 — Network security – Part 2: Guidelines for the design and implementation of network security
ISO/IEC 27033-3 — Network security – Part 3: Reference networking scenarios — Threats, design techniques and control issues
ISO/IEC 27033-4 — Network security – Part 4: Securing communications between networks using security gateways
ISO/IEC 27033-5 — Network security – Part 5: Securing communications across networks using Virtual Private Networks (VPNs)
ISO/IEC 27033-6 — Network security – Part 6: Securing wireless IP network access
ISO/IEC 27034-1 — Application security – Part 1: Guideline for application security
ISO/IEC 27034-2 — Application security – Part 2: Organization normative framework
ISO/IEC 27034-3 — Application security – Part 3: Application security management process
ISO/IEC 27034-4 — Application security – Part 4: Validation and verification (under development)
ISO/IEC 27034-5 — Application security – Part 5: Protocols and application security controls data structure
ISO/IEC 27034-5-1 — Application security — Part 5-1: Protocols and application security controls data structure, XML schemas
ISO/IEC 27034-6 — Application security – Part 6: Case studies
ISO/IEC 27034-7 — Application security – Part 7: Assurance prediction framework
ISO/IEC 27035-1 — Information security incident management – Part 1: Principles of incident management
ISO/IEC 27035-2 — Information security incident management – Part 2: Guidelines to plan and prepare for incident response
ISO/IEC 27035-3 — Information security incident management – Part 3: Guidelines for ICT incident response operations
ISO/IEC 27035-4 — Information security incident management – Part 4: Coordination (under development)
ISO/IEC 27036-1 — Information security for supplier relationships – Part 1: Overview and concepts
ISO/IEC 27036-2 — Information security for supplier relationships – Part 2: Requirements
ISO/IEC 27036-3 — Information security for supplier relationships – Part 3: Guidelines for information and communication technology supply chain security
ISO/IEC 27036-4 — Information security for supplier relationships – Part 4: Guidelines for security of cloud services
ISO/IEC 27037 — Guidelines for identification, collection, acquisition and preservation of digital evidence
ISO/IEC 27038 — Specification for Digital redaction on Digital Documents
ISO/IEC 27039 — Intrusion prevention
ISO/IEC 27040 — Storage security
ISO/IEC 27041 — Investigation assurance
ISO/IEC 27042 — Analyzing digital evidence
ISO/IEC 27043 — Incident investigation
ISO/IEC 27050-1 — Electronic discovery — Part 1: Overview and concepts
ISO/IEC 27050-2 — Electronic discovery — Part 2: Guidance for governance and management of electronic discovery
ISO/IEC 27050-3 — Electronic discovery — Part 3: Code of practice for electronic discovery
ISO/IEC TS 27110 — Information technology, cybersecurity and privacy protection — Cybersecurity framework development guidelines
ISO/IEC 27701 — Information technology — Security Techniques — Information security management systems — Privacy Information Management System (PIMS).
ISO 27799 — Information security management in health using ISO/IEC 27002 (guides health industry organizations on how to protect personal health information using ISO/IEC 27002)

In preparation

Further ISO27K standards are in preparation covering aspects such as digital forensics and cybersecurity, while the released ISO27K standards are routinely reviewed and updated on a roughly five-year cycle.

See also

ISO/IEC JTC 1/SC 27 - IT Security techniques
BS 7799, the original British Standard from which ISO/IEC 17799, ISO/IEC 27002 and ISO/IEC 27001 were derived
Document management system
Sarbanes–Oxley Act
Standard of Good Practice published by the Information Security Forum

References

Information technology management
27000
47665811
https://en.wikipedia.org/wiki/Divinity%3A%20Original%20Sin%20II
Divinity: Original Sin II
Divinity: Original Sin II is a role-playing video game developed and published by Larian Studios. The sequel to 2014's Divinity: Original Sin, it was released for Microsoft Windows in September 2017, for PlayStation 4 and Xbox One in August 2018, for macOS in January 2019, Nintendo Switch in September 2019, and iPadOS in May 2021. The game was a critical and commercial success, with it selling over a million copies in two months and being cited as one of the best role-playing games of all time, with praise given to its combat complexity and interactivity. Gameplay As with Divinity: Original Sin, players can play solo or with up to three others in their party. Several pre-made characters with backstories are available to the player. Players are also able to create a custom character and choose their stats, race, gender, and origin story at the start of the game. Unlike the original game, players are also given the possibility to create an undead character of one of the available races. They can recruit up to three companions to assist them although mods in the Steam Workshop exist which increase the maximum number of party companions. All companions are fully playable, and will potentially have different interactions with the environment and NPCs than the player character. Players are able to split up and individually control their party members, leading to potentially complex battle tactics and role-playing opportunities. The game features both online and local multiplayer modes, both competitive and cooperative. A skill crafting system allows players to mix and change their skills. The game also features a competitive multiplayer mode, where players are divided into two different teams and fight against each other in an arena map. Plot The game is set on the fantasy world of Rivellon, centuries after Divinity: Original Sin. Living beings on Rivellon have a form of energy known as Source, and individuals called Sourcerers can manipulate Source to cast spells or enhance their combat abilities. The Seven Gods of Rivellon had given up a portion of their collective Source power and infused it into a person, Lucian, known as the Divine, who used his powers to hold back the Void. However, Lucian died before the start of the game, which weakened the Veil between the Void and Rivellon, and monstrous creatures of the Void, guided by the God King, their dark deity, have begun to invade Rivellon. These Voidwoken are drawn to the use of Source, and so an organization called the Divine Order is persecuting Sourcerers. At the start of the game, the player character, a Sourcerer, is captured by the Divine Order and sent to an island prison known as Fort Joy. On the way there, a gigantic Kraken Voidwoken attacks and sinks the ship, but the player character is saved by a mysterious voice, who calls the player "Godwoken". On Fort Joy, the Godwoken witnesses the brutal regime of the Divine Order, led by Lucian's son Alexandar and his enforcer Dallis. Sourcerers at Fort Joy are "purged" of their Source, turning them into mindless husks. The Godwoken also learns of a tyrannical Sourcerer king called Braccus Rex, who had died around 1000 years ago. The Godwoken escapes the fortress and visits the Hall of Echoes, the realm of the Seven Gods, where they encounter one of the Seven. The God explains that they had rescued the Godwoken on the ship, and that the weakened Veil has allowed the Void to enter Rivellon, draining the Gods' powers. The God urges the Godwoken to become the next Divine and hold back the Void. 
The Godwoken then escapes from the island. The Godwoken sails to the island of Reaper's Coast. There, they expand their Source powers. Encountering their God again, they are directed to the Well of Ascension, where they can absorb enough Source to become Divine. The Godwoken also learns that Dallis has excavated the Aeteran, an artifact able to purge Source infinitely. Additionally, the Godwoken meets Aeterna, an immortal being who claims to be a member of a race called Eternals, the original inhabitants of Rivellon. She explains that the Seven Gods were Eternals who craved power and betrayed the other Eternals, banishing them to the Void. The Seven then created the mortal races of Rivellon and maintain their own power by draining Source from them. The Godwoken sails to the Nameless Isle where the Well of Ascension is located. There, they learn that the Eternals in the Void have become the Voidwoken, and the Eternals' former king has become the God King. The God King and the Voidwoken intend to return to Rivellon and reclaim it as theirs. The Godwoken reaches the Well but before they can become Divine, Dallis appears and destroys the Well with the Aeteran. The Godwoken's failure enrages their God, who attacks them, but the Godwoken defeats them. The Godwoken pursues Dallis to the Tomb of Lucian, in the city of Arx, and finds Lucian alive within. Lucian reveals that he faked his death and hid in his tomb and that he, not the Void, has been draining Source from the Seven. Lucian intends to purge all Source from Rivellon and use it to permanently seal the Veil, to bring peace to the world. Dallis, secretly an Eternal, has been aiding Lucian. To this end, she has resurrected Braccus Rex, who has been serving Dallis as Vredeman. Braccus Rex breaks free of Dallis's control and summons the Kraken to attack the Godwoken, Lucian, and Dallis. After Braccus Rex is defeated, the ending varies depending on player choice: the Godwoken can become the next Divine, purge all Source from Rivellon, release the Source and the powers of Divinity to the world, or allow the God King to return to Rivellon, restoring Eternal rule. Development The game was first announced on 12 August 2015. It was announced that the game would launch on Kickstarter on 26 August. The game reached its $500,000 goal on Kickstarter in less than 12 hours. Some of the stretch goals were reached before they were even announced. In the end, all of the available stretch goals were met, with over 2 million dollars collected in total. Larian announced that the company decided to head to Kickstarter again because they wanted the opinions from the community when developing the game, as well as allowing them to further expand the vision they originally had for this game. The game's music was composed by Borislav Slavov, who replaced former series composer, Kirill Pokrovsky, who died in 2015. The game was released for early access for Microsoft Windows on 15 September 2016, and was fully released on 14 September 2017. Despite a power outage in Ghent, the location of Larian's development studio, on the day of launch, the game was successfully released and had a concurrent player count of 75,000 within a week, becoming one of the most played games on Steam at the time. In addition to a free "enhanced edition" update for owners of the original game, it was also released on PlayStation 4 and Xbox One by Bandai Namco Entertainment on 31 August 2018. It was also released for macOS on 31 January 2019, and for the Nintendo Switch on 4 September 2019. 
Reception Divinity: Original Sin II received "universal acclaim", according to review aggregator Metacritic. Multiple critics and publications considered the game to be one of the best role-playing games (RPGs) of all time. Rick Lane of Eurogamer considered it a "masterpiece", thinking it would be many years before he could play another RPG that was even close to being "that rich with choice and charisma". Adam Smith of Rock, Paper, Shotgun thought that few games allowed players to take part in better tales than Original Sin II. Leif Johnson of IGN highly praised the stories, quests, tactical combat, and replayability, calling it one of the all-time greats of the RPG genre. GameSpot gave it a perfect 10/10 score, becoming only the 14th game in the publication's history to achieve that. Mike Williams of US Gamer called it the "pinnacle" of the computer role-playing game (CRPG) genre, praising its characters, role-playing options, environments, and combat. Janine Hawkins of Polygon was less positive than most, calling it "stunningly ambitious", but that it failed to "pull all its pieces together". A month after release, the game sold over 700,000 copies, with over a million sold by November 2017. The game was nominated for "Best Role-Playing Game" at The Game Awards 2017, and for "Best Narrative Design" and "Best Adventure/Role-Playing Game" at the Titanium Awards; it was also nominated for "Game of the Year" and "Best Story", and was a runner-up for best PC game and best RPG at IGN's Best of 2017 Awards. The game also received a nomination for "Best PC Game" at Destructoids Game of the Year Awards 2017. The staff of PC Gamer voted it as their game of the year for 2017, where it was also nominated for the "Best Co-Op Game" award. The staff of GameSpot voted it as their fifth best, while Eurogamer ranked it 11th on their list of the "Top 50 Games of 2017". Readers and staff of Game Informer gave it the "Best PC Exclusive", "Best Turn-Based Combat", and "Best Side-Quests" awards, and also placed it second for the "Best Co-op Multiplayer" award. The game was also nominated for "Role-Playing Game of the Year" at the D.I.C.E. Awards, for "Game Engineering" and "Game, Franchise Role Playing" at the NAVGTR Awards, and for "Best Sound Design for an Indie Game" and "Best Music for an Indie Game" at the Game Audio Network Guild Awards; and won the award for "Multiplayer" at the 14th British Academy Games Awards. It was also nominated for "Music Design" and "Writing or Narrative Design" at the 2018 Develop Awards. The PlayStation 4 and Xbox One versions were nominated for "Best RPG" at the 2018 Game Critics Awards, and won the award for "Best Role-Playing Game" at Gamescom 2018, whereas its other nomination was for "Best Strategy Game". References External links 2017 video games Crowdfunded video games Fantasy video games Kickstarter-funded video games Role-playing video games Tactical role-playing video games Video game sequels Video games developed in Belgium Video games featuring protagonists of selectable gender Windows games Early access video games Video games with Steam Workshop support Multiplayer and single-player video games PlayStation 4 games Xbox One games MacOS games Nintendo Switch games Bandai Namco games Open-world video games Dark fantasy role-playing video games
2095675
https://en.wikipedia.org/wiki/Atari%20Coldfire%20Project
Atari Coldfire Project
The Atari Coldfire Project (ACP) is a volunteer project that has created a modern Atari ST computer clone called the FireBee. Reason for the project The Atari 16- and 32-bit computer systems (ST, TT and Falcon) were popular home computers in the 1980s and the first half of the 1990s. Atari largely withdrew from the computer market in 1993, and completely in 1995–1996, when Atari merged with JTS and all support for the platform by Atari was dropped. The systems Atari had built fell increasingly behind as newer and faster systems came out. The few dedicated users who were left wanted more processing power to develop more advanced TOS applications, paving the way for a number of "clone" machines, such as the 68040-based Milan and the 68060-based Hades, both of which were considerably more powerful than the 68030-based TT and Falcon and the 68000-based ST/STe. These machines support ISA and PCI buses, which make it possible to use network and graphics cards designed for the PC (something no original Atari machine could do). The machines also support tower cases, making it possible to use internal CD drives. A new clone named Phoenix never made it to market in final form. However, the powerful rev. 6 68060 CPU it would have used did make it into a new accelerator board for the Falcon, the CT60/CT63 series, which meant that, for the first time, the Atari platform had a CPU rated at over 100 MHz. The use of a high-speed bus and PC133 RAM also accounted for a big performance improvement and significantly increased the Falcon's on-board memory limit, from 14 MiB to 512 MiB with a CT60. These systems were not mass-produced and are now hard to find. While the CT60/CT63 requires a Falcon "donor" system and is still not as powerful as the ACP system, the ACP uses a completely new design, moving away from 68K CPUs to the newer ColdFire family, which is more powerful than even the fastest 68K chips while retaining a largely similar (but not completely compatible) instruction set. It also allows for the integration of many I/O ports that are otherwise only available on the Atari platform through extensive hardware modification. Specifications The specifications for the ACP have changed considerably over time, in response to advancing technology and price considerations.
However, according to the former Atari Coldfire Project homepage, the final design includes the following:
Processor: ColdFire MCF5474, 264 MHz, 400 MIPS
RAM: DDR, 512 MB main RAM plus 128 MB video and special RAM on board, speed 1 Gbit/s
Flash: 8 MB on board for operating systems
Atari-compatible interface ports:
TT/Falcon IDE
ST/TT floppy
TT SCSI (but faster)
ACSI
ROM port: 2×2 mm connector
Printer port, parallel
ST/TT serial
MIDI
ST sound, YM2149 over AC'97
ST/TT/Falcon video
Atari keyboard with mouse
Other ports:
Ethernet 10/100, 1 port
USB 2.0 host (ISP1563), 5 ports
CompactFlash, 1 port
SD card, 1 port
AC'97 stereo codec with DMA sound output and sampling input
Sound connectors: line in, line out, mic (mono), DVD/CD internal
New video modes: about 2 megapixels, true color
PS/2 mouse/keyboard port
Battery powered (if desired)
PCI 33 MHz direct edge connector for a passive backplane
Power controller with real-time clock, PIC18F4520
Extension socket: 60-pin (DSPI, serial sync or async, I/O, I²C bus)
Asynchronous static RAM for a DSP or similar
Already planned extensions in the future: Falcon DSP in the FPGA
Format: card
Power consumption of the complete board:
Operating systems On the 8 MB flash ROM, FireBee devices have the following pre-installed software:
BaS (BasicSystem)
FPGA configuration
FireTOS
EmuTOS
A ready-to-use FreeMiNT and GUI environment, set up with applications ported to work on ColdFire, can be ordered on a CompactFlash card with the device. µClinux has also been ported to the FireBee. Compatibility There are different strategies for dealing with the differences between the ColdFire and 68K instruction sets and opcodes (see the sketch below for a toy illustration of the trap-and-emulate idea):
FireTOS includes 68K emulation based on an illegal-instruction exception handler and CF68KLib
The 68Kemu program (based on the Musashi 68k emulator) can be used to run 68K programs with EmuTOS
Most of the operating system and basic desktop software has been ported and built for ColdFire, and the rest is able to run under emulation
Several commercial and shareware Atari software packages have also either been ported to ColdFire or open-sourced so that they could be ported to the FireBee
The FireBee FPGA does not yet provide DSP functionality, which means that any Falcon-specific programs requiring the DSP will not run; many Falcon games and demos use it to play background music. Development tool support
GCC, VBCC and the (Pure C compatible) AHCC C compilers and their libraries have fully working ColdFire support
The Digger disassembler supports ColdFire
RSC editors such as ResourceMaster work on the FireBee
GFA Basic has been modified to support FireTOS
The SDL library and its (Atari-specific) LDG dependency have been ported to ColdFire/FireBee
References External links former website ACP FireBee on YouTube Home computer remakes Atari ST
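The FireTOS approach mentioned above — trap instructions the ColdFire cannot execute and emulate them in software — can be illustrated with a deliberately tiny sketch. This is only a conceptual illustration in Python; it is not FireTOS or CF68KLib code, and the opcode names, registers and handler are invented for the example.

```python
# Toy illustration of trap-and-emulate: a dispatch table plays the role of the
# "native" CPU, and any opcode it does not know triggers a trap that is handled
# by an emulation routine -- much as FireTOS catches illegal-instruction
# exceptions and emulates 68K opcodes the ColdFire core lacks.
# All opcode names and registers here are invented for the example.

class IllegalInstruction(Exception):
    """Stands in for the CPU's illegal-instruction exception."""

NATIVE = {  # opcodes the "hardware" executes directly
    "move": lambda regs, src, dst: regs.update({dst: regs[src]}),
    "add":  lambda regs, src, dst: regs.update({dst: regs[dst] + regs[src]}),
}

def execute_native(regs, op, src, dst):
    if op not in NATIVE:
        raise IllegalInstruction(op)     # the trap
    NATIVE[op](regs, src, dst)

def emulate(regs, op, src, dst):
    if op == "swap":                     # software stands in for the missing opcode
        regs[src], regs[dst] = regs[dst], regs[src]
    else:
        raise IllegalInstruction(op)     # genuinely unknown: give up

def run(program, regs):
    for op, src, dst in program:
        try:
            execute_native(regs, op, src, dst)
        except IllegalInstruction:
            emulate(regs, op, src, dst)
    return regs

print(run([("move", "d0", "d1"), ("add", "d0", "d1"), ("swap", "d0", "d1")],
          {"d0": 2, "d1": 0}))
# {'d0': 4, 'd1': 2}
```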
12008039
https://en.wikipedia.org/wiki/Visitor%20management
Visitor management
Visitor management refers to tracking the usage of a public building or site. By gathering this information, a visitor management system can record the usage of facilities by specific visitors and provide documentation of visitors' whereabouts. Proponents of an information-rich visitor management system point to increased security, particularly in schools, as one benefit. As more parents demand action from schools to protect children from sexual predators, some school districts are turning to modern visitor management systems that not only track a visitor's stay, but also check the visitor's information against national and local criminal databases. Visitor management technologies Computer visitor management systems Basic computer or electronic visitor management systems use a computer network to monitor and record visitor information, and are commonly hosted on an iPad or a touchless kiosk. An electronic visitor management system improves upon most of the negative points of a pen-and-paper system. Visitor ID can be checked against national and local databases, as well as in-house databases, for potential security problems. Visitor management software as a service Another alternative to visitor management software is an online, web-based visitor management system offered as a service. SaaS visitor management software for schools allows administrators to screen visitors upon entrance, often checking for sex offender status, and to restrict access by unauthorized entrants. SaaS visitor management software for the real estate industry allows landlords and managers to remotely control and monitor access rights without the need to pass physical keys and keycards to new tenants. SaaS visitor management software for commercial offices allows facilities managers to automate their building's reception area; advocates of this type of system claim a variety of benefits, including both security and privacy. Many modern SaaS visitor management systems are tablet-based apps and thin-client solutions operating as software as a service in the cloud. Visitor management systems on smart phones Smartphone-based visitor management systems work similarly to web-based systems, but hosts can get real-time notifications or alerts on their device. Hosts can allow or deny a guest's visit based on their interest or availability. Smartphone-based visitor management systems also enable features like automatic and touchless sign-in using technologies that include QR codes and geofencing. Integrations with other systems Visitor management systems offer integration with other workplace management systems, such as access control and Wi-Fi credentials. Types of Visitor Management Systems
Pen-and-paper-based systems
On-premise software
Cloud-based software
See also Access control Optical turnstile Identity document Proximity card Boom barrier Cross-device tracking References External links Access control Security
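To make the check-in/check-out tracking and database screening described above concrete, here is a minimal sketch of the kind of record-keeping such a system performs. It is a hypothetical, simplified illustration in Python, not the API of any actual visitor management product; the watchlist, notification hook and field names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical, simplified visitor log: check-in, screening, check-out.
WATCHLIST = {"Jane Roe"}   # stand-in for a national/local or in-house database

@dataclass
class Visit:
    name: str
    host: str
    purpose: str
    checked_in: datetime = field(default_factory=datetime.now)
    checked_out: Optional[datetime] = None
    flagged: bool = False

class VisitorLog:
    def __init__(self):
        self.visits = []

    def check_in(self, name, host, purpose):
        visit = Visit(name, host, purpose, flagged=name in WATCHLIST)
        self.visits.append(visit)
        if visit.flagged:
            print(f"ALERT: {name} matched the watchlist")    # e.g. notify security
        print(f"Notify {host}: {name} has arrived ({purpose})")
        return visit

    def check_out(self, visit):
        visit.checked_out = datetime.now()

    def on_site(self):
        """Who is currently in the building -- the 'whereabouts' record."""
        return [v.name for v in self.visits if v.checked_out is None]

log = VisitorLog()
v = log.check_in("John Doe", host="A. Smith", purpose="maintenance")
print(log.on_site())   # ['John Doe']
log.check_out(v)
print(log.on_site())   # []
```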
22379203
https://en.wikipedia.org/wiki/Jaguar%20%28supercomputer%29
Jaguar (supercomputer)
Jaguar or OLCF-2 was a petascale supercomputer built by Cray at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. The massively parallel Jaguar had a peak performance of just over 1,750 teraFLOPS (1.75 petaFLOPS). It had 224,256 x86-based AMD Opteron processor cores, and operated with a version of Linux called the Cray Linux Environment. Jaguar was a Cray XT5 system, a development from the Cray XT4 supercomputer. In both November 2009 and June 2010, TOP500, the semiannual list of the world's top 500 supercomputers, named Jaguar as the world's fastest computer. In late October 2010, the BBC reported that the Chinese supercomputer Tianhe-1A had taken over the top spot, achieving over 2.5 quadrillion calculations per second, thereby bumping Jaguar to second place. The November 2010 TOP500 list confirmed the new rankings. In 2012, the Cray XT5 Jaguar was upgraded to the Cray XK7 Titan hybrid supercomputing system by adding the Gemini network interconnect and fitting all of the compute nodes with Kepler generation Nvidia GPUs. Development The Jaguar system has been through a series of upgrades since installation as a 25-teraFLOPS Cray XT3 in 2005. By early 2008, Jaguar was a 263-teraFLOPS Cray XT4. In 2008, Jaguar was expanded with the addition of a 1.4-petaFLOPS Cray XT5. By 2009, after an upgrade from 2.3 GHz 4-core Barcelona AMD processors to 2.6 GHz 6-core Istanbul AMD processors, the resulting system had over 200,000 processing cores connected internally with Cray's Seastar2+ network. The XT4 and XT5 parts of Jaguar are combined into a single system using an InfiniBand network that links each piece to the Spider file system. Jaguar's XT5 partition contains 18,688 compute nodes in addition to dedicated login/service nodes. Each XT5 compute node contains dual hex-core AMD Opteron 2435 (Istanbul) processors and 16 GiB of memory. Jaguar's XT4 partition contains 7,832 compute nodes in addition to dedicated login/service nodes. Each XT4 compute node contains a quad-core AMD Opteron 1354 (Budapest) processor and 8 GiB of memory. Total combined memory amounts to over 360 terabytes (TB). Jaguar uses an external Lustre file system called Spider for all file storage. The file system read/write benchmark is 240 GB/s, and it provides over 10 petabytes (PB) of storage. Hundreds of applications have been ported to run on the Cray XT series, many of which have been scaled up to run on 20,000 to 150,000 processor cores. The petaFLOPS Jaguar seeks to address some of the most challenging scientific problems in areas such as climate modeling, renewable energy, materials science, seismology, chemistry, astrophysics, fusion, and combustion. Annually, 80 percent of Jaguar's resources are allocated through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitively selected, peer-reviewed process open to researchers from universities, industry, government, and non-profit organizations. See also Phoenix (supercomputer) – OLCF-1 Titan (supercomputer) – OLCF-3 Oak Ridge Leadership Computing Facility Computer science Computing References External links Jaguar: The World's Most Powerful Computer Cray Oak Ridge National Laboratory One-of-a-kind computers Petascale computers X86 supercomputers 64-bit computers
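As a quick sanity check, the partition figures quoted above can be combined arithmetically. The node counts, cores per node, and memory per node are those stated in the article; the rounding to "over 360 TB" treats the GiB figures loosely, as the article does, and no FLOPS figure is recomputed here.

```python
# Figures quoted above for Jaguar's two partitions
xt5_nodes, xt5_cores_per_node, xt5_mem_gib = 18_688, 12, 16   # dual hex-core, 16 GiB per node
xt4_nodes, xt4_cores_per_node, xt4_mem_gib = 7_832, 4, 8      # quad-core, 8 GiB per node

xt5_cores = xt5_nodes * xt5_cores_per_node                    # 224,256 -- matches the stated core count
total_cores = xt5_cores + xt4_nodes * xt4_cores_per_node      # 255,584 ("over 200,000 cores")
total_mem_gib = xt5_nodes * xt5_mem_gib + xt4_nodes * xt4_mem_gib   # 361,664 GiB

print(xt5_cores, total_cores, total_mem_gib)   # 224256 255584 361664  (~ "over 360 TB")
```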
2928667
https://en.wikipedia.org/wiki/Arbitrage%20betting
Arbitrage betting
Betting arbitrage ("sure bets", sports arbitrage) is an example of arbitrage arising on betting markets due to either bookmakers' differing opinions on event outcomes or errors. When conditions allow, by placing one bet per each outcome with different betting companies, the bettor can make a profit regardless of the outcome. Mathematically, arbitrage occurs when there are a set of odds, which represent all mutually exclusive outcomes that cover all state space possibilities (i.e. all outcomes) of an event, whose implied probabilities add up to less than 1. In the bettors' slang an arbitrage is often referred to as an arb; people who take advantage of these arbitrage opportunities are called arbers. Background Arbitrage betting involves relatively large sums of money, given that 98% of arbitrage opportunities return less than 1.2%. The practice is usually detected quickly by bookmakers, who typically hold an unfavorable view of it, and in the past this could result in half of an arbitrage bet being canceled, or even the closure of the bettor's account. Although arbitrage betting has existed since the beginnings of bookmaking, the rise of the Internet, odds-comparison websites and betting exchanges have made the practice easier to perform. On the other hand, these changes also made it easier for bookmakers to keep their odds in line with the market, because arbitrage bettors are basically acting as market makers. In Britain, a practice has developed in which highly experienced "key men" employ others to place bets on their behalf, so as to avoid detection and increase accessibility to retail bookmakers and allow the financiers or key arbitragers to stay at a computer to keep track of market movement. Arbitrage is a fast-paced process and its successful performance requires much time, experience, dedication and discipline, and especially liquidity. Theory There are a number of potential arbitrage deals. Below is an explanation of some of them including formulas and risks associated with them. The table below introduces a number of variables that will be used to formalise the arbitrage models. Using bookmakers This type of arbitrage takes advantage of different odds offered by different bookmakers. For an example of an event with only two possible outcomes (e.g., a tennis match in which either Federer wins or Henman wins), the two bookmakers have different ideas of who has the best chances of winning. They offer the following fixed-odds gambling on the outcomes of the event in both fractional and decimal format: Fractional odds: Decimal odds: The bookmaker's return rate is , which is the amount the bookmaker earns on offering bets at some event. Bookmaker 1 will in this example expect to earn 5.34% on bets on the tennis game. For an individual bookmaker, the sum of the inverse of all outcomes of an event will always be greater than 1. and Inverse of decimal odds: The idea of arbitrage betting is to find odds at different bookmakers, where the sum of the inverse of all the outcomes are below 1, meaning that the bookmakers disagree on the chances of the outcomes. This discrepancy can be used to obtain a profit. For instance if one places a bet on outcome 1 at bookmaker 2 and outcome 2 at bookmaker 1: Placing a bet of $100 on the most likely outcome with the lowest odds (outcome 1 with bookmaker 2) and a bet of $36.67 on outcome 2 at bookmaker 1 would ensure the bettor a profit. 
When there are more than two possible outcomes, the value of the subsequent bets can be calculated with respect to the lowest quoted odds. In case outcome 1 comes out, one could collect $143 from bookmaker 2. In case outcome 2 comes out, one could collect about $143 from bookmaker 1. One would have invested $136.67 but collected $143, a profit of $6.33 (4.6%) no matter the outcome of the event. In general, for two decimal odds o1 and o2 where 1/o1 + 1/o2 < 1, if one wishes to place stake s1 at outcome 1, then one should place s2 = s1 × o1 / o2 at outcome 2, to even out the odds and receive the same return no matter the outcome of the event. Or in other words, if there are two outcomes, a 1/1 and a 2/1, by covering the 1/1 with $500 and the 2/1 with $333, one is guaranteed to win $1000 at a cost of $833, giving a 20% profit. More often, profits exist around the 4% mark or less. Reducing the risk of human error is vital, given that the mathematical formula is sound and only external factors add "risk". Numerous online arbitrage calculator tools exist to help bettors get the math right. For example, arbitrage calculators can handle calculations for both book arbitrage ("back/back" or "lay/lay") and "back/lay" arbitrage opportunities on an intra-exchange or inter-exchange basis, and are free. For arbitrages involving three outcomes (e.g. a game which can be won, lost or drawn), with odds o1, o2 and o3 for outcomes 1, 2 and 3, respective bids s1, s2 and s3, and the sum of the bids being B, the amount required to bet on each possibility in order to ensure a profit is proportional to the inverse of its odds: si = B × (1/oi) / (1/o1 + 1/o2 + 1/o3), which yields the same payout, B / (1/o1 + 1/o2 + 1/o3), whichever outcome occurs. Back-lay sports Betting exchanges have opened up a new range of arbitrage possibilities, since on the exchanges it is possible to lay (i.e. to bet against) as well as to back an outcome. Arbitrage using only the back or lay side might occur on betting exchanges; it is in principle the same as arbitrage using different bookmakers. Arbitrage using the back and lay sides is possible if a lay bet on one exchange offers shorter odds than a back bet on another exchange or bookmaker. However, the commission charged by the bookmakers and exchanges must be included in the calculations. Back-lay sports arbitrage is often called "scalping" or "trading". Scalping is not actually arbitrage but short-term trading. In the context of sports arbitrage betting, a scalping trader or scalper looks to make many small profits, which in time can add up. In theory a trader could turn a small investment into large profits by re-investing earlier profits into future bets so as to generate exponential growth. Scalping relies on liquidity in the markets and on the odds fluctuating around a mean point. A key advantage to scalping on one exchange is that most exchanges charge commission only on the net winnings in a particular event, thus ensuring that even the smallest favorable difference in the odds will guarantee some profit. Bonus sports Many bookmakers offer first-time users a signup bonus in the range of $10–200 for depositing an initial amount. They typically demand that this amount be wagered a number of times before the bonus can be withdrawn. Bonus sport arbitraging, also known as matched betting, is a form of sports arbitraging where the bettor hedges or backs their bets as usual, but since they received the bonus, a small loss can be allowed on each wager (2–5%), which comes off their profit. In this way the bookmaker's wagering demand can be met and the initial deposit and sign-up bonus can be withdrawn with little loss.
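The "acceptable small loss" in the paragraph above can be quantified with the standard back/lay hedge used in matched betting: back a selection at the bookmaker (to work through the wagering requirement) and lay the same selection at an exchange so the result is the same either way. The formula and the odds below are a generic, illustrative sketch, not figures from the article.

```python
def qualifying_bet(back_stake, back_odds, lay_odds, commission=0.05):
    """Lay stake that makes a 'qualifying' back bet outcome-neutral, and the
    resulting small loss expressed as a fraction of the back stake."""
    lay_stake = back_stake * back_odds / (lay_odds - commission)
    profit_if_back_wins = back_stake * (back_odds - 1) - lay_stake * (lay_odds - 1)
    profit_if_lay_wins = lay_stake * (1 - commission) - back_stake
    assert abs(profit_if_back_wins - profit_if_lay_wins) < 1e-9   # equal by construction
    return lay_stake, -profit_if_back_wins / back_stake

# Illustrative: $100 back bet at decimal odds 2.0, laid at 2.05 with 5% commission
lay_stake, loss = qualifying_bet(100, 2.0, 2.05, commission=0.05)
print(round(lay_stake, 2), f"{loss:.1%}")   # 100.0 5.0%  -- at the top of the 2-5% range
```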
The advantage over usual betting arbitrage is that it is a lot easier to find bets with an acceptable loss, instead of an actual profit. Since most bookmakers offer these bonuses, this can potentially be exploited to harvest the sign-up bonuses. By signing up to various bookmakers, it is possible to turn these "free" bets into cash fairly quickly, either making a small arbitrage profit or, in the majority of cases, taking a small loss on each bet or trade. However, it is relatively time-consuming to find closely matched bets or arbitrages, which is where a middleman service is useful. As many bookmakers require a certain turnover of the bonus amount, matching money from different bookmakers against each other enables the player to quickly "play free" the money of the losing bookmaker and, in effect, transfer it to the winning bookmaker. By avoiding most of the turnover requirements in this way, the player can usually expect a 70–80% return on investment. As well as the time spent matching odds from various betting sites to exchanges, the other drawback with bonus bagging and arbitrage trading in this sense is that the free bets are often "non-stake returned". This effectively reduces the odds, in decimal format, by 1. Therefore, in order to reduce "losses" on the free bet, it is necessary to place a bet with high odds, so that the percentage impact of the decrease in odds is minimised. Shop arbitrage (sharbing) Shop arbitrage (also known as sharbing or shop-arbing) is the process of using a betting shop's coupons and a betting exchange to create an arbitrage position. This is made possible because online prices change quickly to close these positions, while betting shops are slower to change the prices on their printed coupons. Risks Although arbitrage betting is often claimed to be "risk-free", this is only true if an arbitrage is successfully completed; there are several threats to this: Disappearance of arbitrage: Arbitrages in online sports markets have a median lifetime of around 15 minutes, after which the difference in odds underpinning them vanishes through betting activity. Without rapid alerting and action, it is possible to fail to make all the "legs" of the arbitrage before it vanishes, thus transforming it from a risk-free arbitrage into a conventional bet with the usual risks involved. High street bookmakers, however, offer their odds days in advance and rarely change them once they have been set; these arbitrages can have a lifetime of several hours. Hackers: Due to the large number of accounts that have to be created and managed (containing personal details such as email, name, address, ewallet and credit card information, and often even a copy of the bettor's ID, passport or driver's license), arbitrage traders used to be highly susceptible to cyber fraud, such as bank account theft. While making deposits is usually easy and quick, making withdrawals often requires proof of identity in the form of a passport or driver's license, copies of which need to be shared with the bookmakers via fax, email or even postal mail, which creates additional identity theft risks. Traders are often attracted to odds comparison sites that advertise high percentage profits per stake (5–30%); hackers sometimes use such sites to lure large numbers of arbitrage bettors, who then place large sums of money on these arbs only to lose the profit, and even entire bank-account savings, to hackers or untrustworthy websites, which may also sell the gathered personal data to criminals.
Making errors as an arber: In the excitement of the action, and due to the high number of bets placed, it is not uncommon to make a mistake (just as traders on financial markets do). For example, the appropriate stakes may be incorrectly calculated, or be placed on the wrong "legs" of the arb, locking in a loss, or there may be inadequate funds in one of the accounts to complete the arb. Such errors can have a significant short-term impact. In the long term, the effect will depend on the odds: for example, one could actually make more money by placing the "wrong" bet where the outcome happens to be beneficial, though not justified by the arbitrage calculation. However, repetition of this stroke of luck is unlikely, assuming the bookmaker has calculated the odds so that it makes a profit. Websites and bet placement interfaces differ between bookmakers, so arbitrage bettors need to be familiar with many different web interfaces. In some sports, different bookmakers deal with outcomes in different ways (they differ in their handling of, for example, player withdrawal due to injury in tennis, or overtime in ice hockey), meaning that both "legs" can lose. Matching terms across all bookmakers is time-consuming, requires expertise and experience, and is still fairly error-prone. Detection: Bookmakers such as Pinnacle Sports and Sbobet not only do not "chase" arbers but in fact welcome them to bet on their sites; however, these are the market leaders and can afford to do so. Most bookmakers try to discourage arbers, usually by setting betting limits or by blocking their accounts. Many bookmakers use shared security servers to pinpoint people suspected of arbitrage betting; they can simply limit stakes to make arbing unprofitable, and may even close accounts without honoring a bet that was placed, in which case money deposited with the bookmaker can be lost. This usually makes arbing unprofitable, as the most successful bookmakers are adept at identifying arbitrage bettors. To avoid detection, people sometimes use special arbing VPN and VPS services. Stake review: Some bookmakers accept only very small stakes by default, while requiring larger stakes to be manually reviewed before being accepted, which makes it difficult for an arbitrage bettor to determine whether a leg has been fully accepted until it may be too late. Bet cancellation: If a bettor places bets so as to make an arbitrage and one bookmaker cancels a bet, the bettor can find himself in a bad position, because he is now actually betting, with all the risks that implies. The bettor can repeat the bet that was cancelled so as to minimize the risk, but if he cannot get the same odds he had before, he may be forced to take a loss. In some cases this situation arises when the bookmaker has quoted very high potential payouts, perhaps due to an unintentional error made while quoting odds. Many jurisdictions allow bookmakers to cancel bets in the event of such a "palpable" ("obvious") error in the quoted odds. This is often loosely defined as an obvious mistake, but whether a "palp" has in fact been made is often at the sole discretion of the bookmaker. Other potential problems include: Arbers' dedicated email addresses are subject to advertising campaigns from third parties, which suggests that client data may be resold behind the scenes. Bookmakers who encourage responsible gambling have been known to close accounts where they see only large losses, unaware that the arbitrage trader has made wins at other books.
Capital diffusion is serious; many bookmakers make it easy to deposit funds and difficult to withdraw them (requiring much additional information and documents as proof of identity, i.e. a passport or ID copy). Making a return typically involves many bets spread over many bookmakers, and keeping track of them requires good record-keeping and discipline. Responding to an available arb may require transferring funds from one bookmaker to another through one or more ewallet accounts, with each withdrawal requiring special approval. While there are commercial software products and web services available to help with some of these tasks, they are complicated and may involve significant initial investment and monthly subscription fees. Arbitrage bettors using software tools or web services to find arbitrages will often make an existing arbitrage even more prominent and obvious to the bookmaker, because of the number of arbitrage bettors placing bets on the same outcome, so the lifetime of an arbitrage found via such tools is often much shorter than the average 15 minutes. Thus, the risk of seeing bets revoked is also often much higher for arbitrages found via such tools than for arbitrages found manually, which are not shared with other arbitrage bettors. Arbing often involves making use of bookmaker bonuses, which usually require substantial transactions before being eligible for withdrawal, thus reducing total liquidity. Foreign currency movements can wipe out small percentage gains and can make quick calculation of stakes difficult. Transferring funds between bookmakers and ewallets may create additional costs; most bookmakers and/or ewallets limit deposits to certain amounts per month. In the past, withdrawals were often limited to a certain amount per month, or to a certain number of free withdrawals per month. Withdrawals are often charged for, not just on the side of the bookmaker, but sometimes also on the ewallet side (for the transfer to the bettor's bank account). In some countries, additional costs are imposed by government taxes, so that the final profit is further reduced by a fixed percentage of, say, 5% (as in Germany and elsewhere in Europe). Professional arbitrage betting requires considerable time and energy, much experience and liquidity, and sufficient funds to recover from the inevitable losses caused by the problems listed above. Typically, arbitrages have a profit margin of only 2–5%, and many other apparent arbitrages are regarded as "high risk" ("palps"). Accordingly, the profits accumulated through 20–40 successful arbitrages can be lost on a single failed bet. See also Advantage gambling Dutch book Mathematics of bookmaking Sports betting References Arbitrage Sports betting Investment Wagering Gambling terminology
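A back-of-the-envelope illustration of that last point, using purely hypothetical figures rather than anything from the article:

```python
# Hypothetical figures: thirty completed arbs at a 3% margin versus one failed leg.
stake_per_arb = 1000                     # total staked across the legs of one arb
margin = 0.03                            # ~3% guaranteed profit per completed arb
profit_from_30_arbs = 30 * stake_per_arb * margin    # = 900.0

# If one leg of a later arb cannot be placed and the leg that was placed loses,
# that stake is simply gone:
lost_stake = 900
print(profit_from_30_arbs - lost_stake)  # 0.0 -- thirty arbs' worth of profit wiped out
```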
1847427
https://en.wikipedia.org/wiki/Team%20Fortress%202
Team Fortress 2
Team Fortress 2 is a multiplayer first-person shooter game developed and published by Valve Corporation. It is the sequel to the 1996 Team Fortress mod for Quake and its 1999 remake, Team Fortress Classic. The game was released in October 2007 as part of The Orange Box for Windows and the Xbox 360, and ported to the PlayStation 3 in December 2007. It was released as a standalone game for Windows in April 2008, and updated to support Mac OS X in June 2010 and Linux in February 2013. It is distributed online through Valve's digital retailer Steam, with Electronic Arts managing retail and console editions. Players join one of two teams, RED and BLU, and choose one of nine character classes to play as in game modes such as capture the flag and king of the hill. Development was led by John Cook and Robin Walker, the developers of the original Team Fortress mod. Team Fortress 2 was announced in 1998 under the name Team Fortress 2: Brotherhood of Arms. Initially, the game had more realistic, militaristic visuals and gameplay, but this changed over the protracted nine years of development. After Valve released no information for six years, Team Fortress 2 regularly featured in Wired News annual vaporware list among other ignominies. The finished Team Fortress 2 has cartoon-like visuals influenced by the art of J. C. Leyendecker, Dean Cornwell, and Norman Rockwell, and uses Valve's Source game engine. Team Fortress 2 has received critical acclaim for its art direction, gameplay, humor, and use of character in a wholly multiplayer game. Valve continues to release new content on a seasonal basis in the form of submissions made through the Steam Workshop. In June 2011, the game became free-to-play, supported by microtransactions for in-game cosmetics. A 'drop system' was also added and refined, allowing free-to-play users to periodically receive in-game equipment and items. Though the game has had an unofficial competitive scene since its release, both support for official competitive play through ranked matchmaking and an overhauled casual experience were added in July 2016. Gameplay In most game modes, BLU and RED compete for a combat-based objective. Players can choose to play as one of nine character classes in these teams, each with their own unique strengths, weaknesses, and weapon sets. In order to accomplish objectives efficiently, a balance of these classes is required due to how these strengths and weaknesses interact with each other in a team-based environment. Although the abilities of a number of classes have changed from earlier Team Fortress incarnations, the basic elements of each class have remained, that being one primary weapon, one secondary weapon, and one melee weapon. The game was released with six official maps, although over one hundred maps have since been included in subsequent updates, including community-created maps. When players choose a gamemode for the first time, an introductory video is played, showing how to complete its objectives. During matches, the Administrator, voiced by Ellen McLain, announces events over loudspeakers. The player limit for one match is sixteen on the Xbox 360 and PlayStation 3, and twenty-four on the Windows edition. However, in 2008, the Windows edition was updated to include a server variable that allows for up to thirty-two players. 
Team Fortress 2 is the first of Valve's multiplayer games to provide detailed statistics for individual players, such as the total amount of time spent playing as each class, most points obtained, and most objectives completed in a single life. Persistent statistics tell the player how they are performing in relation to these statistics, such as if a player comes close to their record for the damage inflicted in a round. Team Fortress 2 also features numerous achievements for carrying out certain tasks, such as achieving a certain number of kills or completing a round within a certain time. Sets of class-specific achievements have been added in updates, which can award weapons to the player upon completion. This unlockable system has since been expanded into a random drop system, whereby players can also obtain items simply by playing the game. Game modes Core game modes Team Fortress 2 contains five core game modes. Attack/Defend (A/D) is a timed game mode in which the BLU team's goal is to capture RED control points. The number of control points varies between maps, and the points must be captured by the BLU team in respective order. To capture a control point, a player must stand on it for a certain amount of time. This process can be sped up by more players on one team capturing a single point. Once a control point is captured by the BLU team, it cannot be re-captured by the RED team. The RED team's job is to prevent the BLU team from capturing all the control points before the time limit ends. Once a point is captured, the time limit will extend. Capture the Flag (CtF) is a mode which revolves around the BLU and RED teams attempting to steal and capture the opposing team's flag, represented in-game as an intelligence briefcase. At the same time, both teams must defend their own intelligence. When the intelligence is dropped by the carrier – either by dying or dropping it manually, it will stay on the ground for 1 minute before returning to its original location if it is not picked up again. A team's intelligence can only be carried by the opposing team. The first team to capture the enemy's intelligence three times wins. Control Points (CP) is a timed game mode where there are several control points placed around the map, with 3 or 5 control points in total depending on the map. These are referred to as "3CP" and "5CP," respectively. The game will start off with only the middle control point being available for capture, with the other control points split equally among both teams. Once this middle control point is captured, a team can begin capturing the enemy team's points in respective order. The time limit is extended on the capture of a control point by either team. For a team to win, they must capture all the control points within the time limit. King of the Hill (KOTH) is a timed game mode that contains a single control point at the middle of the map that can be captured by both the RED and BLU teams. Upon capturing the control point, a team-specific timer starts counting down but stops upon the point being captured by the opposing team. The first team to have their timer count down to 0 wins. Payload (PL) is a timed game mode where the BLU team must push an explosive cart along a track, while the RED team must prevent the cart from reaching their base. To push the cart, at least one BLU player must stay within the range of the cart, which will dispense health and ammo every few seconds. The cart's speed will increase as more BLU players attempt to push it. 
Payload maps have multiple "checkpoints" along the track. Once these checkpoints are captured, they may adjust the spawn locations of both teams. Capturing a checkpoint will also increase the time limit. If the cart is not pushed by the BLU team for 20 seconds, it will begin to move back to the last captured checkpoint, where it will stop. The RED team can stop the cart from being pushed by being within range of it. The RED team wins by preventing the cart from reaching the final checkpoint before time runs out. Alternative game modes There are several alternative game modes in Team Fortress 2. These modes consist of a small number of maps and detach from the core game modes in some way. Arena is a special game mode in which players do not respawn upon death. A team can win either by eliminating all opposing players, or by claiming a single capture point that opens after a certain time has elapsed. This mode is currently unavailable through matchmaking, but is still accessible through community servers. Mannpower is a mode in which players have access to a grappling hook and assorted power-ups laid around the map that grant unique abilities. While not bound to any specific mode, all current official Mannpower maps use a variation of Capture the Flag. In Mannpower's variation of Capture the Flag, both teams have an intelligence flag, and the first team to capture the enemy's intelligence ten times wins. The mode is heavily inspired by the Quake mod, Threewave CTF, a mod created by former Valve employee David Kirsch. Medieval Mode is a mode in which players are restricted to using melee and support weapons, with certain exceptions for medieval-themed projectile weapons. While not bound to any specific mode, the only official Medieval Mode map uses a 3CP variation of Attack/Defend. If Medieval Mode is enabled on a map, select phrases spoken by players in the in-game text chat will be replaced with more thematic variants, such as "hello" being replaced with "well meteth". PASS Time is a unique timed game mode inspired by rugby, developed by Valve, Bad Robot Interactive, and Escalation Studios. Three unique goals (the Run-In, Throw-In, and Bonus Goals) are placed on each team's side of the map. A single ball called the JACK will spawn at the center of the map, and players must pick it up and carry it to the opposing team's side. Players can score a goal by either carrying the JACK to a Run-In Goal or by throwing the JACK through the Throw-In Goal. Three goals can be scored by throwing the JACK through the Bonus Goal, which is much more difficult to score. To win, a team must either score five goals, or have the most goals when the timer runs out. Payload Race, like Payload, has the main objective being to push a cart to a final checkpoint. Unlike Payload, both the RED and BLU teams are fighting to push their cart to the final checkpoint. There is only one checkpoint for each track, and there is no time limit. The team to reach their checkpoint first wins. Player Destruction is a community-made game mode in which a player's death causes a pickup to appear. The first team to collect a set number of pickups and deliver them to a drop-off point wins the game. The players on each team with the most pickups are highlighted for everyone to see, and gain a passive healing effect for themselves and any nearby teammates. Special Delivery is a mode similar to Capture the Flag, but there is only one neutral briefcase that can be picked up both the RED and BLU teams. 
Upon a team picking up the briefcase, the opposing team will be unable to pick up the briefcase until it has been dropped for 45 seconds and respawns as a neutral briefcase. A team wins by carrying the briefcase onto a loading platform, which will gradually rise until the platform reaches its peak. Territorial Control consists of several control points spread out across a single map. Like Control Points, each point can be captured by either the RED or BLU teams. Unlike Control Points, only two points are accessible at a single time. Upon a team's successful capture of a point, the "stage" ends and the accessible capture points change. When a team only has control of a single control point, they are blocked from capturing the opposing team's control point and the team must wait until the time limit is up and the accessible capture points change. A team wins by capturing all the control points. Other game modes These modes are not categorized with the other modes, and instead have their own separate sections in the game. Halloween Mode is a special mode that is enabled during the Halloween season, and allows the players access to more than 20 maps, Halloween-exclusive cosmetics, and challenges. For example, Halloween 2012 included a difficult Mann vs. Machine mission involving destroying more than 800 enemy forces. Owing to popular demand of the Halloween events, Valve later added the Full Moon event, an event that triggers around every full moon phase throughout the year, which allows players to equip Halloween-exclusive cosmetics. In 2013, Valve introduced an item called Eternaween, and upon use, allows players of a specific server to use Halloween-exclusive cosmetics for 2 hours. Mann vs Machine, also known as MvM, is a cooperative game mode where players must defend their base from waves of robots modeled after all nine playable classes, and slow-moving tanks carrying bombs. Robots and tanks drop a currency referred to as Credits upon their death, which players can use to buy upgrades for themselves or their weapons. The players win upon successfully defending their base from the bomb until the last wave. A paid version of this game mode called "Mann Up" is also available, where players buy tickets to play "Tours of Duty", a collection of missions with the chance to win unique cosmetics and weapon skins upon completion. Offline Practice Mode is just like any other multiplayer match, but it only consists of the player and bots. The number of bots, their difficulty, and the map can all be adjusted to a player's preference, though only a select amount of maps are available to play. Training Mode exists to help new players get acquainted with basic controls, and teaches them the basics of four of the nine classes. It uses wooden dummies and bots to teach players the basic mechanics of classes and the game. Classes and characters Team Fortress 2 features nine playable classes, evenly split and categorized into "Offense", "Defense", and "Support". Each class has strengths and weaknesses and must work with other classes to be efficient, encouraging strategy and teamwork. Each class has at least three default weapons: a primary weapon, secondary weapon, and melee weapon. Some classes have additional slots for PDAs. Offense The Scout (Nathan Vetterlein) is an American baseball fan and street runner from Boston, Massachusetts who practiced running to "beat his mad dog siblings to the fray." 
He is a fast, agile character, who is armed by default with a scattergun, a pistol, and an aluminum baseball bat. The Scout can double jump, and counts as two people when capturing control points, thus doubling the capture speed, and when pushing the Payload cart. The Soldier (Rick May) is an American jingoistic patriot from the Midwest who stylizes himself as a military man despite having never served in any branch of the Armed Forces. The Soldier is armed by default with a rocket launcher, a shotgun, and a folding shovel. He is both the second-slowest class in the game and the class with the second-highest health, after the Heavy Weapons Guy. The Soldier can use his rocket launcher to rocket jump to other locations at the cost of some health. The Pyro (Dennis Bateman) is a pyromaniac of unknown gender and origin who wears a fire-retardant suit and a voice-muffling gas mask. By default, the Pyro is armed with a flamethrower which can set players on fire, a shotgun, and a fire axe. The Pyro's flamethrower can also produce a blast of compressed air that repels any nearby enemies and projectiles, and extinguishes burning teammates. The Pyro is deluded and believes they are living in a utopian fantasy world referred to as "Pyroland". Defense The Demoman (Gary Schwartz) is a black Scottish, one-eyed, alcoholic demolitions expert from Ullapool, Scotland. Armed by default with a grenade launcher, a sticky bomb launcher, and a glass bottle of scrumpy, the Demoman can use his explosives to provide indirect fire and set traps. The Demoman can use his sticky bomb launcher to "sticky jump" at the cost of some health. The Heavy Weapons Guy, or simply the Heavy, (Schwartz) is a large Russian man from the Dzhugdzhur Mountains of the USSR. He is heavy in stature and accent, and is obsessed with firepower. He is the slowest class, and can both sustain and deal substantial amounts of damage. His default weapons consist of a minigun that he affectionately refers to as "Sasha", a shotgun, and his fists. The Engineer (Grant Goodeve) is an American inventor, engineer, intellectual, and "good ol' boy" from Bee Cave, Texas. The Engineer can build structures to support his team: a sentry gun for defending key points, a health and ammunition dispenser, and a pair of teleporter modules (one entrance and one exit). The Engineer is armed by default with a shotgun, a pistol, a wrench that functions as both a melee weapon and to repair and upgrade his buildings, and two separate PDAs; one to erect his buildings and one to remotely destroy them. Support The Medic (Robin Atkin Downes) is a German doctor from Stuttgart with little regard for the Hippocratic Oath. He is equipped with a "Medi Gun" that can restore health to injured teammates. When healing teammates, the Medi Gun progressively builds an "ÜberCharge" meter, which, when fully charged, can be activated to provide the Medic and his patient with temporary invulnerability. The Medic is also equipped with a syringe gun and a bonesaw for situations in which he must fight without his teammates' protection. He keeps doves as pets, one of which is named Archimedes. The Sniper (John Patrick Lowrie) is a New Zealand ocker raised in the Australian outback, equipped by default with a laser-sighted sniper rifle to shoot enemies from afar. He can cause severe damage or an instant kill with a headshot, depending on how the player aims and fires. By default, he also carries a submachine gun and a kukri for close combat. 
The Spy (Dennis Bateman) is a French covert operative whose equipment is designed for stealth and infiltration, including a cloaking device disguised as a wristwatch, an electronic sapper, used to disable and destroy enemy Engineers' buildings, and a device hidden in his cigarette case that enables him to disguise himself as any player on either team. He is armed with a revolver and a butterfly knife, able to use the latter to instantly kill enemies by stabbing them in the back. He is the only character who does not wear any clothing in his team's bright color or a patch denoting his specialty, instead preferring a balaclava, business suit, necktie, and gloves in muted team-color hues. Non-playable characters Other characters include the Administrator (voiced by Ellen McLain), an unseen announcer who provides information about time limits and objectives to players, and her assistant Miss Pauling (Ashly Burch). The cast has expanded with Halloween updates, including the characters of the Horseless Headless Horsemann and MONOCULUS (Schwartz). 2012 and 2013 saw the addition of Merasmus, the Bombinomicon, and Redmond, Blutarch, and Zepheniah Mann (all played by Nolan North). Previous unused voicelines recorded by North were later used for a Horseless Headless Horsemann seen in the 2019 map "Laughter" and a jack-o'-lantern resting atop the Payload cart in the 2020 map "Bloodwater". The character Davy Jones (voiced by Calvin Kipperman) made an appearance in the 2018 map "Cursed Cove". In the video announcement for the "Jungle Inferno" update, Mann Co. CEO Saxton Hale is voiced by JB Blanc. Competitive play Team Fortress 2 is played competitively, through multiple leagues. The North American league, ESEA, supports a paid Team Fortress 2 league, with $42,000 in prizes for the top teams in 2017. Team Fortress 2 is played competitively in many formats, such as Highlander (nine players per team, one of each class), Prolander (7v7) and 6v6. While formalized competitive gameplay is very different from normal Team Fortress 2, it offers an environment with a much higher level of teamwork than in public servers. Most teams use voice chat to communicate, and use a combination of strategy, communication, and aiming ability to win against other teams. Community-run competitive leagues also tend to feature restrictions such as item bans and class limits. These leagues are often supported by Valve via in-game medals (which are submitted via the Steam Workshop) and announcements on the official blog. In April 2015, Valve announced that a dedicated competitive mode would be added to Team Fortress 2, utilizing skill-based matchmaking; closed beta testing began in the following year. The competitive mode was added in the "Meet Your Match" update, released on July 7, 2016. Ranked matches are played six-vs-six, with players ranked in thirteen tiers based on win/losses and an assessment of their skills. Ranked matchmaking will balance players based on their tiers and rating. A similar matchmaking approach has been added for casual games for matches of 12-vs-12 players. In order to join competitive matchmaking, players must have associated their Steam account with the Steam Guard Mobile Authenticator, as well as having a Team Fortress 2 "premium account", which is unlocked by either having bought the game before it went free-to-play or by having made an in-game item purchase since. 
Development Origins The original Team Fortress was developed by the Australian team TF Software, comprising Robin Walker and John Cook, as a free mod for the 1996 PC game Quake. In 1998, Walker and Cook were employed by Valve, which had just released its first game, Half-Life. Valve began developing Team Fortress 2 as an expansion pack for Half-Life using Valve's GoldSrc engine, and gave a release date for the end of the year. In 1999, Valve released Team Fortress Classic, a port of the original Team Fortress, as a free Half-Life mod. Team Fortress Classic was developed using the publicly available Half-Life software development kit as an example to the community and industry of its flexibility. Unlike Team Fortress, Valve originally planned Team Fortress 2 to have a modern war aesthetic. It would feature innovations including a command hierarchy with a Commander class, parachute drops over enemy territory, and networked voice communication. The Commander class played similarly to a real-time strategy game, with the player viewing the game from a bird's-eye perspective and issuing orders to players and AI-controlled soldiers. Team Fortress 2 was first shown at E3 1999, where Valve showcased new technologies including parametric animation, which blended animations for smoother, more lifelike movement, and Intel's multi-resolution mesh technology, which dynamically reduced the detail of distant on-screen elements to improve performance. The game earned several awards including Best Online Game and Best Action Game. In mid-2000, Valve announced that Team Fortress 2 had been delayed for a second time. They attributed the delay to development switching to its new in-house engine, Source. Following the announcement, Valve released no news on the game for six years. Walker and Cook worked on various other Valve projects; Walker was project lead on Half-Life 2: Episode One and Cook worked on Valve's content distribution platform Steam. Team Fortress 2 became a prominent example of vaporware, a long-anticipated game that had seen years of development, and was often mentioned alongside another much-delayed game, Duke Nukem Forever. Walker said that Valve built three or four different versions of Team Fortress 2 before settling on their final design. Shortly before the release of Half-Life 2 in 2004, Valve's marketing director Doug Lombardi confirmed that Team Fortress 2 was still in development. Final design Valve reintroduced Team Fortress 2 at the July 2006 EA Summer Showcase event. Departing from the realistic visual design of other Valve games, Team Fortress 2 features a cartoon-like visual style influenced by 20th-century commercial illustrations and the artwork of J. C. Leyendecker, Dean Cornwell, and Norman Rockwell, achieved through Gooch shading. The game debuted with the Source engine's new dynamic lighting, shadowing and soft particle technologies alongside Half-Life 2: Episode Two. It was the first game to implement the Source engine's new Facial Animation 3 features.
Valve abandoned the realistic style when it became impossible to reconcile it with the unrealistic gameplay, with opposing armies having constructed elaborate bases directly next to each other. The Commander class was abandoned as players would simply refuse to follow the player's orders. Valve designed each character, team, and equipped weapon to be visually distinct, even at range; for example, the coloring draws attention to the chest area, bringing focus on the equipped weapon. The voices for each of the classes were based on imagining what people from the 1960s would expect the classes to have sounded like, according to writer Chet Faliszek. The map design has an "evil genius" theme with archetypical spy fortresses, concealed within inconspicuous buildings such as industrial warehouses and farms to give plausibility to their close proximities; these bases are usually separated by a neutrally themed space. The bases hide exaggerated super weapons such as laser cannons, nuclear warheads, and missile launch facilities, taking the role of objectives. The maps have little visual clutter and stylized, almost impressionistic modeling, to allow enemies to be spotted more easily. The impressionistic design approach also affects textures, which are based on photos that are filtered and improved by hand, giving them a tactile quality and giving Team Fortress 2 its distinct look. The bases are designed to let players immediately know where they are. RED bases use warm colors, natural materials, and angular shapes, while BLU bases use cool colors, industrial materials, and orthogonal shapes. Release During the July 2006 Electronic Arts press conference, Valve revealed that Team Fortress 2 would ship as the multiplayer component of The Orange Box. A conference trailer showcasing all nine of the classes demonstrated for the first time the game's whimsical new visual style. Managing director of Valve Gabe Newell said that the company's goal was to create "the best looking and best-playing class-based multiplayer game". A beta release of the entire game was made on Steam on September 17, 2007, for customers who had pre-purchased The Orange Box, who had activated their Black Box coupon, which was included with the ATI HD 2900XT Graphics cards, and for members of Valve's Cyber Café Program. The beta continued until the game's final release. The game was released on October 10, 2007, both as a standalone product via Steam and at retail stores as part of The Orange Box compilation pack, priced at each gaming platform's recommended retail price. The Orange Box also contains Half-Life 2, Half-Life 2: Episode One, Half-Life 2: Episode Two, and Portal. Valve offered The Orange Box at a ten percent discount for those who pre-purchased it via Steam before the October 10 release, as well as the opportunity to participate in the beta test. Post-release Since the release of Team Fortress 2, Valve has continually released free updates and patches through Steam for Windows, OS X, and Linux users; though most patches are used for improving the reliability of the software or to tweak gameplay changes, several patches have been used to introduce new features and gameplay modes, and are often associated with marketing materials such as comics or videos offered on the Team Fortress 2 website; this blog is also used to keep players up to date with the ongoing developments in Team Fortress 2. 
As of July 2012, each class has been given a dedicated patch that provides new weapons, items, and other gameplay changes; these class patches typically included the release of the class's "Meet the Team" video. Other major patches have included new gameplay modes including the Payload, Payload Race, Training, Highlander, Medieval, and Mann vs. Machine modes. Themed patches have also been released, such as a yearly Halloween-themed event called "Scream Fortress", where players may obtain unique items available only during a set period around the holiday. Other new features have given players the ability to craft items within the game from other items, trade items with other players, purchase in-game items through funds in Steam, and save and edit replay videos that can be posted to YouTube. Valve has released tools to allow users to create maps, weapons, and cosmetic items through a contribution site; the most popular are added as official content for the game. This approach has subsequently created the basis for the Steam Workshop functionality of the software client. In one case, more than fifty users from the content-creation community worked with Valve to release an official content update in May 2013, with all of the content generated by these players. Valve reported that as of June 2013, over $10 million has been paid back to over 400 community members that have helped to contribute content to the game, including a total of $250,000 for the participants in the May 2013 patch. To help promote community-made features, Valve has released limited-time events, such as the "Gun Mettle" or "Invasion" events in the second half of 2015, also including the "Tough Break" update in December 2015, in which players can spend a small amount of money which is paid back to the community developers for the ability to gain unique items offered while playing on community-made maps during the event. Development of the new content had been confirmed (but later quietly cancelled) for the Xbox 360, while development for the PlayStation 3 was deemed "uncertain" by Valve. However, the PlayStation 3 version of Team Fortress 2 received an update that repaired some of the issues found within the game, ranging from graphical issues to online connectivity problems; this update was included in a patch that also repaired issues found in the other games within The Orange Box. The updates released on PC and planned for later release on Xbox 360 include new official maps and game modes, as well as tweaks to classes and new weapons that can be unlocked through the game's achievement system. The developers attempted to negotiate with Xbox 360 developer Microsoft to keep the Xbox 360 releases of these updates free, but Microsoft refused and Valve announced that they would release bundles of several updates together to justify the price. Because of the cost of patching during the seventh generation of video game consoles, Valve has been unable to provide additional patches to the Xbox 360 version since 2009, effectively cancelling development of the console versions. On June 10, 2010, Team Fortress 2 was released for OS X, shortly after the release of Steam for OS X. The release was teased by way of an image similar to early iPod advertising, showing a dark silhouette of the Heavy on a bright green background, his Sandvich highlighted in his hand. 
Virtual earbuds, which can be worn when playing on either OS X or Windows once acquired, were given to players who played the game on OS X before June 14, though the giveaway period was later extended to August 16. On November 6, 2012, Valve announced the release of Team Fortress 2 for Linux as part of a restricted beta launch of Steam on the platform. This initial release of Steam and Team Fortress 2 was targeted at Ubuntu, with support for other distributions planned for the future. Later, on December 20, 2012, Valve opened up access to the beta, including Team Fortress 2, to all Steam users without the need to wait for an invitation. On February 14, 2013, Valve announced the full release of Team Fortress 2 for Linux. From then to March 1, anyone who played the game on Linux would receive a free Tux penguin, which can be equipped in-game. In March 2013, Team Fortress 2 was announced to be the first game to officially support the Oculus Rift, a consumer-grade virtual reality headset. A patch added a "VR Mode" to the client that can be used with the headset on any public server. In April 2020, source code for 2018 versions of Team Fortress 2 and Counter-Strike: Global Offensive leaked online. This created fears that malicious users would use the code to make remote code execution software and attack servers or players' computers. Several fan projects halted development until the impact of the leak could be determined. Valve confirmed the legitimacy of the code leaks, but stated that it did not believe they affect servers and clients running the latest official builds of either game. On May 1, 2020, a few weeks after the death of Rick May, the voice actor of the Soldier, Valve released an update to Team Fortress 2 adding a tribute to his voice work as the Soldier in the form of a new main menu theme (a rendition of Taps), as well as statues of the Soldier saluting added to most of the official in-game maps. These statues all featured a commemorative plaque dedicated to May and lasted through the end of the month. One of these statues, appearing on the map "cp_granary", the setting of the "Meet the Soldier" short video, was made permanent. Free-to-play On June 23, 2011, Valve announced that Team Fortress 2 would become free to play. Unique equipment including weapons and outfits would be available as microtransactions through the in-game store, tied through Steam. Walker stated that Valve would continue to provide new features and items free of charge, and that Valve had learned that the more players Team Fortress 2 had, the more value it had for each player. The move came a week after Valve introduced several third-party free-to-play games to Steam and stated it was working on a new free-to-play game. Within nine months of becoming free to play, Valve reported that revenue from Team Fortress 2 had increased by a factor of twelve. 2020-2022 bot issues Since around April 2020, Team Fortress 2 has endured large numbers of bot accounts entering Valve casual matchmaking servers. Though bot accounts had been an issue in Team Fortress 2 for some time prior to this, multiple sources began to report a spike in their activity. The activities of these bots have included forcibly crashing servers, spamming copypastas in the text chats of matches, assuming other players' usernames, and the use of aimbots. On June 16, 2020, Valve responded to this by restricting accounts that have not paid for Mann Co. Store items from the use of voice and text chat in-game. 
On June 24, players in this category were further restricted from changing their Steam username while connected to any Valve matchmaking server. On June 22, 2021, approximately one year later, additional changes were implemented which aim to discourage bot activity. However, these measures have remained largely ineffective, leading some to criticize Valve. Marketing Beginning in May 2007, to promote the game, Valve began a ten-video advertisement series referred to as "Meet the Team". Constructed using Source Filmmaker and using more detailed character models, the series consists of short videos introducing each class and displaying their personalities and abilities. The videos are usually interspersed with simulated gameplay footage. The format of the videos varies drastically; the first installment, "Meet the Heavy", depicts him being interviewed, while "Meet the Soldier" shows the Soldier giving a misinformed lecture on Sun Tzu to a row of severed BLU heads as if they were raw recruits. He claims Sun Tzu "invented" fighting, then further confuses this claim with the story of Noah and his Ark. The videos were generally released through Valve's official YouTube channels, though in one notable exception, the "Meet the Spy" video was leaked onto YouTube, several days before its intended release. Early "Meet the Team" videos were based on the audition scripts used for the voice actors for each of the classes; the "Meet the Heavy" script is nearly word-for-word a copy of the Heavy's script. Later videos, such as "Meet the Sniper", contain more original material. The videos have been used by Valve to help improve the technology for the game, specifically improving the facial animations, as well as a source of new gameplay elements, such as the Heavy's "Sandvich" or the Sniper's "Jarate". The final video in the Meet the Team series, "Meet the Pyro", was released on June 27, 2012. Gabe Newell has stated that Valve used the "Meet the Team" series as a means of exploring the possibilities of making feature film movies themselves. He believes that only game developers themselves have the ability to bring the interesting parts of a game to a film, and suggested that this would be the only manner through which a Half-Life-based movie would be made. A fifteen-minute short, "Expiration Date", was released on June 17, 2014. The shorts were made using Source Filmmaker, which was officially released and has been in open beta as of July 11, 2012. In more recent major updates to the game, Valve has presented teaser images and online comic books that expand the fictional history of the Team Fortress 2, as part of the expansion of the "cross-media property", according to Newell. In August 2009, Valve brought aboard American comic writer Michael Avon Oeming to teach Valve "about what it means to have a character and do character development in a comic format, how you do storytelling". "Loose Canon", a comic associated with the Engineer Update, establishes the history of RED versus BLU as a result of the last will and testament of Zepheniah Mann in 1890, forcing his two bickering sons Blutarch and Redmond to vie for control of Zepheniah's lands between them; both have engineered ways of maintaining their mortality to the present, waiting to outlast the other while employing separate forces to try to wrest control of the land. 
This and other comics also establish other background characters such as Saxton Hale, the CEO of Mann Co., the company that provides the weapons for the two sides and was bequeathed to one of Hale's ancestors by Zepheniah, and the Administrator, the game's announcer, who watches over and encourages the RED/BLU conflict and keeps either side from winning. The collected comics were published by Dark Horse Comics in Valve Presents: The Sacrifice and Other Steam-Powered Stories, a volume along with other comics created by Valve for Portal 2 and Left 4 Dead, and released in November 2011. Cumulative details in updates both in-game and on Valve's sites from 2010 through 2012 were part of a larger alternate reality game preceding the reveal of the co-operative Mann vs. Machine mode on August 15, 2012. Valve has provided other promotions to draw players into the game. Valve held weekends of free play for Team Fortress 2 before the game was made free-to-play. Through various updates, hats and accessories can be worn by any of the classes, giving players the ability to customize the look of their character, and extremely rare hats named "Unusuals" have particle effects attached to them and are only obtainable by opening "crates" or trading with other players. New weapons were added in updates to allow the player to choose a loadout and play style that best suits them. Hats and weapons can be gained as a random drop, through the crafting/trading systems, or via cross-promotion: limited-edition hats and weapons have been awarded for pre-ordering or gaining achievements in other content from Steam, both from Valve (such as Left 4 Dead 2 and Alien Swarm) and from third-party games such as Sam & Max: The Devil's Playhouse, Worms Reloaded, Killing Floor, or Poker Night at the Inventory (which features the Heavy class as a character). According to Robin Walker, Valve introduced these additional hats as an indirect means for players to show status within the game or their affiliation with another game series simply by visual appearance. The Red Pyro, Heavy, and Spy all function as a single playable character in the PC release of Sonic and All-Stars Racing Transformed. The Pyro, Medic, Engineer, and Heavy appear as playable characters in Dungeon of the Endless. The Pyro was added as a henchman in the game Evil Genius 2. The game's first television commercial premiered during the first episode of the fifth season of The Venture Bros. in June 2013, featuring in-game accessories that were created with the help of Adult Swim. Economy The economy of Team Fortress 2 has received significant attention from economists, journalists, and users, due to its relative sophistication and the value of many of its in-game items, and it has often been the subject of study. It operates on a system of supply and demand, barter, and scarcity value, akin to many real-world economies such as that of the United States. Trading In Team Fortress 2, players can trade with others for items including weapons, cosmetics, war paints, taunts, and currency. In 2011, it was reported that the economy of Team Fortress 2 was worth over US$50 million. Many third-party websites such as backpack.tf and scrap.tf have been created to aid users in trading, as well as to track the value of in-game items. Crate keys, crafting metal, and Earbuds (an in-game cosmetic item) are all used as currency, due to their value. 
2019 Crate bug On July 25, 2019, a bug was mistakenly included in an update: if players unboxed certain older series of Crates, they were guaranteed to receive an Unusual-grade cosmetic item, compared to the usual one per cent chance of obtaining one from a Crate. This damaged the in-game economy, causing the Unusual-grade cosmetic items obtainable from these Crates to drop substantially in value. The incident has been nicknamed "The Crate Depression" (a pun on "Crate" and "The Great Depression") by fans. On July 26, 2019, the bug was fixed. Users who received any Unusual-grade cosmetic items from the bug were restricted from trading them, with Valve later announcing in an official statement on August 2, 2019, that the first Unusual-grade item each player received from the bug would remain tradable, while any subsequent Unusual-grade items would be permanently untradable and only usable by the player who received them. Item values Many items within Team Fortress 2 have reached notable real-world values, including the "Burning Flames Team Captain", valued at approximately US$7,000, the "Strange Golden Frying Pan", at approximately US$2,000, and the "Collector's Dead of Night", at approximately US$2,500. Reception Team Fortress 2 received widespread critical acclaim, with an overall score of 92/100 ("universal acclaim") on Metacritic. Many reviewers praised the cartoon-styled graphics and the resulting light-hearted gameplay, and the use of distinct personalities and appearances for the classes impressed a number of critics; PC Gamer UK stated that "until now multiplayer games just haven't had it". The game modes were similarly well received: GamePro described the settings as focusing "on just simple fun", while several reviewers praised Valve for the map "Hydro" and its attempts to create a game mode with variety in each map. Additional praise was bestowed on the game's level design, game balance and teamwork promotion. Team Fortress 2 has received several awards individually for its multiplayer gameplay and its graphical style, as well as having received a number of "game of the year" awards as part of The Orange Box. Although Team Fortress 2 was well received, its removal of class-specific grenades, a feature of previous Team Fortress incarnations, was controversial amongst reviewers. IGN expressed some disappointment over this, while conversely, PC Gamer UK approved, stating "grenades have been removed entirely—thank God". Some further criticism came over a variety of issues, such as the lack of extra content such as bots (although Valve has since added bots in an update), players having trouble finding their way around maps due to the lack of a minimap, and the Medic class being seen as too passive and repetitive in nature. The Medic class has since been re-tooled by Valve, giving it new unlockable weapons and abilities. With the "Gold Rush Update" in April 2008, Valve started to add fundamentals of character customization through unlockable weapons for each class, which continued in subsequent updates, most notably the "Sniper vs. Spy Update" in April 2009, which introduced unlockable cosmetic items into the game. Further updates expanded the number of weapons and cosmetics available, but also introduced monetization options, eventually allowing the game to go free-to-play. As such, Team Fortress 2 is considered one of the first games to offer games as a service, a model which would become more prevalent in the 2010s. 
References External links 2007 video games First-person shooter multiplayer online games First-person shooters Free-to-play video games Hero shooters Linux games MacOS games Multiplayer and single-player video games Multiplayer online games PlayStation 3 games Source (game engine) games Valve Corporation games Vaporware video games Video game sequels Video games about robots Video games adapted into comics Video games containing loot boxes Video games developed in the United States Video games scored by Mike Morasky Video games set in the 1960s Video games set in the 1970s Video games using Havok Video games with commentaries Video games with Steam Workshop support Virtual reality games Windows games Xbox 360 games
12962867
https://en.wikipedia.org/wiki/William%20Barden%20Jr.
William Barden Jr.
William Barden Jr. is an author of books and articles on computer programming. Barden's writings mainly covered microcomputers, computer graphics and assembly language and BASIC programming. He was a contributing editor for The Rainbow magazine in which he wrote a monthly column called Barden's Buffer on low-level assembly language programming on the TRS-80 Color Computer. Some of his books were published under the name William T. Barden. He lives in Scottsdale, Arizona. Books Connecting the CoCo to the Real World, 1990, no ISBN (Radio Shack project cancelled, then printed by author). Part of the book comes from his chronicle Barden's Buffer in Rainbow Magazine. TRS-80 Models I, III, & Color Computer Interfacing Projects, 1983, TRS-80 Color Computer Assembly Language Programming, Radio Shack catalog number 62-2077, 1983 TRS-80 Color Computer & MC-10 Programs, Radio Shack cat. no. 26-3195, 1983 How To Do It on the TRS-80, IJG Inc. publisher, 1983 Color Computer Graphics, Radio Shack cat. no. 62-2076, 1982 The Z-80 Microcomputer Handbook, Longman Higher Education, 1978, How to program microcomputers, H. W. Sams, 1977, TRS-80 Pocket BASIC Handbook, Radio Shack, 1982 References Year of birth missing (living people) Living people American computer programmers American information and reference writers American instructional writers TRS-80 Color Computer
175613
https://en.wikipedia.org/wiki/Hypercomputation
Hypercomputation
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic. The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense. Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions. History A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals. This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable. State space In a sense, most functions are uncomputable: there are only countably many computable functions, but uncountably many possible super-Turing functions. Models Hypercomputer models range from useful but probably unrealizable (such as Turing's original oracle machines), to less-useful random-function generators that are more plausibly "realizable" (such as a random Turing machine). Uncomputable inputs or black-box components A system granted knowledge of the uncomputable, oracular Chaitin's constant (a number with an infinite sequence of digits that encode the solution to the halting problem) as an input can solve a large number of useful undecidable problems; a system granted an uncomputable random-number generator as an input can create random uncomputable functions, but is generally not believed to be able to meaningfully solve "useful" uncomputable problems such as the halting problem. There are an unlimited number of different types of conceivable hypercomputers, including: Turing's original oracle machines, defined by Turing in 1939. A real computer (a sort of idealized analog computer) can perform hypercomputation if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would require the ability to measure the real-valued physical value to arbitrary precision, though standard physics makes such arbitrary-precision measurements theoretically infeasible. Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem, but is subject to the same physical difficulties as other models of hypercomputation based on real computation. 
Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification of the machines. Similarly, a proposed model known as fair nondeterminism can accidentally allow the oracular computation of noncomputable functions, because some such systems, by definition, have the oracular ability to identify and reject inputs that would "unfairly" cause a subsystem to run forever. Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic. These models require an uncomputable input, such as a physical event-generating process where the interval between events grows at an uncomputably large rate. Similarly, one unorthodox interpretation of a model of unbounded nondeterminism posits, by definition, that the length of time required for an "Actor" to settle is fundamentally unknowable, and therefore it cannot be proven, within the model, that it does not take an uncomputably long period of time. "Infinite computational steps" models In order to work correctly, certain computations by the machines below literally require infinite, rather than merely unlimited but finite, physical space and resources; in contrast, with a Turing machine, any given computation that halts will require only finite physical space and resources. A Turing machine that can complete infinitely many steps in finite time, a feat known as a supertask. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1+½+¼+... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically undefined outside the half-open interval [0, 2) and thus undefined exactly at 2 minutes after the beginning of the computation. It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so, since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. According to a 1992 paper, a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations for an observer inside the black hole. Access to a CTC may allow the rapid solution to PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable. Quantum models Some scholars conjecture that a quantum mechanical system which somehow uses an infinite superposition of states could compute a non-computable function. 
This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space). "Eventually correct" systems Some physically realizable systems will always eventually converge to the correct answer, but have the defect that they will often output an incorrect answer and stick with the incorrect answer for an uncomputably large period of time before eventually going back and correcting the mistake. In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals" and "trial-and-error predicates", respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)" L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem" studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines." A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. The "Monotone Turing machines" traditionally used in description size theory cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence. 
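This "learning in the limit" behaviour can be made concrete with a small sketch (hypothetical code, not drawn from any of the works cited above): a trial-and-error guess at whether a program halts, which is allowed to revise itself. The sequence of guesses always stabilizes to the correct answer, yet no algorithm can announce, for an arbitrary program, when stabilization has occurred.

```python
from typing import Callable, Iterator

def guess_halts(program: Callable[[], Iterator[None]], stage: int) -> bool:
    """Stage-'stage' guess of a trial-and-error (limiting-recursive) predicate:
    run the program for at most 'stage' steps and guess that it halts
    iff it has already halted. The guesses over increasing stages stabilize
    to the true answer, but no halting procedure can certify when."""
    steps = program()
    for _ in range(stage):
        try:
            next(steps)          # advance the simulated program by one step
        except StopIteration:
            return True          # it halted within 'stage' steps
    return False                 # not halted yet, so the current guess is "no"

# Two toy "programs" represented as generators of computation steps.
def halts_after_five() -> Iterator[None]:
    for _ in range(5):
        yield

def loops_forever() -> Iterator[None]:
    while True:
        yield

# Guesses for the first program change from False to True and then stay True;
# guesses for the second stay False forever -- correct only "in the limit".
print([guess_halts(halts_after_five, t) for t in range(8)])
print([guess_halts(loops_forever, t) for t in range(8)])
```

Positing the most recent guess, in Putnam's sense, yields only finitely many mistakes for the first program and none for the second, but at no finite stage is the answer certified.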
Analysis of capabilities Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Δ⁰₂ predicates. Criticism Martin Davis, in his writings on hypercomputation, refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above. In his argument, he makes a remark that all of hypercomputation is little more than: "if non-computable inputs are permitted, then non-computable outputs are attainable." See also Computation Digital physics Supertask References Further reading Mario Antoine Aoun, "Advances in Three Hypercomputation Models", (2016) L. Blum, F. Cucker, M. Shub, S. Smale, Complexity and Real Computation, Springer-Verlag 1997. General development of complexity theory for abstract machines that compute on real numbers instead of bits. Burgin, M. S. (1983) Inductive Turing Machines, Notices of the Academy of Sciences of the USSR, v. 270, No. 6, pp. 1289–1293 Keith Douglas. Super-Turing Computation: a Case Study Analysis (PDF), M.S. Thesis, Carnegie Mellon University, 2003. Mark Burgin (2005), Super-recursive algorithms, Monographs in computer science, Springer. Cockshott, P. and Michaelson, G. Are there new Models of Computation? Reply to Wegner and Eberbach, The Computer Journal, 2007 Copeland, J. (2002) Hypercomputation, Minds and machines, v. 12, pp. 461–502 Davis, Martin (2006), "The Church–Turing Thesis: Consensus and opposition". Proceedings, Computability in Europe 2006. Lecture Notes in Computer Science, 3988, pp. 125–132 Hagar, A. and Korolev, A., Quantum Hypercomputation—Hype or Computation?, (2007) Ord, Toby. Hypercomputation: Computing more than the Turing machine can compute: A survey article on various forms of hypercomputation. Piccinini, Gualtiero: Computation in Physical Systems Putz, Volkmar and Karl Svozil, Can a computer be "pushed" to perform faster-than-light?, (2010) Rogers, H. (1987) Theory of Recursive Functions and Effective Computability, MIT Press, Cambridge Massachusetts Mike Stannett, The case for hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 8–24, Special Issue on Hypercomputation Syropoulos, Apostolos (2008), Hypercomputation: Computing Beyond the Church–Turing Barrier (preview), Springer. External links Hypercomputation Research Network Theory of computation
2706016
https://en.wikipedia.org/wiki/Linienzugbeeinflussung
Linienzugbeeinflussung
Linienzugbeeinflussung (or LZB) is a cab signalling and train protection system used on selected German and Austrian railway lines as well as on the AVE and some commuter rail lines in Spain. The system was mandatory where trains were allowed to exceed speeds of in Germany and in Spain. It is also used on some slower railway and urban rapid transit lines to increase capacity. The German Linienzugbeeinflussung translates to continuous train control, literally: linear train influencing. It is also called linienförmige Zugbeeinflussung. LZB is deprecated and is to be replaced by the European Train Control System (ETCS) between 2023 and 2030. It is listed by the European Union Agency for Railways (ERA) as a Class B train protection system for National Train Control (NTC). In driving vehicles, the classic control logic mostly has to be replaced by ETCS Onboard Units (OBU) with a common Driver Machine Interface (DMI). Because high-performance trains are often neither scrapped nor relegated to secondary lines, special Specific Transmission Modules (STM) for LZB were developed so that such vehicles can continue to be supported on existing LZB installations. Overview In Germany the standard distance from a distant signal to its home signal is . On a train with strong brakes, this is the braking distance from 160 km/h. In the 1960s Germany evaluated various options to increase speeds, including increasing the distance between distant and home signals, and cab signalling. Increasing the distance between the home and distant signals would decrease capacity. Adding another aspect would make the signals harder to recognize. In either case, changes to the conventional signals wouldn't solve the problem of the difficulty of seeing and reacting to the signals at higher speeds. To overcome these problems, Germany chose to develop continuous cab signalling. The LZB cab signalling system was first demonstrated in 1965, enabling daily trains at the International Transport Exhibition in Munich to run at 200 km/h. The system was further developed throughout the 1970s, then released on various lines in Germany in the early 1980s and on German, Spanish, and Austrian high-speed lines in the 1990s with trains running up to . Meanwhile, additional capabilities were built into the system. LZB consists of equipment on the line as well as on the trains. A 30–40 km segment of track is controlled by an LZB control centre. The control centre computer receives information about occupied blocks from track circuits or axle counters, and about locked routes from interlockings. It is programmed with the track configuration, including the location of points, turnouts, gradients, and curve speed limits. With this, it has sufficient information to calculate how far each train may proceed and at what speed. The control centre communicates with the train using two conductor cables that run between the tracks and are crossed every 100 m. The control centre sends data packets, known as telegrams, to the vehicle which give it its movement authority (how far it can proceed and at what speed), and the vehicle sends back data packets indicating its configuration, braking capabilities, speed, and position. 
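As a rough sketch of this exchange (the field names below are invented for illustration and are far simpler than the actual telegram layout described under "Operation"), the control centre can be thought of as sending a movement authority and receiving a position and speed report in return:

```python
from dataclasses import dataclass

@dataclass
class CallTelegram:
    """Simplified stand-in for an LZB call telegram (movement authority)."""
    target_distance_m: int    # how far the train may proceed
    target_speed_kmh: int     # speed allowed at the end of that distance
    permitted_speed_kmh: int  # speed allowed right now

@dataclass
class ResponseTelegram:
    """Simplified stand-in for a vehicle response telegram."""
    position_m: int           # position reported back to the control centre
    speed_kmh: int            # current speed
    braking_type: str         # "passenger" or "freight"

def on_board_reply(authority: CallTelegram, position_m: int, speed_kmh: int) -> ResponseTelegram:
    # The on-board unit would show authority.target_* to the driver and then
    # report the train's own state so the centre can update the authority.
    return ResponseTelegram(position_m, speed_kmh, braking_type="passenger")

print(on_board_reply(CallTelegram(4000, 0, 160), position_m=12500, speed_kmh=158))
```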
The train's on-board computer processes the packets and displays the following information to the driver: Current speed: locally derived from speed sensing equipment - shown with a standard speedometer Permitted speed: maximum allowed speed now - shown with a red line or triangle on the outside of the speedometer Target speed: maximum speed at a certain distance - shown with LED numbers at the bottom of the speedometer Target distance: distance for target speed - shown with LED bars showing up to 4000 m, with numbers for longer distances If there is a long distance free in front of the train the driver will see the target speed and permitted speed equal to the maximum line speed, with the distance showing the maximum distance, between 4 km and 13.2 km depending on the unit, train, and line. As the train approaches a speed restriction, such as one for a curve or turnout, LZB will sound a buzzer and display the distance to and speed of the restriction. As the train continues the target distance will decrease. As the train nears the speed restriction the permitted speed will start to decrease, ending up at the target speed at the restriction. At that point the display will change to the next target. The LZB system treats a red signal or the beginning of a block containing a train as a speed restriction of 0 speed. The driver will see the same sequence as approaching a speed restriction except the target speed is 0. LZB includes Automatic Train Protection. If the driver exceeds the permitted speed plus a margin LZB will activate the buzzer and an overspeed light. If the driver fails to slow the train the LZB system can apply the brakes itself, bringing the train to a halt if necessary. LZB also includes an Automatic Train Operation system known as AFB (Automatische Fahr- und Bremssteuerung, automatic driving and braking control), which enables the driver to let the computer drive the train on auto-pilot, automatically driving at the maximum speed currently allowed by the LZB. In this mode, the driver only monitors the train and watches for unexpected obstacles on the tracks. Finally, the LZB vehicle system includes the conventional Indusi (or PZB) train protection system for use on lines that aren't LZB equipped. History Choice of cab signalling In the 1960s the German railways wanted to increase the speeds of some of their railway lines. One issue in doing so is signalling. German signals are placed too close to allow high-speed trains to stop between them, and signals may be difficult for train drivers to see at high speeds. Germany uses distant signals placed before the main signal. Trains with conventional brakes, decelerating at , can stop from in that distance. Trains with strong brakes, usually including electromagnetic track brakes, decelerating at can stop from and are allowed to travel that speed. However, even with strong brakes and the same deceleration, a train traveling would require to stop, exceeding the signalling distance. Furthermore, as energy dissipated at a given acceleration increases with speed, higher speeds may require lower decelerations to avoid overheating the brakes, further increasing the distance. One possibility to increase speed would be to increase the distance between the main and distant signal. But, this would require longer blocks, which would decrease line capacity for slower trains. Another solution would be to introduce multiple aspect signalling. 
A train traveling at would see a "slow to 160" signal in the first block, and then a stop signal in the 2nd block. Introducing multi-aspect signalling would require substantial reworking for the existing lines, as additional distant signals would need to be added onto long blocks and the signals reworked on shorter ones. In addition, it wouldn't solve the other problem with high-speed operation, the difficulty of seeing signals as a train rushes past, especially in marginal conditions such as rain, snow, and fog. Cab signalling solves these problems. For existing lines it can be added on top of the existing signalling system with little, if any, modifications to the existing system. Bringing the signals inside the cab makes it easy for the driver to see them. On top of these, the LZB cab signalling system has other advantages: The driver is immediately aware of signalling changes. This allows a driver to stop slowing down if a signal at the end of a block improves, saving energy and time. It also allows the control center to instantly signal stop in the case of dangerous conditions such as a derailment or avalanche. The driver can electronically "see" a long distance (up to 13 km) down the track, allowing him or her to drive the train more smoothly. A train following a slower train can "see" the slower train well in advance, coasting or using regenerative braking to slow and thereby saving energy. It can signal a variety of speeds. (Conventional German signals in the 1960s could only signal for turnouts. Modern conventional German signals can signal any increment, but LZB can signal even finer increments.) It allows the track to be divided up into a large number of small blocks if necessary to increase capacity. It enables a more capable Automatic Train Protection system. It enables the AFB Automatic Train Operation system. Given all of these advantages, in the 1960s the German railways chose to go with LZB cab signalling instead of increasing the signal spacing or adding aspects. Development The first prototype system was developed by German Federal Railways in conjunction with Siemens and tested in 1963. It was installed in Class 103 locomotives and presented in 1965 with runs on trains to the International Exhibition in Munich. From this Siemens developed the LZB 100 system and introduced it on the Munich-Augsburg-Donauwörth and Hanover-Celle-Uelzen lines, all in Class 103 locomotives. The system was overlaid on the existing signal system. All trains would obey the standard signals, but LZB equipped trains could run faster than normal as long as the track was clear ahead for a sufficient distance. LZB 100 could display up to in advance. The original installations were all hard-wired logic. However, as the 1970s progressed Standard Elektrik Lorenz (SEL) developed the computer based LZB L72 central controllers and equipped other lines with them. By the late 1970s, with the development of microprocessors, the 2-out-of-3 computers could be applied to on-board equipment. Siemens and SEL jointly developed the LZB 80 on-board system and equipped all locomotives and trains that travel over plus some heavy haul locomotives. By 1991, Germany replaced all LZB 100 equipment with LZB 80/L 72. When Germany built its high-speed lines, beginning with the Fulda-Würzburg segment that started operation in 1988, it incorporated LZB into the lines. 
The lines were divided into blocks about long, but instead of having a signal for every block, there are only fixed signals at switches and stations, with approximately between them. If there was no train for the entire distance the entry signal would be green. If the first block was occupied it would be red as usual. Otherwise, if the first block was free and an LZB train approached, the signal would be dark and the train would proceed on LZB indications alone. The system has spread to other countries. The Spanish equipped their first high-speed line, operating at , with LZB. It opened in 1992 and connects Madrid, Cordoba, and Seville. In 1987 the Austrian railways introduced LZB into their systems, and with the 23 May 1993 timetable change introduced EuroCity trains running on a -long section of the Westbahn between Linz and Wels. Siemens continued to develop the system, with "Computer Integrated Railroading", or "CIR ELKE", lineside equipment in 1999. This permitted shorter blocks and allowed speed restrictions for switches to start at the switch instead of at a block boundary. See CIR ELKE below for details. Development timeline Line equipment Cable loops The LZB control centre communicates with the train using conductor cable loops. Loops can be as short as 50 meters long, as used at the entrance and exit to LZB controlled track, or as long as . Where the loops are longer than they are crossed every . At the crossing the signal phase angle is changed by 180°, reducing electrical interference between the track and the train as well as long-distance radiation of the signal. The train detects this crossing and uses it to help determine its position. Longer loops are generally fed from the middle rather than an end. One disadvantage of very long loops is that any break in the cable will disable LZB transmission for the entire section, up to . Thus, newer LZB installations, including all high-speed lines, break the cable loops into physical cables. Each cable is fed from a repeater, and all of the cables in a section will transmit the same information. LZB route centre (central controller) The core of the LZB route centre, or central controller, consists of a 2-of-3 computer system with two computers connected to the outputs and an extra for standby. Each computer has its own power supply and is in its own frame. All 3 computers receive and process inputs and interchange their outputs and important intermediate results. If one disagrees it is disabled and the standby computer takes its place. The computers are programmed with fixed information from the route such as speed limits, gradients, and the location of block boundaries, switches, and signals. They are linked by LAN or cables to the interlocking system from which they receive indications of switch positions, signal indications, and track circuit or axle counter occupancy. Finally, the route centre's computers communicate with controlled trains via the cable loops previously described. 
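The 2-out-of-3 arrangement in the route centre can be illustrated with a minimal sketch (assumed, simplified logic; the real equipment also manages the standby computer, the power supplies, and safe shutdown):

```python
def vote_2oo3(a, b, c):
    """Majority vote over three redundant channels. Returns the agreed output
    and the index of a dissenting channel (which would be disabled and replaced
    by the standby computer), or None if all three channels agree."""
    if a == b == c:
        return a, None
    if a == b:
        return a, 2
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    raise RuntimeError("no two channels agree - fail safe and stop output")

# The third channel disagrees, so its result is outvoted and that channel
# would be swapped out for the standby computer.
print(vote_2oo3("proceed 4000 m at 160 km/h", "proceed 4000 m at 160 km/h", "stop"))
```

Only the agreed output reaches the transmission side, which is what allows a single faulty computer to be dropped without interrupting operation.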
Other equipment Repeaters: Repeaters connect individual long loop sections to the primary communication links, strengthening the signal from the route centre and sending the vehicle responses. Fixed loops: Fixed loops, typically about long, are placed at the ends of the controlled section. They transmit fixed telegrams which allow entering trains to receive an address. Isolation cabinets: A long communication link will consist of multiple individual cables connected in "isolation cabinets", which serve to prevent the low-frequency voltage coupled in from the catenary from accumulating on the cable. Signs: Signs indicate the LZB block boundaries (if not at a signal) and the entrance to and exit from the LZB controlled area. Vehicle equipment The vehicle equipment in the original LZB 80 design consisted of: Computers: The on-board equipment centred around a 2-of-3 computer system. The original LZB 80 design used 8085 microprocessors programmed in assembly language. The programs were interrupt driven, with interrupts generated by a 70 ms clock, the track receivers and transmitters, the serial interface, and also within the program itself. Interrupts triggered comparison and output programs. Peripheral equipment was arranged around the computers with all interfaces electrically separated and all grounds tied to the cabinet frame, which was tied to the vehicle chassis. Redundant power supply: The computers and peripheral equipment were supplied with a redundant power supply based on two identical voltage transformers. Each was capable of supplying the power necessary for all of the equipment. They were normally alternately switched, but if one failed the other would take over. On-board batteries could also supply temporary power. Odometry: The vehicle speed and distance traveled are measured on two independent channels by two pulse generators mounted on different axles. Each is linked to a separate micro-controller based unit used to correct any inaccuracies. The central logic polls the two units as well as an accelerometer, compares the values and checks for plausibility. Receiver: Two pairs of receiving antennas are each fed to selective, self-regulating amplifiers whose output is fed to a demodulator and then a serial-parallel transformer. The received telegrams are then fed byte by byte to the central logic computer. The receivers also indicate transition points and whether the signal is present. Transmitter: The two outputting computers feed serial-parallel transformers. They are compared after conversion, and transmission is only allowed if they are identical. Only one signal is actually transmitted, with the transmitter transmitting the two signals at 56 kHz with the signals displaced by a 90° phase angle. Emergency brake connection: The computers are connected to the brake via a relay. A computer command or loss of current will release the air from the brake pipe, applying the emergency brake. Indusi horn connection: The horn signalling the driver is also connected by a relay. Serial interface: A serial interface is used to connect the rest of the components, including the driver inputs, display unit, logger, and the automatic drive and brake control (AFB) to the computers. Telegrams are transmitted cyclically both from and to the computers. Driver input unit: The driver inputs train-related data such as the type of braking (passenger/freight), braking potential, maximum train speed, and train length on the driver interface unit. This is then displayed to the driver to verify that it is correct. Modular cab display (MFA): The modular cab display shows the relevant speeds and distances to the driver as described in the overview. Automatic drive/brake control: When enabled by the driver, the automatic drive/brake control unit (AFB) will drive the train following the permitted speed. When not operating on an LZB equipped line, i.e. 
under Indusi operation, the AFB acts mainly as a "cruise control", driving according to the speed set by the driver. The equipment in newer trains is similar, although the details may vary. For example, some vehicles use radar rather than accelerometers to aid in their odometry. The number of antennas may vary by vehicle. Finally, some newer vehicles use a full-screen computer generated "Man-machine interface" (MMI) display rather than the separate dials of the "Modular cab display" (MFA). Operation Telegrams LZB operates by exchanging telegrams between the central controller and the trains. The central controller transmits a "call telegram" using Frequency-shift keying (FSK) signalling at 1200 bits per second at 36 kHz ± 0.4 kHz. The train replies with a "response telegram" at 600 bits per second at 56 kHz ± 0.2 kHz. Call telegram format Call telegrams are 83.5 bits long: Start sequence: Synchronization: 5.5 bits, Start element + Barker code: 3 bits Address: Section ID: A-E, A1-A3, Location: 1-127 or 255-128 Vehicle information: Travel direction: up/down, Braking type: passenger/freight, Brake curve number: 1-10, A-B Braking information: Distance to brake application: Nominal distance XG: , Target information, Distance: , Speed: Display information, Signal information: 3 bits, Additional information: 5 bits Auxiliary information: Group identity: 1-4 - Indicates response type required, Line identity: new high-speed/normal main lines, Central controller type: LZB 100/72 Cyclic redundancy check (CRC): 8 bits One might note that there is no "train identification" field in the telegram. Instead, a train is identified by position. See Zones and Addressing for more details. Response telegram format There are 4 types of response telegrams, each 41 bits long. The exact type of telegram a train sends depends on the "Group identity" in the call telegram. The most common type of telegram is type 1, which is used to signal a train's position and speed to the central controller. It contains the following fields: Synchronization and start sequence: 6 bits Group identity: 1-4 - Indicates response type Vehicle location acknowledgement: number of zones advanced = ±0, ±1, ±2 Location within zone: (in increments) Braking type: passenger/freight Brake curve number: 16 possible brake curves Actual speed: Operational and diagnostic information: 5 bits Cyclic redundancy check (CRC): 7 bits The other telegrams are used primarily when a train enters the LZB controlled section. They all start with the same synchronization and start sequence and a "group identity" to identify the telegram type, and end with the CRC. Their data fields vary as follows: Type 2: Vehicle location acknowledgement, location within zone, braking type, brake curve number, maximum train speed, train length Type 3: Railway, train number Type 4: Locomotive/train series, serial number, train length Entry into LZB, zones and addressing Before entering an LZB controlled section the driver must enable the train by entering the required information on the Driver Input Unit and enabling LZB. When enabled the train will light a "B" light. A controlled section of track is divided into up to 127 zones, each long. The zones are consecutively numbered, counting up from 1 in one direction and down from 255 in the opposite. When a train enters an LZB controlled section of track, it will normally pass over a fixed loop that transmits a "change of section identification" (BKW) telegram. 
This telegram indicates to the train the section identification number as well as the starting zone, either 1 or 255. The train sends back an acknowledgement telegram. At that time the LZB indications are switched on, including the "Ü" light to indicate that LZB is running. From that point on the train's location is used to identify a train. When a train enters a new zone it sends a response telegram with the "vehicle location acknowledgement" field indicating that it has advanced into a new zone. The central controller will then use the new zone when addressing the train in the future. Thus a train's address will gradually increase or decrease, depending on its direction, as it travels along the track. A train identifies that it has entered a new zone by either detecting the cable transposition point in the cable or when it has traveled . A train can miss detecting up to 3 transposition points and still remain under LZB control. The procedure for entering LZB controlled track is repeated when a train transitions from one controlled section to another. The train receives a new "change of section identification" telegram and gets a new address. Until the train knows its address it will ignore any telegrams received. Thus, if a train doesn't properly enter into the controlled section it won't be under LZB control until the next section. Speed signalling The main task of LZB is signalling to the train the speed and distance it is allowed to travel. It does this by transmitting periodic call telegrams to each train one to five times per second, depending on the number of trains present. Four fields in the call telegram are particularly relevant: Target distance. Target speed. Nominal stopping distance, known as "XG" (See below). Distance to brake application point. The target speed and location are used to display the target speed and distance to the driver. The train's permitted speed is calculated using the train's braking curve, which can vary by train type, and the XG location, which is the distance from the start of the zone that is used to address the train. If the train is approaching a red signal or the beginning of an occupied block the location will match the location of the signal or block boundary. The on-board equipment will calculate the permitted speed at any point so that the train, decelerating at the deceleration indicated by its braking curve, will stop by the stopping point. A train will have a parabolic braking curve as follows: permitted speed = √(2 × decel × (XG − dist)), where: decel = deceleration, dist = distance from beginning of zone, XG = nominal stopping distance from the beginning of the zone. Where a train is approaching a speed restriction the control centre will transmit a packet with an XG location set to a point behind the speed restriction such that a train, decelerating based on its braking curve, will arrive at the correct speed at the start of the speed restriction. This, as well as deceleration to zero speed, is illustrated with the green line in the "Permitted and supervised speed calculation" figure. The red line in the figure shows the "monitoring speed", which is the speed which, if exceeded, will cause the train to automatically apply the emergency brakes. When running at constant speed this is above the permitted speed for transited emergency braking (until speed is reduced) or above the permitted speed for continuous emergency braking. When approaching a stopping point, the monitoring speed follows a braking curve similar to the permitted speed, but with a higher deceleration, which will bring it to zero at the stopping point. 
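A minimal sketch of this permitted-speed calculation (illustrative only; the supervision margins, rounding rules, and exact braking-curve families used by the real on-board equipment are not reproduced here):

```python
import math

def permitted_speed_kmh(decel_ms2: float, xg_m: float, dist_m: float,
                        line_speed_kmh: float) -> float:
    """Speed allowed at 'dist_m' metres from the start of the addressing zone,
    on a parabolic braking curve that reaches zero at the stopping point
    'xg_m', capped at the line speed."""
    remaining_m = max(xg_m - dist_m, 0.0)
    v_ms = math.sqrt(2.0 * decel_ms2 * remaining_m)
    return min(v_ms * 3.6, line_speed_kmh)

# Example: a 0.5 m/s^2 braking curve toward a stopping point 3000 m into the
# zone on a 160 km/h line.
for d in (0, 1000, 2000, 2750, 3000):
    print(f"{d:4d} m -> {permitted_speed_kmh(0.5, 3000, d, 160):5.1f} km/h")
```

The monitoring curve would be computed the same way with a higher deceleration value, so that it stays above the permitted curve while still reaching zero at the stopping point.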
When approaching a speed restriction, the monitoring speed braking curve intersects the speed restriction point at above the constant speed. Deceleration rates are more conservative with LZB than with conventional German signalling. A typical passenger train braking curve might have a "permitted speed" deceleration of and a "monitoring speed" deceleration of 42% higher than the deceleration for the permitted speed, but lower than the required to stop from in used in conventional signaling. The ICE3, which has a full service braking deceleration of below , dropping to by , has a LZB target speed deceleration of only to , between , and at higher speeds. In between the permitted speed and monitoring speed is a warning speed, normally above the permitted speed. If the train exceeds that speed LZB will flash the "G" light on the train's display and sound a horn. Leaving LZB About before the end of the LZB controlled section the central controller will send a telegram to announce the end of LZB control. The train will flash the "ENDE" light which the driver must acknowledge within 10 seconds. The display will normally give the distance and target speed at the end of the controlled section, which will depend on the signal at that point. When the train reaches the end of LZB control the "Ü" and "ENDE" lights go off and the conventional Indusi (or PZB) system takes over automatic train protection. Special operating modes Special conditions not covered by the full LZB system or failures can put LZB into one of the special operating modes. Crossover to opposite track As a train approaches a crossover to a normally opposite direction track the display will flash the "E/40" light. The driver confirms the indication and the permitted speed drops following the braking curve to . When the crossover section is reached the displays are switched off and the driver can proceed through the crossover at . Drive by sight signal German signalling systems have a "drive by sight" signal that consists of 3 white lights forming a triangle with one light at the top. This signal, labeled "Zs 101", is placed with a fixed line side signal and, when lighted, permits the driver to pass a fixed red or defective signal and drive by sight to the end of the interlocking no faster than . When approaching such a signal in LZB territory the "E/40" light will be lit until before the signal, then the "E/40" will go dark and "V40" will flash. The "V40" signal indicates the ability to drive by sight. Transmission failure If data exchange is interrupted, the trains distance measurement system fails, or the train fails to detect 4 or more cable transposition points the LZB system will go into a failure state. It will light the "Stör" indicator and then flash "Ü". The driver must acknowledge the indications within 10 seconds. The driver must slow the train to no more than or lower; the exact speed depends on the backup signalling system in place. Extensions CIR ELKE-I CIR-ELKE is an improvement on the basic LZB system. It uses the same physical interface and packets as standard LZB but upgrades its software, adding capabilities and modifying some procedures. It is designed to increase line capacity by up to 40% and to further shorten travel times. The name is an abbreviation of the English/German project title Computer Integrated Railroading - Erhöhung der Leistungsfähigkeit im Kernnetz der Eisenbahn (Computer Integrated Railroading - Increase Capacity in the Core Railway Network). 
Being an extension of LZB it is also called LZB-CIR-ELKE further abbreviated into LZB-CE. CIR-ELKE includes the following improvements: Shorter blocks - CIR-ELKE blocks can be as short as , or even shorter for S-Bahn systems. The Munich S-Bahn system has blocks as short as at the beginning of the platform, allowing a train to pull into the platform as another is leaving and making it capable of running 30 trains per hour. Speed changes at any location - The standard LZB system required that speed restrictions start at block boundaries. With CIR-ELKE speed restrictions can start at any point, such as at a turnout. This means a train doesn't have to slow down as soon, increasing average speeds. Telegram evaluation changes - In order to increase safety on a system with shorter intervals between trains CIR-ELKE sends identical telegrams twice. The train will only act on a telegram if it receives two identical valid telegrams. In order to compensate for the increase in the number of telegrams CIR-ELKE sends telegrams to non-moving trains less frequently. CIR ELKE-II The original LZB system was designed for permitted speeds up to and gradients up to 1.25%. The Cologne–Frankfurt high-speed rail line was designed for operation and has 4% gradients; thus, it needed a new version of LZB, and CIR ELKE-II was developed for this line. CIR ELKE-II has the following features: Maximum speed of . Support for braking curves with higher decelerations and curves taking into account the actual altitude profile of the distance ahead instead of assuming the maximum down slope of the section. This makes operation on 4% gradients practical. Support for target distances of up to to a stopping or speed restriction point. If there is no such point within that distance the system will display a target distance of and a target speed of the line speed. Support for enabling the Eddy current brake of the ICE3 trains. By default, the eddy current brake is enabled for emergency braking only. With CE2 it is possible to enable it for service braking, too. Signalling voltage or phase changes. Audible warning signals 8 seconds before the point of braking, or 4 seconds for the Munich S-Bahn, instead of before or with a speed difference done previously. Malfunctions The LZB system has been quite safe and reliable; so much so that there have been no collisions on LZB equipped lines because of the failure of the LZB system. However, there have been some malfunctions that could have potentially resulted in accidents. They are: On 29 June 1991, after a disturbance, the train driver had the LZB system off and passed a stop signal with two trains in the tunnel at Jühnde on the Hanover-Würzburg high-speed line. On 29 June 2001, there was nearly a serious accident at the Oschatz crossover on the Leipzig-Dresden railway line. The crossover was set to diverging with a speed limit but the LZB system displayed a limit. The driver of ICE 1652 recognized the diverging signal and managed to slow down to before the crossing and the train did not derail. A software error in the LZB computer was suspected as the cause. A similar near-accident occurred on 17 November 2001 in Bienenbüttel on the Hamburg-Hanover rail line. In order to pass a failed freight train an ICE train crossed over to the opposite track going through a crossover that was rated at . The suspected cause was the faulty execution of a change to the interlocking system where the crossover speed was increased from . 
Without that speed restriction the LZB system did continue to show the pass-through line speed on the in-cab display - the train driver applied the brakes on recognizing the line-side signal lights set to diverge and the train did not derail. On 9 April 2002 on the Hanover-Berlin high-speed rail line, a fault in the LZB line centre computer brought four LZB controlled trains to a stop with two trains in each line direction being halted in the same signalling block (Teilblockmodus - divided block control). When the computer was rebooted it signaled to the trains in front and to the following trains. The drivers of the following trains did not proceed however - one driver saw the train in front of him and the other driver double-checked with the operations center which had warned him prior to departure, so two possible collisions were averted. As a consequence of this incident, the two mainline train operators (DB Cargo and DB Passenger Transport) issued an instruction to their drivers to be especially cautious during periods of LZB outage when the system is running in divided block mode. The cause turned out to be a software error. Equipped lines DB (Germany) The following lines of Deutsche Bahn are equipped with LZB, allowing for speeds in excess of 160 km/h (providing the general suitability of the track): Augsburg - Dinkelscherben - Ulm (km 7.3 - km 28.5) Berlin - Nauen - Glöwen - Wittenberge - Hagenow Land - Rothenburgsort - Hamburg (km 16.5 - km 273.1) Bremen - Hamburg (km 253.9 - km 320.1) Dortmund - Hamm (Westf) - Bielefeld (except for the station of Hamm) Frankfurt am Main - Gelnhausen - Fulda (km 24.8 - km 40.3) Hannover - Stadthagen - Minden (km 4.4 - km 53.4) Hannover - Celle - Uelzen - Lüneburg - Hamburg (km 4.0 - km 166.5) Hannover - Göttingen - Kassel-Wilhelmshöhe - Fulda - Würzburg (km 4.2 - km 325.6) Karlsruhe - Achern - Offenburg - Kenzingen - Leutersberg - Weil am Rhein - Basel Bad. Bf. (km 102.2 - km 270.6) Köln - Aachen (km 1,9 - km 41,8) Köln - Düsseldorf - Duisburg (km 6.7 - km 37.3 and km 40.1 - km 62.2; Düsseldorf main station is not equipped) Köln - Troisdorf - Montabaur - Limburg a.d. Lahn - Frankfurt am Main (km 8.7 - km 172.6) Leipzig - Wurzen - Dresden (km 3.6 - km 59.5) Lengerich (Westf) - Münster (Westf) Lehrte - Stendal - Berlin-Spandau Mannheim - Karlsruhe Mannheim - Vaihingen an der Enz - Stuttgart (km 2.1 - km 99.5) München - Augsburg - Donauwörth (km 9,2 - km 56.3 and km 2.7 - km 39.8; Augsburg main station is not equipped) Nürnberg - Allersberg - Kinding - Ingolstadt-Nord (ABS: km 97.9 - km 91.6; NBS: km 9.0 - km 88.7) Nürnberg - Neustadt an der Aisch - Würzburg (km 34.8 - km 62.7) Osnabrück - Bremen (km 139.7 - km 232.0) Paderborn - Lippstadt - Soest - Hamm (Westf) (Strecke 1760: km 125.2 - km 180.8; Strecke 2930: km 111.5 - km 135.6) Zeppelinheim bei Frankfurt/Main - Mannheim Note: italics indicate the physical location of an LZB control center. ÖBB (Austria) The West railway (Vienna–Salzburg) is equipped with LZB in three sections: St. Pölten–Ybbs an der Donau (km 62.4–km 108.6) Amstetten–St. Valentin (km 125.9–km 165.0) Linz–Attnang-Puchheim (km 190.5–km 241.6) RENFE (Spain) Madrid - Córdoba - Sevilla (9 Centers / 480 km), operational since 1992. Since 2004, the terminus Madrid Atocha is also equipped with LZB. In November 2005, a branch line to Toledo was opened. (20 km). Cercanías Madrid line C5 from Humanes over Madrid Atocha to Móstoles-El Soto, operational since 1995. It is 45 km long with two LZB centres and 76 Series 446 vehicles. 
All of the Euskotren network with the exception of the Euskotren Tranbia tramways. United Kingdom A modified version of LZB is installed on the Chiltern Mainline as Chiltern ATP. Non-mainline uses In addition to mainline railways, versions of the LZB system are also used in suburban (S-Bahn) railways and subways. Dusseldorf, Duisburg, Krefeld, Mülheim an der Ruhr Tunnels in the Düsseldorf and Duisburg Stadtbahn (light rail) systems, and some tunnels of the Essen Stadtbahn around the Mülheim an der Ruhr area are equipped with LZB. Vienna (Wien) With the exception of line 6, the entire Vienna U-Bahn is equipped with LZB since it was first built and includes the capability of automatic driving with the operator monitoring the train. Munich The Munich U-Bahn was built with LZB control. During regular daytime operations the trains are automatically driven with the operator simply starting the train. Stationary signals remain dark during that time. In the evenings from 9:00 p.m. until end of service and on Sundays the operators drive the trains manually according to the stationary signals in order to remain in practice. There are plans to automate the placement and reversal of empty trains. The Munich S-Bahn uses LZB on its core mainline tunnel section (Stammstrecke). Nuremberg The Nuremberg U-Bahn U3 line uses LZB for fully automatic (driverless) operation. The system was jointly developed by Siemens and VAG Nuremberg and is the first system where driverless trains and conventional trains share a section of line. The existing, conventionally driven U2 line trains shares a segment with the automatic U3 line trains. Currently, an employee still accompanies the automatically driven trains, but later the trains will travel unaccompanied. After several years of delays, the final three-month test run was successfully completed on April 20, 2008, and the operating licence granted on April 30, 2008. A few days later the driverless trains started operating with passengers, first on Sundays and public holidays, then weekdays at peak hours, and finally after the morning rush hour which has a tight sequence of U2 trains. The official opening ceremony for the U3 line was held on June 14, 2008 in the presence of the Bavarian Prime Minister and Federal Minister of Transport, the regular operation began with the schedule change on 15 June 2008. The Nuremberg U-bahn plans to convert U2 to automatic operation in about a year. London The Docklands Light Railway in east London uses the SelTrac technology which was derived from LZB to run automated trains. The trains are accompanied by an employee who closes the doors and signals the train to start, but then is mainly dedicated to customer service and ticket control. In case of failure the train can be driven manually by the on train staff. See also Automatic Train Protection Train protection system European Train Control System References Train protection systems Railway signalling in Germany
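As a compact illustration of the telegram exchange and position-based addressing described in the Operation section above, the sketch below models a central controller that addresses a train by (section, zone) and advances that address using the "vehicle location acknowledgement" field of each response telegram. The field names follow the article, but the classes, units, and omission of bit-level packing and CRC handling are simplifications for illustration, not the real LZB wire format.

```python
from dataclasses import dataclass

@dataclass
class CallTelegram:
    """A few of the 83.5-bit call telegram fields (no real bit packing)."""
    section_id: str        # e.g. "A"-"E", "A1"-"A3"
    zone: int              # 1-127 counting up, or 255-128 counting down
    target_distance_m: float
    target_speed_kmh: float
    xg_distance_m: float   # nominal stopping distance XG

@dataclass
class ResponseTelegram:
    """A few of the 41-bit type 1 response telegram fields."""
    zones_advanced: int    # "vehicle location acknowledgement": 0, +/-1 or +/-2
    location_in_zone_m: float
    actual_speed_kmh: float

class CentralController:
    """Tracks a train purely by its (section, zone) address, as LZB does."""
    def __init__(self, section_id: str, start_zone: int):
        self.section_id = section_id
        self.zone = start_zone          # assigned on entry via the BKW telegram

    def make_call(self, target_distance_m, target_speed_kmh, xg_m):
        return CallTelegram(self.section_id, self.zone,
                            target_distance_m, target_speed_kmh, xg_m)

    def on_response(self, resp: ResponseTelegram):
        # The address moves with the train: shift the zone number by the
        # amount acknowledged in the response telegram.
        self.zone += resp.zones_advanced

# Usage: a train enters at zone 1, advances one zone, and is then
# addressed as zone 2 in subsequent call telegrams.
ctrl = CentralController("A", start_zone=1)
print(ctrl.make_call(8000.0, 160.0, 7500.0))
ctrl.on_response(ResponseTelegram(zones_advanced=1, location_in_zone_m=12.5,
                                  actual_speed_kmh=158.0))
print(ctrl.make_call(7900.0, 160.0, 7500.0))
```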
378989
https://en.wikipedia.org/wiki/Digital%20waveguide%20synthesis
Digital waveguide synthesis
Digital waveguide synthesis is the synthesis of audio using a digital waveguide. Digital waveguides are efficient computational models for physical media through which acoustic waves propagate. For this reason, digital waveguides constitute a major part of most modern physical modeling synthesizers. A lossless digital waveguide realizes the discrete form of d'Alembert's solution of the one-dimensional wave equation as the superposition of a right-going wave and a left-going wave, y(m, n) = y⁺(m − n) + y⁻(m + n), where y⁺ is the right-going wave and y⁻ is the left-going wave. It can be seen from this representation that sampling the function at a given point and time merely involves summing two delayed copies of its traveling waves. These traveling waves will reflect at boundaries such as the suspension points of vibrating strings or the open or closed ends of tubes. Hence the waves travel along closed loops. Digital waveguide models therefore comprise digital delay lines to represent the geometry of the waveguide which are closed by recursion, digital filters to represent the frequency-dependent losses and mild dispersion in the medium, and often non-linear elements. Losses incurred throughout the medium are generally consolidated so that they can be calculated once at the termination of a delay line, rather than many times throughout. Waveguides such as acoustic tubes are three-dimensional, but because their lengths are often much greater than their cross-sectional area, it is reasonable and computationally efficient to model them as one-dimensional waveguides. Membranes, as used in drums, may be modeled using two-dimensional waveguide meshes, and reverberation in three-dimensional spaces may be modeled using three-dimensional meshes. Vibraphone bars, bells, singing bowls and other sounding solids (also called idiophones) can be modeled by a related method called banded waveguides where multiple band-limited digital waveguide elements are used to model the strongly dispersive behavior of waves in solids. The term "digital waveguide synthesis" was coined by Julius O. Smith III who helped develop it and eventually filed the patent. It represents an extension of the Karplus–Strong algorithm. Stanford University owned the patent rights for digital waveguide synthesis and signed an agreement in 1989 to develop the technology with Yamaha; however, many of the early patents have now expired. An extension to DWG synthesis of strings made by Smith is commuted synthesis, wherein the excitation to the digital waveguide contains both string excitation and the body response of the instrument. This is possible because the digital waveguide is linear and makes it unnecessary to model the instrument body's resonances after synthesizing the string output, greatly reducing the number of computations required for a convincing resynthesis. Prototype waveguide software implementations were done by students of Smith in the Synthesis Toolkit (STK). The first musical use of digital waveguide synthesis was in the composition May All Your Children Be Acrobats (1981) by David A. Jaffe, followed by his Silicon Valley Breakdown (1982). Licensees Yamaha VL1 (1994) — expensive keyboard (about $10,000 USD) VL1m, VL7 (1994) — tone module and less expensive keyboard, respectively VP1 (prototype) (1994) VL70m (1996) — less expensive tone module EX5 (1999) — workstation keyboard that included a VL module PLG-100VL, PLG-150VL (1999) — plug-in cards for various Yamaha keyboards, tone modules, and the SWG-1000 high-end PC sound card. 
The MU100R rack-mount tone module included two PLG slots, pre-filled with a PLG-100VL and a PLG-100VH (Vocal Harmonizer). YMF-724, 744, 754, and 764 sound chips for inexpensive DS-XG PC sound cards and motherboards (the VL part only worked on Windows 95, 98, 98SE, and ME, and then only when using .VxD drivers, not .WDM). No longer made, presumably due to conflict with AC-97 and AC-99 sound card standards (which specify 'wavetables' (sample tables) based on Roland’s XG-competing GS sound system, which Sondius-XG [the means of integrating VL instruments and commands into an XG-compliant MIDI stream along with wavetable XG instruments and commands] cannot integrate with). The MIDI portion of such sound chips, when the VL was enabled, was functionally equivalent to an MU50 Level 1 XG tone module (minus certain digital effects) with greater polyphony (up to 64 simultaneous notes, compared to 32 for Level 1 XG) plus a VL70m (the VL adds an additional note of polyphony, or, rather, a VL solo note backed up by the up-to-64 notes of polyphony of the XG wavetable portion). The 724 only supported stereo out, while the others supported various four and more speaker setups. Yamaha’s own card using these was the WaveForce-128, but a number of licensees made very inexpensive YMF-724 sound cards that retailed for as low as $12 at the peak of the technology’s popularity. The MIDI synth portion (both XG and VL) of the YMF chips was actually just hardware assist to a mostly software synth that resided in the device driver (the XG wavetable samples, for instance, were in system RAM with the driver [and could be replaced or added to easily], not in ROM on the sound card). As such, the MIDI synth, especially with VL in active use, took considerably more CPU power than a truly hardware synth would use, but not as much as a pure software synth. Towards the end of their market period, YMF-724 cards could be had for as little as $12 USD brand new, making them by far the least expensive means of obtaining Sondius-XG CL digital waveguide technology. The DS-XG series also included the YMF-740, but it lacked the Sondius-XG VL waveguide synthesis module, yet was otherwise identical to the YMF-744. S-YXG100plus-VL Soft Synthesizer for PCs with any sound card (again, the VL part only worked on Windows 95, 98, 98SE, and ME: it emulated a .VxD MIDI device driver). Likewise equivalent to an MU50 (minus certain digital effects) plus VL70m. The non-VL version, S-YXG50, would work on any Windows OS, but had no physical modeling, and was just the MU50 XG wavetable emulator. This was basically the synth portion of the YMF chips implemented entirely in software without the hardware assist provided by the YMF chips. Required a somewhat more powerful CPU than the YMF chips did. Could also be used in conjunction with a YMF-equipped sound card or motherboard to provide up to 128 notes of XG wavetable polyphony and up to two VL instruments simultaneously on sufficiently powerful CPUs. S-YXG100plus-PolyVL SoftSynth for then-powerful PCs (e. g. 333+MHz Pentium III), capable of up to eight VL notes at once (all other Yamaha VL implementations except the original VL1 and VL1m were limited to one, and the VL1/1m could do two), in addition to up to 64 notes of XG wavetable from the MU50-emulating portion of the soft synth. Never sold in the US, but was sold in Japan. Presumably a much more powerful system could be done with today’s multi-GHz dual-core CPUs, but the technology appears to have been abandoned. 
Hypothetically could also be used with a YMF chipset system to combine their capabilities on sufficiently powerful CPUs. Korg Prophecy (1995) Z1, MOSS-TRI (1997) EXB-MOSS (2001) OASYS PCI (1999) OASYS (2005) with some modules, for instance the STR-1 plucked strings physical model Kronos (2011) same as OASYS Technics WSA1 (1995) PCM + resonator Seer Systems Creative WaveSynth (1996) for Creative Labs Sound Blaster AWE64. Reality (1997) - one of the earliest professional software synthesizer products by Dave Smith team Cakewalk Dimension Pro (2005) - software synthesizer for OS X and Windows XP. References Further reading Yamaha VL1. Virtual Acoustic Synthesizer, Sound on Sound, July 1994 Brian Heywood (22 Nov 2005) Model behaviour. The technology your PC uses to make sound is usually based on replaying an audio sample. Brian Heywood looks at alternatives., PC Pro External links Julius O. Smith III's ``A Basic Introduction to Digital Waveguide Synthesis" Waveguide Synthesis home page Virtual Acoustic Musical Instruments: Review and Update Modeling string sounds and wind instruments - Sound on Sound magazine, September 1998 Jordan Rudess playing on Korg Oasys Youtube recording. Note the use of the joystick to control the vibrato effect of the plucked strings physical model. Yamaha VL1 with breath controller vs. traditional synthesizer for wind instruments Sound synthesis types
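To make the delay-line description at the top of this article concrete, here is a minimal plucked-string digital waveguide sketched in Python: two delay lines carry the right- and left-going travelling waves, both terminations reflect with inversion, and a two-point average lumped at the "bridge" end stands in for the consolidated frequency-dependent losses (essentially the Karplus–Strong view of the same structure). The loop length, loss value, noise-burst excitation and pickup position are illustrative assumptions, not any licensee's actual algorithm.

```python
import random

def plucked_string(freq_hz=220.0, sample_rate=44100, seconds=1.0, loss=0.996):
    """Minimal two-delay-line (digital waveguide) plucked string model.

    `right` and `left` hold the right- and left-going travelling waves.
    Each sample, the wave arriving at the bridge is inverted, scaled and
    low-pass filtered (two-point average) before re-entering the left-going
    line; the nut reflects losslessly with inversion.  The output is the sum
    of both travelling waves at an interior pickup point.
    """
    n = max(2, int(sample_rate / (2.0 * freq_hz)))            # half the period, in samples
    right = [random.uniform(-1.0, 1.0) for _ in range(n)]     # noise-burst "pluck"
    left = [random.uniform(-1.0, 1.0) for _ in range(n)]
    pickup = n // 3
    out = []
    prev_bridge_in = 0.0
    for _ in range(int(sample_rate * seconds)):
        out.append(right[pickup] + left[pickup])              # superposition of both waves
        bridge_in = right[-1]                                 # wave arriving at the bridge
        bridge_out = -loss * 0.5 * (bridge_in + prev_bridge_in)   # lossy, low-passed reflection
        prev_bridge_in = bridge_in
        nut_out = -left[0]                                    # lossless reflection at the nut
        right = [nut_out] + right[:-1]                        # shift delay lines one sample
        left = left[1:] + [bridge_out]
    return out

samples = plucked_string()
print(len(samples), round(max(abs(s) for s in samples), 3))
```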
65189601
https://en.wikipedia.org/wiki/Checkmarx
Checkmarx
Checkmarx is a global software security company headquartered in Ramat Gan, Israel. The company was acquired in April 2020 by Hellman & Friedman, a global private equity firm with headquarters in San Francisco. Founded in 2006, Checkmarx integrates automated software security technologies into DevOps. Checkmarx provides static and interactive application security testing (SAST and IAST), software composition analysis (SCA), infrastructure as code security testing (KICS), and application security and training development (Codebashing). Checkmarx's research department is known for uncovering technical vulnerabilities in popular technologies, software, applications, and IoT devices. History Checkmarx was founded by Maty Siman, the company's CTO, and has over 700 employees. Emmanuel Benzaquen has served as the CEO since 2006. In 2017, Checkmarx acquired Codebashing, an application security training company. In 2018, Checkmarx acquired Custodela, a company that provides software security program development as well as consulting services. In November 2019, the company's security research team uncovered a number of vulnerabilities affecting Google and Samsung smartphones. The vulnerabilities allowed an attacker to take remote control of smartphone apps, giving them the ability to take photos, record video and conversations, and identify the phone’s location. The research team submitted a report to the Android security team at Google and continued to provide feedback as the vulnerabilities were addressed. In January 2020, Checkmarx detailed multiple security vulnerabilities with the Trifo Ironpie robot vacuum. The company has also uncovered issues with Amazon Alexa, Meetup, and Tinder, among others. In August 2021, Checkmarx acquired Dustico, a software that detects backdoors and malicious attacks in software supply chain. Funding Checkmarx's early investors include Salesforce, which remains a partner as Checkmarx provides security reviews for the Salesforce AppExchange. In 2015, U.S. private equity and venture capital firm Insight Partners acquired Checkmarx for $84 million. In April 2020, private equity firm Hellman & Friedman, alongside private investment firm TPG, acquired Checkmarx for $1.15 billion. After the acquisition, Insight Partners retained a minority interest in the company. See also Security testing References Software companies established in 2006 Software companies of Israel Computer security software Computer security software companies Static program analysis tools Software testing tools
40052720
https://en.wikipedia.org/wiki/Mixman
Mixman
Mixman Technologies, Inc. is an American interactive music company that develops computer software that allows for the creation and manipulation of music files. Founded by Josh Gabriel and Eric Almgren, Mixman launched in April 1994 and is headquartered in San Francisco. History Early development The original concept came from prototypes Gabriel developed while a student at the Institute of Sonology in the Netherlands. He had developed a system to control individual music loops and later a hardware configuration that involved projected light beams and sensors. A musician and computer programmer, he had long wanted to make composing and recording music accessible to the average person. After Gabriel and Almgren became partners in the early 1990s, they built a team to develop a hardware device prototype that could work with music cartridges much like video game cartridges. Within a year, the prototype was built and contained a hardware controller and music cartridges that held data for each song. The industrial designs of the hardware controller were designed by Scott Summit of Summit Industrial Design. Fundamental patents were also filed on the synchronization technology. The original mission was to create powerful but easy-to-use interactive music creation tools that enabled the user to make and perform music with the digital song elements of their favorite artist or music style in real time. In other words, users could edit and rebuild the raw ingredients of a song. Later, the company turned to developing software as well as publishing dance music. Products and technology In 1996, Mixman released the first interactive CD that allowed users to perform live with their PC with zero latency and auto-beat matching. After partnering with record companies in 1996 and publishing one music title, Mixman launched Mixman Studio in 1997 and Mixman StudioPro in 1998, which shifted Mixman from a consumer activity toward a more sophisticated production and creativity tool. Mixman products were sold in both retail consumer outlets like Best Buy, Comp USA, Fry’s and Circuit City, as well as in the Music Instrument channel such as Guitar Center and other Music Instrument Stores. Artist and label promotion Mixman developed an online version that in conjunction with its online community promoted independent and major label artists for Mixman’s label and artist clients. The online community included user services such as an online radio station for user uploads (similar to Soundcloud) and community services (similar to what became offered by MySpace). From 2000-2001, Mixman launched online remixing songs called Mixman eMixes with Brittney Spears, ‘N Sync, David Bowie, LL Cool J and forty other major artists. Mixman the software company Starting in 1996, Mixman developed and released several software SKUs to retail, licensed its software to OEMs and record companies and developed its online, self-publishing community. Mixman Studio Pro licensed software to several OEM brands including Sony, Creative Labs and Intel. Mixman also developed and licensed retail and online software in collaboration with major labels with notable titles such as George Clinton’s Greatest Funkin’ Hits (Capitol) and Tommy Boy’s Greatest Beats (Tommy Boy/Warners). Computer users were able to dissect and play with Clinton's classics by using Mixman software to keep the beats and mixes in sync. In 1999, Mixman released an apple Macintosh version. In 2000-2001, Mixman partnered with Mattel, Inc. 
for a hardware and software titled the DM2, which included a custom version of Mixman software, a USB controller manufactured by Mattel under its Apzu brand and creatively guided by Mixman who worked with Vice President of Mattel Steve Sucher and his teams. The DM2 included sounds from the Mixman sound library as well as some name brand artist content. A line of Sound Libraries titled Mixman Soundiscs were also available upon an upgrade to a more advances version of the software. Mattel distributed the DM2 in retail chains like Toys-R-Us and Target. Over 200,000 units were manufactured and distributed in retail. When Mixman reformed as an independent company in 2002, Mattel had ceased development of their Apzu teen line, of which DM2 was the first product, and sold to Digital Blue with Mixman’s approval. From 2002-2005, Mixman continued to develop and release new versions of software and two more SKU’s based on the hardware controller Mixman LoopStudio and Mixman MP3 Producer. The most advanced version of Mixman software is Mixman StudioXPro, released in 2003. In 2006, Mixman ended its relationship with Digital Blue, stopped manufacturing further hardware units and shifted entirely to internet distribution. Mixman continued to explore partnerships with the music industry and technology companies and focused on a range of licensing and brand partner deals including Pepsi Frito Lay, SCION, M&M Mars, Hershey’s, Heineken and others. In 2010, Mixman began developing a new strategy and business plan, and in 2011, Mixman entered a new partnership with Intel Corp. In January 2012 at CES in Las Vegas, Intel announced and presented a new version of Mixman that was exclusively available on the Intel AppUp Store. Notable optimizations were Intel chip architecture and consumer features like Rex file support and Export to Facebook. In 2012 and 2013, Mixman did two more deals with Intel and developed a new version of Mixman titled Mixman Loop10, specifically for the AAIO market with ten point touch capability and the ability to support 1, 2, 4 person playback. Since 2014, the founders have been developing a new model and vision for Mixman now focused on the EDM market that Gabriel has been performing in for over a decade. Venture financing, merger, IPO Early seed funding for the company was provided by Almgren, Roger Summit and Dick Asher. Dr. Roger Summit, founder and CEO of Dialog Corporation and electronic music enthusiast, was the first board member and business advisor to the company. Dick Asher, former CEO of Polygram Music and President of Columbia Records (now Sony), also joined the board and became an investor and consultant to the company to facilitate working relationships with major record labels. Mixman’s Series A preferred stock financing was led by Don McKinney, founder and CEO of International Networking Devices and previously with Sequoia Capital. In 1999, Mixman looked to raise a Series C venture financing in order to expand its internet presence and further development. Mixman was brought together with Beatnik, an internet audio company of similar size to Mixman, by the Mayfield Fund, led by Mayfield Partner Alan Morgan, with the idea that the Mixman application run on the Beatnik web audio platform. Mayfield agreed to fund the combined company with a $12M financing and to take the company public shortly thereafter. Tony Fadell, consulting for Mayfield, who later went on to create the iPod and iPhone, worked with Almgren to write the combined company business plan. 
Other investors included Zomba Music (later acquired by BMG), which also held a board seat. The combined company quickly grew to 140 people and began preparing its S1 SEC filing to go public. In March 2000, market conditions caused the company to postpone the IPO. During the IPO process, the company raised $35M mezzanine round of financing led by MTV. As the lead revenue generator, MTV continued to invest in, develop and market Mixman software and content and continue partnerships with OEM’s and music content companies. Leadership Gabriel left the company in mid 2000 to start his career as a recording artist/producer and formed Gabriel and Dresden. Almgren left the company in late 2000 to start Vivcom, Inc. (now VMark.tv), a video search company. Almgren remained on the Beatnik board. In late 2002, Mixman once again became a stand-alone company. Gabriel is the current CEO, and in 2018, Chris Scarborough joined the board as executive director. Scarborough spent his early career with Credit Suisse Tech and Media banking and then worked for media investment banking firm, Code Advisors, for which he helped finance and was a strategic advisor to Spotify for eight years. Accolades Mixman software won Keyboard magazine software awards, and Josh Gabriel won “Demo God” at the 1997 and 1998 DEMO conferences. Music education Mixman donates products and has assisted with programs with the Miracles Foundation, Zeum, Boys and Girls Clubs, and many other institutions in the U.S. and Canada. References External links Mixman Official site American companies established in 1994 Companies based in San Francisco
34204
https://en.wikipedia.org/wiki/XEmacs
XEmacs
XEmacs is a graphical- and console-based text editor which runs on almost any Unix-like operating system as well as Microsoft Windows. XEmacs is a fork, based on a version of GNU Emacs from the late 1980s. Any user can download, use, and modify XEmacs as free software available under the GNU General Public License version 2 or any later version. History Between 1987 and 1993 significant delays occurred in bringing out a new version of GNU Emacs (presumed to be version 19). In the late 1980s, Richard P. Gabriel's Lucid Inc. faced a requirement to ship Emacs to support the Energize C++ IDE. So Lucid recruited a team to improve and extend the code, with the intention that their new version, released in 1991, would form the basis of GNU Emacs version 19. However, they did not have time to wait for their changes to be accepted by the Free Software Foundation (FSF). Lucid continued developing and maintaining their version of Emacs, while the FSF released version 19 of GNU Emacs a year later, while merging some of the code and adapting some other parts. When Lucid went out of business in 1994, other developers picked up the code. Companies such as Sun Microsystems wanted to carry on shipping Lucid Emacs, but using the trademark had become legally ambiguous because no one knew who would eventually control the trademark "Lucid". Accordingly, the "X" in XEmacs represents a compromise among the parties involved in developing XEmacs. The "X" in XEmacs is thus not related to the X Window System. XEmacs has always supported text-based terminals and windowing systems other than X11. Installers can compile both XEmacs and GNU Emacs with and without X support. For a period of time XEmacs even had some terminal-specific features, such as coloring, that GNU Emacs lacked. The software community generally refers to GNU Emacs, XEmacs (and a number of other similar editors) collectively or individually as emacsen (by analogy with boxen) or as emacs, since they both take their inspiration from the original TECO Emacs. Features XEmacs has commands to manipulate words and paragraphs (deleting them, moving them, moving through them, and so forth), syntax highlighting for making source code easier to read, and "keyboard macros" for performing arbitrary batches of editing commands defined by the user. XEmacs has comprehensive online help, as well as five manuals available from the XEmacs website. XEmacs supports many human languages as well as editing-modes for many programming and markup-languages. XEmacs runs on many operating systems including Unix/Linux, BSDs and Mac OS X. Running on Mac OS requires X11; while development has on a native Carbon version. Two versions of XEmacs for the Microsoft Windows environment exist: a native installer and a Cygwin package. Users can reconfigure almost all of the functionality in the editor by using the Emacs Lisp language. Changes to the Lisp code do not require the user to restart or recompile the editor. Programmers have made available many pre-written Lisp extensions. Many packages exist to extend and supplement the capabilities of XEmacs. Users can either download them piecemeal through XEmacs' package manager or apply them in bulk using the xemacs-sumo package or "sumo tarballs". The package manager in XEmacs predates the ELPA package system used by GNU Emacs by almost a decade and is incompatible with it. Since XEmacs 21.1 functionality has been moved out of XEmacs core and made available separately as packages. 
This allows users to exclude packages they have no need for. XEmacs had a package manager for over a decade before GNU Emacs developed one, but XEmacs must be restarted before new packages are loaded. Development From the project's beginnings, the developers of XEmacs aimed to have a frequent release-cycle. They also aimed for more openness to experimentation, and XEmacs often offers new features before other emacsen—pioneering (for example) inline images, variable fonts and terminal coloring. Over the years, the developers have extensively rewritten the code in order to improve consistency and to follow modern programming conventions stressing data abstraction. XEmacs has a packaging system for independently maintained Lisp packages. The version has GTK+ support and a native Carbon port for Mac OS X. XEmacs has always had a very open development-environment, including anonymous CVS, later Mercurial access and publicly accessible development mailing-lists. XEmacs comes with a 500+ page internals manual (Wing, et al., 2004). Support for Unicode has become a problem for XEmacs. As of 2005, the released version depends on the unmaintained package called Mule-UCS to support Unicode, while the development branch of XEmacs has had robust native support for external Unicode encodings since May 2002, but the internal Mule character sets lack completeness, and development seems stalled as of September 2005. XEmacs development features three branches: stable, gamma, and beta, with beta getting new features first, but potentially having less testing, stability and security. The developers released version 20.0 on 9 February 1997, and version 21.0 on 12 July 1998. As of January 2009, the stable branch had reached version 21.4.22 and the beta branch version 21.5.28. No gamma releases exist . With the release of XEmacs 21.4.0, version numbers follow a scheme whereby an odd second number signals a development-version, and an even second number indicates a stable release. XEmacs and GNU Emacs Several of XEmacs's principal developers have published accounts of the split between XEmacs and GNU Emacs, for example, Stephen Turnbull's summary of the arguments from both sides. One of the main disagreements involves different views of copyright assignment. The FSF sees copyright assignment to the FSF as necessary to allow it to defend the code against GPL violations, while the XEmacs developers have argued that the lack of copyright assignment has allowed major companies to get involved, as sometimes companies can license their code but due to a cautious attitude concerning fiduciary duties to shareholders, companies may have trouble in getting permission to assign away code completely. The Free Software Foundation holds copyright of much of the XEmacs code because of prior copyright assignment during merge attempts and cross-development. Whether a piece of new XEmacs code enters GNU Emacs often depends on the willingness of that individual contributor to assign the code to the FSF. New features in either editor usually show up in the other sooner or later. Furthermore, many developers contribute to both projects. The XEmacs project has a policy of maintaining compatibility with the GNU Emacs API. For example, it provides a compatibility-layer implementing overlays via the native extent functionality. "XEmacs developers strive to keep their code compatible with GNU Emacs, especially on the Lisp level." 
As XEmacs development has slowed, XEmacs has incorporated much code from GNU Emacs, while GNU Emacs has implemented many formerly XEmacs-only features. This has led some users to proclaim XEmacs' death, advocating that its developers contribute to GNU Emacs instead. Many major packages, such as Gnus and Dired, were formerly developed to work with both, although the main developer of Gnus has announced his intention to move the Gnus tree into the main Emacs trunk and remove XEmacs compatibility code, citing other packages similarly dropping XEmacs support. In December 2015 project maintainer Stephen J. Turnbull posted a message to an XEmacs development list stating the project was "at a crossroads" in terms of future compatibility with GNU Emacs due to developer attrition and GNU Emacs' progress. Several options were laid out for future directions including ending development entirely, creating a new fork from the current version of GNU Emacs, or putting the project in maintenance mode in case someone wants to restart development in the future. This last option was the direction decided, with commitments from individual contributors to provide minimal support for the web site and development resources. See also List of Unix commands Comparison of text editors SXEmacs (a fork of XEmacs) References External links The XEmacs Project's website Downloadable XEmacs manuals Lucid Emacs history from the view of its original maintainer, Jamie Zawinski The History of XEmacs Concise XEmacs tutorial Printable XEmacs Reference Card (PDF) Cross-platform software Emacs Free software programmed in C Free text editors MacOS text editors Software forks Software using the GPL license Text editors that use GTK Discontinued development tools Unix text editors Windows text editors
27969888
https://en.wikipedia.org/wiki/Ole%20J%C3%B8rgen%20Anfindsen
Ole Jørgen Anfindsen
Ole Jørgen Anfindsen (born 18 March 1958) is a Norwegian computer scientist, author and social commentator. Anfindsen holds a doctorate in computer science from the University of Oslo. Using the research results from his doctoral thesis, he formed the company Xymphonic Systems to use the technology commercially. Later, he has been a senior research scientist at Telenor R&D, guest researcher at GTE Laboratories in Massachusetts and Sun Microsystems Laboratories in California, and adjunct associate professor at the University of Oslo. As a social commentator, Anfindsen focuses on the issues of immigration, race and intelligence. In relation to these topics, he is the operator of the website HonestThinking, writes opinion pieces in newspapers, and in 2010 authored the book Selvmordsparadigmet. Professional career Anfindsen obtained a Ph.D. in computer science from the University of Oslo in 1997. His doctoral thesis, delivered at the Department of Informatics, was titled Apotram: an application-oriented transaction model. Anfindsen's research dealt with so-called xymphonic transactions in databases, a generalization of the classical transaction model. The research results later led him to form the company Xymphonic Systems to use the technology commercially. He has since been a senior research scientist at Telenor R&D, guest researcher at GTE Laboratories in Massachusetts and Sun Microsystems Laboratories in California, and adjunct associate professor at the Institute of Informatics at the University of Oslo. Social commentary Anfindsen has been engaged in the debate about immigration, race and intelligence. In February 2005 Anfindsen started HonestThinking together with his brother Jens Tomas Anfindsen. Through the website and newspaper debates, Anfindsen puts the spotlight on the Norwegian immigration and integration policies. One of the site's main issues is the forecast that ethnic Norwegians may become a minority in Norway towards the end of the 21st century, possibly already by 2050. He has argued that the intelligence of immigrants in Norway could be critical for the future of the nation. He expressed worry over the low European birth rates, but held immigration to not be a sustainable solution. This was as it would lead to a multiethnic and racially mixed society. He argued that many scientists have pointed out that few things control a persons loyalty and preferences more than race, and that there likely are a limit to how many different languages and "loyalties" a society can contain until it breaks apart. Anfindsen uses a model which divides humans into three main races: one originating from East Asia, one from Europe and the Middle East, and one from Africa. According to him, East Asians have the highest IQ and Africans the lowest. He claims that for instance foreign aid may have been less successful because one has not taken different IQ levels into account. He has also claimed that there are ideological motives behind the denial of the existence of different races, and that the longer one hides "the truth", the more it will eventually pave the way for racism and worse. In 2010 Anfindsen released his book Selvmordsparadigmet – hvordan politisk korrekthet ødelegger samfunnet (lit. the "Suicide Paradigm – how political correctness destroys society"), for which he had been awarded a grant to write by the Freedom of Expression Foundation. According to his publishing house Koloritt Forlag, "seldom or never has such a radical criticism of society been published in Norway." 
Besides Anfindsen's writings, appendixes in the book have been written by Henry Harpending, Frank Salter, Roger Scruton and Fjordman. In 2020, Anfindsen converted to Islam. Publications (selection) Computer science The new SQL standards, Kjeller: Televerkets forskningsinstitutt, 1993. Application-oriented transaction management (med Mark Hornick), Kjeller: Televerkets forskningsinstitutt, 1994. The significance of SQL3, Kjeller: Televerkets forskningsinstitutt, 1994. Extended three and five value logics for reasoning in the presence of missing and unreliable information (med Dimitrios Georgakopolous og Ragnar Normann), Kjeller: Televerkets forskningsinstitutt, 1994. Supporting Cooperative Work in Multimedia Conferences by Means of Nested Databases, Proceedings of Norwegian Computer Science Conference—NIK '96, 1996, side 311-322. Cooperative Work Support in Engineering Environment by Means of Nested Databases, Proceedings of CEEDA, 1996, Poole, pp. 325–330. Apotram: an application-oriented transaction model, UNIK Center for Technology og Institutt for Informatikk (UiO), 1997 (2. edition). Java-databaser: en evaluering (w/ Asbjørn Danielsen), Kjeller: Telenor forskning og utvikling, 1999. Books Selvmordsparadigmet – hvordan politisk korrekthet ødelegger samfunnet, Koloritt, 2010. Fundamentalistiske favntak – om islamofobi, islamisme og andre typer religiøs eller sekulær fundamentalisme, Kolofon, 2015. References External links HonestThinking 1958 births Living people Norwegian computer scientists Race and intelligence controversy Norwegian Muslims Converts to Islam
24775031
https://en.wikipedia.org/wiki/Outline%20of%20software
Outline of software
The following outline is provided as an overview of and topical guide to software: Software – collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system. The term was coined to contrast to the old term hardware (meaning physical devices). In contrast to hardware, software "cannot be touched". Software is also sometimes used in a more narrow sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records. . What type of thing is software? Software can be described as all of the following: Technology Computer technology Tools Types of software Application software – end-user applications of computers such as word processors or video games, and ERP software for groups of users. Business software Computer-aided design Databases Decision-making software Educational software Emotion-sensitive software Image editing Industrial automation Mathematical software Medical software Molecular modeling software Quantum chemistry and solid state physics software Simulation software Spreadsheets Telecommunications (i.e., the Internet and everything that flows on it) Video editing software Video games Word processors Middleware controls and co-ordinates distributed systems. Programming languages – define the syntax and semantics of computer programs. For example, many mature banking applications were written in the language COBOL, invented in 1959. Newer applications are often written in more modern languages. System software – provides the basic functions for computer usage and helps run the computer hardware and system. It includes a combination of the following: Device driver Operating system Package management system Server Utility Window system Teachware – any special breed of software or other means of product dedicated to education purposes in software engineering and beyond in general education. Testware – any software for testing hardware or software. Firmware – low-level software often stored on electrically programmable memory devices. Firmware is given its name because it is treated like hardware and run ("executed") by other software programs. Firmware often is not accessible for change by other entities but the developers' enterprises. Shrinkware is the older name given to consumer-purchased software, because it was often sold in retail stores in a shrink wrapped box. Device drivers – control parts of computers such as disk drives, printers, CD drives, or computer monitors. Programming tools – assist a programmer in writing computer programs, and software using various programming languages in a more convenient way. The tools include: Compilers Debuggers Interpreters Linkers Text editors profiler Integrated development environment (IDE) – single application for managing all of these functions. 
Software products By publisher List of Adobe software List of Microsoft software By platform List of Macintosh software List of old Macintosh software List of proprietary software for Linux List of Linux audio software List of Linux games By type List of software categories List of 2D animation software List of 3D animation software List of 3D computer graphics software List of 3D modeling software List of antivirus software List of chess software List of compilers List of computer-aided design software List of computer algebra systems List of computer-assisted organic synthesis software List of computer simulation software List of concept- and mind-mapping software List of content management systems List of desktop publishing software List of discrete event simulation software List of finite element software packages List of graphing software List of HDL simulators List of text editors List of HTML editors List of information graphics software List of Linux distributions List of operating systems List of protein structure prediction software List of molecular graphics systems List of numerical analysis software List of optimization software List of PDF software List of PHP editors List of proof assistants List of quantum chemistry and solid state physics software List of spreadsheet software List of statistical packages List of theorem provers List of tools for static code analysis List of Unified Modeling Language tools List of video editing software List of web browsers Comparisons Comparison of 3D computer graphics software Comparison of accounting software Comparison of audio player software Comparison of computer-aided design editors Comparison of data modeling tools Comparison of database tools Comparison of desktop publishing software Comparison of digital audio editors Comparison of DOS operating systems Comparison of email clients Comparison of EM simulation software Comparison of force field implementations Comparison of instant messaging clients Comparison of issue tracking systems Comparison of Linux distributions Comparison of mail servers Comparison of network monitoring systems Comparison of nucleic acid simulation software Comparison of operating systems Comparison of raster graphics editors Comparison of software for molecular mechanics modeling Comparison of system dynamics software Comparison of text editors Comparison of vector graphics editors Comparison of web frameworks Comparison of web server software Comparison of word processors Comparison of deep-learning software History of software History of software engineering History of free and open-source software History of software configuration management History of programming languages Timeline of programming languages History of operating systems History of Mac OS X History of Microsoft Windows Timeline of Microsoft Windows History of the web browser Web browser history Software development Software development  (outline) – development of a software product, which entails computer programming (process of writing and maintaining the source code), but also encompasses a planned and structured process from the conception of the desired software to its final manifestation. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. 
Computer programming Computer programming  (outline) – Software engineering Software engineering  (outline) – Software distribution Software distribution – Software licenses Beerware Free Free and open source software Freely redistributable software Open-source software Proprietary software Public domain software Revenue models Adware Donationware Freemium Freeware Commercial software Nagware Postcardware Shareware Delivery methods Digital distribution List of mobile software distribution platforms On-premises software Pre-installed software Product bundling Software as a service Software plus services Scams Scareware Malware End of software life cycle Abandonware Software industry Software industry Software publications Free Software Magazine InfoWorld PC Magazine Software Magazine Wired (magazine) Persons influential in software Bill Gates Steve Jobs Jonathan Sachs Wayne Ratliff See also Outline of information technology Outline of computers Outline of computing List of computer hardware terms Bachelor of Science in Information Technology Custom software Functional specification Marketing strategies for product software Service-Oriented Modeling Framework Bus factor Capability Maturity Model Software publisher User experience References External links Outline Software Software
11882157
https://en.wikipedia.org/wiki/MPack%20%28software%29
MPack (software)
In computer security, MPack is a PHP-based malware kit produced by Russian crackers. The first version was released in December 2006. Since then a new version is thought to have been released roughly every month. It is thought to have been used to infect up to 160,000 PCs with keylogging software. In August 2007 it was believed to have been used in an attack on the web site of the Bank of India which originated from the Russian Business Network. Unusual for such kits, MPack is sold as commercial software (costing $500 to $1,000 US), and is provided by its developers with technical support and regular updates of the software vulnerabilities it exploits. Modules are sold by the developers containing new exploits. These cost between $50 and $150 US depending on how severe the exploit is. The developers also charge to make the scripts and executables undetectable by antivirus software. The server-side software in the kit is able to customize attacks to a variety of web browsers including Microsoft Internet Explorer, Mozilla Firefox and Opera. MPack generally works by being loaded in an IFrame attached to the bottom of a defaced website. When a user visits the page, MPack sends a script that loads in the IFrame and determines if any vulnerabilities in the browser or operating system can be exploited. If it finds any, it will exploit them and store various statistics for future reference. Included with the server is a management console, which allows the attacker deploying the software to view statistics about the computers that have been infected, including what web browsers they were using and what countries their connections originated from. See also Exploit Exploit kit Trojan horse (computing) Spyware Botnet Computer virus Backdoor (computing) References Cybercrime Malware toolkits
16830
https://en.wikipedia.org/wiki/Keyboard%20technology
Keyboard technology
The technology of computer keyboards includes many elements. Among the more important of these is the switch technology that they use. Computer alphanumeric keyboards typically have 80 to 110 durable switches, generally one for each key. The choice of switch technology affects key response (the positive feedback that a key has been pressed) and pre-travel (the distance needed to push the key to enter a character reliably). Virtual keyboards on touch screens have no physical switches and provide audio and haptic feedback instead. Some newer keyboard models use hybrids of various technologies to achieve greater cost savings or better ergonomics. The modern keyboard also includes a control processor and indicator lights to provide feedback to the user (and to the central processor) about what state the keyboard is in. Plug and play technology means that its 'out of the box' layout can be notified to the system, making the keyboard immediately ready to use without need for further configuration unless the user so desires. Types Membrane keyboard There are two types of membrane-based keyboards, flat-panel membrane keyboards and full-travel membrane keyboards: Flat-panel membrane keyboards are most often found on appliances like microwave ovens or photocopiers. A common design consists of three layers. The top layer has the labels printed on its front and conductive stripes printed on the back. Under this it has a spacer layer, which holds the front and back layer apart so that they do not normally make electrical contact. The back layer has conductive stripes printed perpendicularly to those of the front layer. When placed together, the stripes form a grid. When the user pushes down at a particular position, their finger pushes the front layer down through the spacer layer to close a circuit at one of the intersections of the grid. This indicates to the computer or keyboard control processor that a particular button has been pressed. Generally, flat-panel membrane keyboards do not produce a noticeable physical feedback. Therefore, devices using these issue a beep or flash a light when the key is pressed. They are often used in harsh environments where water- or leak-proofing is desirable. Although used in the early days of the personal computer (on the Sinclair ZX80, ZX81 and Atari 400), they have been supplanted by the more tactile dome and mechanical switch keyboards. Full-travel membrane-based keyboards are the most common computer keyboards today. They have one-piece plastic keytop/switch plungers which press down on a membrane to actuate a contact in an electrical switch matrix. Dome-switch keyboard Dome-switch keyboards are a hybrid of flat-panel membrane and mechanical-switch keyboards. They bring two circuit board traces together under a rubber or silicone keypad using either metal "dome" switches or polyurethane formed domes. The metal dome switches are formed pieces of stainless steel that, when compressed, give the user a crisp, positive tactile feedback. These metal types of dome switches are very common, are usually reliable to over 5 million cycles, and can be plated in either nickel, silver or gold. The rubber dome switches, most commonly referred to as polydomes, are formed polyurethane domes where the inside bubble is coated in graphite. While polydomes are typically cheaper than metal domes, they lack the crisp snap of the metal domes, and usually have a lower life specification. 
Polydomes are considered very quiet, but purists tend to find them "mushy" because the collapsing dome does not provide as much positive response as metal domes. For either metal or polydomes, when a key is pressed, it collapses the dome, which connects the two circuit traces and completes the connection to enter the character. The pattern on the PC board is often gold-plated. Both are common switch technologies used in mass market keyboards today. This type of switch technology is most commonly used in handheld controllers, mobile phones, automotive, consumer electronics and medical devices. Dome-switch keyboards are also called direct-switch keyboards. Scissor-switch keyboard A special case of the computer keyboard dome-switch is the scissor-switch. The keys are attached to the keyboard via two plastic pieces that interlock in a "scissor"-like fashion, and snap to the keyboard and the key. It still uses rubber domes, but a special plastic 'scissors' mechanism links the keycap to a plunger that depresses the rubber dome with a much shorter travel than the typical rubber dome keyboard. Typically scissor-switch keyboards also employ 3-layer membranes as the electrical component of the switch. They also usually have a shorter total key travel distance (2 mm instead of 3.5–4 mm for standard dome-switch keyswitches). This type of keyswitch is often found on the built-in keyboards on laptops and keyboards marketed as 'low-profile'. These keyboards are generally quiet and the keys require little force to press. Scissor-switch keyboards are typically slightly more expensive. They are harder to clean (due to the limited movement of the keys and their multiple attachment points) but also less likely to get debris in them as the gaps between the keys are often smaller (as there is no need for extra room to allow for the 'wiggle' in the key, as typically found on a membrane keyboard). Capacitive keyboard In this type of keyboard, pressing a key changes the capacitance of a pattern of capacitor pads. The pattern consists of two D-shaped capacitor pads for each switch, printed on a printed circuit board (PCB) and covered by a thin, insulating film of soldermask which acts as a dielectric. Despite the sophistication of the concept, the mechanism of capacitive switching is physically simple. The movable part ends with a flat foam element about the size of an aspirin tablet, finished with aluminum foil. Opposite the switch is a PCB with the capacitor pads. When the key is pressed, the foil tightly clings to the surface of the PCB, forming a daisy chain of two capacitors between the contact pads and itself, separated by the thin soldermask, and thus "shorting" the contact pads with an easily detectable drop of capacitive reactance between them. Usually this permits a pulse or pulse train to be sensed. Because the switch does not have an actual electrical contact, there is no debouncing necessary. The keys do not need to be fully pressed to be actuated, which enables some people to type faster. The sensor tells enough about the position of the key to allow the user to adjust the actuation point (key sensitivity). This adjustment can be done with the help of the bundled software and individually for each key, if so implemented. The IBM Model F keyboard is a mechanical-key design consisting of a buckling spring over a capacitive PCB, similar to the later Model M keyboard, which used a membrane in place of the PCB. The Topre Corporation design for key switches uses a spring below a rubber dome.
The dome provides most of the force that keeps the key from being pressed, similar to a membrane keyboard, while the spring helps with the capacitive action. Mechanical-switch keyboard Every key on a mechanical-switch keyboard contains a complete switch underneath. Each switch is composed of a housing, a spring, and a stem, and sometimes other parts such as a separate tactile leaf or a clickbar. Switches come in three variants: "linear" with consistent resistance, "tactile" with a non-audible bump, and "clicky" with both a bump and an audible click. Depending on the resistance of the spring, the key requires different amounts of pressure to actuate and to bottom out. The shape of the stem as well as the design of the switch housing varies the actuation distance and travel distance of the switch. The sound can be altered by the material of the plate, the case, lubrication, the keycap profile, and even by modifying the individual switch. These modifications, or "mods", include applying lubricant to reduce friction inside the switch itself, inserting "switch films" to reduce wobble, swapping out the spring inside to modify the resistance of the switch, and many more. Mechanical keyboards allow for the removal and replacement of keycaps; replacing them is more common with mechanical keyboards because most switches share a common stem shape. Alongside the mechanical keyboard switch is the stabilizer, which supports longer keys such as the "spacebar", "enter", "backspace", and "shift" keys. Although stabilizers are not as diverse as switches, they do come in different sizes, meant for keyboards with longer keys than normal. Just like the mechanical keyboard switch, the stabilizer can be modified to alter the sound and feel of these keys. Lubricant is the most common modification, used to reduce the rattle of the metal wire that makes up a stabilizer. Furthermore, implementing padding in the "housing" of the stabilizer will lessen rattle and improve acoustics. Mechanical keyboards typically have a longer lifespan than membrane or dome-switch keyboards. Cherry MX switches, for example, have an expected lifespan of 50 million clicks per switch, while switches from Razer have a rated lifetime of 60 million clicks per switch. A major producer of mechanical switches is Cherry, which has manufactured the MX family of switches since the 1980s. Cherry's color-coding system of categorizing switches has been imitated by other switch manufacturers. Hot-swappable keyboard Hot-swappable keyboards are keyboards where switches can be pulled out and replaced rather than requiring the typical solder connection. Hot-swappable keyboards can accept any switch that is in the 'MX' style. Instead of the switch being soldered to the keyboard's PCB, hot-swap sockets are instead soldered on. They are mostly used by keyboard enthusiasts who build custom keyboards, and have recently begun being adopted by larger companies on production keyboards. Hot-swap sockets typically cost anywhere from $10 to $25 USD to fill a complete board and can allow users to try a variety of different switches without having the tools or knowledge required to solder electronics. Buckling-spring keyboard Many typists prefer buckling spring keyboards. The buckling spring mechanism atop the switch is responsible for the tactile and aural response of the keyboard. This mechanism controls a small hammer that strikes a capacitive or membrane switch.
In 1993, two years after spawning Lexmark, IBM transferred its keyboard operations to the daughter company. New Model M keyboards continued to be manufactured for IBM by Lexmark until 1996, when Unicomp was established and purchased the keyboard patents and tooling equipment to continue their production. IBM continued to make Model M's at its Scotland factory until 1999. Hall-effect keyboard Hall effect keyboards use magnets and Hall effect sensors instead of switches with mechanical contacts. When a key is depressed, it moves a magnet that is detected by a solid-state sensor. Because they require no physical contact for actuation, Hall-effect keyboards are extremely reliable and can accept millions of keystrokes before failing. They are used for ultra-high reliability applications such as nuclear power plants, aircraft cockpits, and critical industrial environments. They can easily be made totally waterproof, and can resist large amounts of dust and contaminants. Because a magnet and sensor are required for each key, as well as custom control electronics, they are expensive to manufacture. Laser projection keyboard A laser projection device approximately the size of a computer mouse projects the outline of keyboard keys onto a flat surface, such as a table or desk. This type of keyboard is portable enough to be easily used with PDAs and cellphones, and many models have retractable cords and wireless capabilities. However, sudden or accidental disruption of the laser will register unwanted keystrokes. Also, if the laser malfunctions, the whole unit becomes useless, unlike conventional keyboards which can be used even if a variety of parts (such as the keycaps) are removed. This type of keyboard can be frustrating to use since it is susceptible to errors, even in the course of normal typing, and its complete lack of tactile feedback makes it even less user-friendly than the lowest quality membrane keyboards. Roll-up keyboard Keyboards made of flexible silicone or polyurethane materials can roll up in a bundle. Tightly folding the keyboard may damage the internal membrane circuits. When they are completely sealed in rubber, they are water resistant. Like membrane keyboards, they are reported to be very hard to get used to, as there is little tactile feedback, and silicone will tend to attract dirt, dust, and hair. Optical keyboard technology Also known as photo-optical keyboard, light responsive keyboard, photo-electric keyboard, and optical key actuation detection technology. Optical keyboard technology was introduced in 1962 by Harley E. Kelchner for use in a typewriter machine with the purpose of reducing the noise generated by actuating the typewriter keys. Optical keyboard technology utilizes light-emitting devices and photo sensors to optically detect actuated keys. Most commonly the emitters and sensors are located at the perimeter, mounted on a small PCB. The light is directed from side to side of the keyboard interior, and it can only be blocked by the actuated keys. Most optical keyboards require at least two beams (most commonly a vertical beam and a horizontal beam) to determine the actuated key. Some optical keyboards use a special key structure that blocks the light in a certain pattern, allowing only one beam per row of keys (most commonly a horizontal beam).
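One way to picture the two-beam arrangement described above is as a pair of perpendicular light "curtains": a pressed key interrupts one horizontal and one vertical beam, and the intersection of the interrupted beams identifies the key. The sketch below is illustrative only; the grid size, key labels and the way beam states are reported are hypothetical rather than taken from any particular product.

```python
# Illustrative sketch of optical key detection using two perpendicular beam sets.
# A pressed key blocks exactly one horizontal (row) beam and one vertical (column)
# beam; the intersection of the blocked beams identifies the key.

# Hypothetical 3x3 key layout for demonstration purposes.
KEY_LAYOUT = [
    ["Q", "W", "E"],
    ["A", "S", "D"],
    ["Z", "X", "C"],
]

def decode_keys(blocked_rows, blocked_cols):
    """Return the set of keys at intersections of blocked row and column beams."""
    return {
        KEY_LAYOUT[r][c]
        for r in blocked_rows
        for c in blocked_cols
    }

# Example: the key at row 1, column 2 ("D") is pressed, so the row-1 and
# column-2 beams are interrupted.
print(decode_keys({1}, {2}))        # {'D'}

# Limitation of the basic two-beam scheme: two diagonal keypresses block two rows
# and two columns, so four intersections become candidates (an ambiguity analogous
# to "ghosting" in matrix keyboards).
print(decode_keys({0, 1}, {0, 2}))  # {'Q', 'E', 'A', 'D'}
```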
The mechanism of the optical keyboard is very simple – a light beam is sent from the emitter to the receiving sensor, and the actuated key blocks, reflects, refracts or otherwise interacts with the beam, resulting in an identified key. Some earlier optical keyboards were limited in their structure: they required special casing to block external light, no multi-key functionality was supported, and the design was limited to a thick rectangular case. The advantages of optical keyboard technology are that it offers a truly waterproof keyboard, resilient to dust and liquids, and that it uses only about 20% of the PCB volume of membrane or dome-switch keyboards, significantly reducing electronic waste. Additional advantages of optical keyboard technology over other keyboard technologies such as Hall effect, laser, roll-up, and transparent keyboards lie in cost (Hall effect keyboard) and feel – optical keyboard technology does not require different key mechanisms, and the tactile feel of typing has remained the same for over 60 years. The specialist DataHand keyboard uses optical technology to sense keypresses with a single light beam and sensor per key. The keys are held in their rest position by magnets; when the magnetic force is overcome to press a key, the optical path is unblocked and the keypress is registered. Debouncing When a key is pressed, it oscillates (bounces) against its contacts several times before settling. When released, it oscillates again until it comes to rest. Although it happens on a scale too small to be visible to the naked eye, it can be enough to register multiple keystrokes. To resolve this, the processor in a keyboard debounces the keystrokes by averaging the signal over time to produce one "confirmed" keystroke that (usually) corresponds to a single press or release. Early membrane keyboards had limited typing speed because they had to do significant debouncing. This was a noticeable problem on the ZX81. Keycaps Keycaps are used on full-travel keyboards. While modern keycaps are typically surface-printed, they can also be double-shot molded, laser printed, sublimation printed, engraved, or they can be made of transparent material with printed paper inserts. There are also keycaps which are thin shells that are placed over key bases. These were used on IBM PC keyboards. Other parts The modern PC keyboard also includes a control processor and indicator lights to provide feedback to the user about what state the keyboard is in. Depending on the sophistication of the controller's programming, the keyboard may also offer other special features. The processor is usually a single-chip 8048 microcontroller variant. The keyboard switch matrix is wired to its inputs and it processes the incoming keystrokes and sends the results down a serial cable (the keyboard cord) to a receiver in the main computer box. It also controls the illumination of the "caps lock", "num lock" and "scroll lock" lights. A common test for whether the computer has crashed is pressing the "caps lock" key. The keyboard sends the key code to the keyboard driver running in the main computer; if the main computer is operating, it commands the light to turn on. All the other indicator lights work in a similar way. The keyboard driver also tracks the shift, alt and control state of the keyboard. Keyboard switch matrix The keyboard switch matrix is often drawn with horizontal wires and vertical wires in a grid which is called a matrix circuit.
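The matrix circuit introduced here is normally scanned by the keyboard's controller one row at a time, and the raw readings are debounced before key events are reported, as described in the Debouncing paragraph above. The sketch below is a simplified, hypothetical model of such a scan loop: it uses a per-key counter that requires several consecutive identical samples (one common debouncing approach, rather than the signal averaging mentioned above), and the matrix size, threshold and read_columns callback are invented for illustration; a real controller would read GPIO registers on a timer interrupt.

```python
# Simplified, hypothetical model of matrix scanning with per-key debouncing.
# read_columns(row) stands in for the GPIO read a real keyboard controller performs
# after driving one row line of the matrix.

ROWS, COLS = 4, 4
DEBOUNCE_THRESHOLD = 3          # consecutive identical samples needed to accept a change

stable_state = [[False] * COLS for _ in range(ROWS)]   # debounced key states
counters = [[0] * COLS for _ in range(ROWS)]           # per-key agreement counters

def scan_matrix(read_columns):
    """Scan every row once; read_columns(row) returns a list of raw booleans."""
    events = []
    for r in range(ROWS):
        raw = read_columns(r)                  # raw, possibly bouncing samples
        for c in range(COLS):
            if raw[c] != stable_state[r][c]:
                counters[r][c] += 1            # reading disagrees with stable state
                if counters[r][c] >= DEBOUNCE_THRESHOLD:
                    stable_state[r][c] = raw[c]
                    counters[r][c] = 0
                    events.append(("press" if raw[c] else "release", r, c))
            else:
                counters[r][c] = 0             # reading agrees; reset the counter
    return events

# Example: key (1, 2) is pressed but its contact bounces open during the second scan;
# a single press event is reported only once the reading has been stable for
# DEBOUNCE_THRESHOLD consecutive scans.
samples = [
    lambda r: [r == 1 and c == 2 for c in range(COLS)],  # pressed
    lambda r: [False] * COLS,                            # bounce (contact open)
    lambda r: [r == 1 and c == 2 for c in range(COLS)],  # pressed
    lambda r: [r == 1 and c == 2 for c in range(COLS)],
    lambda r: [r == 1 and c == 2 for c in range(COLS)],
]
for read in samples:
    print(scan_matrix(read))   # [], [], [], [], then [('press', 1, 2)]
```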
It has a switch at some or all intersections, much like a multiplexed display. Almost all keyboards have only the switch at each intersection, which causes "ghost keys" and "key jamming" when multiple keys are pressed (rollover). Certain, often more expensive, keyboards have a diode between each intersection, allowing the keyboard microcontroller to accurately sense any number of simultaneous keys being pressed, without generating erroneous ghost keys. Alternative text-entering methods Optical character recognition (OCR) is preferable to rekeying for converting existing text that is already written down but not in machine-readable format (for example, a Linotype-composed book from the 1940s). In other words, to convert the text from an image to editable text (that is, a string of character codes), a person could re-type it, or a computer could look at the image and deduce what each character is. OCR technology has already reached an impressive state (for example, Google Book Search) and promises more for the future. Speech recognition converts speech into machine-readable text (that is, a string of character codes). This technology has also reached an advanced state and is implemented in various software products. For certain uses (e.g., transcription of medical or legal dictation; journalism; writing essays or novels) speech recognition is starting to replace the keyboard. However, the lack of privacy when issuing voice commands and dictation makes this kind of input unsuitable for many environments. Pointing devices can be used to enter text or characters in contexts where using a physical keyboard would be inappropriate or impossible. These accessories typically present characters on a display, in a layout that provides fast access to the more frequently used characters or character combinations. Popular examples of this kind of input are Graffiti, Dasher and on-screen virtual keyboards. Other issues Keystroke logging Unencrypted Bluetooth keyboards are known to be vulnerable to signal theft for keylogging by other Bluetooth devices in range. Microsoft wireless keyboards 2011 and earlier are documented to have this vulnerability. Keystroke logging (often called keylogging) is a method of capturing and recording user keystrokes. While it can be used legally to measure employee activity, or by law enforcement agencies to investigate suspicious activities, it is also used by hackers for illegal or malicious acts. Hackers use keyloggers to obtain passwords or encryption keys. Keystroke logging can be achieved by both hardware and software means. Hardware key loggers are attached to the keyboard cable or installed inside standard keyboards. Software keyloggers work on the target computer's operating system and gain unauthorized access to the hardware, hook into the keyboard with functions provided by the OS, or use remote access software to transmit recorded data out of the target computer to a remote location. Some hackers also use wireless keylogger sniffers to collect packets of data being transferred from a wireless keyboard and its receiver, and then they crack the encryption key being used to secure wireless communications between the two devices. Anti-spyware applications are able to detect many keyloggers and remove them. Responsible vendors of monitoring software support detection by anti-spyware programs, thus preventing abuse of the software. 
Enabling a firewall does not stop keyloggers per se, but can possibly prevent transmission of the logged material over the net if properly configured. Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with his or her typed information. Automatic form-filling programs can prevent keylogging entirely by not using the keyboard at all. Most keyloggers can be fooled by alternating between typing the login credentials and typing characters somewhere else in the focus window. Keyboards are also known to emit electromagnetic signatures that can be detected using special spying equipment to reconstruct the keys pressed on the keyboard. Neal O'Farrell, executive director of the Identity Theft Council, revealed to InformationWeek that "More than 25 years ago, a couple of former spooks showed me how they could capture a user's ATM PIN, from a van parked across the street, simply by capturing and decoding the electromagnetic signals generated by every keystroke," O'Farrell said. "They could even capture keystrokes from computers in nearby offices, but the technology wasn't sophisticated enough to focus in on any specific computer." Physical injury The use of any keyboard may cause serious injury (such as carpal tunnel syndrome or other repetitive strain injuries) to the hands, wrists, arms, neck or back. The risks of injuries can be reduced by taking frequent short breaks to get up and walk around a couple of times every hour. Users should also vary tasks throughout the day, to avoid overuse of the hands and wrists. When typing on a keyboard, a person should keep the shoulders relaxed with the elbows at the side, with the keyboard and mouse positioned so that reaching is not necessary. The chair height and keyboard tray should be adjusted so that the wrists are straight, and the wrists should not be rested on sharp table edges. Wrist or palm rests should not be used while typing. Some adaptive technology ranging from special keyboards, mouse replacements and pen tablet interfaces to speech recognition software can reduce the risk of injury. Pause software reminds the user to pause frequently. Switching to a much more ergonomic mouse, such as a vertical mouse or joystick mouse may provide relief. By using a touchpad or a stylus pen with a graphic tablet, in place of a mouse, one can lessen the repetitive strain on the arms and hands. See also List of mechanical keyboards Keyboard layout AZERTY QWERTY QWERTZ Keyboard mapping References External links Computer keyboards
30323819
https://en.wikipedia.org/wiki/Actifio
Actifio
Actifio was a privately held information technology firm headquartered in Waltham, Massachusetts. The company specialised in copy data virtualization for making information technology infrastructure more efficient by reducing unnecessary duplication of data. On December 3, 2020, Google announced it intended to acquire Actifio for an undisclosed sum. The acquisition closed on Monday Dec 14, 2020. Products Purportedly, Actifio's products are able to reduce unnecessary duplication of application data and software requirements for its users. The technology is designed to maintain data integrity while ensuring rapid access to that data throughout its entire life cycle. The system virtualizes data management and storage to replace siloed data protection and availability applications with a single purpose-built system. The process involves creating a "golden master" of production data that allows for a rapid manipulation and recovery of data if needed. This storage system is said to reduce data storage costs and improve efficiency over other data management applications. History In July 2009, Ash Ashutosh founded Actifio in Waltham, Massachusetts. The company started with four employees. It launched its first product in the fall of 2011. In 2012 Gartner recommended Actifio in its Cool Vendor report and said its products facilitated cloud-based and offsite storage and computing without the need to build secondary data centers. That year sales increased about 700 percent over 2011. By the end of 2012, Actifio had achieved five consecutive quarters of 500% year-on-year growth. In the fourth quarter of 2012 alone, Actifio did 62 deals with new clients with an average value of $210,000. In 2012, it was described as the fastest-growing storage startup. Staffing increased from 50 in December 2011 to 120 in May 2012. As of May 2012, about a fourth of the company's revenue came from Europe. Funding On July 21, 2010, Actifio announced that it had secured $8 million in series A funding. This round was led by North Bridge Venture Partners and Greylock Partners. Jamie Goldstein, general partner at North Bridge, said, “Actifio has all the ingredients for success including a hot market opportunity, technological superiority, and a stellar executive team that will allow Actifio to deliver on the promise of Data Management Virtualization.” On September 30, 2010, Actifio announced that it had closed on $16 million in series B funding. This round of funding was led by Advanced Technology Ventures (ATV) with participation by North Bridge Venture Partners and Greylock Partners. It brought Actifio's total venture capital funding to $24 million. In 2011, Actifio received $33 million in Series C funding led by Andreessen Horowitz. This firm, headed by Marc Andreessen and Ben Horowitz, has also funded Facebook, Zynga, and Twitter. Actifio had just started looking for its next round of funding when Peter Levine, a partner at Andreessen Horowitz, called to merely ask about Actifio's location. Actifio closed on funding nine days later. Levine said he did the deal because he believes Actifio will “dominate a large segment of the backup and storage space.” North Bridge Venture Partners, Greylock Partners, and Advanced Technology Ventures also invested in this round of funding. After completing its Series C funding, Actifio had received a total of $57 million in venture capital. During Actifio's $50 million Series D round, media reported that investors valued the company at $500 million.
After completing its Series D funding, the firm had received $108 million in venture capital. On 23 March 2014, Actifio announced that it had raised another $100 million in series E funding from Tiger Global Management, Andreessen Horowitz, Greylock Partners, North Bridge Venture Partners, Advanced Technology Ventures, and Technology Crossover Ventures that increased its implied valuation to $1.1 billion. This money came in the form of primary funding with no secondary liquidity for employees or earlier shareholders. Ashutosh said that $125 million could have been raised but that some money had been turned away. $75 million closed on 14 March and the rest on 7 April. After this round of funding, Actifio had raised $207.5 million in capital. In August 2018, Actifio took in $100 million. Clients Actifio clients included Time Warner Cable, Boston University Medical Campus, Navisite, the City of South Portland, Unilever, IBM, Netflix, and many other organizations. As of October 2012, Actifio had about 180 clients and its deals averaged a value of about $250,000. By March 2013, Actifio had over 300 clients in 31 countries paying an average of $349,000 per three-year contract. The data under Actifio management at this time was over 1 exabyte, with 14 petabytes of active application data, and 55 petabytes of physical storage capacity. As of July 2014, there were about 400 Actifio clients worldwide, including both large multinationals and cloud service providers. IBM OEM Actifio and IBM announced an OEM agreement on Feb 5, 2019. This allowed IBM to sell an IBM-branded version of Actifio software as IBM InfoSphere Virtual Data Pipeline (VDP). Acquisition On December 3, 2020, Google announced it intended to acquire Actifio for an undisclosed sum. The acquisition closed on Monday Dec 14, 2020. At that time Actifio informed state authorities in Massachusetts that it was laying off 54 workers in that state. It is unclear how many were laid off in other regions where Actifio had employees. Copy data Copy data consists of multiple copies of the same file. This data could come in the form of multiple copies of volumes, backups, test/dev, online copies for disaster recovery, etc. In a study IDC found that about 75% of storage is consumed by copy data. Actifio conducted joint research with IDC to understand the copy data problem in depth. IDC estimated that businesses would spend roughly $44 billion on coping with copy data in 2013. Actifio was one of the first firms to enter the copy data management (CDM) market. In July 2014, Actifio introduced Resiliency Director, an automated disaster recovery tool. Sungard Availability Services, a disaster recovery firm, was able to recover hundreds of virtual machines in less than 20 minutes during a test of Resiliency Director. Ash Ashutosh Actifio was founded by Ash Ashutosh, who had previously served as chief technologist at HP's StorageWorks division, a position he took after HP acquired another Ashutosh startup, AppIQ, in 2005. Ashutosh holds an undergraduate degree in electrical engineering and a master's degree in computer science from Penn State University. After working for LSI and Intergraph, he joined StorageNetworks. Later he founded Serano Systems, a Fibre Channel controller manufacturer that he sold to Vitesse Semiconductor. His next venture was AppIQ, which was acquired by HP in 2005.
After leaving HP in 2008, Ashutosh became a venture capitalist at Greylock Partners, where he was an investor in information technology companies, and ultimately founded Actifio. He left Greylock in 2009 to focus full-time on Actifio, serving as its president and CEO. As of 2013, Ashutosh was an entrepreneur-in-residence at Harvard Business School. He also lectures at the Massachusetts Institute of Technology. In 2013, Ashutosh was named "EY Entrepreneur of the Year" for New England by Ernst & Young. Ashutosh is originally from India. References External links Defunct technology companies of the United States Technology companies established in 2009 Companies based in Massachusetts 2009 establishments in Massachusetts 2020 mergers and acquisitions Technology companies disestablished in 2020 2020 disestablishments in Massachusetts
16766706
https://en.wikipedia.org/wiki/Fibre%20Channel%20over%20Ethernet
Fibre Channel over Ethernet
Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks (or higher speeds) while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009. Functionality FCoE transports Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE was meant to integrate with existing Fibre Channel networks and management software. Data centers used Ethernet for TCP/IP networks and Fibre Channel for storage area networks (SANs). With FCoE, Fibre Channel becomes another network protocol running on Ethernet, alongside traditional Internet Protocol (IP) traffic. FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI which runs on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer, and will not work across routed IP networks. Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism (to reduce frame loss from congestion). The IEEE standards body added priorities in the data center bridging (dcb) Task Group. Fibre Channel required three primary extensions to deliver the capabilities of Fibre Channel over Ethernet networks: Encapsulation of native Fibre Channel frames into Ethernet Frames. Extensions to the Ethernet protocol itself to enable an Ethernet fabric in which frames are not routinely lost during periods of congestion. Mapping between Fibre Channel N_port IDs (aka FCIDs) and Ethernet MAC addresses. Computers can connect to FCoE with converged network adapters (CNAs), which contain both Fibre Channel host bus adapter (HBA) and Ethernet network interface controller (NIC) functionality on the same physical card. CNAs have one or more physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card, however FCoE CNAs offload (from the CPU) the low level frame processing and SCSI protocol functions traditionally performed by Fibre Channel host bus adapters. Application The main application of FCoE is in data center storage area networks (SANs). FCoE has particular application in data centers due to the cabling reduction it makes possible, as well as in server virtualization applications, which often require many physical I/O connections per server. With FCoE, network (IP) and storage (SAN) data traffic can be consolidated using a single network. This consolidation can: reduce the number of network interface cards required to connect to disparate storage and IP networks reduce the number of cables and switches reduce power and cooling costs Frame format FCoE is encapsulated over Ethernet with the use of a dedicated Ethertype, 0x8906. A single 4-bit field (version) satisfies the IEEE sub-type requirements. The 802.1Q tag is optional but may be necessary in a given implementation. The SOF (start of frame) and EOF (end of frame) are encoded as specified in . Reserved bits are present to guarantee that the FCoE frame meets the minimum length requirement of Ethernet. 
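As a rough illustration of the encapsulation just described, the sketch below assembles a simplified FCoE frame: an Ethernet header carrying the dedicated Ethertype 0x8906, a version nibble followed by reserved bits, the SOF code, the encapsulated Fibre Channel frame, the EOF code, and padding so that the Ethernet minimum frame size is met. The field widths and the SOF/EOF values used here are simplified placeholders, not the byte-accurate FC-BB-5 layout.

```python
import struct

FCOE_ETHERTYPE = 0x8906

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Assemble a simplified FCoE frame (illustrative field layout only).

    dst_mac/src_mac: 6-byte MAC addresses
    fc_frame:        the encapsulated Fibre Channel frame (header + payload + CRC)
    sof/eof:         start-of-frame / end-of-frame code points (placeholder values)
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    version = 0                                                # 4-bit version field
    fcoe_header = struct.pack("!B", version << 4) + bytes(12)  # reserved bits (simplified)
    body = fcoe_header + struct.pack("!B", sof) + fc_frame + struct.pack("!B", eof)
    frame = eth_header + body
    # Pad with zero bytes so the frame meets Ethernet's 60-byte minimum
    # (excluding the frame check sequence).
    if len(frame) < 60:
        frame += bytes(60 - len(frame))
    return frame

# Example with dummy MAC addresses and a dummy 28-byte Fibre Channel frame.
frame = build_fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\x02",
                         bytes(28))
print(len(frame), hex(int.from_bytes(frame[12:14], "big")))  # 60 0x8906
```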
Inside the encapsulated Fibre Channel frame, the frame header is retained so as to allow connecting to a storage network by passing on the Fibre Channel frame directly after de-encapsulation. The FIP (FCoE Initialization Protocol) is an integral part of FCoE. Its main goal is to discover and initialize FCoE capable entities connected to an Ethernet cloud. FIP uses a dedicated Ethertype of 0x8914. Timeline In October 2003, Azul Technology developed early version and applied for a patent. In April 2007, the FCoE standardization activity started. In October 2007, the first public end-to-end FCoE demo occurred at Storage Network World including adapters from QLogic, switches from Nuova Systems, and storage from NetApp (none of the companies involved made any product announcements at the time). In April 2008, an early implementor was Nuova Systems, a subsidiary of Cisco Systems, which announced a switch. Brocade Communications Systems also announced support in 2008. After the late-2000s financial crisis, however, any new technology had a hard time getting established. In June 2009, the FCoE technology that had been defined as part of the International Committee for Information Technology Standards (INCITS) T11 FC-BB-5 standard was forwarded to ANSI for publication. In May 2010, the FC-BB-5 standard was published as ANSI/INCITS 462-2010. Some additional work was done in the INCITS. Data center switches from Force10 and Dell PowerConnect supported FCoE and in June 2013, Dell Networking, which is the new brand-name for all networking portfolio of Dell, introduced the S5000 series which can be a fully native FCoE switch with the option to include a native fibre channel module, allowing you to connect the S5000 directly to an FC SAN environment. See also iSCSI ATA over Ethernet (AoE) HyperSCSI LIO Linux SCSI Target SCST Linux FCoE target driver References External links FCoE Fibre Channel Network protocols Ethernet Computer storage buses Network booting
7543
https://en.wikipedia.org/wiki/Computational%20complexity%20theory
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is dedicated to the field of computational complexity. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. Computational problems Problem instances A computational problem can be viewed as an infinite collection of instances together with a set (possibly empty) of solutions for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
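To make the decision version concrete, the sketch below answers a yes/no question of exactly this shape for a tiny, made-up instance; the city names and distances are invented for illustration, and a realistic instance over Germany's 15 largest cities would simply use a larger distance table.

```python
from itertools import permutations

def tsp_decision(distances, bound):
    """Decide: is there a closed route through all cities of total length <= bound?

    distances: dict mapping frozenset({a, b}) -> distance between cities a and b
    bound:     the length threshold of the decision question
    """
    cities = sorted({c for pair in distances for c in pair})
    start, rest = cities[0], cities[1:]
    for order in permutations(rest):           # brute force: try every tour
        tour = [start, *order, start]
        length = sum(distances[frozenset({a, b})] for a, b in zip(tour, tour[1:]))
        if length <= bound:
            return True                        # a witness tour exists: answer "yes"
    return False                               # no tour is short enough: answer "no"

# A hypothetical 4-city instance with made-up distances (in kilometres).
d = {
    frozenset({"A", "B"}): 120, frozenset({"A", "C"}): 200, frozenset({"A", "D"}): 150,
    frozenset({"B", "C"}): 90,  frozenset({"B", "D"}): 180, frozenset({"C", "D"}): 100,
}
print(tsp_decision(d, 500))   # True  -- e.g. A-B-C-D-A has length 120+90+100+150 = 460
print(tsp_decision(d, 400))   # False -- every tour is longer than 400
```

The brute-force search examines every possible tour, so its running time grows factorially with the number of cities.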
Representing problem instances When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems as formal languages Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. Function problems A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. Measuring the size of an instance To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. 
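As a small illustration of "the size of the input in bits", the sketch below measures the encoded size of the two kinds of instances discussed above: an integer written in binary, and a graph written as a 0/1 adjacency matrix. These encodings are one reasonable choice among several, not canonical ones.

```python
def integer_instance_size(n: int) -> int:
    """Number of bits in the binary representation of a positive integer instance."""
    return n.bit_length()

def graph_instance_size(num_vertices: int) -> int:
    """Bits needed to write a graph on num_vertices vertices as a 0/1 adjacency matrix."""
    return num_vertices * num_vertices

print(integer_instance_size(15))   # 4   -- the primality instance "15" is 4 bits long
print(graph_instance_size(10))     # 100
print(graph_instance_size(20))     # 400 -- doubling the vertices quadruples this encoding
```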
For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices? If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm. Machine models and complexity measures Turing machine A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others. A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm. Other machine models Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. 
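As a concrete, toy-sized illustration of the deterministic Turing machine model discussed above, the sketch below simulates a two-state machine that decides whether a binary string contains an even number of 1s. The state names, the blank symbol and the transition-table format are just one possible encoding chosen for readability.

```python
def run_turing_machine(transitions, start, accept, reject, tape):
    """Simulate a single-tape deterministic Turing machine.

    transitions: dict (state, symbol) -> (new_state, written_symbol, move), move in {-1, +1}
    tape:        the input string; '_' is used as the blank symbol
    """
    cells = dict(enumerate(tape))
    state, head = start, 0
    while state not in (accept, reject):
        symbol = cells.get(head, "_")
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += move
    return state == accept

# A machine deciding EVEN-ONES = {w in {0,1}* : w contains an even number of 1s}.
# It sweeps right, toggling between states "even" and "odd", and accepts on reading
# a blank if it is in state "even".
t = {
    ("even", "0"): ("even", "0", +1), ("even", "1"): ("odd", "1", +1),
    ("odd", "0"):  ("odd", "0", +1),  ("odd", "1"):  ("even", "1", +1),
    ("even", "_"): ("accept", "_", +1),
    ("odd", "_"):  ("reject", "_", +1),
}
print(run_turing_machine(t, "even", "accept", "reject", "1011"))  # False (three 1s)
print(run_turing_machine(t, "even", "accept", "reject", "1001"))  # True  (two 1s)
```

On an input of length n this machine halts after n + 1 steps, a simple instance of the step counting that the complexity measures below make precise.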
What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems. Complexity measures For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n) if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. Best, worst and average case complexity The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities: Best-case complexity: This is the complexity of solving the problem for the best input of size n. Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n. Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm. Worst-case complexity: This is the complexity of solving the problem for the worst input of size n. The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst. For example, consider the deterministic sorting algorithm quicksort. This solves the problem of sorting a list of integers that is given as the input. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case the algorithm takes time O(n²).
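A minimal experiment along the lines of the quicksort example above: counting the comparisons made by a first-element-pivot quicksort on an already-sorted input (the worst case just described) and on a randomly shuffled one. The counting harness and the naive pivot rule are assumptions made for illustration; library sorting routines choose pivots more carefully.

```python
import random

def quicksort(items, counter):
    """Naive quicksort with the first element as pivot; counter[0] counts comparisons."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)                       # model one comparison of each remaining element
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)

def count_comparisons(items):
    counter = [0]
    quicksort(items, counter)
    return counter[0]

n = 400
sorted_input = list(range(n))                      # worst case for a first-element pivot
shuffled_input = list(range(n))
random.shuffle(shuffled_input)

print(count_comparisons(sorted_input))    # n*(n-1)/2 = 79800 comparisons, i.e. O(n^2) growth
print(count_comparisons(shuffled_input))  # typically a few thousand, i.e. O(n log n) growth
```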
If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time. Upper and lower bounds on the complexity of problems To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n). Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n² + 15n + 40, in big O notation one would write T(n) = O(n²). Complexity classes Defining complexity classes A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors: The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc. The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc. The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc. Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: The set of decision problems solvable by a deterministic Turing machine within time f(n). (This complexity class is known as DTIME(f(n)).) But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related". This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
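For the example language {xx | x is any binary string} mentioned above, membership is easy to decide, as the sketch below shows; the point made in the text is that the exact time needed depends on the machine model (linear on a multi-tape Turing machine, quadratic on a single-tape one), even though all reasonable models agree up to polynomial factors.

```python
def in_xx_language(w: str) -> bool:
    """Decide whether w belongs to {xx | x is any binary string}."""
    if any(ch not in "01" for ch in w):
        return False                     # not a binary string at all
    if len(w) % 2 != 0:
        return False                     # odd length: cannot be of the form xx
    half = len(w) // 2
    return w[:half] == w[half:]          # compare the two halves

print(in_xx_language("0101"))   # True  ("01" repeated)
print(in_xx_language("0110"))   # False
print(in_xx_language("010"))    # False (odd length)
```

Since the check runs in time polynomial in the input length under any reasonable model, the language lies in P.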
Important complexity classes Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are DTIME(f(n)), P, EXPTIME, NTIME(f(n)), NP, NEXPTIME, DSPACE(f(n)), L, PSPACE, EXPSPACE, NSPACE(f(n)), NL, NPSPACE and NEXPSPACE, obtained by bounding deterministic or non-deterministic time or space by logarithmic, polynomial or exponential functions of the input size. The logarithmic-space classes (necessarily) do not take into account the space needed to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Hierarchy theorems For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n²), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that, for a time-constructible function f(n), DTIME(o(f(n))) is strictly contained in DTIME(f(n) log f(n)). The space hierarchy theorem states that, for a space-constructible function f(n), DSPACE(o(f(n))) is strictly contained in DSPACE(f(n)). The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE. Reduction Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication. This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C.
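The squaring-to-multiplication reduction described above can be written out directly: given any routine that multiplies two integers, squaring is solved by a single call with both inputs equal. The multiply function below is merely a stand-in for an arbitrary multiplication algorithm.

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(n: int) -> int:
    """Reduce squaring to multiplication: feed the same input to both
    inputs of the multiplication algorithm."""
    return multiply(n, n)

print(square(12))   # 144
```

The reduction does only a constant amount of work around the call to the multiplication routine, so it is in particular a polynomial-time reduction.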
The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems. If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π₂, to another problem, Π₁, would indicate that there is no known polynomial-time solution for Π₁. This is because a polynomial-time solution to Π₁ would yield a polynomial-time solution to Π₂. Similarly, because every problem in NP can be reduced to any NP-complete problem, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP. Important open problems P versus NP problem The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. Problems in NP not known to be in P or NP-complete It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete.
The best algorithm for this problem, due to László Babai and Eugene Luks, has run time 2^(O(√(n log n))) for graphs with n vertices, although some recent work by Babai offers some potentially new perspectives on this. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes sub-exponential but super-polynomial time, roughly exp(O((log n)^(1/3) (log log n)^(2/3))), to factor an odd integer n. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact does not say much about where the problem lies with respect to non-quantum complexity classes. Separations between other complexity classes Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed that NP is not equal to co-NP; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then P is not equal to NP, since P = co-P. Thus if P = NP we would have co-P = co-NP, whence NP = P = co-P = co-NP. Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes. It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP. Intractability A problem that can be solved in theory (e.g. given large but finite resources, especially time), but for which in practice any solution takes too many resources to be useful, is known as an intractable problem. Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large. Similarly, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic sizes of problems. Continuous complexity theory Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information based complexity. Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems. History An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844. Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer. The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size.
Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete. See also Context of computational complexity Descriptive complexity theory Game complexity Leaf language Limits of computation List of complexity classes List of computability and complexity topics List of important publications in theoretical computer science List of unsolved problems in computer science Parameterized complexity Proof complexity Quantum complexity theory Structural complexity theory Transcomputational problem Computational complexity of mathematical operations Works on complexity References Citations Textbooks Surveys External links The Complexity Zoo What are the most important results (and papers) in complexity theory that every one should know? Scott Aaronson: Why Philosophers Should Care About Computational Complexity Computational fields of study
52236211
https://en.wikipedia.org/wiki/Motorsport%20Manager
Motorsport Manager
Motorsport Manager is a racing management-simulation strategy video game developed by British video game developer Playsport Games. The game was released on iOS in August 2014 and Android in 2015. A desktop version of the game was published by Sega on macOS, Microsoft Windows and Linux operating systems in November 2016. The game has received regular updates and patches, including downloadable content on the PC versions, since release. The latest iteration of the game on the PC version was released on 1 November 2017. In March 2019, Motorsport Manager was released for Nintendo Switch. Gameplay Mobile version The player can begin (and later have the option to continue) a career that places them in control of a motorsport racing team. When starting a new career, the user can customise the team name and colour, before selecting a racing series to enter. There are four tiers of racing series, and each successive tier can be unlocked by winning a racing series in the previous tier once. The game features a hints & tips tutorial system, represented by a sprite textbox, written in the character of a moustached man named Nigel (as a homage to former British racing car driver Nigel Mansell), who provides first-time advice throughout the game. The user also has the ability to invest in a young driver development programme, appointing head engineers and drivers, improving their team headquarters as well as developing their car to get an edge over the competition, all of which can vary in quality depending on investment level. Cash required for such investments can be acquired by signing sponsorship deals and completing challenges set by sponsors to earn bonus payments, which may vary in quality, difficulty, quantity and contract duration depending on the success and popularity of the user's racing team. A secondary source of income is money earned at the end of the racing season based on the Team Championship scores. The game also includes a random events system, which may present choices that can compromise the player's popularity as a manager with one group for the sake of another, or offer optional upgrades, at the cost of cash, to the three research segments in the game: Manufacturing (which determines car reliability and tyre wear), Design (which generally affects all aspects of the car) and Aerodynamics (which affects car downforce and acceleration). These events can trigger at any time during or at the end of a racing season. The user's choices behind the scenes ultimately affect the team's racing performances, which are pivotal to success in the game and serve as the main gameplay element of Motorsport Manager. There are over a dozen race tracks featured in-game that share the likeness of their real-life counterparts, all of which are split among the respective racing series in the game. Each track used in a racing series roster is split into two parts: a one-time qualifying session and the final race. The team's qualifying performance determines its starting positions in the final race. Challenges set by sponsors may offer the player a considerable sum of cash if they achieve a set position in the leaderboards, which can prove useful for applying changes to the team in order to give a better chance of success on race day.
Exclusive to the mobile version, micro-transactions that supply customers with set amounts of cash can also accelerate the rate at which their team can improve, reducing the time and challenge required to develop a fully-fledged racing team in the game. The third version included new graphics and camera modes, and was the first version of the mobile game to feature augmented reality (AR) when viewing races. PC version About the PC version, Eurogamer wrote that "It's all been retooled thoughtfully for its debut on PC, Mac and Linux [compared to the original mobile version] - this isn't the mobile game with a few bits of extra bodywork thrown on, and instead is a totally new beast, built from the ground up." The PC version is largely similar to the mobile game, but the player has the option either to create their own racing team or to join an existing fictional motorsport racing team. Switch version The 2019 Switch version includes 20 unique circuits and 65 track layouts in a variety of countries, with a New Zealand circuit exclusive to the Switch version. Also exclusive to the Switch version were higher-resolution artwork and 3D cars. Development The game was developed by British video game developer Playsport Games and published by Sega. Release Motorsport Manager was originally released on mobile for iOS on 21 August 2014. Following considerable success both critically and commercially upon release, the game was released on Android in 2015. The subsequent macOS, Microsoft Windows and Linux desktop versions were published by Sega, following its financial backing of developer Playsport Games, and were released on 9 November 2016 and 23 November 2016 respectively. The desktop versions of the game significantly expanded upon the gameplay mechanics of the mobile version, alleviating the hardware constraints imposed by mobile devices, which allowed for a more in-depth gameplay experience with enhanced graphical fidelity, relative to mobile versions of the game. The version for Nintendo Switch was released on March 14, 2019. In the Switch version, which is the console debut for the series, players can build a team with three car classes and compete in nine tiers of racing. Reception Mobile version The iOS version of Motorsport Manager received positive reviews from critics. J. D. Cohen of TouchArcade awarded the game a score of 4.5 out of 5, writing: "It’s never overwhelming, nor is it too light to maintain interest. Motorsport Manager finds a nice spot in the complexity spectrum wherein it requires frequent decision-making, without ever inducing paralysis by presenting too many options simultaneously." Pocket Gamer's Harry Slater writes: "This is the ice cold nature of Motorsport Manager. One second you can be flying high in first place, the next your tyres fail you and you slip back to fifth, cursing as you do. (...) Things do get a little repetitive, but there's a huge game to work your way through here, and if it gets its claws into you it's unlikely to let go for a good long while. Motorsport Manager manages to walk the line between number-juggling sims and the softer end of the spectrum, and in doing so creates a strategy game that almost anyone can have a crack at." The website awarded a score of 9/10. The game sold over 1.6 million copies on both iOS and Android. Pocket Gamer gave Motorsport Manager Mobile 3, released in July 2018, a 4.5/5 score and a positive review, saying the changes between 2 and 3 were significant.
In February 2020, Tech Radar named Motorsport Manager Mobile 3 one of the best Android games of the year. The publication called it "a big leap on from the relatively simplistic original Motorsport Manager Mobile." Motorsport Manager 3 was also given a positive review by Kotaku, saying the presentation was interesting and easy to navigate. While Vice praised gameplay, it did say the game made it too easy to succeed, saying "without that risk frustration, that little taste of sacrifice and the threat of encroaching despair, it's never feels quite like racing." Top Gear called MMM3 "streamlined and intuitive." The review also called it "far less involved game than the PC version of Motorsport Manager, better suited to bus rides, train journeys and studiously ignoring your family by the pool on holiday." PC version Pre-release Websites that early-tested the game gave it very positive reviews. In May 2016, PC Gamer's Sean Clever wrote: "It’s a niche subgenre, and one I didn’t expect to see outside of an FIA licence, but this seems like a decent first effort to put racing management games back on the grid." In August 2016, IGN's Luke Reilly stated: "It’s an incredibly nuanced experience and, while it’s initially rather intimidating, Motorsport Manager has successfully got its tentacles around me after several sessions with an early version of the game. (...) I also like the pressure of race day and the genuinely stressful situations that can arise." Post-release The game received positive reviews after release. Eurogamer.net's Martin Robinson reports the lack of the ability to save or copy racing set-ups, and writes: "Such an approachable veneer disguises an otherwise complex, sometimes cruel and a little too often abstruse experience. Managing the strategies of two cars - and micromanaging each driver to boot - can prove to be a taxing task, and while it's satisfying to execute the perfect strategy there aren't quite enough tools at your disposal to balance out the frustration that often accompanies raceday. The feedback you're given by drivers is a little too obscure, the ability to read lap times and deltas a little too sparse to make chasing the perfect set-up anything other than a dark art. (...) Motorsport Manager's a couple of tweaks away from greatness, then, but it's far from a disappointment." On GameStar, Benjamin Danneberg says: "Motorsport Manager is not a hardcore management game, where I have to put every screw correctly. Instead, the game is a healthy way of strategy and fun." Motorsport.com's Valentin Khorounzhiy writes: "Motorsport Manager is a great game on its own merit and a roaring success for a subgenre that seems to be waking up from a long hiatus. Its price tag may seem steep to some, but it's got enough content and variety to justify it. (...) There's potential for improvement, yes, but Playsport has done an admirable job with the first installment – and it will hopefully move enough copies to justify many, many follow-ups." James Swinbanks of PC PowerPlay writes: "There is always something going on behind the scenes in Motorsport Manager, and it always adds to the experience. I never felt cheated, even when things didn’t go my way. (...) It’s not perfect; a few setup quirks, like increasing rear wing angle adding understeer, don’t make any sense and had me scratching my head trying to work out why they might design it that way. But in the grand scheme of things, Motorsport Manager does what I want it to do. It puts me on the pit wall, in the thick of it." 
The website gave a score of 8/10. DSport Magazine gave the first PC version a positive response in its review, noting the level of micromanaging was scalable depending on how intensive the player wanted the strategizing to be. Nintendo Switch In March 2019, Pocket Gamer gave a positive review of the Switch port of Motorsport Manager 3, saying that the gameplay was similar to the mobile version. It described a more complicated control scheme made possible by hotkeys on the controllers, and the ability to see tiny models of cars instead of dots on the maps. Motorsport Manager Mobile 2 A sequel to the mobile version of Motorsport Manager was released to positive reception on 13 July 2017. Motorsport Manager Mobile 2 was praised for being "more technical" than the first game, providing "many more settings to choose from" compared to the original mobile version. The main additions from the original, aside from enhancements in graphical fidelity, include the ability for the player to control individual car part design, engine modes, extra tyre compounds, more downforce options, and an expanded sponsorship system. Motorsport Manager Mobile 3 The company announced Motorsport Manager Mobile 3 with a clip posted on July 10, 2018, adding Endurance and GT Championships beyond the Formula series, along with other new features. MMM3 introduced six new championships. It introduced the Open Wheel Sprint Championship as the fourth tier of the Open Wheel Racing class, alongside the pre-existing European Racing Series, Asia-Pacific SuperCup and World Motorsport Championship (the third to first tiers respectively). References https://www.rockpapershotgun.com/2016/11/07/motorsport-manager-review-pc/ https://www.rockpapershotgun.com/2016/09/28/when-does-motorsport-manager-come-out/ https://www.rockpapershotgun.com/2016/08/24/motorsport-manager-preview-pc/ External links 2014 video games Single-player video games IOS games Android (operating system) MacOS games Windows games Linux games Video games developed in the United Kingdom Sega video games Racing simulators Simulation video games Sports management video games
103791
https://en.wikipedia.org/wiki/Power%20Mac%20G4%20Cube
Power Mac G4 Cube
The Power Mac G4 Cube is a Macintosh personal computer sold by Apple Computer, Inc. between July 2000 and 2001. Designed by Jonathan Ive, the Cube was conceived by Apple chief executive officer (CEO) Steve Jobs, who held an interest in a powerful, miniaturized desktop computer. Apple's designers developed new technologies and manufacturing methods for the product—a cubic computer housed in clear acrylic glass. Apple positioned the Cube in the middle of its product range, between the consumer iMac G3 and the professional Power Mac G4. The Cube was announced to the general public at the Macworld Expo on July 19, 2000. The Cube won awards and plaudits for its design upon release, but reviews noted the high cost of the machine compared to its power, its limited expandability, and cosmetic defects. The product was an immediate commercial failure, selling only 150,000 units before production was suspended within a year of its announcement. The Cube was one of the rare failures for the company under Jobs, after a successful period that brought the company back from the brink of bankruptcy. However, it ultimately proved influential to future Apple products, from the iPod to the Mac Mini. The Museum of Modern Art, located in New York City, holds a G4 Cube as part of its collection. Overview The Power Mac G4 Cube is a small cubic computer, suspended in an acrylic glass enclosure. The designers intended the transparent plastic to give the impression that the computer is floating. The enclosure houses the computer's vital functions, including a slot-loading optical disc drive. The Cube requires a separate monitor with either an Apple Display Connector (ADC) or a Video Graphics Array (VGA) connection. The machine has no fan to move air and heat through the case. Instead, it is passively cooled, with heat dissipated via a grille at the top of the case. The base model shipped with a 450 MHz PowerPC G4 processor, 64 MB of random-access memory (RAM), 20 GB hard drive, and an ATI Rage 128 Pro video card. A higher-end model with a 500 MHz processor, double the RAM, and a 30 GB hard drive was available only through Apple's online store. To fit the components of a personal computer in the case's confined space, the Cube does not feature expansion slots; it does have a video card in a standard Accelerated Graphics Port (AGP) slot, but cannot fit a full-length card. The power supply is located externally to save space, and the Cube features no input or outputs for audio on the machine itself. Instead, the Cube shipped with round Harman Kardon speakers and a digital amplifier, attached to the computer via Universal Serial Bus (USB). Despite its size, the Cube fits three RAM slots, two FireWire 400 ports, and two USB 1.1 ports for connecting peripherals in its frame. These ports and the power cable are located on the underside of the machine. Access to the machine's internal components is accomplished by inverting the unit and using a pop-out handle to slide the entire internal assembly out from the shell. Development The Cube was an important product to Apple, and especially to Apple CEO Steve Jobs, who said the idea for the product came from his own desires as a computer user for something between the iMac and Power Mac G4. "I wanted the [flat-panel] Cinema Display but I don't need the features of the Power Mac," he told Newsweek. Jobs's minimalist aesthetic influenced the core components of the design, from the lack of a mechanical power button, to the trayless optical drive and quiet fanless operation.
The design team at Apple, led by Jonathan Ive, attempted to fit the power of a desktop in a much smaller form factor; Ive saw traditional desktop tower computers as lazy, designed around what was easiest for engineers. The Cube represented an internal shift in Apple, as the designers held increasing sway over product design. The New York Times called the Cube "pure [...] industrial design" harkening to Bauhaus concepts. The Cube represented an effort by Apple to simplify the computer to its barest essentials. Journalist Jason Snell called the machine an example of Jobs and Ive's obsession with a "Black Box"—dense, miniaturized computers hidden within a pleasing shell hiding the "magic" of its technology. As the Cube has no fan, the design started with the heat sink. The power button that turned on with a wave or touch was accomplished via the use of capacitive sensing. The proprietary plastics formula for the housing took Apple six months to develop. Effort spent developing the Cube would pioneer new uses and processes for materials at Apple that benefitted later products. Because of the technology included in the Cube, Apple's engineers had a tough time keeping the total cost low. Advertising director Ken Segall recalled that Jobs learned of the product's price shortly before an ad agency meeting, and was left "visibly shaken" by the news, realizing that the high price might cause the product's failure. Release and reception Rumors of a cube-shaped Apple computer leaked weeks in advance, and some sites posted purported pictures. The G4 Cube was announced at Macworld Expo on July 19, 2000, as an end-of-show "one more thing". Jobs touted it as combining the power of the Power Mac G4 with a sleek design and miniaturization Apple learned from producing the iMac. Alongside the Cube, Apple introduced a new mouse, keyboard, and displays to complement the machine. The machine's size and looks were immediately divisive, which Macworld editor Andrew Gore took as an indication that Apple had succeeded in creating a cutting-edge product. The design was a point of praise as well as jokes—the computer was compared to a Borg cube, toasters, or a box of Kleenex tissues. Others compared it to the NeXTcube. Ive and the design team were so amused by the comparison to a tissue box that they used spare Cube shells for that purpose in their studio. Reviews were generally positive. Peter H. Lewis, writing for The New York Times, called the computer the most attractive on the market, and that the machine, combined with Apple's displays and peripherals, created "desk sculpture". PC Magazine Australia said that after changing the look of computers with the iMac, the G4 Cube had raised the bar for competitors even further. Gore called the Cube a work of art that felt more like sculpture than a piece of technology, but noted that one had to live with compromises made in the service of art. Walt Mossberg, writing for The Wall Street Journal, called it the "most gorgeous personal computer" that he had ever seen. Critics noted that to get easy access to plug and unplug peripherals, users would have to tip the entire machine—risking accidental sleep activation or dropping the slippery plastic computer entirely. Macworld found the touch-sensitive power button too sensitive and that they accidentally activated sleep mode regularly. They also reported that the stock 5400-rpm hard drive and 64 MB of RAM on the base model slowed the system considerably. 
The Cube won several international design awards on release, as well as PC Magazine's best desktop computer for its Technical Innovation Awards. The G4 Cube and its peripherals were acquired and showcased by The Museum of Modern Art alongside other Apple products. Sales The introduction of the Cube did not fit with the focused product lineup Jobs had introduced since his return to Apple, leaving it without a clear audience. It was as expensive as a similarly equipped Power Mac, but did not feature extra room for more storage or PCI slots. It was likewise much more expensive than an upgraded consumer iMac. Jobs imagined that creative professionals and designers would want one, and that the product was so great that it would inform buying patterns. Sales for the Cube were much lower than expected. Returning from the brink of bankruptcy, Apple had eleven profitable quarters before the Cube's announcement, but Apple's end-of-year financials for 2000 missed predicted revenues by $180 million. Part of the drop in profit was attributed to the Cube, which sold only a third as many units as Apple had expected, creating a $90 million shortfall in its revenue targets. The Cube accounted for 29,000 of the Macs Apple shipped in the quarter, compared to 308,000 iMacs. Retailers were awash in excess product, leaving Apple with a large amount of unsold inventory heading into 2001 that they expected to last until March. The computer appealed to high-end customers who wanted a small and sleek design, but Jobs admitted that audience was smaller than expected. In February 2001, Apple lowered the price on the 500 MHz model and added new memory, hard drive, and graphics options. These updates made little difference, and sales continued to decline. The Cube sold 12,000 units in the first quarter of 2001, representing just 1.6% of the company's total computer sales. In addition to the product's high price, the Cube suffered from cosmetic issues. Early buyers noticed cracks caused by the injection-molded plastic process. The idea of a design-focused product having aesthetic flaws turned into a negative public relations story for Apple, and dissuaded potential buyers for whom the design was the main appeal. The Cube's radical departure from a conventional personal computer alienated potential buyers, and exacerbated Apple's struggles in the market competing with the performance of Windows PCs. Macworld's Benj Edwards wrote that consumers treated the Cube as "an underpowered, over-expensive toy or [...] an emotionally inaccessible, ultra-geometric gray box suspended in an untouchable glass prison." The lack of internal expansion and reliance on less-common USB and FireWire peripherals also hurt the computer's chances of success. Despite Jobs's clear love of the computer, he was quick to axe the underperforming product. On July 3, 2001, an Apple press release made the unusual statement that the computer—rather than being canceled or discontinued—was having its production "suspended indefinitely", owing to low demand. Apple did not rule out an upgraded Cube model in the future, but considered it unlikely. Business journalist Karen Blumenthal called the Cube Jobs's first big failure since his return to Apple. Jobs's ability to quickly move on the mistake left the Cube a "blip" in Apple's history, according to Segall—a quickly forgotten failure amidst other successful innovations.
Legacy Though Apple CEO Tim Cook called the Cube "a spectacular failure" and the product sold only 150,000 units before being discontinued, it became highly popular with a small but enthusiastic group of fans. Macworld's Benj Edwards wrote that the Cube was a product ahead of its time; its appeal to a dedicated group of fans years after it was discontinued was a testament to its vision. After its discontinuation the product fetched high prices from resellers, and a cottage industry developed selling upgrades and modifications to make the machine run faster or cooler. John Gruber wrote 20 years after its introduction that the Cube was a "worthy failure [...] Powerful computers needed to get smaller, quieter, and more attractive. The Cube pushed the state of the art forward." CNET called the machine "an iconic example of millennium-era design". Its unconventional and futuristic appearance earned it a spot as a prop in several films and television shows, including Absolutely Fabulous, The Drew Carey Show, Orange County, and 24. Sixteen Cubes were also used to power the displays of the computer consoles in Star Trek: Enterprise. Although the Cube failed commercially, it influenced future Apple products. The efforts at miniaturizing computer components would benefit future computers like the flatscreen iMac G4, while the efforts Apple spent learning how to precision-machine parts of the Cube would be integral to the design of aluminum MacBooks. The Mac mini fit an entire computer in a shell one-fifth the size of the Cube and retained some of the Cube's design philosophies. In comparison to the high price of the Cube, the Mini retailed for $499 and became a successful product that remains part of Apple's lineup. The translucent cube shape would return with the design for the flagship Apple Fifth Avenue store in New York City. Capacitive touch would reappear in the iPod and iPhone lines, and the Cube's vertical thermal design and lattice grille pattern were echoed by the 2013 and 2019 versions of the Mac Pro. Specifications References External links Computer-related introductions in 2000 Macintosh desktops Macintosh case designs G4 Cube G4 Cube
48723471
https://en.wikipedia.org/wiki/How%27s%20Your%20Process%3F%20%28Play%29
How's Your Process? (Play)
How's Your Process? (Play) is the second part of the second studio album by alternative rock band Dot Hacker and the second of a two-album series. The album was released on October 7, 2014 on the ORG Music label in digital, CD, cassette, and 12″ vinyl formats. Josh Klinghoffer stated in an interview that How's Your Process was intended to be released as a single album, but was split into two when the band could not agree on which songs to include. Klinghoffer also revealed that the cover is a photograph by Ryszard Horowitz, which the band discovered in an article from a 1969 issue of Esquire magazine. Track listing Personnel Dot Hacker Josh Klinghoffer – lead vocals, guitar, keyboards, synthesizers Clint Walsh – guitar, backing vocals, synthesizers Jonathan Hischke – bass guitar Eric Gardner – drums Additional musicians Vanessa Freebairn-Smith – string arrangements Sonus Quartet – strings Production Eric Palmquist – engineer, mixing Bernie Grundman – mastering Artwork Ryszard Horowitz – photography Astrelle Johnquest – design References 2014 albums Dot Hacker albums
25060001
https://en.wikipedia.org/wiki/Sales%20decision%20process
Sales decision process
Sales decision process is a formalized sales process companies use to manage the decision process behind a sale. SDP “is a defined series of steps you follow as you guide prospects from initial contact to purchase.” This method includes planning specific timelines and milestones at the beginning of a sale, both internally and with the business customer. The process can be managed with special purpose SDP software. SDP software allows customers and vendors to work collaboratively throughout a sales cycle with the objective to close larger/longer deals faster. An SDP system is typically integrated with software that automates some of the sales process (Sales Force Automation) and one that helps manage the customer data (Customer relationship management). SDP manages the sales process while the SFA and CRM manage the customer. Overview SDP takes the concept of customer driven sales automation and turns it on its head. It recognizes that a business can’t control individuals or teams but it can control the company’s sales process. SDP allows customers and vendors to work collaboratively throughout the sales cycle. This collaboration drives the sales toward a final decision. SDP steps can include: Presentations Demos Buy-in from stakeholders Budget approval Business Cases Case Studies Reference visits Contract negotiations Each sales cycle is essentially a project with associated milestones, tasks, and deliverables that require participation, coordination, and contributions from multiple individuals on both the customer side and sales side. According to IT industry experts, investing in decision process software can make a competitive difference in any industry. An article in the Harvard Business Review by Andrew McAfee and Eric Brynjolfsson, says market competition in the United States is heating up “not because more products are becoming digital but because more processes are. Just as a digital photo or a web-search algorithm can be endlessly replicated quickly and accurately by copying the underlying bits, a company’s unique business process can now be propagated with much higher fidelity across the organization by embedding it in enterprise information technology. As a result, an innovator with a better way of doing things can…dominate an industry.” Advantages Historically, the sales decision process has been managed through the common sales practice of Close Plans or Solution Evaluation Plans that pass information in Excel or Word Documents back-and-forth between customer and vendor. This creates challenges with version control, data latency, poor visibility, and lack of productive participation. With SDP software user can benefit from: 1. Faster Close Rates and Lower Cost of Sale: Instead of filling-in, filing, and sending Excel or Word based Close Plans, users can simply update the SDP on-line. SDP can generate milestones, assign tasks, send reminders, and track completion dates. A tighter sales cycle equals faster close rates and lowers the overall costs of sale. 2. A Competitive Advantage: SDP technology drives teamwork, greater efficiencies, and clearer communication during each step of the sales cycle giving companies an edge over their competition. 3. Reliable Close Dates: SDP provides effective collaboration between customer and vendor on every step in the sales cycle to create on-going check-points and a mutually agreed upon close date. 4. 
One Source of Truth: Instead of passing Word- or Excel-based close plans back and forth, which can suffer from version-control problems and scattered islands of information, SDP provides secure web-based access from any browser to quickly view up-to-the-minute consolidated information (including Gantt charts) on the status of the sale. Timely updates accurately set expectations for the customer, sales management, and the entire extended sales team. Disadvantages Problems can occur if the company implementing a new SDP system does not outline a realistic sales approach to roadmap the process. If the sales team does not perceive SDP as a benefit, they are unlikely to buy into the process and use the software effectively. Implementation strategies Sales and marketing experts recommend that companies speak with their salespeople and their customers for insight in order to achieve a successful implementation of a formalized SDP system. “A thorough understanding of the process is required to compete effectively,” according to Pat Thull, COO/Partner of Prime Resource Group Inc. The smart performer today, and the smart leader, implements a system that properly navigates the process to getting the job done right. Once that is complete, it is imperative to “sell” the sales team. Sales teams need to understand how SDP will help them better manage and close deals. This is what gives them the confidence and drive to use the system. References Sales
139242
https://en.wikipedia.org/wiki/Apple%20Desktop%20Bus
Apple Desktop Bus
Apple Desktop Bus (ADB) is a proprietary bit-serial peripheral bus connecting low-speed devices to computers. It was introduced on the Apple IIGS in 1986 as a way to support low-cost devices like keyboards and mice, allowing them to be connected together in a daisy chain without the need for hubs or other devices. Apple Desktop Bus was quickly introduced on later Macintosh models, on later models of NeXT computers, and saw some other third-party use as well. Like the similar PS/2 connector used in many PC-compatibles at the time, Apple Desktop Bus was rapidly replaced by USB as that system became popular in the late 1990s; the last external Apple Desktop Bus port on an Apple product was in 1999, though it remained as an internal-only bus on some Mac models into the 2000s. History AppleBus Early during the creation of the Macintosh computer, the engineering team had selected the fairly sophisticated Zilog 8530 to supply serial communications. This was initially done to allow multiple devices to be plugged into a single port, using simple communication protocols implemented inside the 8530 to allow them to send and receive data with the host computer. During development of this AppleBus system, computer networking became a vitally important feature of any computer system. With no card slots, the Macintosh was unable to easily add support for Ethernet or similar local area networking standards. Work on AppleBus was re-directed to networking purposes, and was released in 1985 as the AppleTalk system. This left the Mac with the original single-purpose mouse and keyboard ports, and no general-purpose system for low-speed devices to use. Apple Desktop Bus The first system to use Apple Desktop Bus was the Apple IIGS of 1986. It was used on all Apple Macintosh machines starting with the Macintosh II and Macintosh SE. Apple Desktop Bus was also used on later models of NeXT computers. The vast majority of Apple Desktop Bus devices are for input, including trackballs, joysticks, graphics tablets and similar devices. Special-purpose uses included software protection dongles and even the TelePort modem. Move to USB The first Macintosh to move on from Apple Desktop Bus was the iMac in 1998, which uses USB in its place. The last Apple computer to have an Apple Desktop Bus port is the Power Macintosh G3 (Blue and White) in 1999. PowerPC-based PowerBooks and iBooks still used the Apple Desktop Bus protocol in the internal interface with the built-in keyboard and touchpad. Subsequent models use a USB-based trackpad. Design Physical In keeping with Apple's general philosophy of industrial design, Apple Desktop Bus was intended to be as simple to use as possible, while still being inexpensive to implement. A suitable connector was found in the form of the 4-pin mini-DIN connector, which is also used for S-Video. The connectors are small, widely available, and can only be inserted the "correct way". They do not lock into position, but even with a friction fit they are firm enough for light duties like those intended for Apple Desktop Bus. The Apple Desktop Bus protocol requires only a single pin for data, labeled ADB. The data signal is self-clocking. Two of the other pins are used for +5 V power supply and ground. The +5 V pin guarantees at least 500 mA, and requires devices to use only 100 mA each. ADB also includes the PSW pin which is attached directly to the power supply of the host computer.
This is included to allow a key on the keyboard to start up the machine without needing the Apple Desktop Bus software to interpret the signal. In more modern designs, an auxiliary microcontroller is always kept running, so it is economical to use a power-up command over the standard USB channel. The decoding transceiver ASIC as well as associated patents were controlled by Apple; this required vendors to work more closely with Apple. In the Macintosh SE, the Apple Desktop Bus is implemented in an Apple-branded Microchip PIC16CR54 Microcontroller. Communication The Apple Desktop Bus system is based around the devices having the ability to decode a single number (the address) and being able to hold several small bits of data (their registers). All traffic on the bus is driven by the host computer, which sends out commands to read or write data: devices are not allowed to use the bus unless the computer first requests it. These requests take the form of single-byte strings. The upper four bits contain the address, the ID of one of the devices on the chain. The four bits allow for up to 16 devices on a single bus. The next two bits specify one of four commands, and the final two bits indicate one of four registers. The commands are: talk - tells the selected device to send the contents of a register to the computer listen - tells the device to set the register to the following value flush - clear the contents of a selected register reset - tell all devices on the bus to reset For instance, if the mouse is known to be at address $D, the computer will periodically send out a 1-byte message on the bus that looks something like: 1101 11 00 This says that device $D (1101) should talk (11) and return the contents of register zero (00). To a mouse this means "tell me the latest position changes". Registers can contain between two and eight bytes. Register zero is generally the primary communications channel. Registers one and two are undefined, and are generally intended to allow 3rd party developers to store configuration information. Register three always contains device identification information. Enumeration and identification The addresses and enumeration of the devices are set to default values when reset. For instance, all keyboards are set to $2, and all mice to $3. When the machine is first powered on, the ADB device driver will send out talk commands asking each of these known default addresses, in turn, for the contents of register three. If no response comes from a particular address, the computer marks it dead and doesn't bother polling it later. If a device does respond, it does so by saying it is moving to a new randomly selected higher address. The computer then responds by sending another command to that new address, asking the device to move to yet another new address. Once this completes, that device is marked live, and the system continues polling it in the future. Once all of the devices are enumerated in this fashion, the bus is ready to be used. Although it was not common, it is possible for the Apple Desktop Bus bus to have more than one device of the same sort plugged in — two graphics tablets or software copy protection dongles, for instance. In this case when it asks for devices on that default address, both will respond and a collision could occur. The devices include a small bit of timing that allows them to avoid this problem. 
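The single-byte command layout described above (a four-bit device address, a two-bit command code and a two-bit register number) can be illustrated with a short sketch. This is a minimal, hypothetical Python helper, not part of any Apple software; the only command code taken from the text is talk (binary 11), from the mouse example.

```python
TALK = 0b11  # command code for "talk", taken from the 1101 11 00 example above

def encode_adb_command(address: int, command: int, register: int) -> int:
    """Pack a 4-bit device address, a 2-bit command code and a 2-bit
    register number into one command byte, address in the upper four bits."""
    assert 0 <= address <= 0xF and 0 <= command <= 0b11 and 0 <= register <= 0b11
    return (address << 4) | (command << 2) | register

def decode_adb_command(byte: int) -> tuple[int, int, int]:
    """Split a command byte back into (address, command code, register)."""
    return byte >> 4, (byte >> 2) & 0b11, byte & 0b11

# The example from the text: ask the mouse at address $D to talk and return
# the contents of register zero, giving the bit pattern 1101 11 00.
command_byte = encode_adb_command(0xD, TALK, 0)
assert format(command_byte, "08b") == "11011100"
assert decode_adb_command(command_byte) == (0xD, TALK, 0)
```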
After receiving a message from the host, the devices wait a short random time before responding, and then only do so after "snooping" the bus to make sure it was not busy. With two dongles plugged in, for instance, when the bus is first setting up and queries that address, one of them will be the first to respond due to the random wait timer. The other will notice the bus was busy and not respond. The host will then send out another message to that original address, but since one device has moved to a new address, only the other will then respond. This process continues until no one responds to the request on the original address, meaning there are no more devices of that type to enumerate. Data rates on the bus are theoretically as high as 125 kbit/s. However, the actual speed is at best half that, due to there being only one pin being shared between the computer and devices, and in practice, throughput is even less, as the entire system was driven by how fast the computer polls the bus. The classic Mac OS is not particularly well suited to this task, and the bus often gets bogged down at about 10 kbit/s. Early Teleport modems running at 2400 bit/s have no problems using Apple Desktop Bus, but later models were forced to move to the more expensive RS-422 ports as speeds moved to 14.4 kbit/s and higher. Problems While Mini-DIN connectors cannot be plugged in the "wrong way", it is possible to have trouble finding the right way without looking inside the circular connector's shroud. Apple attempted to help by using U-shaped soft plastic grips around the connectors to key both plugs and sockets so the flat side has a specific relation to the shell keyway, but this feature was ignored by some third-party manufacturers. Additionally, there are four ways to orient the receiving socket on a device such as a keyboard; various Apple keyboards use at least three of these possible orientations. The mini-DIN connector is only rated for 400 insertions and it is easy to bend a pin if not inserted with caution; in addition, the socket can become loose, resulting in intermittent function. Some Apple Desktop Bus devices lack a pass-through connector, making it impossible to daisy-chain more than one such device at a time without obscure splitter units. Few mice or trackballs have them. One peculiarity of Apple Desktop Bus is that in spite of being electrically unsafe for hot-swapping on all but a few machines, it has all of the basic capabilities needed for hot-swapping (like modern buses) implemented in its software and hardware. On practically all original Apple Desktop Bus systems, it is not safe to plug a device once the system is powered on. This can cause the opening of a soldered-in fuse on the motherboard. If brought to an authorised dealer, this can result in a motherboard swap at a significant expense. A simple alternative is to obtain a fuse at a nominal cost and wire it in parallel across the open motherboard fuse (not necessarily requiring soldering). Patents 4,875,158 Ashkin; Peter B. (Los Gatos, CA), Clark; Michael (Glendale, CA) 4,910,655 Ashkin; Peter B. (Los Gatos, CA), Clark; Michael (Glendale, CA) 4,912,627 Ashkin; Peter B. (Los Gatos, CA), Clark; Michael (Glendale, CA) 4,918,598 Ashkin; Peter B. (Los Gatos, CA), Clark; Michael (Glendale, CA) 5,128,677 Donovan; Paul M. (Santa Clara, CA), Caruso; Michael P. (Sudbury, MA) 5,175,750 Donovan; Paul M. (Santa Clara, CA), Caruso; Michael P. (Sudbury, MA) 5,828,857 Scalise; Albert M. 
(San Jose, CA) See also List of device bandwidths HP-IL ACCESS.bus References External links About the ADB Manager Apple Documentation on the ADB Protocol Apple doc on the ADB port Complete hardware specification of ADB devices and protocol (Apple Guide to the Macintosh Family Hardware) Microchip Application note on ADB device development Computer buses Serial buses Macintosh internals Computer-related introductions in 1986
18288798
https://en.wikipedia.org/wiki/United%20States%20v.%20American%20Library%20Ass%27n
United States v. American Library Ass'n
United States v. American Library Association, 539 U.S. 194 (2003), was a decision in which the United States Supreme Court ruled that the United States Congress has the authority to require public schools and libraries receiving E-Rate discounts to install web filtering software as a condition of receiving federal funding. In a plurality opinion, the Supreme Court ruled that: 1.) public libraries' use of Internet filtering software does not violate their patrons' First Amendment free speech rights; 2.) The Children's Internet Protection Act is not unconstitutional. Facts The Children's Internet Protection Act (CIPA) was passed by Congress in 2000. CIPA required that in order to qualify for federal assistance for Internet access, public libraries must install software that blocked images deemed obscene or child pornography, and other material which could be dangerous for minor children. The American Library Association, a group of public libraries, library associations, library patrons, and website publishers challenged this law. They claimed that it improperly required them to restrict the First Amendment rights of library patrons. The Court, in a decision written by Chief Justice Rehnquist, ruled on whether public libraries' use of Internet filtering software violated patrons' First Amendment rights, as well as whether CIPA was a valid exercise of Congress' spending power by requiring filters for any library who wanted to receive federal funds for Internet access. Background of CIPA The Children’s Internet Protection Act (CIPA) is a federal law enacted by Congress to address concerns about access to offensive content over the Internet on school and library computers. CIPA imposed certain types of requirements on any school or library that receives funding under the E-rate program or Library Services and Technology Act (LSTA) grants, which subsidize internet technology and connectivity for schools and libraries. In early 2001, the FCC issued rules implementing CIPA. Conclusion In a plurality decision written by Chief Justice Rehnquist, the Supreme Court reversed the District Court's decision, and upheld the constitutionality of the Children's Internet Protection Act (CIPA), which requires public libraries receiving federal funds related to Internet access to install filtering devices on computer terminals that block images that constitute obscenity or child pornography, and any other material deemed harmful to minors. The Court reversed the judgment of the District Court that this content-based restriction on Internet speech was invalid on its face because available filtering devices "overblock" some constitutionally protected material, and thus do not meet the First Amendment's narrow tailoring requirement. The Supreme Court addressed this concern with the argument that any filters can be easily disabled should the patron ask. The Supreme Court also held that the public forum principles on which the district court relied are "out of place in the context of this case" and that Internet access in public libraries "is neither a 'traditional' nor a 'designated' public forum." A public forum is created when the government makes an affirmative choice to open up an area for use as a public forum. Libraries, however, do not acquire Internet terminals in order to "create a public forum for Web publishers to express themselves, any more than it collects books in order to provide a public forum for the authors of books to speak." 
The Court explained that the Internet is simply "another method for making information available in a school or library . . . [and is] no more than a technological extension of the book stack." Justices Anthony M. Kennedy and Stephen G. Breyer filed opinions concurring in the judgment. Both noted that CIPA imposed a comparatively small burden on library Internet users that was not disproportionate to any potential speech-related harm, especially in light of the libraries' ability to unblock sites. Dissent John Paul Stevens dissented, submitting that CIPA unlawfully conditioned receipt of government funding on the restriction of First Amendment rights because CIPA denied the libraries any discretion in judging the merits of the blocked websites. Justice David H. Souter also dissented. In his dissent, he acknowledged the legitimacy of the government's interest in protecting children from obscene material. However, he did not believe CIPA was narrowly tailored to achieve this legitimate interest. He focused on the language of CIPA which said the library "may" unblock the filters for "bona fide research or other lawful purposes", which imposed eligibility requirements on unblocking and left the decision to the librarian's discretion. He believed this would prevent adults from accessing lawful and constitutionally protected speech. He suggested that to prevent this, children could be restricted to blocked terminals, leaving unblocked terminals available to adults. He believed CIPA to be an unconstitutional "content-based restriction on communication of material in the library's control that an adult could otherwise lawfully see" rising to the level of censorship. Justice Ruth Bader Ginsburg joined Souter's dissent. Reaction and Later Cases The American Civil Liberties Union (ACLU) said that it was "disappointed" that the Supreme Court held that "Congress can force public libraries to install blocking software on library Internet terminals", but noted that the ruling minimized the law's impact on adults, who can insist that the software be disabled. "Although we are disappointed that the Court upheld a law that is unequivocally a form of censorship, there is a silver lining. The Justices essentially rewrote the law to minimize its effect on adult library patrons," said Chris Hansen, a senior staff attorney with the ACLU, which had challenged the law on behalf of libraries, adult and minor library patrons, and Internet content providers. On January 26, 2016, the Wisconsin 3rd District Court of Appeals determined in Wisconsin v. David J. Reidinger that the defendant did not have a First Amendment right to view pornography in a public library. Following a bench trial, Reidinger was found to have violated WIS. ADMIN. CODE § UWS 18.11(2) and was fined $295. A student supervisor at the McIntyre Library on the University of Wisconsin–Eau Claire (UWEC) campus testified that she received a complaint from a student at 10:40 p.m. on December 14, 2014. The complaining student testified that she and her roommate were working on homework at the library when they noticed Reidinger watching pornographic material on the computer next to them. Two university police officers responded to the complaint. The court upheld his conviction of disorderly conduct. See also Miller v. California Miller test Florence v. Shurtleff References Further reading United States v. American Library Association: A Missed Opportunity for the Supreme Court to Clarify Application of First Amendment Law to Publicly Funded Expressive Institutions by: Robert Corn-Revere Internet Censorship: United States v.
American Library Association by: Martha McCarthy, Ph.D. High Court Hears Arguments on Library Internet Filters by: Tony Mauro External links United States Supreme Court cases United States Supreme Court cases of the Rehnquist Court United States Free Speech Clause case law 2003 in United States case law American Library Association
19189
https://en.wikipedia.org/wiki/Mumbai
Mumbai
Mumbai (also known as Bombay, the official name until 1995) is the capital city of the Indian state of Maharashtra. According to the United Nations, as of 2018, Mumbai is the second-most populous city in India after Delhi and the eighth-most populous city in the world with a population of roughly 2 crore (20 million). As per the Indian government population census of 2011, Mumbai was the most populous city in India with an estimated city proper population of 1.25 crore (12.5 million) living under the Municipal Corporation of Greater Mumbai. Mumbai is the centre of the Mumbai Metropolitan Region, the sixth most populous metropolitan area in the world with a population of over 2.3 crore (23 million). Mumbai lies on the Konkan coast on the west coast of India and has a deep natural harbour. In 2008, Mumbai was named an alpha world city. It has the highest number of millionaires and billionaires among all cities in India. Mumbai is home to three UNESCO World Heritage Sites: the Elephanta Caves, Chhatrapati Shivaji Maharaj Terminus, and the city's distinctive ensemble of Victorian and Art Deco buildings designed in the 19th and 20th centuries. The seven islands that constitute Mumbai were earlier home to communities of Marathi-speaking Koli people. For centuries, the seven islands of Bombay were under the control of successive indigenous rulers before being ceded to the Portuguese Empire, and subsequently to the East India Company in 1661, through the dowry of Catherine of Braganza when she married Charles II of England. During the mid-18th century, Bombay was reshaped by the Hornby Vellard project, which undertook reclamation of the area between the seven islands from the sea. Along with construction of major roads and railways, the reclamation project, completed in 1845, transformed Bombay into a major seaport on the Arabian Sea. Bombay in the 19th century was characterised by economic and educational development. During the early 20th century it became a strong base for the Indian independence movement. Upon India's independence in 1947 the city was incorporated into Bombay State. In 1960, following the Samyukta Maharashtra Movement, a new state of Maharashtra was created with Bombay as the capital. Mumbai is the financial, commercial, and entertainment capital of India. It is also one of the world's top ten centres of commerce in terms of global financial flow, generating 6.16% of India's GDP, and accounting for 25% of industrial output, 70% of maritime trade in India (Mumbai Port Trust and JNPT), and 70% of capital transactions to India's economy. Mumbai has the eighth-highest number of billionaires of any city in the world, and Mumbai's billionaires had the highest average wealth of any city in the world in 2008. The city houses important financial institutions and the corporate headquarters of numerous Indian companies and multinational corporations. It is also home to some of India's premier scientific and nuclear institutes. The city is also home to the Bollywood and Marathi cinema industries. Mumbai's business opportunities attract migrants from all over India. Etymology The name Mumbai (Marathi: मुंबई, Gujarati: મુંબઈ, Hindi: मुंबई) is derived from Mumbā or Mahā-Ambā—the name of the patron goddess (kuladevata) Mumbadevi of the native Koli community—and ā'ī meaning "mother" in the Marathi language, which is the mother tongue of the Koli people and the official language of Maharashtra.
The Koli people originated in Kathiawar and Central Gujarat, and according to some sources they brought their goddess Mumba with them from Kathiawar (Gujarat), where she is still worshipped. However, other sources disagree that Mumbai's name was derived from the goddess Mumba. The oldest known names for the city are Kakamuchee and Galajunkja; these are sometimes still used. In 1508, Portuguese writer Gaspar Correia used the name "Bombaim" in his Lendas da Índia ("Legends of India"). This name possibly originated as the Galician-Portuguese phrase bom baim, meaning "good little bay", and Bombaim is still commonly used in Portuguese. In 1516, Portuguese explorer Duarte Barbosa used the name Tana-Maiambu: Tana appears to refer to the adjoining town of Thane and Maiambu to Mumbadevi. Other variations recorded in the 16th and the 17th centuries include: Mombayn (1525), Bombay (1538), Bombain (1552), Bombaym (1552), Monbaym (1554), Mombaim (1563), Mombaym (1644), Bambaye (1666), Bombaiim (1666), Bombeye (1676), Boon Bay (1690), and Bon Bahia. After the English gained possession of the city in the 17th century, the Portuguese name was anglicised as Bombay. Ali Muhammad Khan, imperial dewan or revenue minister of the Gujarat province, in the Mirat-i Ahmedi (1762) referred to the city as Manbai. The French traveller Louis Rousselet, who visited in 1863 and 1868, states in his book L’Inde des Rajahs, which was first published in 1877: "Etymologists have wrongly derived this name from the Portuguese Bôa Bahia, or (French: "bonne bai", English: "good bay"), not knowing that the tutelar goddess of this island has been, from remote antiquity, Bomba, or Mumba Devi, and that she still..., possesses a temple". By the late 20th century, the city was referred to as Mumbai or Mambai in Marathi, Konkani, Gujarati, Kannada and Sindhi, and as Bambai in Hindi. The Government of India officially changed the English name to Mumbai in November 1995. This came at the insistence of the Marathi nationalist Shiv Sena party, which had just won the Maharashtra state elections, and mirrored similar name changes across the country and particularly in Maharashtra. According to Slate magazine, "they argued that 'Bombay' was a corrupted English version of 'Mumbai' and an unwanted legacy of British colonial rule." Slate also said "The push to rename Bombay was part of a larger movement to strengthen Marathi identity in the Maharashtra region." While the city is still referred to as Bombay by some of its residents and by some Indians from other regions, mention of the city by a name other than Mumbai has been controversial, resulting in emotional outbursts sometimes of a violently political nature. People from Mumbai A resident of Mumbai is called Mumbaikar in Marathi, in which the suffix kar means a resident of. The term had been in use for quite some time but it gained popularity after the official name change to Mumbai. Older terms such as Bombayite are also in use. History Early history Mumbai is built on what was once an archipelago of seven islands: Isle of Bombay, Parel, Mazagaon, Mahim, Colaba, Worli, and Old Woman's Island (also known as Little Colaba). It is not exactly known when these islands were first inhabited. Pleistocene sediments found along the coastal areas around Kandivali in northern Mumbai suggest that the islands were inhabited since the South Asian Stone Age. Perhaps at the beginning of the Common Era, or possibly earlier, they came to be occupied by the Koli fishing community. 
In the 3rd century BCE, the islands formed part of the Maurya Empire, during its expansion in the south, ruled by the Buddhist emperor Ashoka of Magadha. The Kanheri Caves in Borivali were excavated from basalt rock in the first century CE, and served as an important centre of Buddhism in Western India during ancient Times. The city then was known as Heptanesia (Ancient Greek: A Cluster of Seven Islands) to the Greek geographer Ptolemy in 150 CE. The Mahakali Caves in Andheri were cut out between the 1st century BCE and the 6th century CE. Between the 2nd century BCE and 9th century CE, the islands came under the control of successive indigenous dynasties: Satavahanas, Western Satraps, Abhira, Vakataka, Kalachuris, Konkan Mauryas, Chalukyas and Rashtrakutas, before being ruled by the Shilaharas from 810 to 1260. Some of the oldest edifices in the city built during this period are the Jogeshwari Caves (between 520 and 525), Elephanta Caves (between the sixth to seventh century), Walkeshwar Temple (10th century), and Banganga Tank (12th century). King Bhimdev founded his kingdom in the region in the late 13th century and established his capital in Mahikawati (present day Mahim). The Pathare Prabhus, among the earliest known settlers of the city, were brought to Mahikawati from Saurashtra in Gujarat around 1298 by Bhimdev. The Delhi Sultanate annexed the islands in 1347–48 and controlled it until 1407. During this time, the islands were administered by the Muslim Governors of Gujarat, who were appointed by the Delhi Sultanate. The islands were later governed by the independent Gujarat Sultanate, which was established in 1407. The Sultanate's patronage led to the construction of many mosques, prominent being the Haji Ali Dargah in Worli, built in honour of the Muslim saint Haji Ali in 1431. From 1429 to 1431, the islands were a source of contention between the Gujarat Sultanate and the Bahmani Sultanate of Deccan. In 1493, Bahadur Khan Gilani of the Bahmani Sultanate attempted to conquer the islands but was defeated. Portuguese and British rule The Mughal Empire, founded in 1526, was the dominant power in the Indian subcontinent during the mid-16th century. Growing apprehensive of the power of the Mughal emperor Humayun, Sultan Bahadur Shah of Gujarat was obliged to sign the Treaty of Bassein with the Portuguese Empire on 23 December 1534. According to the treaty, the Seven Islands of Bombay, the nearby strategic town of Bassein and its dependencies were offered to the Portuguese. The territories were later surrendered on 25 October 1535. The Portuguese were actively involved in the foundation and growth of their Roman Catholic religious orders in Bombay. They called the islands by various names, which finally took the written form Bombaim. The islands were leased to several Portuguese officers during their regime. The Portuguese Franciscans and Jesuits built several churches in the city, prominent being the St. Michael's Church at Mahim (1534), St. John the Baptist Church at Andheri (1579), St. Andrew's Church at Bandra (1580), and Gloria Church at Byculla (1632). The Portuguese also built several fortifications around the city like the Bombay Castle, Castella de Aguada (Castelo da Aguada or Bandra Fort), and Madh Fort. The English were in constant struggle with the Portuguese vying for hegemony over Bombay, as they recognised its strategic natural harbour and its natural isolation from land attacks. 
By the middle of the 17th century the growing power of the Dutch Empire forced the English to acquire a station in western India. On 11 May 1661, the marriage treaty of Charles II of England and Catherine of Braganza, daughter of King John IV of Portugal, placed the islands in possession of the English Empire, as part of Catherine's dowry to Charles. However, Salsette, Bassein, Mazagaon, Parel, Worli, Sion, Dharavi, and Wadala still remained under Portuguese possession. From 1665 to 1666, the English managed to acquire Mahim, Sion, Dharavi, and Wadala. In accordance with the Royal Charter of 27 March 1668, England leased these islands to the English East India Company in 1668 for a sum of £10 per annum. The population quickly rose from 10,000 in 1661, to 60,000 in 1675. The islands were subsequently attacked by Yakut Khan, the Muslim Koli admiral of the Mughal Empire, in October 1672, Rickloffe van Goen, the Governor-General of Dutch India on 20 February 1673, and Siddi admiral Sambal on 10 October 1673. In 1687, the English East India Company transferred its headquarters from Surat to Bombay. The city eventually became the headquarters of the Bombay Presidency. Following the transfer, Bombay was placed at the head of all the company's establishments in India. Towards the end of the 17th century, the islands again suffered incursions from Yakut Khan in 1689–90. The Portuguese presence ended in Bombay when the Marathas under Peshwa Baji Rao I captured Salsette in 1737, and Bassein in 1739. By the middle of the 18th century, Bombay began to grow into a major trading town, and received a huge influx of migrants from across India. Later, the British occupied Salsette on 28 December 1774. With the Treaty of Surat (1775), the British formally gained control of Salsette and Bassein, resulting in the First Anglo-Maratha War. The British were able to secure Salsette from the Marathas without violence through the Treaty of Purandar (1776), and later through the Treaty of Salbai (1782), signed to settle the outcome of the First Anglo-Maratha War. From 1782 onwards, the city was reshaped with large-scale civil engineering projects aimed at merging all the seven islands of Bombay into a single amalgamated mass by way of a causeway called the Hornby Vellard, which was completed by 1784. In 1817, the British East India Company under Mountstuart Elphinstone defeated Baji Rao II, the last of the Maratha Peshwa in the Battle of Khadki. Following his defeat, almost the whole of the Deccan Plateau came under British suzerainty, and was incorporated into the Bombay Presidency. The success of the British campaign in the Deccan marked the end of all attacks by native powers. By 1845, the seven islands coalesced into a single landmass by the Hornby Vellard project via large scale land reclamation. On 16 April 1853, India's first passenger railway line was established, connecting Bombay to the neighbouring town of Thana (now Thane). During the American Civil War (1861–1865), the city became the world's chief cotton-trading market, resulting in a boom in the economy that subsequently enhanced the city's stature. The opening of the Suez Canal in 1869 transformed Bombay into one of the largest seaports on the Arabian Sea. In September 1896, Bombay was hit by a bubonic plague epidemic where the death toll was estimated at 1,900 people per week. About 850,000 people fled Bombay and the textile industry was adversely affected. 
While the city was the capital of the Bombay Presidency, the Indian independence movement fostered the Quit India Movement in 1942 and the Royal Indian Navy mutiny in 1946. Independent India After India's independence in 1947, the territory of the Bombay Presidency retained by India was restructured into Bombay State. The area of Bombay State increased, after several erstwhile princely states that joined the Indian union were integrated into the state. Subsequently, the city became the capital of Bombay State. In April 1950, the municipal limits of Bombay were expanded by merging the Bombay Suburban District and Bombay City to form the Greater Bombay Municipal Corporation. The Samyukta Maharashtra movement to create a separate Maharashtra state including Bombay was at its height in the 1950s. In the Lok Sabha discussions in 1955, the Congress party demanded that the city be constituted as an autonomous city-state. The States Reorganisation Committee recommended a bilingual state for Maharashtra–Gujarat with Bombay as its capital in its 1955 report. The Bombay Citizens' Committee, an advocacy group of leading Gujarati industrialists, lobbied for Bombay's independent status. Following protests during the movement in which 105 people lost their lives in clashes with the police, Bombay State was reorganised on linguistic lines on 1 May 1960. Gujarati-speaking areas of Bombay State were partitioned into the state of Gujarat. Maharashtra State with Bombay as its capital was formed with the merger of Marathi-speaking areas of Bombay State, eight districts from Central Provinces and Berar, five districts from Hyderabad State, and numerous princely states enclosed between them. As a memorial to the martyrs of the Samyukta Maharashtra movement, Flora Fountain was renamed as Hutatma Chowk (Martyr's Square) and a memorial was erected. The following decades saw massive expansion of the city and its suburbs. In the late 1960s, Nariman Point and Cuffe Parade were reclaimed and developed. The Bombay Metropolitan Region Development Authority (BMRDA) was established on 26 January 1975 by the Government of Maharashtra as an apex body for planning and co-ordination of development activities in the Bombay metropolitan region. In August 1979, a sister township of New Bombay was founded by the City and Industrial Development Corporation (CIDCO) across the Thane and Raigad districts to help the dispersal and control of Bombay's population. The textile industry in Bombay largely disappeared after the widespread 1982 Great Bombay Textile Strike, in which nearly 250,000 workers in more than 50 textile mills went on strike. Mumbai's defunct cotton mills have since become the focus of intense redevelopment. Industrial development also broadened as the city's economy began focusing on petrochemicals, electronics and automobiles. In 1954, Hindustan Petroleum commissioned the Mumbai Refinery at Trombay; a BPCL refinery was also set up in the area. The Jawaharlal Nehru Port, which handles 55–60% of India's containerised cargo, was commissioned on 26 May 1989 across the creek at Nhava Sheva with a view to de-congest Bombay Harbour and to serve as a hub port for the city. The geographical limits of Greater Bombay were coextensive with municipal limits of Greater Bombay. On 1 October 1990, the Greater Bombay district was bifurcated to form two revenue districts, namely Bombay City and Bombay Suburban, though they continued to be administered by the same municipal administration.
The years from 1990 to 2010 saw an increase in violence and terrorism activities. Following the demolition of the Babri Masjid in Ayodhya, the city was rocked by the Hindu-Muslim riots of 1992–93 in which more than 1,000 people were killed. In March 1993, a series of 13 coordinated bombings at several city landmarks by Islamic extremists and the Bombay underworld resulted in 257 deaths and over 700 injuries. In 2006, 209 people were killed and over 700 injured when seven bombs exploded on the city's commuter trains. In 2008, a series of ten coordinated attacks by armed terrorists for three days resulted in 173 deaths, 308 injuries, and severe damage to several heritage landmarks and prestigious hotels. The three coordinated bomb explosions in July 2011 that occurred at the Opera house, Zaveri Bazaar and Dadar were the latest in the series of terrorist attacks in Mumbai which resulted in 26 deaths and 130 injuries. Mumbai is the commercial capital of India and has evolved into a global financial hub. For several decades it has been the home of India's main financial services, and a focus for both infrastructure development and private investment. From being an ancient fishing community and a colonial centre of trade, Mumbai has become South Asia's largest city and home of the world's most prolific film industry. Geography Mumbai is on a narrow peninsula on the southwest of Salsette Island, which lies between the Arabian Sea to the west, Thane Creek to the east and Vasai Creek to the north. Mumbai's suburban district occupies most of the island. Navi Mumbai is east of Thane Creek and Thane is north of Vasai Creek. Mumbai consists of two distinct regions: Mumbai City district and Mumbai Suburban district, which form two separate revenue districts of Maharashtra. The city district region is also commonly referred to as the Island City or South Mumbai. The total area of Mumbai is 603.4 km2 (233 sq mi). Of this, the island city spans 67.79 km2 (26 sq mi), while the suburban district spans 370 km2 (143 sq mi), together accounting for 437.71 km2 (169 sq mi) under the administration of Municipal Corporation of Greater Mumbai (MCGM). The remaining areas belong to various Defence establishments, the Mumbai Port Trust, the Atomic Energy Commission and the Borivali National Park, which are out of the jurisdiction of the MCGM. The Mumbai Metropolitan Region which includes portions of Thane, Palghar and Raigad districts in addition to Greater Mumbai, covers an area of 4,355 km2 (1681.5 sq mi). Mumbai lies at the mouth of the Ulhas River on the western coast of India, in the coastal region known as the Konkan. It sits on Salsette Island (Sashti Island), which it partially shares with the Thane district. Mumbai is bounded by the Arabian Sea to the west. Many parts of the city lie just above sea level, with elevations ranging from 10 m (33 ft) to 15 m (49 ft); the city has an average elevation of 14 m (46 ft). Northern Mumbai (Salsette) is hilly, and the highest point in the city is 450 m (1,476 ft) at Salsette in the Powai–Kanheri ranges. The Sanjay Gandhi National Park (Borivali National Park) is located partly in the Mumbai suburban district, and partly in the Thane district, and it extends over an area of 103.09 km2 (39.80 sq mi). Apart from the Bhatsa Dam, there are six major lakes that supply water to the city: Vihar, Lower Vaitarna, Upper Vaitarna, Tulsi, Tansa and Powai. Tulsi Lake and Vihar Lake are located in Borivili National Park, within the city's limits. 
The supply from Powai lake, also within the city limits, is used only for agricultural and industrial purposes. Three small rivers, the Dahisar River, Poinsar (or Poisar) and Ohiwara (or Oshiwara) originate within the park, while the polluted Mithi River originates from Tulsi Lake and gathers water overflowing from Vihar and Powai Lakes. The coastline of the city is indented with numerous creeks and bays, stretching from Thane Creek on the eastern front to Madh Marve on the western front. The eastern coast of Salsette Island is covered with large mangrove swamps, rich in biodiversity, while the western coast is mostly sandy and rocky. Soil cover in the city region is predominantly sandy due to its proximity to the sea. In the suburbs, the soil cover is largely alluvial and loamy. The underlying rock of the region is composed of black Deccan basalt flows, and their acidic and basic variants dating back to the late Cretaceous and early Eocene eras. Mumbai sits on a seismically active zone owing to the presence of 23 fault lines in the vicinity. The area is classified as a Seismic Zone III region, which means an earthquake of up to magnitude 6.5 on the Richter magnitude scale may be expected. Climate Mumbai has a tropical climate, specifically a tropical wet and dry climate (Aw) under the Köppen climate classification. It varies between a dry period extending from October to May and a wet period peaking in June. The cooler season from December to February is followed by the hotter season from March to May. The period from June to about the end of September constitutes the south west monsoon season, and October and November form the post-monsoon season. Flooding during monsoon is a major problem for Mumbai. Between June and September, the south west monsoon rains lash the city. Pre-monsoon showers are received in May. Occasionally, north-east monsoon showers occur in October and November. The maximum annual rainfall ever recorded was in 1954. The highest rainfall recorded in a single day was on 26 July 2005. The average total annual rainfall is for the Island City, and for the suburbs. The average annual temperature is , and the average annual precipitation is . In the Island City, the average maximum temperature is , while the average minimum temperature is . In the suburbs, the daily mean maximum temperature ranges from to , while the daily mean minimum temperature ranges from to . The record high was set on 14 April 1952, and the record low on 27 January 1962. Tropical cyclones are rare in the city. The worst cyclone ever to impact Mumbai was the 1948 Mumbai cyclone, during which strong gusts were recorded at Juhu. The storm left 38 people dead and 47 missing; it reportedly battered Bombay for 20 hours and left the city devastated. Air pollution is a major issue in Mumbai. According to the 2016 World Health Organization Global Urban Ambient Air Pollution Database, the annual average PM2.5 concentration in 2013 was 63 μg/m3, which is 6.3 times higher than that recommended by the WHO Air Quality Guidelines for the annual mean PM2.5. The Central Pollution Control Board of the Government of India and the Consulate General of the United States, Mumbai, monitor and publicly share real-time air quality data. In December 2019, IIT Bombay, in partnership with the McKelvey School of Engineering of Washington University in St. Louis, launched the Aerosol and Air Quality Research Facility to study air pollution in Mumbai, among other Indian cities.
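A quick arithmetic check of the "6.3 times" figure above, assuming the WHO 2005 Air Quality Guidelines annual mean PM2.5 value of 10 μg/m³ (the guideline value itself is not stated in the text and is supplied here as an assumption):

```latex
% Ratio of Mumbai's 2013 annual mean PM2.5 to the assumed WHO guideline value of 10 micrograms per cubic metre
\[
  \frac{63\ \mu\text{g/m}^3}{10\ \mu\text{g/m}^3} = 6.3
\]
```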
Economy Mumbai is India's second largest city (by population) and is the financial and commercial capital of the country, generating 6.16% of the total GDP. It serves as an economic hub of India, contributing 10% of factory employment, 25% of industrial output, 33% of income tax collections, 60% of customs duty collections, 20% of central excise tax collections, 40% of India's foreign trade, and a substantial share of corporate taxes. Along with the rest of India, Mumbai has witnessed an economic boom since the liberalisation of 1991, the finance boom in the mid-nineties and the IT, export, services and outsourcing boom in the 2000s. Although Mumbai had prominently figured as the hub of economic activity of India in the 1990s, the Mumbai Metropolitan Region is presently witnessing a reduction in its contribution to India's GDP. Recent estimates put the economy of the Mumbai Metropolitan Region at $400 billion (PPP metro GDP), ranking it either the most or second-most productive metro area of India. Many of India's numerous conglomerates (including Larsen & Toubro, State Bank of India (SBI), Life Insurance Corporation of India (LIC), Tata Group, Godrej and Reliance), and five of the Fortune Global 500 companies are based in Mumbai. This is facilitated by the presence of the Reserve Bank of India (RBI), the Bombay Stock Exchange (BSE), the National Stock Exchange of India (NSE), and financial sector regulators such as the Securities and Exchange Board of India (SEBI). Until the 1970s, Mumbai owed its prosperity largely to textile mills and the seaport, but the local economy has since diversified to include finance, engineering, diamond-polishing, healthcare and information technology. The key sectors contributing to the city's economy are: finance, gems & jewellery, leather processing, IT and ITES, textiles, petrochemical, electronics manufacturing, automobiles, and entertainment. Nariman Point and Bandra Kurla Complex (BKC) are Mumbai's major financial centres. Despite competition from Bangalore, Hyderabad and Pune, Mumbai has carved a niche for itself in the information technology industry. The Santacruz Electronic Export Processing Zone (SEEPZ) and the International Infotech Park (Navi Mumbai) provide facilities to IT companies. State and central government employees make up a large percentage of the city's workforce. Mumbai also has a large unskilled and semi-skilled self-employed population, who primarily earn their livelihood as hawkers, taxi drivers, mechanics and other such blue-collar professions. The port and shipping industry is well established, with Mumbai Port being one of the oldest and most significant ports in India. Dharavi, in central Mumbai, has an increasingly large recycling industry, processing recyclable waste from other parts of the city; the district has an estimated 15,000 single-room factories. Mumbai has been ranked sixth among the top ten global cities by billionaire count, with 28 billionaires, and it has 46,000 millionaires. With a total wealth of around $96,000 crore ($960 billion), it is the richest Indian city and the 12th-richest city in the world. It was also ranked seventh in the list of "Top Ten Cities for Billionaires" by Forbes magazine (April 2008), and first in terms of the average wealth of its billionaires. The Globalization and World Cities Study Group (GaWC) has ranked Mumbai as an "Alpha world city", the third-highest of its categories of global cities.
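Since the article alternates between Indian and Western numbering, a minimal worked conversion for the wealth figure above may help; it relies only on the definition 1 crore = 10,000,000:

```latex
% 1 crore = 10^7, so 96,000 crore converts to Western units as follows
\[
  96{,}000 \times 10^{7} = 9.6 \times 10^{11} = 960 \text{ billion}
\]
```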
Mumbai is the third most expensive office market in the world, and was ranked among the fastest cities in the country for business startup in 2009. Civic administration Greater Mumbai, an area of , consisting of the Mumbai City and Mumbai Suburban districts, extends from Colaba in the south, to Mulund and Dahisar in the north, and Mankhurd in the east. Its population as per the 2011 census was 12,442,373. It is administered by the Municipal Corporation of Greater Mumbai (MCGM) (sometimes referred to as the Brihanmumbai Municipal Corporation), formerly known as the Bombay Municipal Corporation (BMC). The MCGM is in charge of the civic and infrastructure needs of the metropolis. The mayor, who serves for a term of two and a half years, is chosen through an indirect election by the councillors from among themselves. The municipal commissioner is the chief executive officer and head of the executive arm of the municipal corporation. All executive powers are vested in the municipal commissioner who is an Indian Administrative Service (IAS) officer appointed by the state government. Although the municipal corporation is the legislative body that lays down policies for the governance of the city, it is the commissioner who is responsible for the execution of the policies. The commissioner is appointed for a fixed term as defined by state statute. The powers of the commissioner are those provided by statute and those delegated by the corporation or the standing committee. The Municipal Corporation of Greater Mumbai was ranked 9th out of 21 cities for best governance & administrative practices in India in 2014. It scored 3.5 on 10 compared to the national average of 3.3. The two revenue districts of Mumbai come under the jurisdiction of a district collector. The collectors are in charge of property records and revenue collection for the central government, and oversee the national elections held in the city. The Mumbai Police is headed by a police commissioner, who is an Indian Police Service (IPS) officer. The Mumbai Police is a division of the Maharashtra Police, under the state Home Ministry. The city is divided into seven police zones and seventeen traffic police zones, each headed by a deputy commissioner of police. The Mumbai Traffic Police is a semi-autonomous body under the Mumbai Police. The Mumbai Fire Brigade, under the jurisdiction of the municipal corporation, is headed by the chief fire officer, who is assisted by four deputy chief fire officers and six divisional officers. The Mumbai Metropolitan Region Development Authority (MMRDA) is responsible for infrastructure development and planning of Mumbai Metropolitan Region. Mumbai is the seat of the Bombay High Court, which exercises jurisdiction over the states of Maharashtra and Goa, and the union territory of Dadra and Nagar Haveli and Daman and Diu. Mumbai also has two lower courts, the small causes court for civil matters, and the sessions court for criminal cases. Mumbai also has a special Terrorist and Disruptive Activities (TADA) court for people accused of conspiring and abetting acts of terrorism in the city. Politics Mumbai had been a traditional stronghold and birthplace of the Indian National Congress, also known as the Congress Party. The first session of the Indian National Congress was held in Bombay from 28 to 31 December 1885. The city played host to the Indian National Congress six times during its first 50 years, and became a strong base for the Indian independence movement during the 20th century. 
The 1960s saw the rise of regionalist politics in Bombay, with the formation of the Shiv Sena on 19 June 1966, under the leadership of Balasaheb Thackeray out of a feeling of resentment about the relative marginalisation of the native Marathi people in Bombay. Shiv Sena switched from 'Marathi Cause' to larger 'Hindutva Cause' in 1985 and joined hands with Bhartiya Janata Party (BJP) in the same year. The Congress had dominated the politics of Bombay from independence until the early 1980s, when the Shiv Sena won the 1985 Bombay Municipal Corporation elections. In 1989, the Bharatiya Janata Party (BJP), a major national political party, forged an electoral alliance with the Shiv Sena to dislodge the Congress in the Maharashtra Legislative Assembly elections. In 1999, several members left the Congress to form the Nationalist Congress Party (NCP) but later allied with the Congress as part of an alliance known as the Democratic Front. Other parties such as Maharashtra Navnirman Sena (MNS), Samajwadi Party (SP), Bahujan Samaj Party (BSP), All India Majlis-e-Ittehadul Muslimeen (AIMIM) and several independent candidates also contest elections in the city. In the Indian national elections held every five years, Mumbai is represented by six parliamentary constituencies: North, North West, North East, North Central, South Central, and South. A member of parliament (MP) to the Lok Sabha, the lower house of the Indian Parliament, is elected from each of the parliamentary constituencies. In the 2019 national election, all six parliamentary constituencies were won by the BJP and Shiv Sena in alliance, with both parties winning three seats each. In the Maharashtra state assembly elections held every five years, Mumbai is represented by 36 assembly constituencies. A member of the legislative assembly (MLA) to the Maharashtra Vidhan Sabha (legislative assembly) is elected from each of the assembly constituencies. In the 2019 state assembly election, out of the 36 assembly constituencies, 16 were won by the BJP, 11 by the Shiv Sena, 6 by the Congress, 2 by the NCP and one by independent candidate. Elections are also held every five years to elect corporators to power in the MCGM. The Corporation comprises 227 directly elected Councillors representing the 24 municipal wards, five nominated Councillors having special knowledge or experience in municipal administration, and a mayor whose role is mostly ceremonial. In the 2012 municipal corporation elections, out of the 227 seats, the Shiv Sena-BJP alliance secured 107 seats, holding power with the support of independent candidates in the MCGM, while the Congress-NCP alliance bagged 64 seats. The tenure of the mayor, deputy mayor, and municipal commissioner is two and a half years. Transport Public transport Public transport systems in Mumbai include the Mumbai Suburban Railway, Monorail, Metro, Brihanmumbai Electric Supply and Transport (BEST) buses, black-and-yellow meter taxis, auto rickshaws and ferries. Suburban railway and BEST bus services together accounted for about 88% of the passenger traffic in 2008. Auto rickshaws are allowed to operate only in the suburban areas of Mumbai, while taxis are allowed to operate throughout Mumbai, but generally operate in South Mumbai. Taxis and rickshaws in Mumbai are required by law to run on compressed natural gas (CNG), and are a convenient, economical, and easily available means of transport. Railway The Mumbai Suburban Railway, popularly referred to as Locals forms the backbone of the city's transport system. 
It is operated by the Central Railway and Western Railway zones of the Indian Railways. Mumbai's suburban rail systems carried a total of 63 lakh (6.3 million) passengers every day in 2007. Trains are overcrowded during peak hours, with nine-car trains of rated capacity 1,700 passengers, actually carrying around 4,500 passengers at peak hours. The Mumbai rail network is spread at an expanse of 319 route kilometres. 191 rakes (train-sets) of 9 car and 12 car composition are utilised to run a total of 2,226 train services in the city. The Mumbai Monorail and Mumbai Metro have been built and are being extended in phases to relieve overcrowding on the existing network. The Monorail opened in early February 2014. The first line of the Mumbai Metro opened in early June 2014. Mumbai is the headquarters of two zones of the Indian Railways: the Central Railway (CR) headquartered at Chhatrapati Shivaji Terminus (formerly Victoria Terminus), and the Western Railway (WR) headquartered at Churchgate. Mumbai is also well connected to most parts of India by the Indian Railways. Long-distance trains originate from Chhatrapati Shivaji Terminus, Dadar, Lokmanya Tilak Terminus, Mumbai Central, Bandra Terminus, Andheri and Borivali. Bus Mumbai's bus services carried over 55 lakh (5.5 million) passengers per day in 2008, which dropped to 28 lakh (2.8 million) in 2015. Public buses run by BEST cover almost all parts of the metropolis, as well as parts of Navi Mumbai, Mira-Bhayandar and Thane. The BEST operates a total of 4,608 buses with CCTV cameras installed, ferrying 45 lakh (4.5 million) passengers daily over 390 routes. Its fleet consists of single-decker, double-decker, vestibule, low-floor, disabled-friendly, air-conditioned and Euro III compliant diesel and compressed natural gas powered buses. BEST introduced air-conditioned buses in 1998. BEST buses are red in colour, based originally on the Routemaster buses of London. Maharashtra State Road Transport Corporation (MSRTC, also known as ST) buses provide intercity transport connecting Mumbai with other towns and cities of Maharashtra and nearby states. The Navi Mumbai Municipal Transport (NMMT) and Thane Municipal Transport (TMT) also operate their buses in Mumbai, connecting various nodes of Navi Mumbai and Thane to parts of Mumbai. Buses are generally favoured for commuting short to medium distances, while train fares are more economical for longer distance commutes. The Mumbai Darshan is a tourist bus service which explores numerous tourist attractions in Mumbai. Bus Rapid Transit System (BRTS) lanes have been planned throughout Mumbai. Though 88% of the city's commuters travel by public transport, Mumbai still continues to struggle with traffic congestion. Mumbai's transport system has been categorised as one of the most congested in the world. Water Water transport in Mumbai consists of ferries, hovercraft and catamarans. Services are provided by both government agencies as well as private partners. Hovercraft services plied briefly in the late 1990s between the Gateway of India and CBD Belapur in Navi Mumbai. They were subsequently scrapped due to lack of adequate infrastructure. Road Mumbai is served by National Highway 48, National Highway 66, National Highway 160 and National Highway 61. The Mumbai–Chennai and Mumbai–Delhi prongs of the Golden Quadrilateral system of National Highways start from the city. The Mumbai-Pune Expressway was the first expressway built in India. The Eastern Freeway was opened in 2013. 
The Mumbai-Nashik Expressway and the Mumbai-Vadodara Expressway are under construction. The Bandra-Worli Sea Link bridge, along with Mahim Causeway, links the island city to the western suburbs. The three major road arteries of the city are the Eastern Express Highway from Sion to Thane, the Sion Panvel Expressway from Sion to Panvel and the Western Express Highway from Bandra to Bhayander. Mumbai has approximately of roads. There are five tolled entry points to the city by road. Mumbai had about 721,000 private vehicles as of March 2014, 56,459 black and yellow taxis, and 106,000 auto rickshaws as of May 2013. Air The Chhatrapati Shivaji Maharaj International Airport (formerly Sahar International Airport) is the main aviation hub in the city and the second busiest airport in India in terms of passenger traffic. It handled 3.66 crore (36.6 million) passengers and 694,300 tonnes of cargo during FY 2014–2015. An upgrade plan was initiated in 2006, targeted at increasing the capacity of the airport to handle up to 4 crore (40 million) passengers annually, and the new terminal T2 was opened in February 2014. The proposed Navi Mumbai International Airport, to be built in the Kopra-Panvel area, has been sanctioned by the Indian Government and will help relieve the increasing traffic burden on the existing airport. The Juhu Aerodrome was India's first airport, and now hosts the Bombay Flying Club and a heliport operated by state-owned Pawan Hans. Sea Mumbai is served by two major ports, Mumbai Port Trust and Jawaharlal Nehru Port Trust, which lies just across the creek in Navi Mumbai. Mumbai Port has one of the best natural harbours in the world, and has extensive wet and dry dock accommodation facilities. Jawaharlal Nehru Port, commissioned on 26 May 1989, is the busiest and most modern major port in India. It handles 55–60% of the country's total containerised cargo. Ferries from Ferry Wharf in Mazagaon allow access to islands near the city. The city is also the headquarters of the Western Naval Command, and an important base for the Indian Navy. Utility services Under colonial rule, tanks were the only source of water in Mumbai, with many localities having been named after them. The MCGM supplies potable water to the city from six lakes; most of it comes from the Tulsi and Vihar lakes. The Tansa lake supplies water to the western suburbs and parts of the island city along the Western Railway. The water is filtered at Bhandup, which is Asia's largest water filtration plant. India's first underground water tunnel was completed in Mumbai to supply water to the Bhandup filtration plant. About 70 crore (700 million) litres of water, out of a daily supply of 350 crore (3,500 million) litres, is lost every day in Mumbai through water theft, illegal connections and leakage. Almost all of Mumbai's daily refuse of 7,800 metric tonnes, of which 40 metric tonnes is plastic waste, is transported to dumping grounds in Gorai in the northwest, Mulund in the northeast, and to the Deonar dumping ground in the east. Sewage is treated at Worli and Bandra, and disposed of through two independent marine outfalls at Bandra and Worli respectively. Electricity is distributed by the Brihanmumbai Electric Supply and Transport (BEST) undertaking in the island city, and by Reliance Energy, Tata Power, and the Maharashtra State Electricity Distribution Co. Ltd (Mahavitaran) in the suburbs. Power supply cables are underground, which reduces pilferage, thefts and other losses.
Cooking gas is supplied in the form of liquefied petroleum gas cylinders sold by state-owned oil companies, as well as through piped natural gas supplied by Mahanagar Gas Limited. The largest telephone service provider is the state-owned MTNL, which held a monopoly over fixed line and cellular services up until 2000, and provides fixed line as well as mobile WLL services. Mobile phone coverage is extensive, and the main service providers are Vodafone Essar, Airtel, MTNL, Loop Mobile, Reliance Communications, Idea Cellular and Tata Indicom. Both GSM and CDMA services are available in the city. Mumbai, along with the area served by telephone exchanges in Navi Mumbai and Kalyan, is classified as a Metro telecom circle. Many of the above service providers also provide broadband internet and wireless internet access in Mumbai. Mumbai had the highest number of internet users in India, with 1.64 crore (16.4 million) users. Cityscape Architecture The architecture of the city is a blend of Gothic Revival, Indo-Saracenic, Art Deco, and other contemporary styles. Most of the buildings during the British period, such as the Victoria Terminus and Bombay University, were built in Gothic Revival style. Their architectural features include a variety of European influences such as German gables, Dutch roofs, Swiss timbering, Romance arches, Tudor casements, and traditional Indian features. There are also a few Indo-Saracenic styled buildings such as the Gateway of India. Art Deco styled landmarks can be found along the Marine Drive and west of the Oval Maidan. Mumbai has the second largest number of Art Deco buildings in the world after Miami. In the newer suburbs, modern buildings dominate the landscape. Mumbai has by far the largest number of skyscrapers in India, with 956 existing skyscrapers and 272 under construction. The Mumbai Heritage Conservation Committee (MHCC), established in 1995, formulates special regulations and by-laws to assist in the conservation of the city's heritage structures. Mumbai has three UNESCO World Heritage Sites, the Chhatrapati Shivaji Terminus, the Elephanta Caves and the Victorian and Art Deco Ensemble. In the south of Mumbai, there are colonial-era buildings and Soviet-style offices. In the east are factories and some slums. On the west coast, former textile mills are being demolished and skyscrapers are being built in their place. There are 237 buildings taller than 100 m, compared with 327 in Shanghai and 855 in New York. Demographics According to the 2011 census, the population of Mumbai city was 12,479,608. The population density is estimated to be about 20,482 persons per square kilometre. The average living space is 4.5 square metres per person. The Mumbai Metropolitan Region was home to 20,748,395 people as of 2011. Greater Mumbai, the area under the administration of the MCGM, has a literacy rate of 94.7%, higher than the national average of 86.7%. The number of slum-dwellers in the Mumbai Metropolitan Region is estimated to be 90 lakh (9 million), up from 60 lakh (6 million) in 2001, which constitutes approximately 41.8% of the region's population. The sex ratio in 2011 was 838 females per 1,000 males in the island city, 857 in the suburbs, and 848 as a whole in Greater Mumbai, all numbers lower than the national average of 914 females per 1,000 males. The low sex ratio is partly because of the large number of male migrants who come to the city to work. Residents of Mumbai call themselves Mumbaikar, Mumbaiite, Bombayite or Bombaiite.
Mumbai suffers from the same major urbanisation problems seen in many fast-growing cities in developing countries: poverty and unemployment. With available land at a premium, Mumbai residents often reside in cramped, relatively expensive housing, usually far from workplaces, and therefore requiring long commutes on crowded mass transit, or clogged roadways. Many of them live in close proximity to bus or train stations, although suburban residents spend significant time travelling southward to the main commercial district. Dharavi, Asia's second-largest slum (if Karachi's Orangi Town is counted as a single slum), is located in central Mumbai and houses between 800,000 and 10 lakh (one million) people, making it one of the most densely populated areas on Earth, with a population density of at least 334,728 persons per square kilometre. The number of migrants to Mumbai from outside Maharashtra during the 1991–2001 decade was 11.2 lakh (1.12 million), which amounted to 54.8% of the net addition to the population of Mumbai. The number of households in Mumbai is forecast to rise from 42 lakh (4.2 million) in 2008 to 66 lakh (6.6 million) in 2020. The share of households with annual incomes of 20 lakh (2 million) rupees is expected to increase from 4% to 10% by 2020, amounting to 660,000 families. The share of households with incomes of 10–20 lakh (1–2 million) rupees is also estimated to increase from 4% to 15% by 2020. According to the 2016 report of the Central Pollution Control Board, Mumbai is the noisiest city in India, ahead of Lucknow, Hyderabad and Delhi. Ethnic groups and religions The religious groups represented in Mumbai as of 2011 include Hindus (65.99%), Muslims (20.65%), Buddhists (4.85%), Jains (4.10%), Christians (3.27%) and Sikhs (0.49%). The linguistic/ethnic demographics in the Greater Mumbai Area are: Maharashtrians (32%), Gujaratis (20%), with the rest hailing from other parts of India. Native Christians include East Indian Catholics, who were converted by the Portuguese during the 16th century, while Goan and Mangalorean Catholics also constitute a significant portion of the Christian community of the city. Jews settled in Bombay during the 18th century. The Bene Israeli Jewish community of Bombay, who migrated from the Konkan villages, south of Bombay, are believed to be the descendants of the Jews of Israel who were shipwrecked off the Konkan coast, probably in the year 175 BCE, during the reign of the Greek ruler, Antiochus IV Epiphanes. Mumbai is also home to the largest population of Parsi Zoroastrians in the world, numbering about 60,000, though their population is declining rapidly. Parsis migrated to India from Greater Iran following the Muslim conquest of Persia in the seventh century. The oldest Muslim communities in Mumbai include the Dawoodi Bohras, Ismaili Khojas, and Konkani Muslims. Language Marathi is the official language of the city and, along with English and Hindi, a working language of its bureaucracy. Mumbai has a large polyglot population like all other metropolitan cities of India. Sixteen major languages of India are spoken in Mumbai, with the most common being Marathi and its dialect East Indian. These are spoken by 4,396,870 people, or 32.24% of the population (Marathi as a single language is spoken by 22% of the population). Hindi is spoken by 3,582,719 people, or 25.90% of the population, making it the second most widely spoken language in Mumbai.
Many Hindi speakers are workers from Uttar Pradesh and Bihar who migrate seasonally to Mumbai to work as labourers. Gujarati, with 2,640,990 speakers (20.4% of the population), is the third most widely spoken language after Marathi and Hindi. Urdu is spoken by 11.69% of the population. English is extensively spoken and is the principal language of the city's white-collar workforce. A colloquial form of Hindi, known as Bambaiya – a blend of Hindi, Marathi, Gujarati, Konkani, Urdu, Indian English and some invented words – is spoken on the streets. Tamil, Kannada, Telugu, Malayalam, Odia, Punjabi, Sindhi, Tulu, Assamese and Bhojpuri are other minority languages spoken in Mumbai. In suburban Mumbai, Marathi is spoken by 36.78% of the population, and Gujarati by 31.21%. Food Mumbai has a variety of street food, including vada pav. Culture Mumbai's culture offers a blend of traditional and cosmopolitan festivals, food, entertainment, and nightlife. The city's cosmopolitan and urban-centric modern cultural offerings are comparable to other world capitals. Mumbai bears the distinction of being the most cosmopolitan city of India. Its history as a major trading centre and the expansion of an educated middle class have led to a diverse range of cultures, religions, and cuisines coexisting in the city. The variety and abundance of restaurants, cinemas, theatres, sports events and museums are a product of Mumbai's unique cosmopolitan culture. Mumbai is the birthplace of Indian cinema—Dadasaheb Phalke laid the foundations with silent movies followed by Marathi talkies—and the oldest film broadcast took place in the early 20th century. Mumbai also has a large number of cinema halls that feature Bollywood, Marathi and Hollywood movies. The Mumbai International Film Festival and the award ceremony of the Filmfare Awards, the oldest and most prominent film awards for the Hindi film industry in India, are held in Mumbai. Despite most of the professional theatre groups that formed during the British Raj having disbanded by the 1950s, Mumbai has developed a thriving "theatre movement" tradition in Marathi, Hindi, English, and other regional languages. Contemporary art is featured in both government-funded art spaces and private commercial galleries. The government-funded institutions include the Jehangir Art Gallery and the National Gallery of Modern Art. Built in 1833, the Asiatic Society of Bombay is one of the oldest public libraries in the city. The Chhatrapati Shivaji Maharaj Vastu Sangrahalaya (formerly The Prince of Wales Museum) is a renowned museum in South Mumbai which houses rare ancient exhibits of Indian history. Mumbai has a zoo named Jijamata Udyaan (formerly Victoria Gardens), which also harbours a garden. The rich literary traditions of the city have been highlighted internationally by Booker Prize winners Salman Rushdie and Aravind Adiga. Marathi literature has been modernised in the works of Mumbai-based authors such as Mohan Apte, Anant Kanekar, and Gangadhar Gadgil, and is promoted through an annual Sahitya Akademi Award, a literary honour bestowed by India's National Academy of Letters. Mumbai residents celebrate both Western and Indian festivals. Diwali, Holi, Eid, Christmas, Navratri, Good Friday, Dussera, Moharram, Ganesh Chaturthi, Durga Puja and Maha Shivratri are some of the popular festivals in the city.
The Kala Ghoda Arts Festival is an exhibition that showcases the work of artists in the fields of music, dance, theatre, and film. A week-long annual fair known as Bandra Fair, starting on the Sunday following 8 September, is celebrated by people of all faiths, to commemorate the Nativity of Mary, mother of Jesus, on 8 September. The Banganga Festival is a two-day music festival, held annually in the month of January, which is organised by the Maharashtra Tourism Development Corporation (MTDC) at the historic Banganga Tank in Mumbai. The Elephanta Festival—celebrated every February on the Elephanta Islands—is dedicated to classical Indian dance and music and attracts performers from across the country. Public holidays specific to the city and the state include Maharashtra Day on 1 May, to celebrate the formation of Maharashtra state on 1 May 1960, and Gudi Padwa, which is New Year's Day for Marathi people. Beaches are a major tourist attraction in the city. The major beaches in Mumbai are Girgaum Chowpatty, Juhu Beach, Dadar Chowpatty, Gorai Beach, Marve Beach, Versova Beach, Madh Beach, Aksa Beach, and Manori Beach. Most of the beaches are unfit for swimming, except Girgaum Chowpatty and Juhu Beach. Essel World is a theme park and amusement centre situated close to Gorai Beach, and includes Asia's largest theme water park, Water Kingdom. Adlabs Imagica, which opened in April 2013, is located near the city of Khopoli off the Mumbai-Pune Expressway. Media Mumbai has numerous newspaper publications, television and radio stations. Marathi dailies enjoy the maximum readership share in the city and the top Marathi language newspapers are Maharashtra Times, Navakaal, Lokmat, Loksatta, Mumbai Chaufer, Saamana and Sakaal. Popular Marathi language magazines are Saptahik Sakaal, Grihashobhika, Lokrajya, Lokprabha & Chitralekha. Popular English language newspapers published and sold in Mumbai include The Times of India, Mid-day, Hindustan Times, DNA India, and The Indian Express. The Times of India's first office, where the newspaper was founded, stands opposite the Chhatrapati Shivaji Terminus. Newspapers are also printed in other Indian languages. Mumbai is home to Asia's oldest newspaper, Bombay Samachar, which has been published in Gujarati since 1822. Bombay Durpan, the first Marathi newspaper, was started by Balshastri Jambhekar in Mumbai in 1832. Numerous Indian and international television channels can be watched in Mumbai through one of the Pay TV companies or the local cable television provider. The metropolis is also the hub of many international media corporations, with many news channels and print publications having a major presence. The national television broadcaster, Doordarshan, provides two free terrestrial channels, while three main cable networks serve most households. The wide range of cable channels available includes Zee Marathi, Zee Talkies, ETV Marathi, Star Pravah, Mi Marathi and DD Sahyadri (all Marathi channels), news channels such as ABP Majha, IBN-Lokmat and Zee 24 Taas, sports channels like ESPN and Star Sports, national entertainment channels like Colors, Sony, Zee TV and Star Plus, and business news channels like CNBC Awaaz, Zee Business, ET Now and Bloomberg UTV. News channels entirely dedicated to Mumbai include Sahara Samay Mumbai. Zing, a popular Bollywood gossip channel, is also based in Mumbai. Satellite television (DTH) has yet to gain mass acceptance, due to high installation costs.
Prominent DTH entertainment services in Mumbai include Dish TV and Tata Sky. There are twelve radio stations in Mumbai, with nine broadcasting on the FM band, and three All India Radio stations broadcasting on the AM band. Mumbai also has access to commercial radio providers such as Sirius. The Conditional Access System (CAS) started by the Union Government in 2006 met a poor response in Mumbai due to competition from its sister technology, the Direct-to-Home (DTH) transmission service. Bollywood, the Hindi film industry based in Mumbai, produces around 150–200 films every year. The name Bollywood is a blend of Bombay and Hollywood. The 2000s saw a growth in Bollywood's popularity overseas. This led filmmaking to new heights in terms of quality, cinematography and innovative story lines as well as technical advances such as special effects and animation. Studios in Goregaon, including Film City, are the location for most movie sets. The city also hosts the Marathi film industry, which has seen increased popularity in recent years, and TV production companies. Mumbai is a hub of Indian film making. Several other Indian-language films, such as Bengali, Bhojpuri, Gujarati, Malayalam, Tamil, Telugu and Urdu, are also occasionally shot in Mumbai. Slumdog Millionaire, an English-language British film, was shot entirely in Mumbai and garnered eight Oscar awards. Education Schools Schools in Mumbai are either "municipal schools" (run by the MCGM) or private schools (run by trusts or individuals), which in some cases receive financial aid from the government. The schools are affiliated with one of the following boards: the Maharashtra State Board (MSBSHSE), the All-India Council for the Indian School Certificate Examinations (CISCE), the National Institute of Open Schooling (NIOS), the Central Board for Secondary Education (CBSE), the International Baccalaureate (IB), and the International General Certificate of Secondary Education (IGCSE). Marathi or English is the usual language of instruction. The primary education system of the MCGM is the largest urban primary education system in Asia. The MCGM operates 1,188 primary schools imparting primary education to 485,531 students in eight languages (Marathi, Hindi, Gujarati, Urdu, English, Tamil, Telugu, and Kannada). The MCGM also imparts secondary education to 55,576 students through its 49 secondary schools. Higher education Under the 10+2+3/4 plan, students complete ten years of schooling and then enrol for two years in junior college, where they select one of three streams: arts, commerce, or science. This is followed by either a general degree course in a chosen field of study, or a professional degree course, such as law, engineering and medicine. Most colleges in the city are affiliated with the University of Mumbai, one of the largest universities in the world in terms of the number of graduates. The University of Mumbai is one of the premier universities in India. It was ranked 41 among the Top 50 Engineering Schools of the world by the American news outlet Business Insider in 2012 and was the only university in the list from the five emerging BRICS nations, viz. Brazil, Russia, India, China and South Africa. Moreover, the University of Mumbai was ranked 5th in the list of best universities in India by India Today in 2013 and ranked 62nd in the QS BRICS University Rankings for 2013, a ranking of leading universities in the five BRICS countries (Brazil, Russia, India, China and South Africa).
Its strongest scores in the QS University Rankings: BRICS are for papers per faculty (8th), employer reputation (20th) and citations per paper (28th). It was ranked 10th among the top universities of India by QS in 2013. With seven of the top ten Indian universities being purely science and technology universities, it was India's third-best multidisciplinary university in the QS University Rankings. The Indian Institute of Technology Bombay (IIT Bombay), the Institute of Chemical Technology, Mumbai (formerly UDCT/UICT) and Veermata Jijabai Technological Institute (VJTI), which are India's premier engineering and technology schools, along with SNDT Women's University, are the autonomous universities located in Mumbai. In April 2015, IIT Bombay launched the first U.S.-India joint EMBA program alongside Washington University in St. Louis. Thadomal Shahani Engineering College is the first and the oldest private engineering college affiliated to the federal University of Mumbai, and was also the first institute in the city's university to offer undergraduate-level courses in Computer Engineering, Information Technology, Biomedical Engineering and Biotechnology. Grant Medical College, established in 1845, and Seth G.S. Medical College are the leading medical institutes, affiliated with the Sir Jamshedjee Jeejeebhoy Group of Hospitals and KEM Hospital respectively. Mumbai is also home to the National Institute of Industrial Engineering (NITIE), Jamnalal Bajaj Institute of Management Studies (JBIMS), Narsee Monjee Institute of Management Studies (NMIMS), S P Jain Institute of Management and Research, Tata Institute of Social Sciences (TISS) and several other management schools. Government Law College and Sydenham College, respectively the oldest law and commerce colleges in India, are based in Mumbai. The Sir J. J. School of Art is Mumbai's oldest art institution. The city is also home to one of the country's National Law Universities (NLU). Mumbai is home to two prominent research institutions: the Tata Institute of Fundamental Research (TIFR) and the Bhabha Atomic Research Centre (BARC). The BARC operates CIRUS, a 40 MW nuclear research reactor, at its facility in Trombay. Bombay Veterinary College, now Mumbai Veterinary College, is the oldest and premier veterinary college of India and Asia; its foundation stone was laid in 1886. The ICAR-Central Institute of Fisheries Education (CIFE) is a deemed-to-be university and institution of higher learning for fisheries science in Mumbai, India. CIFE has over four decades of leadership in human resource development, with its alumni aiding the development of fisheries and aquaculture worldwide and contributing notable research and technological advancements. The institute is one of four deemed-to-be universities operating under the Indian Council for Agricultural Research (ICAR); the other three being the Indian Veterinary Research Institute (IVRI), the National Dairy Research Institute (NDRI) and the Indian Agricultural Research Institute (IARI). Sports Cricket is more popular than any other sport in the city. Mumbai is home to the Board of Control for Cricket in India (BCCI) and the Indian Premier League (IPL). The Mumbai cricket team, the first-class team of the city, has won 41 Ranji Trophy titles, the most by any team. The city-based Mumbai Indians compete in the Indian Premier League. Mumbai has two international cricket grounds, the Wankhede Stadium and the Brabourne Stadium.
The first cricket test match in India was played in Mumbai at the Bombay Gymkhana. The biggest cricketing event to be staged in the city so far is the final of the 2011 ICC Cricket World Cup, which was played at the Wankhede Stadium. Mumbai and London are the only two cities to have hosted both a World Cup final and the final of an ICC Champions Trophy, which was played at the Brabourne Stadium in 2006. Football is another popular sport in the city, with the FIFA World Cup and the English Premier League being followed widely. In the Indian Super League, the city is represented by Mumbai City FC, while the city-based Kenkre FC competes in the I-League (matches in the city are played at the Cooperage Ground). When the Elite Football League of India was introduced in August 2011, Mumbai was noted as one of eight cities to be awarded a team for the inaugural season. Mumbai's first professional American football franchise, the Mumbai Gladiators, played its first season, in Pune, in late 2012. In hockey, Mumbai is home to the Mumbai Marines and Mumbai Magicians in the World Series Hockey and Hockey India League respectively. Matches in the city are played at the Mahindra Hockey Stadium. The Indian Badminton League (IBL), now known as the Premier Badminton League, has also visited Mumbai since its inaugural edition in 2013, when the final was held at Mumbai's National Sports Club of India. In the second season, the final of the 2016 Premier Badminton League was held between the home-squad Mumbai Rockets and the Delhi Dashers (formerly Delhi Acers), the visitors eventually claiming the title. The opening ceremony was also held in Mumbai, while the final was held in Delhi. In the 2017 Premier Badminton League (also known as Vodafone PBL 2017 for sponsorship reasons), the Mumbai Rockets beat the Hyderabad Hunters 3–1 to proceed to the final, in which they lost 3–4 to the Chennai Smashers. U Mumba is the team representing Mumbai in the country's professional kabaddi league, Pro Kabaddi. The Mumbai leg of Pro Kabaddi is held at the NSCI, Worli. Rugby is another growing sport in Mumbai, with league matches being held at the Bombay Gymkhana from June to November. Every February, Mumbai holds derby races at the Mahalaxmi Racecourse. The McDowell's Derby is also held in February at the Turf Club in Mumbai. In March 2004, the Mumbai Grand Prix was part of the F1 powerboat world championship, and the Force India F1 team car was unveiled in the city in 2008. In 2004, the annual Mumbai Marathon was established as a part of "The Greatest Race on Earth". Mumbai also played host to the Kingfisher Airlines Tennis Open, an International Series tournament of the ATP World Tour, in 2006 and 2007. See also Geology of Mumbai List of tallest buildings in Mumbai List of people from Mumbai List of twin towns and sister cities in India References Sources External links Official website of the Municipal Corporation of Greater Mumbai Cities and towns in Mumbai City district Cities in Maharashtra Indian capital cities Metropolitan cities in India Populated coastal places in India Port cities in India Port cities and towns of the Arabian Sea Populated places established in 1507 Former Portuguese colonies 1507 establishments in India
40933052
https://en.wikipedia.org/wiki/Monika%20Henzinger
Monika Henzinger
Monika Henzinger (born Monika Rauch, 17 April 1966, in Weiden in der Oberpfalz) is a German computer scientist and a former director of research at Google. She is currently a professor at the University of Vienna. Her expertise is mainly in algorithms, with a focus on data structures, algorithmic game theory, information retrieval, search algorithms and Web data mining. She is married to Thomas Henzinger and has three children. Career She completed her PhD in 1993 at Princeton University under the supervision of Robert Tarjan. She then became an assistant professor of computer science at Cornell University, a member of the research staff at Digital Equipment Corporation, an associate professor at Saarland University, a director of research at Google, and a full professor of computer science at École Polytechnique Fédérale de Lausanne. She is currently a full professor of computer science at the University of Vienna, Austria. Awards 1995: NSF Career Award 1997: Best Paper, ACM SOSP Conference 2001: Top 25 Women on the Web Award 2004: European Young Investigator award 2009: Olga Taussky Pauli Fellowship 2010: Member of the "Junge Kurie" of the Austrian Academy of Sciences 2013: Honorary Doctorate of the Technical University of Dortmund, Germany 2013: ERC Advanced Grant from the European Research Council 2013: Elected to Academia Europaea 2014: One of ten inaugural fellows of the European Association for Theoretical Computer Science 2014: Elected to the German Academy of Sciences Leopoldina 2017: Fellow of the Association for Computing Machinery 2021: Wittgenstein Award References External links Home page 1966 births Living people German women computer scientists German computer scientists Theoretical computer scientists Google people Princeton University alumni Cornell University faculty École Polytechnique Fédérale de Lausanne faculty University of Vienna faculty Members of Academia Europaea Members of the German Academy of Sciences Leopoldina Fellows of the Association for Computing Machinery Game theorists European Research Council grantees People from Weiden in der Oberpfalz
41215289
https://en.wikipedia.org/wiki/Software-defined%20infrastructure
Software-defined infrastructure
Software-defined infrastructure (SDI) is the definition of technical computing infrastructure entirely under the control of software, with no operator or human intervention. It operates independently of any hardware-specific dependencies and is programmatically extensible. In the SDI approach, an application's infrastructure requirements are defined declaratively (both functional and non-functional requirements) such that sufficient and appropriate hardware can be automatically derived and provisioned to deliver those requirements. Typical deployments require software-defined networking (SDN) and cloud computing capabilities as a minimal point of entry. The benefits of SDI are that it reduces or eliminates the effort spent on infrastructure maintenance, allows companies to shift their focus to other parts of the software, ensures consistency while also allowing for extensibility, enables remote deployment through configuration without downtime, and allows infrastructure definitions to be versioned with tools such as Git. Advanced capabilities enable the transition from one configuration to another without downtime by automatically calculating the set of state changes between the two configurations and applying an automated transition for each step, thus achieving the complete change via software (see the sketch below). See also Infrastructure as Code References Software design
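A minimal sketch of the configuration-diffing step described above, assuming a purely hypothetical declarative format in which each resource is identified by a name and described by a dictionary of desired properties. It is illustrative only and does not correspond to the data model of any particular SDI product:

```python
"""Illustrative only: compute the state changes needed to move from one
declarative infrastructure configuration to another (hypothetical format)."""

def plan_transition(current: dict, desired: dict) -> list:
    """Return a list of (action, resource_name, details) steps."""
    steps = []
    # Resources declared in the desired state but absent now must be created.
    for name in desired.keys() - current.keys():
        steps.append(("create", name, desired[name]))
    # Resources present in both states but with different properties must be updated.
    for name in desired.keys() & current.keys():
        if desired[name] != current[name]:
            steps.append(("update", name, desired[name]))
    # Resources no longer declared must be destroyed.
    for name in current.keys() - desired.keys():
        steps.append(("destroy", name, current[name]))
    return steps

current = {"web": {"cpus": 2, "memory_gb": 4}, "cache": {"memory_gb": 8}}
desired = {"web": {"cpus": 4, "memory_gb": 8}, "db": {"disk_gb": 100}}

for action, name, details in plan_transition(current, desired):
    print(action, name, details)
# A real SDI controller would then apply each step (provisioning, resizing,
# decommissioning) automatically, with no operator intervention.
```

The same plan-then-apply pattern underlies production infrastructure-as-code tooling, where the computed step list is executed in dependency order by an orchestrator.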
421489
https://en.wikipedia.org/wiki/VirtualDubMod
VirtualDubMod
VirtualDubMod was an open-source video capture and processing tool for Microsoft Windows, based on Avery Lee's VirtualDub. History Version 1.5.10.2 (build 2542) was released on 21 February 2006. VirtualDub's author, who hosts VirtualDubMod's forums, stated that development had been abandoned. A version labeled as "VirtualDubMod 1.6.0.0 SURROUND", dated 9 April 2006, was released by a company called Aud-X. Version 1.5.10.3 (build 2550) was released by VirtualDub-Fr. Features VirtualDubMod merged several specialized forks of VirtualDub posted on the Doom9 forums. Added features included Matroska (MKV) support, OGM support, and MPEG-2 support. One notable feature that remains missing in VirtualDubMod is the ability to program timed video captures, which was present in one VirtualDub fork called VirtualDubVCR. Despite the abandonment of development of VirtualDubMod, some of its features can be added to VirtualDub through input plugins and ACM codecs provided by users on the VirtualDub forums. See also Avidemux - a cross-platform program similar to VirtualDub, available for Linux, Windows and Mac. Comparison of video editing software References External links Free video software Free software programmed in C Free software programmed in C++ Free software programmed in Pascal Windows-only free software
1115689
https://en.wikipedia.org/wiki/Sargon%20%28chess%29
Sargon (chess)
Sargon (or SARGON) is a line of chess-playing software for personal computers. The original SARGON from 1978 was written in assembly language by Dan and Kathleen "Kathe" Spracklen for the Z80-based Wavemate Jupiter III. History SARGON was introduced at the 1978 West Coast Computer Faire, where it won the first computer chess tournament held strictly for microcomputers, with a score of 5–0. This success encouraged the authors to seek financial income by selling the program directly to customers. Since magnetic media were not widely available at the time, the authors placed an advert in Byte magazine selling, for $15, photocopied listings that would work in any Z80-based microcomputer. Availability of the source code allowed porting to other machines. For example, the March–April 1979 issue of Recreational Computing describes a project that converted Sargon to an 8080 program by using macros. Later the Spracklens were contacted by Hayden Books and a book was published. Commercialization When magnetic media publishing became widely available, a US Navy petty officer, Paul Lohnes, ported Sargon to the TRS-80, altering the graphics, input, and housekeeping routines but leaving the Spracklens' chess-playing algorithm intact. Paul consulted with the Spracklens, who were both living in San Diego at the time, to make the TRS-80 version an instant success with the help of Hayden Books' newly established software division, Hayden Software. Paul was not involved in further refinements to the TRS-80 version due to his reassignment to sea duty shortly after signing the deal with Hayden Software. In the early 1980s, SARGON CHESS was ported to the Nascom (by Bits & PCs, 1981), Exidy Sorcerer, and Sharp MZ 80K. A complete rewrite was necessary later for the Apple II, programmed by Kathleen's brother Gary Shannon. Both were published by Hayden Software. Improved versions The Spracklens made significant improvements on the original program and released Sargon II. Sargon 2.5, sold as a ROM module for the Chafitz Modular Game System, was identical to Sargon II but incorporated pondering. It received a 1641 rating at the Paul Masson tournament in June–July 1979, and 1736 at the San Jose City College Open in January 1980. Sargon 3.0 finished in seventh place at the October 1979 North American Computer Chess Championship. The competition had improved, but 3.0 drew against Cray Blitz and easily defeated Mychess, its main microcomputer rival. In December, 3.0 easily won the second microcomputer championship in London. Sargon III was a complete rewrite from scratch. Instead of an exchange evaluator, this version used a capture search algorithm. Also included was a chess opening repertoire. This third version was originally written in 6502 assembly language. In 1978, Sargon had been converted to Z80 mnemonics/assembler code by Paul H. Lohnes, a self-taught computer enthusiast, while he was still in the US Navy. He sold the publishing rights for the Radio Shack TRS-80 platform to Hayden Software, and it was commercially published for other computing platforms by Hayden Software in 1983. Apple contacted the Spracklens and, after a port to 68000 assembly, Sargon III was the first third-party executable software for the Macintosh. Legacy After the demise of Hayden Software, later chess programs were also released under the name Sargon, including Sargon IV (Spinnaker Software), Sargon V (Activision) and a CD-i title simply named Sargon Chess.
The Spracklens concurrently wrote the engines for the dedicated chess computers produced by Fidelity Electronics, which won the first four World Microcomputer Chess Championships. The Botvinnik game The three-time world chess champion Mikhail Botvinnik played a game with Sargon in 1983 at Hamburg. He did not play his best moves but only tested the program's capabilities. Botvinnik himself was also involved in chess program development. White: Mikhail Botvinnik Black: SARGON Hamburg, 1983 1.c4 e5 2.Nc3 d6 3.g3 Be6 4.Bg2 Nc6 5.d3 Nf6 6.f4 Be7 7.Nf3 O-O 8.O-O Qd7 9.e4 Bg4 10.h3 Bxh3 11.f5 Bxg2 12.Kxg2 Nb4 13.a3 Na6 14.b4 c5 15.b5 Nc7 16.Rh1 a6 17.b6 Nce8 18.Ng5 Qc6 19.Rb1 Bd8 20.Nd5 h6 21.Nf3 Nxd5 22.exd5 Qd7 23.g4 a5 24.Nd2 Ra6 25.Ne4 Rxb6 26.Rxb6 Bxb6 27.f6 Nxf6 28.Nxf6+ gxf6 29.Bxh6 Re8 30.Qf3 Bd8 31.Qh3 Qa4 32.Bd2 Kf8 33.Rf1 Kg8 34.Qh6 Qd7 35.Kg3 f5 36.Rh1 f4+ 37.Kf3 1-0 References External links 1978 video games Apple II games Chess software Commercial video games with freely available source code Commodore 64 games Commodore VIC-20 games CP/M games Assembly language software Video games developed in the United States
9329210
https://en.wikipedia.org/wiki/Super%20key%20%28keyboard%20button%29
Super key (keyboard button)
Super key is an alternative name for the Windows key or Command key when using Linux or BSD operating systems or software. The Super key was originally a modifier key on a keyboard designed for the Lisp machines at MIT. History The "space cadet" keyboard, designed in 1978 for the Lisp machine, introduced two new modifier keys, "Super" and "Hyper", compared to the earlier Knight keyboard. Both keys became supported in the Emacs text editor, which was adapted to other operating systems and used at institutions other than MIT. Beginning in 1984, the X Window System (a graphical user interface for Unix-like operating systems) supported modifier keys called Meta, Super, and Hyper, along with the more common Shift, Control, and Alt. Many Emacs commands used the Meta key, so this was soon emulated with other key combinations, such as Escape-X in place of Meta-X. Emacs commands using the Super key still presented a challenge, while the few Hyper key commands gradually fell into disuse. In the mid-1990s the appearance of keyboards with the Windows key offered a new option for mapping another modifier key from the Unix world. At first, around 1996, it was common practice to map the "Meta" shift key onto the Windows key. However, because of the number of alternate key combinations used in Emacs, adding an actual Meta key did not provide much added functionality. This made Super the first key of interest to emulate, and therefore it became the standard Windows key assignment. To avoid using a Microsoft trademark, much Linux documentation refers to the key as "Super". This can confuse some users who still consider it a "Windows key". In KDE Plasma documentation it is called the Meta key, even though the X11 "Super" shift bit is used. Most Linux desktop environments use the Super key for window management and application launching, not only for commands used by applications. Much of this is similar to the use of the Windows key in the Windows operating system. In GNOME 3, letting go of the Super key defaults to showing the activities window. In Openbox the Super key is an available modifier key, but is not used in any default shortcuts. Under Unity, the key is used to control the launcher and manage windows. In elementary OS, the Super key shows a shortcut overlay and is used for several system, window, and workspace functions. In i3, the Super key, along with the Shift key, is used by default as a modifier to control the behavior and layout of windows. macOS X11 emulation on macOS puts the Super shift state on the Command or "Apple" key. References Computer keys
58437041
https://en.wikipedia.org/wiki/Privacy%20and%20blockchain
Privacy and blockchain
A blockchain is a shared database that records transactions between two parties in an immutable ledger. Blockchains document and confirm pseudonymous ownership of all existing coins within a cryptocurrency ecosystem at any given time through cryptography. After a transaction is validated and cryptographically verified by other participants or nodes in the network, it is made into a "block" on the blockchain. A block contains information about the time the transaction occurred, previous transactions, and details about the transaction. Once recorded as a block, transactions are ordered chronologically and cannot be altered. This technology rose to popularity after the creation of Bitcoin, the first application of blockchain technology, which has since catalyzed other cryptocurrencies and applications. Due to its decentralized nature, transactions and data are not verified and owned by one single entity as they are in typical systems. Rather, the validity of transactions is confirmed by any node or computer that has access to the network. Blockchain technology secures and authenticates transactions and data through cryptography. With the rise and widespread adoption of technology, data breaches have become frequent. User information and data are often stored, mishandled, and misused, causing a threat to personal privacy. Many are pushing for the widespread adoption of blockchain technology for its ability to increase user privacy, data protection, and data ownership. Blockchain and privacy protection Private and public keys A key aspect of privacy in blockchains is the use of private and public keys. Blockchain systems use asymmetric cryptography to secure transactions between users. In these systems, each user has a public and private key. These keys are random strings of numbers and are cryptographically related. It is computationally infeasible for a user to derive another user's private key from their public key. This provides an increase in security and protects users from hackers. Public keys can be shared with other users in the network because they give away no personal data. Each user has an address that is derived from the public key using a hash function. These addresses are used to send and receive assets on the blockchain, such as cryptocurrency. Because blockchain networks are shared among all participants, users can view past transactions and activity that has occurred on the blockchain. Senders and receivers of past transactions are represented and signified by their addresses; users' identities are not revealed. Public addresses do not reveal personal information or identification; rather, they act as pseudonymous identities. It is suggested that users do not use a public address more than once; this tactic avoids the possibility of a malicious user tracing a particular address' past transactions in an attempt to reveal information. Private keys are used to protect user identity and security through digital signatures. Private keys are used to access funds and personal wallets on the blockchain; they add a layer of identity authentication. When individuals wish to send money to other users, they must provide a digital signature that can only be produced with knowledge of the private key. This process protects against theft of funds (a minimal code sketch of this key-and-address scheme is given below). Peer-to-peer network Blockchain technology arose from the creation of Bitcoin. In 2008, Satoshi Nakamoto released a paper describing the technology behind blockchains.
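A minimal sketch of the key, address, and signature scheme described above, assuming the third-party Python cryptography package is installed. The address derivation shown is a deliberate simplification: real networks such as Bitcoin additionally apply RIPEMD-160 hashing and Base58Check encoding, and Ethereum uses Keccak-256, so this snippet illustrates only the general mechanism.

```python
# Illustrative sketch only; requires the third-party "cryptography" package.
import hashlib

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Generate a key pair on secp256k1, the curve used by Bitcoin and Ethereum.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# Derive a pseudonymous "address" by hashing the serialized public key.
public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.X962,
    format=serialization.PublicFormat.CompressedPoint,
)
address = hashlib.sha256(public_bytes).hexdigest()[:40]

# Sign a transaction payload with the private key; only the key holder can do this.
payload = b"send 1.0 coin from this address to <recipient address>"
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Any node can verify the signature using the public key alone
# (verify() raises InvalidSignature if the payload or signature was tampered with).
public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("address:", address)
```

Because the address exposes only a hash of the public key, it can be published freely, while the private key never leaves the user's wallet.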
In his paper, he explained a decentralized network that was characterized by peer-to-peer transactions involving cryptocurrencies or electronic money. In typical transactions carried out today, users put trust in central authorities to hold their data securely and execute transactions. In large corporations, a large amount of users' personal data is stored on single devices, posing a security risk if an authority's system is hacked, lost, or mishandled. Blockchain technology aims to remove this reliance on a central authority. To achieve this, blockchain functions in a way where nodes or devices in a blockchain network can confirm the validity of a transaction rather than a third party. In this system, transactions between users (such as sending and receiving cryptocurrency) are broadcast to every node in the network. Before the transaction is recorded as a block on the blockchain, nodes must ensure a transaction is valid. Nodes must check past transactions of the spender to ensure the spender did not double-spend or spend more funds than they own. After nodes confirm a block is valid, consensus protocols such as proof of work and proof of stake are deployed by miners. These protocols allow nodes to reach a state of agreement on the order and number of transactions (see the proof-of-work sketch below). Once a transaction is verified, it is published on the blockchain as a block. Once a block is created it cannot be altered. Through blockchain's decentralized nature and elimination of the need for a central authority, user privacy is increased. Peer-to-peer networks allow users to control their data, decreasing the threat of third parties selling, storing, or manipulating personal information. Cryptographic methods for private blockchains Zero-knowledge proofs A zero-knowledge proof is a cryptographic method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, without conveying any information apart from the fact that the statement is indeed true. The "prover" does not reveal any information about the transaction. Such proofs are typically introduced into blockchain systems using ZK-SNARKs in order to increase privacy in blockchains. In typical "non-private" public blockchain systems such as Bitcoin, a block contains information about a transaction such as the sender's and receiver's addresses and the amount sent. This public information can be used in conjunction with clustering algorithms to link these "pseudo-anonymous" addresses to users or real-world identities. Since zero-knowledge proofs reveal nothing about a transaction, except that it is valid, the effectiveness of such techniques is drastically reduced. A prominent example of a cryptocurrency using ZK proofs is Zcash. Ring signatures Another method of obfuscating the flow of transactions on the public blockchain is the ring signature, a method used by Monero. Mixing Cryptocurrency tumblers can also be used as a method to increase privacy even in a pseudonymous cryptocurrency. Instead of using mixers as an add-on service, mixing of public addresses can also be built into the blockchain system as a method, as in Dash. Comparison of blockchain privacy systems Private blockchains Private blockchains (or permissioned blockchains) are different from public blockchains, which are available to any node that wishes to download the network. Critics of public blockchains say that because everyone can download a blockchain and access the history of transactions, there is not much privacy.
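A toy illustration of the proof-of-work step mentioned above: a miner repeatedly hashes the candidate block together with a nonce until the digest meets a difficulty target, and each block commits to the hash of its predecessor, which is what makes the recorded history effectively immutable. The block structure and the fixed difficulty used here are simplifications and do not correspond to any real network.

```python
# Toy proof-of-work sketch using only the Python standard library.
import hashlib
import json

def mine_block(prev_hash: str, transactions: list, difficulty: int = 4) -> dict:
    """Search for a nonce whose block hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        header = json.dumps(
            {"prev_hash": prev_hash, "transactions": transactions, "nonce": nonce},
            sort_keys=True,
        )
        digest = hashlib.sha256(header.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return {"prev_hash": prev_hash, "transactions": transactions,
                    "nonce": nonce, "hash": digest}
        nonce += 1

# Each block references the previous block's hash, so altering an old block
# changes its hash and invalidates every block that follows it.
genesis = mine_block("0" * 64, ["coinbase reward to address A"])
block_1 = mine_block(genesis["hash"], ["address A sends 1.0 to address B"])
print(block_1["hash"])
```

Raising the difficulty parameter increases the amount of hashing a node must perform before its block is accepted, which is the knob real networks adjust to keep block times roughly constant.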
In private blockchains, nodes must be granted access to participate, view transactions, and deploy consensus protocols. Because transactions listed on a private blockchain are private, they ensure an extra layer of privacy. Because private blockchains have restricted access and nodes must be specifically selected to view and participate in a network, some argue that private blockchains grant more privacy to users. While private blockchains are considered the most realistic way to adopt blockchain technology into business while maintaining a high level of privacy, there are disadvantages. For example, private blockchains delegate specific actors to verify blocks and transactions. Although some argue that this provides efficiency and security, concerns have arisen that private blockchains are not truly decentralized, because the verification of transactions and control are put back into the hands of a central entity. Hybrid blockchains Hybrid blockchains allow more flexibility to determine which data remain private and which data can be shared publicly. A hybrid approach is compliant with GDPR and allows entities to store data on clouds of their choice in order to comply with local laws protecting people's privacy. A hybrid blockchain contains characteristics of private and public blockchains. Not every hybrid blockchain contains the same characteristics. Bitcoin and Ethereum do not share the same characteristics, although they are both public blockchains. Use cases for privacy protection Financial transactions After Satoshi Nakamoto spurred the creation of blockchain technology through Bitcoin, cryptocurrencies rose in popularity. Cryptocurrencies are digital assets that can be used as an alternative form of payment to fiat money. In current financial systems, there exist many privacy concerns and threats. Centralization is an obstacle in typical data-storage systems. When individuals deposit money, a third-party intermediary is necessary. When sending money to another user, individuals must trust that a third party will complete this task. Blockchain decreases the need for this trust in a central authority. Cryptographic functions allow individuals to send money to other users. Because of Bitcoin's widespread recognition and sense of anonymity, criminals have taken advantage of it by purchasing illegal items using Bitcoin. Through the use of cryptocurrencies and their pseudonymous keys that signify transactions, illegal purchases are difficult to trace to an individual. Due to the potential and security of blockchains, many banks are adopting business models that use this technology. Health care records In recent years, more than 100 million health care records have been breached. In attempts to combat this issue, solutions often result in the inaccessibility of health records. Health providers regularly send data to other providers. This often results in mishandled data, lost records, or the passing on of inaccurate and outdated data. In some cases, only one copy of an updated health record exists; this can result in the loss of information. Health records often contain personal information such as names, social security numbers and home addresses. Overall, it is argued by some that the current system of transferring health information compromises patient privacy in order to make records easy to transfer. As blockchain technology expanded and developed in recent years, many have pressed to shift health record storage onto the blockchain.
Rather than having both physical and electronic copies of records, blockchains could allow the shift to electronic health records (EHR). Medical records on the blockchain would be in the control of the patient rather than a third party, through the patient's private and public keys. Patients could then control access to their health records, making transferring information less cumbersome. Because blockchain ledgers are immutable, health information could not be deleted or tampered with. Blockchain transactions would be accompanied by a timestamp, allowing those with access to have updated information. Legal The notarization of legal documents protects the privacy of individuals. Currently, documents must be verified through a third party or a notary. Notarization fees can be high. Transferring documents takes time and can lead to lost or mishandled information. Many are pressing for the adoption of blockchain technology for the storage of legal documents. Documents cannot be tampered with and can be easily accessed by those who are granted permission to access them. Another possible use of blockchain technology is the execution of legal contracts using smart contracts, in which nodes automatically execute the terms of a contract. By using smart contracts, people would no longer rely on a third party to manage contracts, allowing an increase in the privacy of personal information. Legality of blockchain and privacy GDPR With the April 2016 adoption of the General Data Protection Regulation in the European Union, questions regarding blockchain's compliance with the act have arisen. GDPR applies to those who process data in the EU and those who process data outside the EU for people inside the EU. Personal data is "any information relating to an identified or identifiable natural person". Because identities on a blockchain are associated with an individual's public and private keys, this may fall under the category of personal data, because public and private keys enable pseudonymity and are not necessarily connected to an identity. A key part of the GDPR lies in a citizen's right to be forgotten, or data erasure. The GDPR allows individuals to request that data associated with them be erased if it is no longer relevant. Due to the blockchain's immutability, potential complications exist if an individual who made transactions on the blockchain requests that their data be deleted. Once a block is verified on the blockchain, it is impossible to delete it. IRS Because cryptocurrency prices fluctuate, many treat the purchase of cryptocurrencies as an investment. By purchasing these coins, buyers hope to later sell them at a higher price. The Internal Revenue Service (IRS) is currently facing difficulties because many bitcoin holders do not include revenue from cryptocurrencies in their income reports, especially those who engage in many microtransactions. In response to these concerns, the IRS issued a notice that people must apply general tax principles to cryptocurrency and treat the purchase of it as an investment or stock. The IRS has stated that if people fail to report their income from cryptocurrency, they could be subject to civil penalties and fines. In attempts to enforce these rules and avoid potential tax fraud, the IRS has called on Coinbase to report users who have sent or received more than US$20,000 worth of cryptocurrency in a year. The nature of blockchain technology makes enforcement difficult.
Because blockchains are decentralized, entities cannot keep track of purchases and activity of a user. Pseudonymous addresses make it difficult to link identities with users, making them a convenient outlet for money laundering. Blockchain Alliance Because virtual currencies and the blockchain's protection of identity have proved to be a hub for criminal purchases and activity, the FBI and the Justice Department created the Blockchain Alliance. This team aims to identify and enforce legal restrictions on the blockchain to combat criminal activities through open dialogue on a private-public forum. This allows law enforcers to fight the illegal exploitation of the technology. Examples of criminal activity on the blockchain include hacking cryptocurrency wallets and stealing funds. Because user identities are not tied to public addresses, it is difficult to locate and identify criminals. Fair information practices Blockchain has been acknowledged as a way to support fair information practices, a set of principles relating to privacy practices and concerns for users. Blockchain transactions allow users to control their data through private and public keys, allowing them to own it. Third-party intermediaries are not able to obtain and misuse data. If personal data are stored on the blockchain, owners of such data can control when and how a third party can access it. In blockchains, ledgers automatically include an audit trail that ensures transactions are accurate. Concerns regarding blockchain privacy Transparency Although many advocate for the adoption of blockchain technology because it allows users to control their own data and exclude third parties, some believe certain characteristics of this technology infringe on user privacy. Because blockchains are decentralized and allow any node to access transactions, events and actions of users are transparent. Sceptics worry that malicious users can trace public keys and addresses to specific users. If this were the case, a user's transaction history would be accessible to anyone, resulting in what some consider to be a lack of privacy. Decentralization Due to blockchain's decentralized nature, a central authority is not checking for malicious users and attacks. Malicious users might be able to hack the system anonymously and escape. Because public blockchains are not controlled by a third party, a false transaction enacted by a hacker who has a user's private key cannot be stopped. Because blockchain ledgers are shared and immutable, it is impossible to reverse a malicious transaction. Private keys Private keys provide a way to prove ownership and control of cryptocurrency. If one has access to another's private key, one can access and spend these funds. Because private keys are crucial to accessing and protecting assets on the blockchain, users must store them safely. Storing the private key on a computer, flash drive or telephone can pose potential security risks if the device is stolen or hacked. If such a device is lost, the user no longer has access to the cryptocurrency. Storing it on physical media, such as a piece of paper, also leaves the private key vulnerable to loss, theft or damage. Cases of privacy failure MtGox In 2014, MtGox was the world's largest Bitcoin exchange; it was located in Tokyo, Japan. The exchange suffered the largest blockchain hack of all time. During 2014, MtGox held an enormous portion of the Bitcoin market, accounting for more than half of the cryptocurrency at the time.
Throughout February, hackers infiltrated the exchange, stealing US$450 million in Bitcoin. Many in the blockchain community were shocked because blockchain technology is often associated with security. This was the first major hack to occur in the space. Although analysts tracked the public address of the robbers by looking at the public record of transactions, the perpetrators were not identified. This is a result of the pseudonymity of blockchain transactions. DAO Hack While blockchain technology is anticipated to solve privacy issues such as data breaching, tampering, and other threats, it is not immune to malicious attacks. In 2016, the DAO opened a funding window for a particular project. The system was hacked during this period, resulting in the loss of 3.6 million Ether from the fund. Due to the ever-changing price of cryptocurrencies, the amount stolen has been estimated at US$64–100 million. Coinbase Coinbase, the world's largest cryptocurrency exchange, which allows users to store, buy, and sell cryptocurrency, has faced multiple hacks since its founding in 2012. Users have reported that, due to its log-in process that uses personal telephone numbers and email addresses, hackers have targeted the numbers and emails of well-known individuals and CEOs in the blockchain space. Hackers then used the email addresses to change the users' verification numbers, consequently stealing thousands of dollars worth of cryptocurrency from Coinbase user wallets. By North Korea In January 2022, a report by the blockchain analysis company Chainalysis found that state-backed North Korean hacker groups had stolen almost US$400 million in digital assets in at least seven attacks on cryptocurrency platforms in 2021. A UN panel also stated that North Korea has used stolen crypto funds to fund its nuclear and ballistic missile programmes while avoiding international sanctions. Privacy vs. auditing in blockchains The introduction of "private" or "anonymous" cryptocurrencies such as Zcash and Monero highlighted the problem of blockchain auditing, with exchanges and government entities limiting the use of those currencies. Therefore, as the notions of privacy and auditing in blockchains are contradictory, auditing blockchains with privacy characteristics has become a research focus of the academic community. References Blockchains Privacy
14946338
https://en.wikipedia.org/wiki/Mycenaean%20pottery
Mycenaean pottery
Mycenaean pottery is the pottery tradition associated with the Mycenaean period in Ancient Greece. It encompassed a variety of styles and forms including the stirrup jar. The term "Mycenaean" comes from the site Mycenae, and was first applied by Heinrich Schliemann. Archaeology and Mycenaean pottery Definitions The archaeological code The term "Mycenaean" has been imposed upon a matrix of abbreviational archaeological names, which amount to an archaeological code. This code had been standardized by various archaeological conventions. An archaeological name is the name given to a layer (or layers) at a site that is being professionally excavated. Archaeology depends on the fact that layers of types of soil and content accumulate over time, which undisturbed, can yield information about the various times. Typically an archaeological name identifies the site and the relative position of the layer. Predetermined knowledge of the layer provides information about the time and other circumstances of the artefacts found within it. Pottery is an especially good diagnostic of time period when found within a layer. It is not perfect or certain, however. The custom of naming layers began with Heinrich Schliemann's excavations at Troy. He identified cities layered on top of one another: the first, the second, etc., starting at the bottom of the heap. These later became Troy I, Troy II, etc. Arthur Evans, excavator of Knossos and a friend of Schliemann's, followed this custom at Knossos; however, foreseeing that the layers there would likely be repeated elsewhere, he preferred the name "Minoan" (abbreviation M) over Knossos, as he believed that one of the high kings of Crete ruling from Knossos was the legendary King Minos. He also chose Minoan for the name of the civilization. Subsequently, Carl Blegen, excavator of Pylos, who found the main cache of Linear B tablets on his legendary first day's dig, extended Evans' system to "Helladic" (adjective of Hellas, abbreviation H) and "Cycladic" (adjective of Cyclades, abbreviation C). The Cyclades are a specific number of islands in the Aegean Sea. Subsequently, by convention, following Evans, the layers were grouped and numbered: E, M, L for early, middle, and late, Roman numerals for the subdivision of each one of those, capital letters still further down, Arabic numerals, small letters, and after that if necessary descriptive terms. For example, LH IIIA2 means "Late Helladic, subperiod III, subperiod A, subperiod 2." There would always have to be at least two subperiods at each level, i.e., there cannot be a IIIA without at least a IIIB. The rules are invariant. "Helladic" must refer to a site on the mainland. "Minoan" must mean Crete, and Cycladic must mean in the Cyclades; there are no exceptions. Islands such as Cyprus and Rhodes, not in any of the three code areas, were allowed their own systems, such as Cypriote I, Cypriote II, etc. It was then up to the archaeologists to match these layers to layers in the code. The code has been institutionalized by such organizations as the British and American Schools at Athens. One archaeologist cannot arbitrarily change the convention. He can propose for consideration; typically, most proposals are not adopted. Individual art historians writing books often attempt to redefine the terms for their own works; for example, there has been a relatively current attempt to redefine "Helladic" by removing Peloponnesus from it and placing it in a supposed "Aegean" category, entirely ad hoc. 
In the code, the very core of "Helladic" is Mycenae and Pylos in the Peloponnesus. There is not much likelihood that the conventions of the major archaeologists will be redefined, so to speak, by rogue art historians. Naming issues The code per se does not allow for cross-assemblages; that is, artefacts that turn up in more than one defined area, such as Minoan pottery at Akrotiri. Anything found at Akrotiri is Cycladic, not Minoan. The meaning, of course, is "Cycladic pottery like Minoan pottery." It does not help to hypothesize that the Akrotiri Minoan pottery is imported. Until recently, there was no way to know whether it was imported or not, all statements being suppositional. Evans supposed Mycenaean pottery was from Crete. The archaeologists had recourse to culture names such as "Mycenaean" and "Minoan", and could speak with meaning about the Mycenaean pottery of Cyprus. Strictly speaking, Cyprus had none, only Cypriote pottery. Helladic pottery could never turn up anywhere else but mainland Greece, except for the uncertain rescue concept of importation. Late Mycenaean pottery could be LH, LC, LM, or anywhere else it was found, without any implication of provenance. "Mycenaean" meant the cultural style. In his 1941 work, The Mycenaean Pottery: Analysis and Classification, a widely used handbook, the Swedish archaeologist, Arne Furumark, attempted to redefine LH and LC as a bona fide archaeological term: Mycenaean I, Mycenaean II, etc., but without general success. The preference is for LH and LC; however, "Mycenaean" remains in use as a general term. Some archaeologists drop the Helladic, etc.,and use "Mycenaean" with the same subperiod numbering: Mycenaean I, II, III, etc. Dating issues The main issues with the code are inadvertent due to the nature of the method. The layer names represent only relative sequences: before and after some other layer, or first in the order, second, etc. There is no internal tie to a larger scheme, such as a calendar date on any calendar, ancient or modern. Due to Evans' synchronization of EM, MM, and LM with Egyptian chronology, all the Is, IIs and IIIs were originally assumed to be coterminous, but it soon became obvious that they could not be coterminous at every location, e.g., an IA calendar date might not be the same on Crete as on the mainland. The archaeologists were not free, however, to reinvent the periods. The typology of the layers must remain the same regardless of whether it was before or after its parallel type in some other region. If there was no parallel type, i.e., if different civilizations were being compared, such as Trojan and Mycenaean, then the synchronization of the layers was critical in understanding events that related to them both; for example, exactly which Troy fought the Mycenaean Greeks in the Trojan War? Selection of the wrong one would result in serious historical error. It was obvious that more had to be done to provide calendar, or "absolute" dates. The process is ongoing. The best hope is perhaps radiocarbon dating of wooden artefacts in the layer. However, it is not precise enough to capture the time of the periods exactly. A second method where available is dendrochronology (tree-ring dating). A master sequence of tree rings has been composed in cases where enough wood to display rings has survived. The chronology is still uncertain enough to result in multiple schemes proposed by different archaeologists. 
History of the concept of ceramic Mycenaeans Archaeologists use changes in pottery styles as an indication of broader changes in culture. If the pottery style was continuous, they presumed a continuous culture, and if the pottery changed suddenly, then so did the culture, it was thought. Inevitably in such a hypothesis the pottery names and concepts acquired burdens of meaning that for the most part had nothing to do with pottery. They became historical terms of general culture. Origin of the ceramic Many writers compare prehistory to a stage on which different ceramic characters appear and play a role. The first of the ceramic characters were the Mycenaeans, whom Schliemann brought into existence in the first excavations of Mycenae. It was 1876. Schliemann has already tried his hand at excavating Troy (1871–1873), with Frank Calvert, scion of the family that owned the land at Hisarlik, Turkey, suspected site of Troy. Schliemann and his wife, Sophie, found a treasure in gold there, before he and Calvert had a falling-out. Their joint excavation could not continue. They escaped from Troy, smuggling out the treasure, much to the chagrin of the Ottoman government, which sued for a share, amidst allegations that he and Calvert had worked a fraud for the publicity. Permission to dig was cancelled. Shadows of the scandal live on. Schliemann applied for permission to dig at Mycenae. The Archaeological Society of Athens was willing to sponsor him, but the Greek government insisted on sending an ephor, Panagiotis Stamatakis, to make certain that no skullduggery was afoot. Schliemann wasted no time. Hiring 125 diggers and 4 carts he excavated Mycenae in a single year, 1876, publishing Mycenae in 1878. It contained all his notes from 1876. As it turned out, more treasure was discovered, as equally contextually unlikely as the treasure from Troy, but validated by Stamatakis, clearing Schliemann's name. A certain perverse element of the yellow journals subsequently put forward similar wild tales of him and Sophie sneaking around in the dark to place the treasure, but the legitimate world of archaeology accepted the findings. His permission to excavate in Turkey was restored on his settling the suit. After he finished Mycenae he went back to Troy. By then he had learned the value of the ceramics as "index pottery", where before he had used dynamite to get through difficult layers. Stamatakis' endorsement cleared the path for general acceptance of Schliemann's work. Gladstone's Preface to Mycenae pointed out that previously the architecture in the Argolid was being called "Cyclopean", after the myth of the Cyclopes who built with large stones. In the book, Schliemann refers to Cyclopean walls, houses, bridges, roads, and the architects. When he refers to the portable art and its manufacturers, he calls them Mycenaean. The Mycenaeans were thus born from Cyclopean parentage. When it came to pottery, Schliemann went a step further than the adjective, turning it into an abstract noun:"fragments of the usual Mycenaean pottery", "spatial ornamentation characteristic of Mycenaean art," and especially concerning the collection viewed by the king of Greece, who came to visit at his request, "the large collection of prehistoric Mycenaean antiquities produced by my excavations." "Mycenaean" thenceforward was a type of artefact, not just items found at Mycenae. 
In search of the Mycenaean horizon Schliemann had created a culture name from his excavations at Mycenae and the conviction that the Cyclopean architecture in evidence there and at Tiryns was the work of the legendary civilization depicted in the Iliad and reflected in the Greek myths. In Mycenae he had transitioned from a general culture name to specific collections of artefacts: Mycenaean sculpture, Mycenaean jewelry, Mycenaean pottery, etc. According to the principle of archaeological uniformitarianism, adopted in the genesis of cultural archaeology from geological archaeology, if Mycenaean is not just a place name, there ought to be an archaeological horizon, Mycenaean, of which the assemblage, Mycenaean pottery, is the indicator. The collection at Mycenae then would only be one instance of a horizon and an assemblage located at numerous as yet unexcavated locations. The first step in finding them would be a clear definition of what they are. Schliemann and the contemporary archaeologists thus began a grand phase of search and definition for the Mycenaean horizon. Two younger friends of Schliemann, Adolf Furtwängler and Georg Loeschcke, who had assisted him in the excavation of Olympia, soon took on the task of classifying the pottery he had discovered, publishing the results in Mykenische Thongefässe in 1879, only a year after Mycenae. It was the first known handbook of Mycenaean pottery. As there were no existing guidelines on which to improve, the two were forced to rely on classical standards of color and pattern (e.g., red-figure, black-figure, etc.). The main criterion was the color of the pattern. In type 1 the pattern was black; type 2, predominantly black, some red; type 3, predominantly red, some black; type 4, only red (the scheme was later abandoned). In 1879 the archaeologists looked confidently forward to seeing the scheme at every further excavation of Mycenaean pottery. The major test came all too swiftly with the excavation of Phylakopi on the island of Melos, 1896–1899, by the British School at Athens. Some of the major archaeologists were there, especially those who were to play a significant role later at Knossos: Evans, Hogarth, and Duncan MacKenzie, supervisor of excavation, who would later serve in that capacity for Evans on Crete. Phylakopi was first founded on bedrock. The initial city was overlain by a second, and that by a third, hence Phylakopi I, II, III. The pottery section of the report was written by Campbell Cowan Edgar. By the time he finished writing in 1904, Evans was already at Knossos, and his topic was obsolete. The Mycenaean horizon seemed even further away than ever. The classification of Phylakopi pottery was a monumental task right from the first. Edgar reports that during the excavation seasons some 10,000–20,000 fragments per day were being excavated and hauled away in bushel baskets. When the sorting was finally done, two general classes had emerged, the earliest being Phylakopi I, termed geometric ware from its simple geometric patterns. The term "Cycladic" was loosely used to mean this type. In contrast to it, an entirely new type appeared in the middle of Phylakopi II and soon predominated. It bore some resemblance to pottery of the Argolid. Leaping at the chance, the archaeologists classified it as Mycenaean, a move they would soon regret. Its diversity was far greater than that of Argolid pottery, and its beginning and end elusive.
Edgar complained: "we must draw the line somewhere, unless the term (Mycenaean) is to cover the whole prehistoric culture of Greece." Throughout his report he seems to be struggling to describe indefinite numbers of patterns without any simple themes. Turning to Furtwängler and Loeschcke he devised the following scheme: In the first phase, black matte decoration is the only style to be found. Popular motifs are straight bands, spirals, birds and fish. Red and brown lustrous decoration come into play alongside black matte in this phase. Birds and fish are still popular, and we start to see flowers painted on wares as well. In this phase both red and brown lustrous and black matte are still around, but the lustrous decorations have surpassed the matte in popularity. The flower becomes much more popular. Red/Black and Red lustrous are still seen in this final phase, and black matte has completely disappeared. Shape and decorative motifs do not change much during this phase. Falling in the uncertain period between Phylakopi and Knossos, the scheme was released to the public as Phylakopian Mycenaean. That it was not. It was the inconclusive and unfinished work of Edgar. His standard for being Mycenaean was contemporaneity with the pottery in the Argolid; i.e., despite the great diversity of the pottery, the only ceramic identity available on which to pin it was "Mycenaean." This deficit was increasingly unsatisfactory to all the archaeologists at Phylakopi. Moreover, the supposed Mycenaean pottery represented a break with the preceding Cycladic. According to the prevailing standard, it must have arrived on Phylakopi by either invasion or importation from somewhere else. Edgar resisted this view at first. Assigning Phases I and II to Phylakopi II, Edgar attempted to make a case that it evolved from the earlier geometric by a sort of "freeing" of the patterns from repetitive geometric forms to obtain a "looser system." He called it "early Mycenaean" (as opposed to geometric "pre-Mycenaean"). Phases 3 and 4 were to be assigned entirely to Phylakopi III. They were termed "The Later Local Pottery of the Mycenaean Period." Edgar concedes that they were swimming in deeper waters, so to speak, than originally planned. He makes a noble effort to connect Phase 3 with the geometric, but the complexities of Phase 4 are far beyond his comparisons. He gives up, remarking "The fourth and final stage of the Phylakopi settlement is marked by predominance of imported Mycenaean ware ...," which reflects a general realization by the British School that Phylakopi does not contain the answer to the riddle of Mycenaean provenance. They decide to abandon Phylakopi in favor of the most likely source of the imports, Crete. The Cretan Mycenaeans Foreseeing the end, Arthur Evans and his younger friend and protégé, David George Hogarth, left the excavation at Phylakopi early. Not much could be done at Crete, as the Ottoman Empire was not giving permission to excavate there. Evans went home to tend to the Ashmolean and other affairs while Hogarth investigated Crete. While on a recreational tour in northern Italy Evans' beloved wife died unexpectedly of poor health. For a year he was totally lost, wandering dejectedly around the Mediterranean. In 1899 he received a cable from his young friend to come at once, as the political situation in Crete was changing rapidly. Crete was breaking away from the Ottomans with the support of the British Empire. After it became independent it would seek admission to Greece. 
Suddenly becoming his old self, Evans arrived in Heraklion as a journalist again, investigating everything, becoming a thorn in everyone's side. The last Turkish troops were ferried off the island by the British fleet. The new government was issuing permissions to excavate. Evans was first in line, so to speak. At Hogarth's suggestion he acquired land at Knossos, and in a grand style reminiscent of Schliemann's, led an army of Greek diggers there to excavate. MacKenzie was called from Phylakopi to superintend. Hogarth, not entirely at ease with Evans as commander, left the scene after the first year.
Knossos was excavated 1900-1905. The site survives today mainly because of Evans' decision to restore it: not a total rebuilding, but an enhancement here and there for safety purposes. Hiring an architect (D.T. Fyfe), he shored up crumbling walls, re-cemented patios, replaced pillars, put collapsed ceilings back in place, and so on. Today's tourist site, which must be maintained every year, is the result. Much of the larger storage pottery was put back in place. Evans went so far as to hire an artist to reconstruct frescoes from fallen pieces. These practices are criticised by some, lauded by others. True, the palace is not exactly as Evans found it; it is verisimilar. On the other hand, the destroyed palace (it burned) had not itself remained the same over the entire Mycenaean period. Evans actually owned the site, supported in this by his father and his family wealth. He was nevertheless a partisan of the British School, to whom he left the site in his will (it is now managed by the Greek government).
They had gone a step further than Schliemann in investigating the full extent of Mycenaean uniformitarianism. If the Achaeans ruled the islands, they thought, then Mycenaean pottery should be excavatable there. This expectation had only been partly satisfied by their excavation of Phylakopi. Edgar calls the earlier pottery sometimes "Cycladic" and sometimes "Aegean," meaning the actual Aegean Sea. The later pottery was a virtual "type x" with no explanation. Evans went to Knossos in the expectation of elucidating "type x" (which turned out to be "Minoan"). He hoped to be able to recreate a family tree, with branches of mainland, Cycladic, and "type x" on it. According to this preconceived expectation he had to find two common ancestors, one for Cretan-Cycladic, and one for Cretan-Mycenaean, an expectation that Wace later called his "pan-Minoan theories". He was to devise a new term for the first, "Aegean." He regarded "Mycenaean" as a local development of "Minoan."
At Knossos, the British archaeologists were beginning work on a site generally considered "Mycenaean" by other professional site visitors. Surface finds supported this view. A palace appeared to be in the layer closest to the surface. For nine weeks of the next six months, February-June 1900, the British performed an amazing feat, clearing two acres of the top layer and uncovering all the major features of the site. They brought in British tools, such as the wheelbarrow, and hired as many men as were necessary, ensuring harmony by selecting as many Christians as Ottomans and treating them all equally. They lovingly named each room and feature with colorful names appropriate to the finds there. The prediction concerning the horizon of the latest layer turned out to be correct; a palace had come into view, which could be identified by its pottery as indubitably Mycenaean, according to the handbook of Furtwängler and Loeschcke.
The confirmation was a liberal presence of an old Mycenaean friend, the stirrup jar. Schliemann had discovered it in Troy VI, but it appeared more regularly in mainland Mycenaean sites. The Trojan presence was attributed to importation. To the ample Mycenaean material at Knossos was added a new script and the confirmation of an old one. In the "Room of the Chariot Tablets" some wooden boxes had been smashed open, spilling a cache of inscribed, hardened clay tablets. This writing Evans termed "a linear form of script." By that he meant to oppose it to another, more rarely attested script, which he called "hieroglyphic," defined as a "conventional pictographic class." He had previously been a student of Cretan seal-stones. He explained his new term by saying "Unlike the regular arrangement of the linear script in separate lines from right to left, these hieroglyphic characters ... present a much more jumbled aspect."
Origin of the ceramic Minoans
In the summary for the first year's excavation, Evans remarks: "In spite of the complicated arrangement of some parts of the Palace, a great unity prevails throughout the main lines of its ground-plan." It was the relative disunity, the "complicated arrangement," with which he was now to concern himself, which created the Minoans and brought them to center stage. Previously in the report he had pointed to "certain later modifications of the original plan." This original plan, a previous unity, now came to the fore. He said: "various evidences are at hand of the transformation or remodelling of individual chambers," such as walls of gypsum blocks. In short, Evans had discovered archaeologically a "first palace" under the Mycenaean, which was now the "second palace." Deep test pits to the virgin soil under the hill had revealed many meters of Neolithic habitation; i.e., before the "Palace Period", Neolithic people had an ancient settlement at that location (modern studies have shown it to be one of the most ancient settlements of the Mediterranean). At the start of the Palace Period, the entire top of the hill was made level. In some areas the Neolithic is succeeded by the Mycenaean; in others another layer intervenes. Hogarth, who had been assigned to excavate houses on the periphery, in the "town," found that the clearance was general; moreover, that the layer of the first palace, which was directly under the houses (instead of the Neolithic), contained extensive debris, including the mysterious Kamares ware. At Phylakopi the British had not known how to classify it. It went down as being the unknown pottery, "type x," to be clarified in Crete. Now it was in the first palace layer, which Evans termed at that time "the Kamares period." In addition, many blocks of the first palace were incised with the sign of the double axe, which Evans called "the special badge of the old Cretan and Carian divinity, the god of the labrys, of Labranda, and the labyrinth."
The word most often used in Evans' report on the first season to mean the time before the Mycenaeans is pre-Mycenaean. The ceramics were so different that the Mycenaean pottery seemed to have been imported from the mainland. At Phylakopi the archaeologists had had only one class available to them into which to stuff a great diversity of ceramics: "Mycenaean." Now two classes were not enough; Evans was having to resort to such terminology as "the latest pre-Mycenaean period." Over the winter he decided to apply a solution he had already formulated for the classification of scripts.
The linear writing, he had established, was Mycenaean. It supplanted the hieroglyphic script and was associated with a general change of culture, which could be interpreted as an ethnic invasion and appropriation of the Knossos region. At first Evans called the culture of the hieroglyphs "prae-Mycenaean." At the end of his treatise on the topic he redefined the latter as "Minoan" based on a re-examination of the historical legends. Herodotus claimed to be narrating the fate of the Cretans under their king, Minos, as related by the Eteocretans ("true Cretans") of Praesos in eastern Crete. Minos led a large part of the population around the Knossos region in an attempted colonization of Sicily. The Sicanian natives resisted. Minos was killed in battle. Attempting to escape, the Cretans found they could not get home and settled in southern Italy instead. Meanwhile, Greeks from Thessaly and "men of various nations" took advantage of the depopulation to settle and appropriate the Knossos region. The tale seemed to have all the elements required to solve Evans' nomenclature problem. If the resettlement of Knossos was by Mycenaeans, then the reign of Minos was pre-Mycenaean. Taking "Minos" to be a dynastic name, such as Pharaoh or Caesar, Evans felt justified in replacing "prae-Mycenaean" with "Minoan." Such a view would imply that the Mycenaeans were Hellenic. Not quite ready for that step, Evans seized on the "various nations" to suggest that the linear script was "prae-Phoenician," thus falling short of Schliemann's prophecy and missing the decipherment of the century, as some of the linear script was surely Greek, and there must have been Greeks at Knossos. In his account of the second season, he now distinguished everywhere between "Minoan" and "Mycenaean" features of the palace, and wrote of the Minoan palace (the first) and the Mycenaean palace (the second). The sources also gave him a date: Eusebius dates the Greek settlement to 1415 BC. He was on the verge of great discoveries himself and would shortly present the Minoan civilization to the world.
Helladic periodization
Mycenaean pottery was produced from c. 1600 BC to c. 1000 BC by Mycenaean Greek potters. It is divided by some archaeologists into four major phases. The Mycenaeans rose to prominence around 1600 BC and stayed in control of Greece until about 1100 BC. Evidence shows that they spoke an early form of Greek. They took control of Crete c. 1450 BC. The collapse of the Mycenaean Greek states was followed by the Greek Dark Ages. Much of the finest Mycenaean pottery used or adapted styles from the very well-established tradition of Minoan pottery, especially in areas closer to Crete. Conversely, an abundance of Mycenaean pottery is found in Italy and Sicily, suggesting that the populations there were in contact and traded with the Mycenaeans.
Early Mycenaean
There is some question as to how much the pottery of this age relies on Minoan pottery for both its shapes and its patterns. For at least the first half of the seventeenth century BC, only a small proportion of all the pottery produced is in the Minoan style. LH I-IIA pottery can be distinguished by the use of a more lustrous paint than its predecessors. While this became more common during this age, a considerable amount of pottery was still produced in the Middle Helladic style, using matte paints and Middle Helladic shapes. Where the first recognizably Mycenaean pottery emerged is still under debate.
Some believe that this development took place in the northeast Peloponnese (probably in the vicinity of Mycenae). There is also evidence that suggests that the style appeared in the southern Peloponnese (probably Laconia) as a result of Minoan potters taking up residence at coastal sites along the Greek mainland.
LH I (c. 1675/1650 – 1600/1550 BC)
The pottery during this period varies greatly in style from area to area. Due to the influence of Minoan Crete, the further south the site, the more the pottery is influenced by Minoan styles. The easiest way to distinguish the pottery of this period from that of the late Middle Helladic is the use of a fine ware that is painted in a dark-on-light style with lustrous paints. This period also marks the appearance of a fine ware that is coated all over with paint varying from red to black in color. This ware is monochrome painted and is directly descended from grey and black Minyan ware (which disappear during LH I). A form of the yellow Minyan style also appears in this period, merging into Mycenaean unpainted wares. Additionally, Mycenaean art differs from that of the Minoans in that the different elements of a work are distinguished from each other much more clearly, whereas Minoan art is generally more fluid. There is also some carry-over of matte-painted wares from the Middle Helladic period into LH I. The majority of large closed vessels that bear any painted decoration are matte-painted. They are generally decorated in two styles of matte paint known as Aeginetan Dichrome and Mainland Polychrome. Some of the preferred shapes during this period were the Vapheio cup, semiglobular cup, alabastron, and piriform jar.
LH IIA (c. 1600/1550 – 1490/1470 BC)
During this period there is a drastic increase in the amount of fine pottery decorated with lustrous paints. An increase in uniformity in the Peloponnese (both in painting and shape) can also be seen at this time. However, Central Greece is still defined by Helladic pottery, showing little Minoan influence at all, which supports the theory that Minoan influence on ceramics traveled gradually from south to north. By this period, matte-painted pottery is much less common and the Grey Minyan style has completely disappeared. In addition to the popular shapes of LH I, goblets, jugs, and jars have increased in popularity.
Middle Mycenaean
During this phase, Minoan civilization slowly declined in importance and the Mycenaeans eventually rose to prominence, possibly even temporarily taking control of the Cretan palace of Knossos. The mainland pottery began to break away from Minoan styles, and Greek potters started creating more abstract pottery as opposed to the previously naturalistic Minoan forms. This abstract style eventually spread to Crete as well.
LH IIB (c. 1490/1470 – 1435/1405 BC)
During this period the most popular style was the Ephyraean style, most commonly represented on goblets and jugs. This style is thought to be a spin-off of the Alternating style of LM IB. It has a restricted shape range, which suggests that potters may have used it mostly for making matching sets of jugs, goblets and dippers. It is during LH IIB that the dependence on Minoan ceramics is completely erased. In fact, the pottery found on Crete during this phase suggests that artistic influence was now flowing in the opposite direction: the Minoans were now using Mycenaean pottery as a reference.
Ivy, lilies, and nautili are all popular patterns during this phase, and by now there is little to no matte painting.
LH IIIA1 (c. 1435/1405 – 1390/1370 BC)
During LH IIIA1 there are many stylistic changes. Most notably, the Mycenaean goblet begins to lengthen its stem and acquire a shallower bowl. This stylistic change marks the beginning of the transformation from goblet to kylix. The Vapheio cup also changes into an early sort of mug and becomes much rarer. Also during this period, the stirrup jar becomes a popular shape and naturalistic motifs become less popular.
Palatial Period
Not long after the beginning of this phase there is evidence of major destruction at the palace at Knossos on Crete. The importance of Crete and Minoan power decreases and Mycenaean culture rises to dominance in the southern Aegean. It was during this period that the Levant, Egypt and Cyprus came into close and continuous contact with the Greek world. Masses of Mycenaean pottery found in excavated sites in the eastern Mediterranean show that not only were these ancient civilizations in contact with each other, but they also had some form of established trade. The Koine style (from Greek koinos = "common") is the style of pottery popular in the first three quarters of this era. This form of pottery is so named for its intense technical and stylistic uniformity over a large area of the eastern and central Mediterranean. During LH IIIA it is virtually impossible to tell where in Mycenaean Greece a specific vase was made. Pottery found on the islands north of Sicily is almost identical to that found in Cyprus and the Levant. It is only during the LH IIIB period that stylistic uniformity decreased, around the same time that the amount of trade between the Peloponnese and Cyprus dramatically declined.
LH IIIA2 (c. 1390/1370 – 1320/1300 BC)
It is in this period that the kylix truly becomes the dominant shape of pottery found in settlement deposits. The stirrup jar, piriform jar, and alabastron are the shapes most frequently found in tombs from this era. Also during LH IIIA2 two new motifs appear: the whorl shell and the LH III flower. These are both stylized rather than naturalistic, further separating Mycenaean pottery from Minoan influence. Excavations at Tell el-Amarna in Egypt have found large deposits of Aegean pottery. These findings provide excellent insight into the shape range (especially closed forms) of Mycenaean pottery. By this time, monochrome painted wares were almost exclusively large kylikes and stemmed bowls, while fine unpainted wares are found in a vast range of shapes.
LH IIIB (c. 1320/1300 – 1190 BC)
The presence of the deep bowl as well as the conical kylix in this age is what allows one to differentiate it from LH IIIA. During LH IIIB paneled patterns also appear. Not long into this phase the deep bowl becomes the most popular decorated shape, although for unpainted wares the kylix is still the most produced. One can further divide the pottery from this period into two sub-phases:
LH IIIB1: this phase is characterized by an equal presence of both painted deep bowls and kylikes. The kylikes at this time are mostly Zigouries.
LH IIIB2: during this phase there is an absence of decorated kylikes, and deep bowl styles further develop into the Rosette form.
It is unknown how long each sub-phase lasted, but by the end of LH IIIB2 the palaces of Mycenae and Tiryns and the citadel at Midea had all been destroyed.
The palace of Pylos was also destroyed at some point during this phase, but it is impossible to tell when in relation to the others the destruction took place.
Post-palatial period
During this period, the differences in ceramics from different regions become increasingly noticeable, suggesting further degradation of trade at this time. Other than a brief 'renaissance' in the mid-twelfth century that brought some developments, the pottery begins to deteriorate. This decline continues until the end of LH IIIC, by which point the technical and artistic quality of the pottery had nowhere to go but up. The shapes and decorations of the ceramics discovered during this final period show that the production of pottery was reduced to little more than a household industry, suggesting that this was a time of poverty in Greece. It is possible to divide this phase into several different sub-phases.
Early phase
At this time, the 'medium band' form of deep bowl appears and most painted shapes in this phase have linear decoration. Occasionally new shapes (like the 'carinated cup') and new decorations appear, helping to distinguish wares from this period from those of earlier phases. From around the same time as the destruction of the great palaces and citadels an odd class of handmade pottery is recovered, lacking any ancestry in the Mycenaean world. Similar pottery is also found in other areas both to the east (e.g. Troy, Cyprus and Lebanon) and to the west (Sardinia and southern Italy). Most scholars in recent times agree that such a development is probably to be interpreted as the result of long-range connections with the central Mediterranean area (and in particular with southern Italy), and some have connected it with the appearance in the eastern Mediterranean of the so-called Sea Peoples.
Developed phase
In this sub-phase there is increased development in pattern-painted pottery. Scenes of warriors (both foot soldiers and on chariots) become more popular. The majority of the developments, however, are representational motifs in a variety of regional styles.
Late phase
There is very little pottery found during this phase, thus not providing much information. It is clear, however, that the bountiful decorations of the developed phase are no longer around. When patterns did occur in this phase, they were very simple; most of the pottery was decorated with a simple band or a solid coat of paint.
Mycenaean pottery as commodities
Manufacture
The earliest form of the potter's wheel was developed in the Near East around 3500 BC. This was then adopted by the people of Mesopotamia, who later altered the performance of the wheel to make it faster. Around 2000 years later, during the Late Helladic period, the Mycenaeans adopted the wheel. The idea behind the pottery wheel was to increase the production of pottery. The wheels consisted of a circular platform, made either of baked clay, wood or terracotta, and were turned by hand; the artist usually had an assistant who turned the wheel while he molded the clay. Clay is dug from the ground, checked for impurities and placed on the wheel to be molded. Once the potter gets the shape he desires, he stops the wheel to allow the excess water to run off, spins it again to ensure the water is gone, and then places the piece in a kiln. The kiln was usually a pit dug in the ground and heated by fire; these were estimated to reach a temperature of 950 degrees Celsius (1,742 degrees Fahrenheit).
Later kilns were built above ground to be easier to maintain and ventilate. During the firing of the pottery, artists went through a three-phase firing in order to achieve the right colour. Many historians question how Mycenaean potters developed the technique of glossing their pottery. Some speculate that there is "illite or a similar clay mineral in a weak solution" of water. This mixture is then applied to the pottery and placed in the kiln to set the surface.
Art historians suggest that the "black areas on Greek pots are neither pigment nor glaze but a slip of finely sifted clay that originally was of the same reddish clay used." Many Mycenaean fragments of pottery that have been uncovered indicate that the pottery was coloured. Much of this colouring comes from the clay itself; pigments are absorbed from the soil. Vourvatsi pots start off with a pink clay "due merely to long burial in the deep red soil of the Mesogeia". "The colours of the clay vary from white and reds to yellows and browns. The result of the pottery is due to the effects of the kiln; this ties in the three-phases of firing."
Phase One: Oxidizing. Oxygen is added to the kiln, causing both the slip and the pot to turn red.
Phase Two: Reducing. The shutter in the kiln is closed, reducing the amount of oxygen the pottery receives; this causes both the slip and pot to turn black.
Phase Three: Re-oxidizing. Oxygen is then released back into the kiln, causing the coarser material to turn red and the smoother silica-laden slip to remain black.
Production centers
The two main production centers during Mycenaean times were Athens and Corinth. Attributing pottery to these two cities is done on the basis of two distinct characteristics: shapes (and color) and detailed decoration. In Athens the clay fired a rich red and decorations tended towards the geometric style. In Corinth the clay was light yellow in color and its motifs came from more natural inspirations.
Anatomy
The anatomy of a vessel can be separated into three distinct parts: orifice, body and base. There are many different shapes depending on where the vessel was made, and when. The body is the area between the orifice (opening) and base (bottom). The maximum diameter of a vessel is usually at the middle of the body or a bit higher. There are not many differences in the body; the shape is fairly standard throughout the Mycenaean world. The orifice is the mouth of the vessel, and is subject to many different embellishments, mostly for functional use. The opening is further divided into two categories:
Unrestricted: the opening is equal to or greater than the maximum diameter.
Restricted: the opening is less than the maximum diameter.
The space between the orifice and the body can be divided into two specific shapes:
Neck: a restriction of the opening that is above the maximum diameter.
Collar: an extension of the opening that does not reduce the orifice.
The base is the underside of the vessel. It is generally flat or slightly rounded so that it can rest on its own, but certain wares (especially of the elite variety) have been known to be extremely rounded or pointed.
Utilization of pottery
There are many different and distinct forms of pottery that can have either very specific or multi-functional purposes. The majority of forms, however, are for holding or transporting liquids.
The form of a vessel can help determine where it was made, and what it was most likely used for. Ethnographic analogy and experimental archaeology have recently become popular ways to date a vessel and discover its function.
Analysis of function
Vessel function can be broken down into three main categories: storage, transformation/processing and transfer. These three categories can be further broken down by asking questions such as:
hot or cold?
liquid or dry?
frequency of transactions?
duration of use?
distance carried?
The main problem with pottery is that it is very fragile. While well-fired clay is virtually indestructible in terms of decay, it will shatter if bumped or dropped. Apart from this, pottery is very useful in keeping rodents and insects out, and because it can be set directly into a fire it is very popular. There are a few different classes of pottery, generally separated into two main sections: utilitarian and elite. Utilitarian pottery is generally plainware, sometimes with decorations, made for functional, domestic use, and constitutes the bulk of the pottery made. Elite pottery is finely made and elaborately decorated with great regard for detail. This form of pottery is generally made for holding precious liquids and for decoration.
Throughout the different phases of Mycenaean pottery different shapes have risen and fallen in prominence. Others evolve from previous forms (for example, the kylix (drinking cup) evolved from the Ephyraean goblet). There are many different shapes of pottery found from the Mycenaean world. They can serve very specific tasks or be used for different purposes. Some popular uses for pottery at this time are: saucepans, storage containers, ovens, frying pans, stoves, cooking pots, drinking cups and plates.
Documented types
Ancient pottery differs from modern in the fundamental prevalence of utilitarian intent. Where a potter or glass-blower today might spend time creating ceramics or glassware that are individual works of art, or a small class of elite decorative ware with no other purpose than display as art and service as a repository of stored wealth, the ancient Greeks and Romans seldom had resources to spend on that sort of craftsmanship. They concentrated instead on the mass production of pottery for sale to the general population, either locally or for export. Thus standard utilitarian types developed, as described above. Typology is best known from the Iron Age, when histories were written and stories were told pictorially on the pots themselves. In classical Greece a vocabulary of pottery types developed. There were amphorae for transport, pithoi for storage, kraters for mixing wine, kylikes for drinking it, and so on. The words bring to mind certain well-defined images, and the pottery is easy to identify. The picture is far less clear for the types of the Bronze Age. Good guesses can be made about the functions of some of these types. In the absence of the native names, they have been labelled with classical names reflective of the best guesses as to their functions.
The historian is not entirely in the dark about the names and functions of this pottery. Mycenaean accountants left records in Linear B on clay tablets giving the names of the pots, and of their contents, in their own or their employers' possession. The main difficulty in understanding the native concepts is the uncertainty concerning the referents. The pots still lie about in large quantities, or did before excavation, in the rooms of the palaces that were destroyed.
Matching the observed types to the names in the documents remains an ongoing task. Ventris and Chadwick listed 14 types of pottery ideogram, numbers 200-213, whose presence in a tablet signified a record of the pottery on the shelf. These ideograms are not exact representations of real pottery, but are only verisimilar symbols. A few, such as the stirrup jar, can easily be matched to a type still extant. Most cannot be, and are subject to debate. There are usually variants of each one. The Linear B nouns are given. Some remain unknown or possibly incomplete. Others are obviously the prototypes of Iron Age names. There is no guarantee, however, that the pottery remained the same during the interim. Numbers 200-208 are qualified in the tablets with the BRONZE ideogram, signifying that they were of metal. Apparently the same form was often used for metal as for terracotta. The ideograms are included here for that reason, with possible terracotta instances. The table below displays representative instances of the ideograms and includes possible matches in the real pottery. Usually exact matches are not considered possible, but in a few instances, such as the easily identifiable stirrup jar, there is clarity.
Other types known from archaeology
The possible types associated with the Linear B documents do not cover all the pottery found in the palaces. There are a few possible reasons: perhaps only some jars got recorded, or perhaps the ideograms are more general than is known. Faced with uncertainty, the theorists naturally applied classical names to them. There is no guarantee that the Mycenaean pots have the same or similar functions as the classical ones, or that the classical names exist in Linear B form. As with the ideograms, some types are clearly represented by prototypes in the Bronze Age; others are only guesswork. Some shapes with specific functions are:
Stamnos: a wine jar
Krateriskos: a miniature mixing bowl
Aryballos, Lekythoi, Alabastra: for holding precious liquids
Many shapes can be used for a variety of things, such as jugs (oinochoai) and cups (kylikes). Some, however, have very limited uses, such as the kyathos, which is used solely to transfer wine into these jugs and cups.
Ephyraean goblet
This goblet is the finest product of a Mycenaean potter's craft. It is a stout, short-stemmed goblet that is Cretan in origin with Mycenaean treatment. Its decoration is confined to the center of each side and directly under the handles.
Stirrup jar
The stirrup jar, invented in Crete, is used for storage and transportation, most commonly of oil and wine. Its body can be globular, pear-shaped or cylindrical. The top has a solid bar of clay shaped into two stirrup handles and a spout.
Alabastron
The alabastron is the second most popular shape (behind the stirrup jar). It is a squat jar with two to three ribbon-handles at the opening.
Decoration of Mycenaean pottery
Artists used a variety of tools to engrave designs and pictures onto the pottery. Most of the tools used were made of stones, sticks, bones and thin metal picks. Artists used boar-hair brushes and feathers to distribute the sifted clay evenly on the pottery.
Geometric style
The geometric style of decorating pottery has been popular since Minoan times. Although it decreased in abundance for some time, it resurfaced c. 1000 BC. This form of decoration consists of light clay with a dark, lustrous slip forming the design.
Around 900 BC it became very popular in Athens, and different motifs, such as abstract animals and humans, began to appear. Among the popular shapes for geometric pottery are:
Circles
Checkers
Triangles
Zigzags
Meanders
Lustrous painted wares
Lustrous painted wares slowly rise in popularity throughout the Late Helladic period until eventually they are the most popular form of painted wares. There are four distinct forms of lustrous decoration:
The first style sees the ware covered entirely with brilliant decoration, with red or white matte paint underneath.
The second form consists of wares with a yellower tone with black lustrous decorations.
In the third style, the yellow clay becomes paler and floral and marine motifs in black paint are popular.
The final style has matte red clay with a less lustrous black paint; human and animal decorations are geometric in form.
Fine wares vs. common wares
Fine wares are made from well purified clay of a buff color. They have thin, hard walls and a smooth, well polished slip. The paint is generally lustrous and the decorations can be:
Birds
Fish
Animals (commonly oxen and horses)
Humans
This form of ware is generally of a high class, making it more expensive and elite. Common wares are plain and undecorated wares used for everyday tasks. They are made from a red, coarse and porous clay and often contain grit to prevent cracking. Later on in the Helladic period the tendency to decorate even common wares surfaces.
Pattern vs. pictorial style
Pattern
The pattern style is characterized by motifs such as:
scales
spirals
chevrons
octopuses
shells
flowers
Throughout the Late Helladic era, the patterns become more and more simplified until they are little more than calligraphic squiggles. The vase painter would cover the majority of the vase with horizontal bands, applied while the pottery was still on the wheel. There is a distinct lack of invention in this form of decoration.
Pictorial
The majority of pictorial pottery has been found on Cyprus, but it originates in the Peloponnese. It is most likely copied from or inspired by the palace frescoes, but the vase painters at this time lacked the ability to recreate the fluidity of that art. The most common shapes for this form of decoration are large jars, which provide a larger surface for the decoration, usually chariot scenes.
Issues of art history
Wace noted even in the first publication of Documents that a conflict had developed over the interpretation of Mycenaean artifacts in the history of Greek art. Schliemann had believed that the Mycenaeans were Greeks. Wace described him as "overawed" into not fully publishing his views by critics penning their opinions under a facade of expertise. The gist of their arguments was that Mycenaean art was completely different from classical art; the Mycenaeans were most likely easterners, perhaps Phoenicians; and Greek art really begins in the Geometric Period about 1000 BC. At that time the slate was wiped clean, so to speak: all culture became suddenly different, writing was lost completely, and previous art styles came to a swift end. They explained this hypothetical change as the first entry of the Greeks into Greece at that time. Wace termed this view "orthodox" because any other was speculative and unsupported by any certain evidence. Even Ventris, when he first began his analysis of the script, never suspected that it was Greek.
When he began to consider the triplets of Alice Kober, a classicist from New York City, linguist, and voracious scholar, who had taught herself braille and had received a Guggenheim Fellowship to study Linear B, he found the key to the script. A triplet was a group of three sequences of signs exactly the same except for the final syllables. Kober had hypothesized that the last signs were the endings of an inflected word. Her death from cancer in 1950, just when Ventris was beginning his work, prevented her from going further. In a flash of insight Ventris realized that some of the words could be interpreted as Cretan place names and material objects matching their depictions in the ideograms. He had developed a grid, or table, of unknown vowels in rows and unknown consonants in columns. At each intersection of a vowel row and a consonant column was a CV syllable to be matched with a sign. Once the syllabic value for a sign was known, the vowel and the consonant were known and could be applied to the other intersections in the grid. The place names gave him enough syllabic values to see that the language was Greek written in syllabic characters. It was suggested that he contact John Chadwick, a linguist and classicist at Cambridge, who had been a code-breaker of another syllabic writing system, Japanese, in World War II. Chadwick and peers at Cambridge had been trying to "break" Linear B as an exercise. Linear B was already broken, but Chadwick and Ventris became fast friends and collaborators.
The reaction of the established scholarly world was somewhat less than sanguine. The idea of the Mycenaeans being Greeks, as Schliemann had suggested, was abhorrent to them, as it brought a role reversal to the former "experts." Resistance went on for decades, but the preponderance of evidence eventually gave the decipherment of Linear B an inevitable certainty. Currently no one seriously denies that Linear B is Greek writing. One implication is that the pottery and other cultural features associated with Linear B are Greek also. Even Evans had resisted that conclusion, suggesting instead that the Greeks adopted Minoan cultural features, rather than bringing their own gifts to the banquet of history, so to speak. Mycenae, after all, is not a native Greek word. In opposition, the archaeologists accepting the decipherment developed a theory that the Greeks, or the speakers of a predecessor language, had entered Greece at the beginning of the Middle Bronze Age, overrunning and incorporating the pre-Greek speakers who had given Mycenae its name, this event being reflected in the shaft graves at Mycenae. The Late Bronze Age was thus a floruit of Greek imperial domination, which Tsountas and others were now calling "The Mycenaean Age."
Society and culture
Submycenaean is now generally regarded as the final stage of Late Helladic IIIC (and perhaps not even a very significant one), and is followed by Protogeometric pottery (1050/25–900 BC). Archaeological evidence for a Dorian invasion at any time between 1200 and 900 BC is absent, nor can any pottery style be associated with the Dorians. Dendrochronological and C14 evidence for the start of the Protogeometric period now indicates this should be revised upwards to at least 1070 BC, if not earlier. The remnants of Mycenaean pottery allow archaeologists to date the sites they have excavated.
The estimated date of a site allows historians to develop timelines that contribute to the understanding of ancient civilization. Furthermore, from where pottery shards were excavated, historians can draw conclusions about the different classes of people who used them. Because of the large amount of trading the Mycenaeans did, tracking whom they traded with can indicate the extent of their power and influence in their own society and others. Historians can thus learn who the Mycenaeans were, where their pottery mainly came from, who was reigning at the time and the prevailing economic conditions. Historians do not know why dominance passed from the Minoans to the Mycenaeans, but much of the influence on the pottery comes from Minoan culture: shapes as well as designs are direct influences from the Minoans. The Mycenaeans did not change the design of their pottery greatly, but the development of the stirrup jar became a huge influence on other communities. Fresco painting became an influence on the pictures painted on the pottery. Most of these images depict the warlike attitude of the Mycenaeans; animals also became a common feature painted on the pottery.
Through the excavation of tombs in Greece, archaeologists believe that much of the pottery found belonged to the upper class, while the making of pottery was seen as the work of slaves or of the lower class. Graves with few pots or vessels indicate the burial was for a poorer family; these are usually not of much worth and are less elaborate than those of the higher class. Pottery was used for ceremonies or as gifts to other rulers in the Mycenaean cities. For historians to decipher what a pot was used for, they have to look for physical characteristics that indicate its function. Some indicators can be:
Where the pottery was extracted from (i.e., houses, graves, temples)
Dimension and shape: what the capacity is, stability, manipulation and how easy it is to extract its contents
Surface wear: scratches, pits or chips resulting from stirring, carrying, serving and washing
Soot deposits: whether it was used for cooking
Pottery was mainly used for the storage of water, wine and olive oil and for cooking. Pottery was also "used as a prestige object to display success or power". Most grave sites contain pottery to accompany the dead in their passing into another life. Along with burial rituals and gifts, pottery was widely traded. Much of the Mycenaeans' wealth came from the trading they did along the coasts of the Mediterranean. When power passed from the Minoans to the Mycenaeans, Crete and Rhodes became major trading points. Trading eventually moved further north, as far as Mount Olympus. With their growing power and influence, trading went as far as Egypt, Sicily, and the coast of Italy. Other sites where pottery has been discovered are the Baltic, Asia Minor, Spain, and most of the Aegean. The Mycenaeans also traded with Neolithic societies. Around 1250 BC, the Mycenaeans combined forces to take over Troy, due among other reasons to the high taxation of ships passing through the channel. With the coming of the Bronze Age Collapse, famine became more prevalent and many families moved to places closer to food production around the Eastern Mediterranean. With a declining population, production of pottery also declined. Pottery did not become a lost art form like many others, but it became more rugged. With the establishment of trade, prices were agreed upon before ships were sent out.
Other materials such as olive oil, wine, fabrics and copper were traded.
See also
Cycladic chronology
Cypriot Bichrome ware
Helladic chronology
Helladic period
Minoan chronology
Minyan ware
Notes
References
Chadwick's second edition includes the first edition by Chadwick and Michael Ventris, which is considered the "Constitution" of Linear B studies. It contains a Foreword by A. J. B. Wace, who had died by the time of the second edition.
A. Furumark, Mycenaean Pottery I: Analysis and Classification (Stockholm 1941, 1972)
Reynold Higgins, Minoan and Mycenaean Art (London, 1967)
Fred S. Kleiner, Gardner's Art Through the Ages (Boston, 2010)
Spyridon Marinatos, Crete and Mycenae (London, 1960)
P. A. Mountjoy, Mycenaean Decorated Pottery: A Guide to Identification (Göteborg 1986)
Mycenaean Pictorial Art and Pottery
Includes a Preface by William Ewart Gladstone, then MP.
Lord William Taylour, The Mycenaeans (London, 1964)
Further reading
Betancourt, Philip P. 2007. Introduction to Aegean Art. Philadelphia: INSTAP Academic Press.
Preziosi, Donald, and Louise A. Hitchcock. 1999. Aegean Art and Architecture. Oxford: Oxford University Press.
External links
Pottery Ancient Greek vase-painting styles Ancient Greek pottery
361571
https://en.wikipedia.org/wiki/System%20console
System console
One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.
Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display.
History
Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels, while the very first programmable computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions; the CRT displayed memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM. Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support many more than three consoles, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on.
On early minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as an ASR-33 or, later, a DECwriter or DEC VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems, usually with a terminal emulator running on a laptop. Also, routers, enterprise network switches and other telecommunication equipment have RS-232 serial console ports.
On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet. Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used.
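Accessing a serial console from another machine, as described above, is usually done with a terminal emulator, but it can also be scripted. The following is a minimal sketch only, assuming the third-party pyserial package and a console attached at /dev/ttyUSB0 running at 115200 baud; the device path, baud rate and prompt handling are assumptions made for illustration, not details taken from this article:

```python
# Minimal sketch: watching a headless machine's serial console with pyserial.
# Assumes the console is wired to /dev/ttyUSB0 at 115200 baud (8N1 defaults).
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
    console.write(b"\r\n")                 # nudge the remote login prompt
    for _ in range(100):                   # read up to 100 lines of output
        line = console.readline()          # boot loader / kernel messages
        if line:
            print(line.decode("ascii", errors="replace"), end="")
```

In practice the same port is more often opened interactively, but a script of this kind is convenient for logging console output unattended.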
Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup or after startup. Starting with the IBM 9672, IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p.
It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.
See also
Command-line interface (CLI)
Console application
Console server
Linux console
Virtual console
Win32 console
References
External links
BIOS Computer systems Computer terminals Out-of-band management Console User interfaces System console ru:Консоль#Программное обеспечение
2855698
https://en.wikipedia.org/wiki/Tangerine%20Microtan%2065
Tangerine Microtan 65
The Tangerine Microtan 65 (sometimes abbreviated M65) is a 6502-based single-board microcomputer, first sold in 1979, which could be expanded into what was, for its day, a comprehensive and powerful system. The design later became the basis for the Oric Atmos and subsequent computers, which have keyboard addressing and tape I/O similar to those of the Microtan 65. The Microtan 65 has a single-step function that can be used for debugging at the hardware level. The computer was available as ready-built boards or as kits consisting of board and components requiring soldering together.
The Microtan 65 was intended as a general-purpose microcomputer which could be used by laboratories, Original Equipment Manufacturers (OEMs) and the computer enthusiast, and it was designed with expandability in mind. In this way the customer could customise the system, be it as a specialised control system, as a learning tool, or as a general-purpose computing device. The price of the Microtan 65 board in 1981 was £79.35 (inc. VAT) in kit form or £90.85 ready-assembled. The system was not generally available in the shops.
To accompany the hardware and to offer further support to users, a magazine was created, the Tansoft Gazette (its name inspired by the Liverpool Software Gazette). This was edited by Tangerine employee Paul Kaufman, who continued as editor when the magazine was renamed Oric Owner. Tansoft also became the name of Tangerine Computer Systems' official software house, which supplied a number of software products and books for the Microtan system and subsequently for the Oric range of computers.
Main board
The Microtan 65 was quite simple by today's standards, with:
an NMOS 6502 CPU running at a 750 kHz clock rate
1K byte of RAM, used both for display memory and user programs
1K byte of ROM for the monitor program
video logic and a television RF modulator, for the 16 rows of 32 characters display
a software-scanned hexadecimal keypad
an optional ASCII keyboard
Display
The major advance that the Microtan 65 had over a lot of the competition at that time was that the video display was flicker-free. At the time many microcomputers would either access the screen memory asynchronously to the video timing (causing flicker and splats on the screen), or would write to the screen memory only during a non-display period (which was slow). The Microtan 65 got over this problem by making use of an incidental feature of the 6502. The 6502 (unlike most other CPUs) has a regular period in each clock cycle when all CPU activity is inside the chip, leaving the external memory available without using complex external arbitration logic. This made video display design simpler and meant that video accesses could be made at maximum speed. This technique is also used on the Oric-1 and Atmos, and in the unrelated Apple II. The 32×16 character display was the reason that the 6502 was clocked at 750 kHz. To get the circuitry to work at a (nearly) standard video rate meant that the pixel clock had to be 6 MHz. When the Microtan 65 was designed only a 1 MHz 6502 was available, and so 750 kHz was used (6 MHz divided by 8).
Software
The 1K byte monitor program (later increased to 2K) is called TANBUG.
The software facilities were rudimentary:
M = Memory modify / examine
L = List a block of memory
G = Go command (run a program)
R = Registers display / modify
S = set Single step mode
N = set Normal mode (cancel S command)
P = Proceed command (execute next instruction in Single step mode)
B = set Breakpoints
O = calculate Offset for use in branch instructions
C = Copy a block of memory
Memory map
The Microtan 65 memory map is shown below ($ representing a hexadecimal memory address):
$0000 Zero page
$0100 Stack
$0200 Screen RAM
$0300
$0400 End of Microtan 65 RAM - map continued from $0400 to $0700 as RAM on TANEX
$8000 I/O
$C000
$F800 TANBUG V2
$FFFF
The screen memory occupies the space between $200 and $3FF. In addition to the standard 8 bits of screen RAM, there was an additional single-bit RAM shadowing the $200 to $300 space. This was configured as a 9th-bit, write-only plane, and was used by the Microtan 65 for rudimentary, or "chunky", graphics. Setting the 9th bit displayed a Minitel-type block graphic. The display is 32 characters across by 16 lines down, with memory address $200 representing the top left hand displayed character, $220 the second row, etc. The character representation is standard ASCII. Several pieces of Microtan 65 software write to the bottom line by writing to memory starting at $3E0 (the leftmost character on the bottom line) rather than vectoring through TANBUG.
Input/output
I/O in the Microtan 65 is decoded into a 16 KB space to simplify the hardware. In fact the 1 KB of RAM is mirrored through the bottom 32 KB, the I/O through the next 16 KB, and the EPROM through the top 16 KB. If an expansion board was added (see TANEX below), the decoding was modified and the wasted space reclaimed. In common with other 6502 designs, I/O is mapped into the memory space; there is no dedicated I/O space as on the Z80, 8086, etc. The I/O ports are (when fully decoded):
Write to $BFF0 - Clear keyboard flag (the keyboard would generate an IRQ)
Read from $BFF0 - Turn graphics on (enables "9th bit" graphics writes)
Write to $BFF1 - Used by the hardware single step
Write to $BFF2 - Write a scan pattern to the hex keypad (if fitted)
Write to $BFF3 - Turn graphics off (disables "9th bit" graphics writes)
Read from $BFF3 - Read keyboard port (either keypad or ASCII keyboard)
TANEX expansion board
Adding a TANEX board provided a number of features:
an add-on to TANBUG called XBUG
space for an additional 7K bytes of RAM
five EPROM sockets
two 6522 VIAs
a 6551 UART, providing a cassette interface for storing and retrieving programs (300 baud CUTS, and 2400 baud), and a serial interface
Without a TANEX board, and due to deliberately ambiguous address decoding, the address $F7F7 would appear to the 6502 to have the same data as $FFF7. In TANBUG, this is a jump to an internal monitor routine. With TANEX installed, $F7F7 is decoded properly, and that address is an entry point into XBUG. XBUG provided features such as cassette tape loading and saving, a simple assembler/disassembler and a hex calculator. The ROM sockets on TANEX could be used to run a 10K Microsoft Extended BASIC, a two-pass assembler, or even (and more likely given the hardware bias of the Microtan 65) code written for a specific hardware control application. The price of the TANEX board in 1981 was £49.45 as a "minimum configuration" kit - lacking one of the 6522 VIAs and the 6551 and with 1K of RAM - and £60.95 for a similar board fully assembled.
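The screen addressing and the simplified, unexpanded address decoding described above can be illustrated with a short sketch. This is a modern, hypothetical Python model written only for this description; it is not period Microtan software, and the function names are invented:

```python
# Hypothetical model of the Microtan 65 screen layout and unexpanded
# address decoding described above; for illustration only.

SCREEN_BASE = 0x0200           # top-left character of the 32 x 16 display
COLS, ROWS = 32, 16

def screen_address(row, col):
    """Address of the character cell at (row, col), row 0 at the top."""
    assert 0 <= row < ROWS and 0 <= col < COLS
    return SCREEN_BASE + row * COLS + col

def region(addr):
    """Classify an address per the simplified decoding given above:
    1 KB RAM mirrored through the bottom 32 KB, I/O in the next 16 KB,
    EPROM in the top 16 KB."""
    if addr < 0x8000:
        return "RAM (1 KB mirrored)"
    if addr < 0xC000:
        return "I/O"
    return "EPROM"

print(hex(screen_address(15, 0)))   # 0x3e0, leftmost cell of the bottom line
print(region(0xBFF0))               # I/O (keyboard / graphics control port)
```

The $3E0 result matches the address quoted above for software that writes directly to the bottom line of the display.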
TANEX was also available with the board fully populated with chips (although excluding XBUG, ROMs and BASIC), and in this form the costs were £103.16 for the kit and £114.66 ready-assembled.
Further expansion
The Microtan 65 was designed as a modular system able to be expanded as required, and for this each board included an 80-pin connector at one end allowing it to be plugged into a backplane-type motherboard. A simple two-socket "Mini Motherboard" connected the Microtan 65 and TANEX boards for minimum expansion, and in this form the system was also available ready-built from Tangerine, complete with case and full ASCII keyboard, as the Tangerine Micron, costing £395.00 in 1981. For further expansion the builder could purchase the full "System Motherboard", which featured an additional ten sockets, bringing the total available sockets to twelve. For housing this, a rack-based "System Rack" case was available, in black and silver with a black front panel trimmed in Tangerine's trademark orange.
Additional boards became available with time, including: TANRAM, a 40K memory board made up of 32K of dynamic and 8K of static RAM, bringing the total non-paged memory to 48K; a dedicated parallel I/O board featuring 16 parallel input/output ports; a similar dedicated serial I/O board featuring 8 serial input/output ports; a disk controller board for use with disk drives; a Disk Operating System, TANDOS 65; a high-resolution graphics board featuring 8K of static graphics RAM giving a resolution of 256 × 256 pixels; a dedicated 32K ROM board (aimed mainly at OEM and general-purpose applications or for use with AIM, KIM and SYM systems), capable of holding either 8 × 2732 or 16 × 2716 EPROMs; and a 32K RAM board featuring two 16K banks of 4116 dynamic RAM, again intended for similar purposes to the 32K ROM board. In addition, several third-party suppliers offered boards designed for use with the Microtan 65 system.
In addition to the BASIC programming language, Tangerine also released on disk TANFORTH, an extended version of FIG FORTH featuring a full FORTH compiler and editor.
See also
Research Machines 380Z
References
External links
This article was based on:
Geoff Mcdonald's webpage - author's experience with the Microtan 65
Fabrice Frances' website - includes a Microtan 65 emulator written in Java
Binary Dinosaurs - tracing the history of computers
www.microtan.ukpc.net - manuals, ROM images, magazine and newsletter articles
Personal Computer News: Back From The Brink - A new look at the Microtan 65 - a DIY micro that narrowly escaped extinction
6502-based home computers Home computers Early microcomputers Computer-related introductions in 1979 Tangerine Computer Systems
1659535
https://en.wikipedia.org/wiki/RKWard
RKWard
RKWard is a transparent front-end to the R programming language, a scripting language with a strong focus on statistical functions. RKWard tries to combine the power of the R language with the ease of use of commercial statistical packages. RKWard is written in C++ and, although it can run in numerous environments, it was designed for and integrates with the KDE desktop environment and the Qt libraries.
Features
RKWard's features include:
Spreadsheet-like data editor
Syntax highlighting, code folding and code completion
Data import (e.g. SPSS, Stata and CSV)
Plot preview and browsable history
R package management
Workspace browser
GUI dialogs for all kinds of statistics and plots
Interface
RKWard aims to be easy to use, both for people with deep knowledge of R and for users who, although experienced in statistics, are not familiar with the language. The application's design allows the graphical tools to be used fully, or largely ignored so that the program serves as an integrated development environment. It includes a workspace viewer, which gives access to packages, functions and variables loaded by R or imported from other sources. It also has a file viewer, data set editing windows, display of the contents of variables, help, a command log and HTML output. It also offers components that help with code editing and direct command execution, such as the script window and the R console, where complete commands or programs can be entered as in the original R text interface. It provides additional assistance such as syntax coloring and documentation of functions while writing, and it can capture graphs or pop-up dialogs produced by R, offering additional options for handling, saving and exporting them.
Package Management
R package management is carried out through a configuration dialog that allows one to, either automatically (because a plug-in requires it) or manually, install new packages from the project's official repository, update existing ones, delete them, or load and unload them from the workspace.
Add-ons system
Thanks to its add-ons system, RKWard constantly expands the number of functions that can be accessed without writing code directly. These components allow instructions in R for the most common or complex statistical operations to be generated from a graphical user interface. In this way, even without deep knowledge of the language, it is possible to perform advanced data analyses or produce elaborate graphs. The results of the computations are formatted and presented as HTML, making it possible, with a single click and drag, to export tables and graphs to, for example, office suites.
rk.Teaching
RKTeaching (stylized as rk.Teaching) is a package specially designed for use in teaching and learning statistics, integrating modern packages (such as R2HTML, plyr and ggplot2, among others) so that their results are presented as native RKWard output. As of 2020, RKTeaching is at version 1.3.0.
See also
Comparison of statistical packages
R interfaces
References
External links
RKWard homepage
'RKWard: A Comprehensive Graphical User Interface and Integrated Development Environment for Statistical Analysis with R', Stefan Rödiger, Thomas Friedrichsmeier, Prasenjit Kapat, Meik Michalke, Journal of Statistical Software
Free R (programming language) software Free software programmed in C++ Free statistical software KDE Applications PHP software
1688759
https://en.wikipedia.org/wiki/Software%20deployment
Software deployment
Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur at the producer side or at the consumer side or both. Because every software system is unique, the precise processes or procedures within each activity can hardly be defined. Therefore, "deployment" should be interpreted as a general process that has to be customized according to specific requirements or characteristics. History When computers were extremely large, expensive, and bulky (mainframes and minicomputers), the software was often bundled together with the hardware by manufacturers. If business software needed to be installed on an existing computer, this might require an expensive, time-consuming visit by a systems architect or a consultant. For complex, on-premises installation of enterprise software today, this can still sometimes be the case. However, with the development of mass market software for the new age of microcomputers in the 1980s came new forms of software distribution first cartridges, then Compact Cassettes, then floppy disks, then (in the 1990s and later) optical media, the internet and flash drives. This meant that software deployment could be left to the customer. However, it was also increasingly recognized over time that configuration of the software by the customer was important and that this should ideally have a user-friendly interface (rather than, for example, requiring the customer to edit registry entries on Windows). In pre-internet software deployments, deployments (and their closely related cousin, new software releases) were of necessity expensive, infrequent, bulky affairs. It is arguable therefore that the spread of the internet made end-to-end agile software development possible. Indeed, the advent of cloud computing and software as a service meant that software could be deployed to a large number of customers in minutes, over the internet. This also meant that typically, deployment schedules were now determined by the software supplier, not by the customers. Such flexibility led to the rise of continuous delivery as a viable option, especially for less risky web applications. Deployment activities Release The release activity follows from the completed development process and is sometimes classified as part of the development process rather than deployment process. It includes all the operations to prepare a system for assembly and transfer to the computer system(s) on which it will be run in production. Therefore, it sometimes involves determining the resources required for the system to operate with tolerable performance and planning and/or documenting subsequent activities of the deployment process. Installation and activation For simple systems, installation involves establishing some form of command, shortcut, script or service for executing the software (manually or automatically). For complex systems it may involve configuration of the system possibly by asking the end user questions about its intended use, or directly asking them how they would like it to be configured and/or making all the required subsystems ready to use. Activation is the activity of starting up the executable component of software for the first time (not to be confused with the common use of the term activation concerning a software license, which is a function of Digital Rights Management systems.) 
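As a rough illustration of the installation and activation activities described above, the following minimal Python sketch copies a hypothetical application directory into an install location and creates a launcher command that is then used to start (activate) the software for the first time. All paths and names are invented for the example; it assumes a POSIX-style shell and a python3 interpreter on the PATH, and it is not tied to any particular product's installer.

```python
import os
import shutil
import stat
import tempfile
from pathlib import Path

def install(package_dir: Path, install_root: Path, bin_dir: Path) -> Path:
    """Copy the application's files into place and create a launcher script.

    This mirrors the 'installation' activity: establishing a command that can
    later be used to execute the software.
    """
    target = install_root / package_dir.name
    if target.exists():
        shutil.rmtree(target)            # naive upgrade: remove any old copy first
    shutil.copytree(package_dir, target)

    launcher = bin_dir / package_dir.name
    launcher.write_text(f"#!/bin/sh\nexec python3 {target / 'main.py'} \"$@\"\n")
    launcher.chmod(launcher.stat().st_mode | stat.S_IXUSR)  # make it executable
    return launcher

if __name__ == "__main__":
    # Build a throwaway 'package' and 'system' layout so the sketch runs anywhere
    # without privileges.
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        pkg = tmp / "exampleapp"
        pkg.mkdir()
        (pkg / "main.py").write_text("print('exampleapp is now active')\n")

        install_root = tmp / "opt"; install_root.mkdir()
        bin_dir = tmp / "bin"; bin_dir.mkdir()

        launcher = install(pkg, install_root, bin_dir)
        print("installed launcher at", launcher)

        # 'Activation': the first start-up of the installed executable component.
        os.system(f"sh {launcher}")
```

Real installers additionally handle configuration questions, dependencies and service registration, but the basic pattern of copying files into place and establishing an executable entry point is the same.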
In larger software deployments on servers, the main copy of the software to be used by users - "production" - might be installed on a production server in a production environment. Other versions of the deployed software may be installed in a test environment, development environment and disaster recovery environment. In complex continuous delivery environments and/or software as a service systems, differently-configured versions of the system might even exist simultaneously in the production environment for different internal or external customers (this is known as a multi-tenant architecture), or even be gradually rolled out in parallel to different groups of customers, with the possibility of canceling one or more of the parallel deployments. For example, Twitter is known to use the latter approach for A/B testing of new features and user interface changes. A "hidden live" group can also be created within a production environment, consisting of servers that are not yet connected to the production load balancer, for the purposes of blue-green deployment. Deactivation Deactivation is the inverse of activation, and refers to shutting down any already-executing components of a system. Deactivation is often required to perform other deployment activities, e.g., a software system may need to be deactivated before an update can be performed. The practice of removing infrequently used or obsolete systems from service is often referred to as application retirement or application decommissioning. Uninstallation Uninstallation is the inverse of installation. It is the removal of a system that is no longer required. It may also involve some reconfiguration of other software systems in order to remove the uninstalled system's dependencies. Update The update process replaces an earlier version of all or part of a software system with a newer release. It commonly consists of deactivation followed by installation. On some systems, such as on Linux when using the system's package manager, the old version of a software application is typically also uninstalled as an automatic part of the process. (This is because Linux package managers do not typically support installing multiple versions of a software application at the same time unless the software package has been specifically designed to work around this limitation.) Built-in update Mechanisms for installing updates are built into some software systems (or, in the case of some operating systems such as Linux, Android and iOS, into the operating system itself). Automation of these update processes ranges from fully automatic to user initiated and controlled. Norton Internet Security is an example of a system with a semi-automatic method for retrieving and installing updates to both the antivirus definitions and other components of the system. Other software products provide query mechanisms for determining when updates are available. Version tracking Version tracking systems help the user find and install updates to software systems. For example: Software Catalog stores version and other information for each software package installed on a local system. One click of a button launches a browser window to the upgrade web page for the application, including auto-filling of the user name and password for sites that require a login. 
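As an aside on the mechanics involved, the sketch below shows the core comparison a version-tracking tool performs: parsing version strings into numeric components so that, for example, 1.10.0 correctly sorts after 1.9.4 (a plain string comparison would get this wrong). The package names and version numbers are hypothetical; this is a minimal sketch, not the behaviour of any particular catalogue product.

```python
from itertools import zip_longest

def parse_version(version: str) -> tuple:
    """Turn '1.10.2' into (1, 10, 2) so comparison is numeric, not lexicographic."""
    return tuple(int(part) for part in version.split("."))

def update_available(installed: str, latest: str) -> bool:
    # Pad the shorter tuple with zeros so '2.0' and '2.0.0' compare as equal.
    pairs = list(zip_longest(parse_version(installed), parse_version(latest), fillvalue=0))
    current = tuple(p[0] for p in pairs)
    upstream = tuple(p[1] for p in pairs)
    return upstream > current

# Hypothetical catalogue of installed packages and the latest versions advertised upstream.
installed_versions = {"exampleapp": "1.9.4", "othertool": "2.0"}
latest_versions = {"exampleapp": "1.10.0", "othertool": "2.0.0"}

for name, current in installed_versions.items():
    latest = latest_versions[name]
    if update_available(current, latest):
        print(f"{name}: update available ({current} -> {latest})")
    else:
        print(f"{name}: up to date ({current})")
```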
On Linux, Android and iOS this process is even easier because a standardized process for version tracking (for software packages installed in the officially supported way) is built into the operating system, so no separate login, download and execute steps are required so the process can be configured to be fully automated. Some third-party software also supports automated version tracking and upgrading for certain Windows software packages. Deployment roles The complexity and variability of software products have fostered the emergence of specialized roles for coordinating and engineering the deployment process. For desktop systems, end-users frequently also become the "software deployers" when they install a software package on their machine. The deployment of enterprise software involves many more roles, and those roles typically change as the application progresses from the test (pre-production) to production environments. Typical roles involved in software deployments for enterprise applications may include: in pre-production environments: application developers: see Software development process build-and-release engineers: see Release engineering release managers: see Release management deployment coordinators: see DevOps in production environments: system administrator database administrator release coordinators: see DevOps operations project managers: see ITIL See also Application lifecycle management Product lifecycle management Systems management System deployment Software release Definitive Media Library Readme Release management Deployment environment References External links Standardization efforts Solution Installation Schema Submission request to W3C OASIS Solution Deployment Descriptor TC OMG Specification for Deployment and Configuration of Component-based Distributed Applications (OMG D&C) JSR 88: Java EE Application Deployment Articles Resources Visual Studio Release Management Software distribution System administration Software release
2931056
https://en.wikipedia.org/wiki/3Com%20Audrey
3Com Audrey
The 3Com Ergo Audrey is a discontinued internet appliance from 3Com. It was released to the public on October 17, 2000 for USD499 as the only device in the company's "Ergo" initiative to be sold. Once connected to an appropriate provider, users could access the internet, send and receive e-mail, play audio and video, and synchronize with up to two Palm OS-based devices. Audrey was the brainchild of Don Fotsch (formerly of Apple Computer and U.S. Robotics) and Ray Winninger. Don and Ray had a vision for a family of appliances, each designed for a specific room in the house. The brand Ergo was meant to convey that intent, as in "it's in the kitchen, ergo it's designed that way". There were plans to serve other rooms in the house as well. They considered the kitchen to be the heart of the home and the control room for the home manager. Don coined the phrase "Internet Snacking" to describe the lightweight web browsing done in this environment. The name Audrey was given to this first product to honor Audrey Hepburn. It was meant to deliver the elegance that she exuded. The project codename was "Kojak", named after the Telly Savalas character. The follow-on product targeted for the family room was code named "Mannix". 3Com discontinued the product on June 1, 2001, in the wake of the dot-com crash, after only seven and a half months on the market. Only 3Com direct customers received full refunds for the product and accessories. Customers who had bought Audrey devices through other vendors were not offered refunds and never even notified about the refunds. The remaining Audrey hardware was liquidated and embraced by the hardware hacker community. Hardware The Audrey is a touchscreen, passive matrix LCD device and came equipped with a stylus. All applications were touch-enabled. Since the standard infrared keyboard was only needed for typing tasks, it could be hung out of the way on the rear of the unit. The stylus was to be placed in a receptacle on the top of the screen with an LED that flashed when email arrived. Buttons on the right side of the screen were used to access the web browser, email application, and calendar, and a wheel knob at the bottom selected different "channels" of push content. The 3Com Audrey is powered by a 200 MHz Geode GX 1 CPU, with 16 MB of flash ROM and 32 MB of RAM. It measures 9 x 11.8 x 3.0 inches (22.86 x 29.97 x 7.62 cm), and weighs 4.1 pounds (1.86 kg). It is powered by the QNX operating system. The Audrey is equipped with a modem, two USB ports, and a CompactFlash socket. A USB Ethernet adapter was commonly used for broadband subscribers. The Audrey was also available in such shades as "linen" (off-white), "meadow" (green), "ocean" (blue), "slate" (grey), and "sunshine" (light yellow). Hacking After the demise of official support, the Audrey drew the attention of computer enthusiasts. They quickly discovered an exploit to launch a pterm session. Using privilege escalation techniques, the root password in the passwd file could be edited, opening the box to further experimentation. Many of the tools for the QNX operating system development platform were quickly adapted for use in the Audrey, including an updated web browser (Voyager), an MP3 player, digital rotating photoframe, and other applications. The CompactFlash slot was also investigated. Although it could not be used for storage expansion, the Audrey was set to flash its operating system from the slot. Soon, a variety of replacement OS images were distributed among enthusiasts. 
As the device could utilize an optional Ethernet connection, it was an easy task to mount a remote disk drive served up by a neighboring desktop system, thus allowing for virtually unlimited storage capability.
Similar devices
Devices similar to the Audrey included the i-Opener, the Virgin Webplayer and the Gateway Touch Pad.
References
Information appliances Tablet computers Computer-related introductions in 2000
9277280
https://en.wikipedia.org/wiki/Macecraft%20Software
Macecraft Software
Macecraft Software is a Finnish computer software company founded in 2001 by Jouni Vuorio and Jani Vuorio. The company is mainly known for its utility software jv16 PowerTools. Other products include the standalone registry cleaners RegSupreme Pro and RegSupreme. Before 2003, Jouni Vuorio had developed a freeware program called RegCleaner as a hobby. The company's transition from creating freeware to shareware products also generated heated discussion. In December 2013, a crowdfunding campaign was launched on Indiegogo with the aim of making jv16 PowerTools free and open source. The Thunderclap web site said that the campaign reached 252% of its goal of 500 supporters, with 1,260 subscribers, but Macecraft said that the campaign did not reach its financial goal, so the software was not made free and open source. Instead, contributors were given software updates. The Macecraft discussion forum went offline for a prolonged period at about this time but eventually came back online, with apologies for the prolonged absence, which was attributed to maintenance.
References
External links
Macecraft, Inc.
Software companies of Finland 2001 establishments in Finland
52513729
https://en.wikipedia.org/wiki/Mishi%20Choudhary
Mishi Choudhary
Mishi Choudhary is a technology lawyer and online civil liberties activist working in the United States and India. She is the Legal Director at the Software Freedom Law Center and the Founder of SFLC.in. SFLC.in brings together lawyers, policy analysts and technologists to fight for digital rights; it produces reports and studies on the state of the Indian internet and also has a productive legal arm. Under her leadership, SFLC.in has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platforms that has been used to harass women in India.
Education
Choudhary earned a bachelor's degree in political science and an LLB degree with Honors from the University of Delhi. As the first Free and Open Source Fellow of the Software Freedom Law Center, she earned a Master of Laws degree from Columbia Law School, where she was a Harlan Fiske Stone Scholar.
Career
Choudhary started her career as a litigator at the Delhi High Court and the Supreme Court of India. Following the completion of her Free and Open Source Fellowship, she started working with the Software Freedom Law Center as a Counsel. She served as the Director of International Practice from 2011 to 2013. In July 2013, she was appointed the Legal Director of SFLC. At SFLC, she served as the primary legal representative of many of the world's most significant free software developers and non-profit distributors, including Debian, the Free Software Foundation, Kodi, the Apache Software Foundation, and OpenSSL. She continues to serve some of these projects, along with established businesses, startups and governments using free software in their products and service offerings in the US, Europe, India, China and Korea. As of 2017, she was the only lawyer in the world to appear simultaneously on briefs in the US and Indian Supreme Courts in the same term. She was one of the lead counsels in Shreya Singhal v. Union of India, representing Mouthshut.com, in which the Supreme Court of India delivered a landmark verdict ruling Section 66A of the Information Technology Act unconstitutional. In 2018, she launched her technology law and policy practice, "Mishi Choudhary And Associates".
Honors and awards
In 2015, she was named one of the Asia Society's 21 young leaders building Asia's future for her work as a technology lawyer and online civil liberties activist. In 2016, the Aspen Institute named her a Fellow of the sixth class of the Kamalnayan Bajaj Fellowship and a member of the Aspen Global Leadership Network. In 2017, she won a Digital Women Award in the "Social Impact" category for her work with SFLC.in. In 2017, the Indian magazine Open listed her as a Freedom Fighter and one of the emerging legal guardians of the Free Internet. She is also a current member of the Code of Conduct Committee for the Linux kernel.
References
Year of birth missing (living people) Living people Columbia Law School alumni Delhi University alumni 20th-century Indian women lawyers 20th-century Indian lawyers 20th-century American women lawyers 20th-century American lawyers 21st-century Indian women lawyers 21st-century Indian lawyers 21st-century American women lawyers 21st-century American lawyers
895342
https://en.wikipedia.org/wiki/Linear%20Tape-Open
Linear Tape-Open
Linear Tape-Open (LTO) is a magnetic tape data storage technology originally developed in the late 1990s as an open standards alternative to the proprietary magnetic tape formats that were available at the time. Hewlett Packard Enterprise, IBM, and Quantum control the LTO Consortium, which directs development and manages licensing and certification of media and mechanism manufacturers. The standard form-factor of LTO technology goes by the name Ultrium, the original version of which was released in 2000 and stored 100 GB of data in a cartridge. The ninth generation of LTO Ultrium was announced in 2020 and can hold 18 TB in a cartridge of the same physical size. Upon introduction, LTO Ultrium rapidly defined the super tape market segment and has consistently been the best-selling super tape format. LTO is widely used with small and large computer systems, especially for backup.
History
Half-inch (½-inch, 12.65 mm) magnetic tape on open reels has been used for data storage since the 1950s, beginning with the IBM 7 track format. In the mid-1980s, IBM and DEC put this kind of tape into a single-reel, enclosed cartridge. Although the physical tape was nominally the same size, the technologies and intended markets were significantly different and there was no compatibility between them. IBM called its format the 3480 (after the 3480, the one product that used it) and designed it to meet the demanding requirements of its mainframe products. DEC originally called theirs CompacTape, but later it was renamed DLT and sold to Quantum Corporation. In the late 1980s, Exabyte's Data8 format, derived from Sony's dual-reel cartridge 8 mm video format, saw some popularity, especially with UNIX systems. Sony followed this success with their own now-discontinued 8 mm data format, Advanced Intelligent Tape (AIT). By the late 1990s, Quantum's DLT and Sony's AIT were the leading options for high-capacity tape storage for PC servers and UNIX systems. These technologies were (and still are) tightly controlled by their owners. Consequently, there was little competition between vendors and prices were relatively high. To counter this, IBM, HP and Seagate formed the LTO Consortium, which introduced a more open format focusing on the same mid-range market segment. Much of the technology is an extension of the work done by IBM at its Tucson lab during the previous 20 years. Initial plans called for two LTO formats to directly compete with these market leaders: Ultrium, with half-inch tape on a single reel, optimized for high capacity, and Accelis, with 8 mm tape on dual reels, optimized for low latency. Around the time of the release of LTO-1, Seagate's magnetic tape division was spun off as Seagate Removable Storage Solutions, later renamed Certance, which was subsequently acquired by Quantum.
Generations
Despite the initial plans for two form-factors of LTO technology, only Ultrium was ever produced. The other proposed format was Accelis, developed in 1997 for fast access to data by using a two-reel cartridge that loads at the midpoint of the 8 mm wide tape to minimize access time. IBM's (short-lived) 3570 Magstar MP product pioneered this concept. The real-world performance never exceeded that of the Ultrium tape format, so there was never a demand for Accelis and no drives or media were commercially produced. As of 2008, LTO Ultrium was very popular and there were no commercially available LTO Accelis drives or media. In common usage, LTO generally refers only to the Ultrium form factor.
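The per-generation specification table from the original article is not reproduced here. As a rough illustration of how native (uncompressed) cartridge capacity has grown across generations, the short sketch below uses the headline capacity figures published for each Ultrium generation; these numbers are supplied from the published LTO specifications rather than from the text above, so treat them as illustrative.

```python
# Native (uncompressed) cartridge capacities in gigabytes for each Ultrium
# generation, taken from the published LTO specifications (added here for
# illustration; they are not stated in the surrounding text).
native_capacity_gb = {
    "LTO-1": 100,  "LTO-2": 200,  "LTO-3": 400,   "LTO-4": 800,   "LTO-5": 1500,
    "LTO-6": 2500, "LTO-7": 6000, "LTO-8": 12000, "LTO-9": 18000,
}

previous = None
for generation, capacity in native_capacity_gb.items():
    growth = f"  (x{capacity / previous:.1f} over the previous generation)" if previous else ""
    print(f"{generation}: {capacity:>6} GB{growth}")
    previous = capacity
```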
The first generation of Ultrium tapes were going to be available with four types of cartridge, holding 10 GB, 30 GB, 50 GB, and 100 GB. Only the full length 100 GB tapes were produced. As of 2020, nine generations of LTO Ultrium technology have been made available and three more are planned. Between generations, there are strict compatibility rules that describe how and which drives and cartridges can be used together. Data capacity and speed figures above are for uncompressed data. Most manufacturers list compressed capacities on their marketing material. Capacities are often stated on tapes as double the actual value; they assume that data will be compressed with a 2:1 ratio (IBM uses a 3:1 compression ratio in the documentation for its mainframe tape drives. Sony uses a 2.6:1 ratio for SAIT). See Compression below and the table above. The units for data capacity and data transfer rates generally follow the "decimal" SI prefix convention (e.g. mega = 106), not the binary interpretation of a decimal prefix (e.g. mega = 220). Minimum and maximum reading and writing speeds are drive-dependent. Drives usually support variable-speed operation to dynamically match the data rate flow. This nearly eliminates tape backhitching or "shoe-shining", maximizing overall throughput and device/tape life. Compatibility In contrast to other tape technologies, an Ultrium cartridge is rigidly defined by a particular generation of LTO technology and cannot be used in any other way (with the exception of LTO-M8, see below). While Ultrium drives are also defined by a particular generation, they are required to have some level of compatibility with older generations of cartridges. The rules for compatibility between generations of drives and cartridges are as follows: Up to and including LTO-7, an Ultrium drive can read data from a cartridge in its own generation and the two prior generations. LTO-8 drives can read LTO-7 and LTO-8 tape, but not LTO-6 tape. An Ultrium drive can write data to a cartridge in its own generation and to a cartridge from the one prior generation in the prior generation's format. Some LTO-8 drives may write previously unused LTO-7 tapes with an increased, uncompressed capacity of 9 TB (Type M (M8)). Only new, unused LTO-7 cartridges may be initialized as LTO-7 Type M. Once a cartridge is initialized as Type M it may not be changed back to a 6 TB LTO-7 cartridge. LTO-7 Type M cartridges are only initialized to Type M in an LTO-8 drive. LTO-7 drives are not capable of reading LTO-7 Type M cartridges. An Ultrium drive cannot make any use of a cartridge from a more recent generation. For example, an LTO-2 cartridge can never be used by an LTO-1 drive and even though it can be used in an LTO-3 drive, it performs as if it were in an LTO-2 drive. Within the compatibility rules stated above, drives and cartridges from different vendors are expected to be interchangeable. For example, a tape written on any one vendor's drive should be fully readable on any other vendor's drive that is compatible with that generation of LTO. Core technology Tape specifications Physical structure LTO Ultrium tape is laid out with four wide data bands sandwiched between five narrow servo bands. The tape head assembly, that reads from and writes to the tape, straddles a single data band and the two adjacent servo bands. The tape head has 8, 16, or 32 data read/write head elements and 2 servo read elements. 
The set of 8, 16, or 32 tracks are read or written in a single, one-way, end-to-end pass that is called a "wrap". The tape head shifts laterally to access the different wraps within each band and also to access the other bands. Writing to a blank tape starts at band 0, wrap 0, a forward wrap that runs from the beginning of the tape (BOT) to the end of the tape (EOT) and includes a track that runs along one side of the data band. The next wrap written, band 0, wrap 1, is a reverse wrap (EOT to BOT) and includes a track along the other side of the band. Wraps continue in forward and reverse passes, with slight shifts toward the middle of the band on each pass. The tracks written on each pass partially overlap the tracks written on the previous wrap of the same direction, like roof shingles. The back and forth pattern, working from the edges into the middle, conceptually resembles a coiled serpent and is known as linear serpentine recording. When the first data band is filled (they are filled in 3, 1, 0, 2 order across the tape), the head assembly is moved to the second data band and a new set of wraps is written in the same linear serpentine manner. The total number of tracks on the tape is (4 data bands) × (11 to 52 wraps per band) × (8, 16, or 32 tracks per wrap). For example, an LTO-2 tape has 16 wraps per band, and thus requires 64 passes to fill. Logical structure Since LTFS is an open standard, LTFS-formatted tapes are usable by a wide variety of computing systems. The block structure of the tape is logical so interblock gaps, file marks, tape marks and so forth take only a few bytes each. In LTO-1 and LTO-2, this logical structure has CRC codes and compression added to create blocks of 403,884 bytes. Another chunk of 468 bytes of information (including statistics and information about the drive that wrote the data and when it was written) is then added to create a "dataset". Finally error correction bytes are added to bring the total size of the dataset to 491,520 bytes (480 KiB) before it is written in a specific format across the eight heads. LTO-3 and LTO-4 use a similar format with 1,616,940-byte blocks. The tape drives use a strong error correction algorithm that makes data recovery possible when lost data is within one track. Also, when data is written to the tape it is verified by reading it back using the read heads that are positioned just "behind" the write heads. This allows the drive to write a second copy of any data that fails the verify without the help of the host system. Positioning times While specifications vary somewhat between different drives, a typical LTO-3 drive will have a maximum rewind time of about 80 seconds and an average access time (from beginning of tape) of about 50 seconds. Because of the serpentine writing, rewinding often takes less time than the maximum. If a tape is written to full capacity, there is no rewind time, since the last pass is a reverse pass leaving the head at the beginning of the tape (number of tracks ÷ tracks written per pass is always an even number). Durability LTO tape is designed for 15 to 30 years of archival storage. If tapes are archived for longer than 6 months they have to be stored at a temperature between 16 to 25°C (61 to 77°F) and between 20 – 50% RH. Both drives and media should be kept free from airborne dust or other contaminants from packing and storage materials, paper dust, cardboard particles, printer toner dust etc. 
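Before turning to the wear figures below, the track-count arithmetic described above can be made concrete with a small sketch. It simply applies the stated formula, with each wrap corresponding to one end-to-end pass of the tape; the LTO-2 parameters (4 data bands, 16 wraps per band, 8 tracks per wrap) come from the description above, and the function name is only for illustration.

```python
def ultrium_layout(data_bands: int, wraps_per_band: int, tracks_per_wrap: int):
    """Apply the track-count formula described above: each wrap is one
    end-to-end pass, and every wrap lays down tracks_per_wrap tracks."""
    total_tracks = data_bands * wraps_per_band * tracks_per_wrap
    passes_to_fill = data_bands * wraps_per_band
    return total_tracks, passes_to_fill

# LTO-2 figures from the text: 4 data bands, 16 wraps per band, 8 tracks per wrap.
tracks, passes = ultrium_layout(data_bands=4, wraps_per_band=16, tracks_per_wrap=8)
print(f"LTO-2: {tracks} tracks, {passes} end-to-end passes to fill the tape")  # 512 tracks, 64 passes

# Writing only half of the capacity in each backup needs roughly half the passes,
# which is why partially filled tapes accumulate head and tape wear more slowly.
print(f"Half-full backups: roughly {passes // 2} end-to-end passes per backup")
```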
Depending on the generation of LTO technology, a single LTO tape should be able to sustain approximately 200-364 full file passes. There is a large amount of lifespan variability in actual use. One full file pass is equal to writing enough data to fill an entire tape and takes between 44 and 208 end-to-end passes. Regularly writing only 50% capacity of the tape results in half as many end-to-end tape passes for each scheduled backup, and thereby doubles the tape lifespan. LTO uses an automatic verify-after-write technology to immediately check the data as it is being written, but some backup systems explicitly perform a completely separate tape reading operation to verify the tape was written correctly. This separate verify operation doubles the number of end-to-end passes for each scheduled backup, and reduces the tape life by half. Optional technology The original release of LTO technology defined an optional data compression feature. Subsequent generations of LTO have introduced new optional technology, including WORM, encryption, and partitioning features. Compression The original LTO specification describes a data compression method LTO-DC, also called Streaming Lossless Data Compression (SLDC). It is very similar to the algorithm ALDC which is a variation of LZS. LTO-1 through LTO-5 are advertised as achieving a "2:1" compression ratio, while LTO-6 and LTO-7, which apply a modified SLDC algorithm using a larger history buffer, are advertised as having a "2.5:1" ratio. This is inferior to slower algorithms such as gzip, but similar to lzop and the high speed algorithms built into other tape drives. The actually achievable ratio generally depends on the compressibility of the data, e.g. for precompressed data such as ZIP files, JPEG images, and MPEG video or audio the ratio will be close to or equal to 1:1. WORM New for LTO-3 was write once read many (WORM) capability. This is useful for legal record keeping, and for protection from accidental or intentional erasure, for example from ransomware, or simply human error. An LTO-3 or later drive will not erase or overwrite data on a WORM cartridge, but will read it. A WORM cartridge is identical to a normal tape cartridge of the same generation with the following exceptions: the cartridge memory identifies it to the drive as WORM, the servo tracks are slightly different to allow verification that data has not been modified, the bottom half of the cartridge shell is gray, and it may come with tamper-proof screws. WORM-capable drives immediately recognize WORM cartridges and include a unique WORM ID with every dataset written to the tape. There is nothing different about the tape medium in a WORM cartridge. Encryption The LTO-4 specification added a feature to allow LTO-4 drives to encrypt data before it is written to tape. All LTO-4 drives must be aware of encrypted tapes, but are not required to support the encryption process. All current LTO manufacturers support encryption natively enabled in the tape drives using Application Managed Encryption (AME). The algorithm used by LTO-4 is AES-GCM, which is an authenticated, symmetric block cipher. The same key is used to encrypt and decrypt data, and the algorithm can detect tampering with the data. Tape drives, tape libraries, and backup software can request and exchange encryption keys using either proprietary protocols, or an open standard like OASIS's Key Management Interoperability Protocol. 
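To make the encryption properties described above concrete, here is a minimal host-side sketch of AES-GCM authenticated encryption using the third-party Python cryptography package (a dependency assumed for this example and installed separately). It illustrates that the same symmetric key both encrypts and decrypts a data block and that tampering is detected; it is not the drive firmware's implementation, and real deployments would obtain and exchange keys through the drive or library, for example via KMIP.

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party: pip install cryptography

key = AESGCM.generate_key(bit_length=256)     # one symmetric key both encrypts and decrypts
aesgcm = AESGCM(key)

block = b"backup data destined for tape" * 100
nonce = os.urandom(12)                        # must be unique per encrypted block
label = b"cartridge-0001/dataset-42"          # associated data: authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, block, label)

# Decryption with the same key recovers the data and verifies its integrity...
assert aesgcm.decrypt(nonce, ciphertext, label) == block

# ...while any modification of the ciphertext is detected and rejected.
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01
try:
    aesgcm.decrypt(nonce, bytes(tampered), label)
except InvalidTag:
    print("tampering detected: authentication tag check failed")
```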
Partitioning The LTO-5 specification introduced the partitioning feature that allows a tape to be divided into two separately writable areas, known as partitions. LTO-6 extends the specification to allow 4 separate partitions. The Linear Tape File System (LTFS) is a self-describing tape format and file system made possible by the partition feature. File data and filesystem metadata are stored in separate partitions on the tape. The metadata, which uses a standard XML schema, is readable by any LTFS-aware system and can be modified separately from the data it describes. The Linear Tape File System Technical Work Group of the Storage Networking Industry Association (SNIA) works on the development of the format for LTFS. Without LTFS, data is generally written to tape as a sequence of nameless "files", or data blocks, separated by "filemarks". Each file is typically an archive of data organized using some variation of tar format or proprietary container formats developed for and used by backup programs. In contrast, LTFS utilizes an XML-based index file to present the copied files as if organized into directories. This means LTFS-formatted tape media can be used similarly to other removable media (USB flash drive, external hard disk drive, and so on). While LTFS can make a tape appear to behave like a disk, it does not change the fundamentally sequential nature of tape. Files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. In spite of these disadvantages, there are several use cases where LTFS-formatted tape is superior to disk and other data storage technologies. While LTO seek times can range from 10 to 100 seconds, the streaming data transfer rate can match or exceed disk data transfer rates. Additionally, LTO cartridges are easily transportable and the latest generation can hold more data than other removable data storage formats. The ability to copy a large file or a large selection of files (up to 1.5 TB for LTO-5 or 2.5 TB for LTO-6) to an LTFS-formatted tape, allows easy exchange of data to a collaborator or saving of an archival copy. Cartridges , only Fujifilm and Sony continue to manufacture current LTO media. Compliance-verified licensed manufacturers of LTO technology media at one time were EMTEC, Imation, Fujifilm, Maxell, TDK, and Sony. All other brands of media are manufactured by these companies under contract. Since its bankruptcy in 2003, EMTEC no longer manufactures LTO media products. Imation ended all magnetic tape production in 2011, but continued making cartridges using TDK tape. They later withdrew from all data storage markets, and changed their name to Glassbridge Enterprises in 2017. TDK withdrew from the data tape business in 2014. Verbatim and Quantegy both licensed LTO technology, but never manufactured their own compliance-verified media. Maxell also withdrew from the market. In addition to the data cartridges, there are also Universal Cleaning Cartridges (UCC), which work with all drives. Dimensions All formats use the same cartridge dimensions, 102.0 × 105.4 × 21.5 mm. Memory Every LTO cartridge has a cartridge memory chip inside it. 
It is made up of 511, 255, or 128 blocks of memory, where each block is 32 bytes for a total of 16 KiB for LTO-6 to 8; 8 KiB for LTO-4 and 5; and 4 KiB on LTO-1 to 3 and cleaning cartridges. This memory can be read or written, one block at a time, via a non-contacting passive 13.56 MHz RF interface. This memory is used to identify tapes, to help drives discriminate between different generations of the technology, and to store tape-use information. Every LTO drive has a cartridge memory reader in it. The non-contact interface has a range of 20 mm. External readers are available, both built into tape libraries and PC based. One such reader, Veritape, connects by USB to a PC and integrates with analytical software to evaluate the quality of tapes. This device is also rebranded as the Spectra MLM Reader and the Maxell LTO Cartridge Memory Analyzer. Proxmark3 and other generic RFID readers are also able to read data. Labels The LTO cartridge label in tape library applications commonly uses the bar code symbology of USS-39. A description and definition is available from the Automatic Identification Manufacturers (AIM) specification Uniform Symbol Specification (USS-39) and the ANSI MH10.8M-1993 ANSI Barcode specification. Leader pin The tape inside an LTO cartridge is wound around a single reel. The end of the tape is attached to a perpendicular leader pin that is used by an LTO drive to reliably grasp the end of the tape and mount it in a take-up reel inside the drive. Older single-reel tape technologies, such as 9 track tape and DLT, used different means to load tape onto a take-up reel. When a cartridge is not in a drive, the pin is held in place at the opening of the cartridge with a small spring. A common reason for a cartridge failing to load into a drive is the misplacement of the leader pin as a result of the cartridge having been dropped. The plastic slot where the pin is normally held is deformed by the drop and the leader pin is no longer in the position that the drive expects it to be. Erasing The magnetic servo tracks on the tape are factory encoded. Using a bulk eraser, degaussing, or otherwise exposing the cartridge to a strong magnetic field, will erase the servo tracks along with the data tracks, rendering the cartridge unusable. Erasing the data tracks without destroying the servo tracks requires special equipment. The erasing head used in these erasers has four magnetic poles that match the width and the location of the data bands. The gaps between the poles correspond to the servo tracks, which are not erased. Tapes erased by this equipment can be recorded again. Cleaning Although keeping a tape drive clean is important, normal cleaning cartridges are abrasive and frequent use will shorten the drive's lifespan. LTO drives have an internal tape head cleaning brush that is activated when a cartridge is inserted. When a more thorough cleaning is required the drive signals this on its display and/or via Tape Alert flags. Cleaning cartridge lifespan is usually from 15 to 50 cleanings. There are 2 basic methods of initiating a cleaning of a drive: robot cleaning and software cleaning. In addition to keeping the tape drive clean, it is also important to keep the media clean. Debris on the media can be deposited onto drive components that are in contact with the tape. This debris can result in increased media wear which generates more debris. Removing excessive debris from tape can reduce the number of data errors. Cleaning of the media requires special equipment. 
These cleaners are also used by Spectra Logic to clean new media that is marketed as "CarbideClean" media. HP LTO Gen.1 drives have a cleaning strategy that will prevent the drive from using the cleaning tape if it is not needed. In a change of strategy, HP LTO Gen 2, 3 and 4 drives will always clean when a Universal Cleaning Cartridge is inserted, whether the drive requires cleaning or not. Mechanisms , compliance-verified licensed manufacturers of current LTO technology mechanisms are IBM, Hewlett-Packard, and Quantum, although both Hewlett Packard and Quantum have stopped new development of drive mechanisms. The mechanisms, also known as tape drives or streamers, are available in Full-height and Half-height form factors. These drives are frequently packaged into external desktop enclosures or carriers that fit into a robotic tape library. Sales and market In the course of its existence, LTO has succeeded in completely displacing all other low-end/mid-range tape technologies such as AIT, DLT, DAT/DDS, and VXA. And after the exit of Oracle StorageTek T10000 of the high-end market, only the IBM 3592 series is still under active development. LTO also competes against hard disk drives (HDDs), and its continuous improvement has prevented the predicted "death of tape" at the hands of disk. The presence of five certified media manufacturers and four certified mechanism manufacturers for a while produced a competitive market for LTO products. However, , there are only two manufacturers developing media, Sony and Fuji, and only IBM is developing mechanisms. The LTO organization publishes annual media shipments measured in both units and compressed capacity. In 2017, a record 108,457 petabytes (PB) of total tape capacity (compressed) shipped, an increase of 12.9 percent over the previous year. Cartridge unit shipments decreased to about 18 million units down from a peak of about 27 million units in 2008. Public information on tape drive sales is not readily available. Unit shipment peaked at about 800,000 units in 2008, but have declined since then to about 400,000 units in 2010, and to less than 250,000 by the end of 2018 As HDD prices have dropped, disk has become cheaper relative to tape drives and cartridges. , at any capacity, the cost of a new LTO tape drive plus one cartridge is much greater than that of a new HDD of the same or greater storage capacity. However, most new tape cartridges still have a lower price per gigabyte than HDDs, so that at very large subsystem capacities, the total price of tape-based subsystems can be lower than HDD based subsystems, particularly when the higher operating costs of HDDs are included in any calculation. Tape is also used as offline copy, which can be protection against ransomware that cipher or delete data (e.g. tape is pulled out of the tape library, blocked from writing after making copy or using WORM technology). In 2019, many businesses used tape for backup and archiving. References External links Linear Tape Open Consortium IBM's LTO Redbook: IBM System Storage Tape Library Guide for Open Systems ECMA-319: Ultrium 1 Format IBM LTO Ultrium Cartridge Label Specification, Revision 6 Computer storage tape media Ecma standards Tape-based computer storage
14182603
https://en.wikipedia.org/wiki/Univa
Univa
Univa was a software company that developed workload management and cloud management products for compute-intensive applications in the data center and across public, private, and hybrid clouds, before being acquired by Altair Engineering in September 2020. Univa software manages diverse application workloads and resources, helping enterprises scale and automate infrastructure to maximize efficiency and throughput while also helping them manage cloud spending. Univa’s primary market was High Performance Computing (HPC). Its products were used in a variety of industries, including manufacturing, life sciences, energy, government labs and universities. Univa software was used to manage large-scale HPC, analytic, and machine learning applications across these industries. Products and services Univa developed, sold, and supported Univa Grid Engine software, Univa's version of the popular Grid Engine workload manager. Univa also offered Navops Launch, a solution providing cloud migration, cloud automation, and cloud spend management for users of Univa Grid Engine. Univa announced in January 2011 that it hired personnel formerly working for Oracle and Sun Microsystems on Grid Engine development. On April 12, 2011, Univa rolled out its initial commercial release of Univa Grid Engine based on open source Grid Engine. The commercial released offered new functionality related to performance, and resource control for small and large clusters and provided enterprise customers with support services. On October 22, 2013 Univa announced it acquired the intellectual property, copyrights and trademarks pertaining to the Grid Engine technology from Oracle and that it would support Oracle Grid Engine customers. Univa announced Navops in May 2016, a new business unit with products enabling enterprises to easily migrate to the cloud or deploy hybrid clouds. Among the public clouds supported by Navops Launch are Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure. Navops Launch provides cloud automation features to dynamically configure cloud-based Univa Grid Engine clusters and scale them based on application workloads and configurable policies. Navops Launch also provides cloud spend management features to help organizations monitor and manage cloud spending by extracting information from cloud-specific APIs. Organizations can use Navops Launch to manage spending by project, group, department, and cost-center. Univa enhanced both Univa Grid Engine and Navops Launch, supporting new operating environments, machine architectures, and cloud providers. Key areas of focus for both Univa Grid Engine and Navops Launch have been around performance and scalability. Both were demonstrated by a well-publicized one million core cluster deployment in AWS announced in June 2018. Other areas of focus have been around managing containerized and GPU-aware application workloads. Univa offered support services, education services, and consulting services related to installation, tuning, and configuration for all of its products. History Univa was founded in 2004 under the name Univa Corporation by Carl Kesselman, Ian Foster, and Steve Tuecke and was at that time primarily known for providing open source products and technical support based around the Globus Toolkit. On September 17, 2007, the company announced that it would merge with the Austin, Texas-based United Devices and operate under the new name Univa UD. 
The company operated as Univa UD until formally dropping "UD" and returning to common use of Univa Corporation. Univa announced, on January 17, 2011, that it had hired the principal and founding engineers of the Grid Engine team. On October 22, 2013, Univa announced that it had acquired the intellectual property as well as the copyrights and trademarks pertaining to the Grid Engine technology from Oracle, and that it would take over supporting Oracle Grid Engine customers. In June 2014, a technical partnership between the Sahara Force India Formula One Team and Univa was announced. In May 2016, Univa announced a new business unit, Navops, with a product line based on its Grid Engine cluster workload scheduling technology previously developed at Sun Microsystems and later acquired by Univa from Oracle. Navops Launch enables enterprises to migrate workloads to cloud or hybrid cloud environments. In March 2018, Univa open-sourced Navops Launch (formerly Unicloud) as Project Tortuga under an Apache 2.0 license. In September 2019, Univa announced enhancements to Navops Launch in a 2.0 release, including an enhanced UI, improved automation, and spend management features. In September 2020, Altair Engineering, a global technology company providing solutions in data analytics, product development, and high-performance computing (HPC), acquired Univa.
See also
Computational grid grid.org Job schedulers High-performance computing Supercomputing Big data Distributed computing Grid computing CPU scavenging Cloud computing
References
2020 mergers and acquisitions Supercomputers Cloud computing providers Defunct software companies of the United States Companies based in DuPage County, Illinois Software companies established in 2004 Software companies disestablished in 2020 Lisle, Illinois Grid computing
37292164
https://en.wikipedia.org/wiki/CD%20Projekt
CD Projekt
CD Projekt S.A. () is a Polish video game developer, publisher and distributor based in Warsaw, founded in May 1994 by Marcin Iwiński and Michał Kiciński. Iwiński and Kiciński were video game retailers before they founded the company, which initially acted as a distributor of foreign video games for the domestic market. The department responsible for developing original games, CD Projekt Red, best known for The Witcher series, was formed in 2002. In 2008, CD Projekt launched the digital distribution service GOG.com (originally as Good Old Games). The company began by translating major video-game releases into Polish, collaborating with Interplay Entertainment for two Baldur's Gate games. CD Projekt was working on the PC version of Baldur's Gate: Dark Alliance when Interplay experienced financial difficulties. The game was cancelled and the company decided to reuse the code for their own video game. It became The Witcher, a video game based on the works of Andrzej Sapkowski. After the release of The Witcher, CD Projekt worked on a console port called The Witcher: White Wolf; but development issues and increasing costs almost led the company to the brink of bankruptcy. CD Projekt later released The Witcher 2: Assassins of Kings in 2011 and The Witcher 3: Wild Hunt in 2015, with the latter winning various Game of the Year awards. In 2020, the company released Cyberpunk 2077, an open-world role-playing game based on the Cyberpunk 2020 tabletop game system, for which it opened a new division in Wrocław. A video game distribution service, GOG.com, was established by CD Projekt to help players find old games. Its mission is to offer games free of digital rights management (DRM) to players and its service was expanded to cover new AAA and independent games. In 2009, CD Projekt's then-parent company, CDP Investment, announced its plans to merge with Optimus S.A. in a deal intended to reorganise CD Projekt as a publicly traded company. The merger was closed in December 2010 with Optimus as the legal surviving entity; Optimus became the current incarnation of CD Projekt S.A. in July 2011. By September 2017, it was the largest publicly traded video game company in Poland, worth about 2.3 billion, and by May 2020, had reached a valuation of , making it the largest video game industry company in Europe ahead of Ubisoft. CD Projekt joined WIG20, an index of the 20 largest companies on the Warsaw Stock Exchange, in March 2018. History Founding CD Projekt was founded in May 1994 by Marcin Iwiński and Michał Kiciński. According to Iwiński, although he enjoyed playing video games as a child they were scarce in then-communist Poland. Marcin Iwiński, in high school, was selling cracked copies of Western video games at a Warsaw marketplace. In high school, Iwiński met Kiciński, who became his business partner; at that time, Kiciński was also selling video games. Wanting to conduct business legitimately, Iwiński and Kiciński began importing games from US retailers and were the first importers of CD-ROM games. After Poland's transition to a primarily market-based economy in the early 90s, they founded their own company. Iwiński and Kiciński founded CD Projekt in the second quarter of 1994. With only $2,000, they used a friend's flat as a rent-free office. Localization When CD Projekt was founded, their biggest challenge was overcoming video game piracy. The company was one of the first in Poland to localize games; according to Iwiński, most of their products were sold to "mom-and-pop shops". 
CD Projekt began partial localization for developers such as Seven Stars and Leryx-LongSoft in 1996, and full-scale localization a year later. According to Iwiński, one of their first successful localization titles was for Ace Ventura; whereas previous localizations had only sold copies in the hundreds, Ace Ventura sold in the thousands, establishing the success of their localization approach. With their methods affirmed, CD Projekt approached BioWare and Interplay Entertainment for the Polish localization of Baldur's Gate. They expected the title to become popular in Poland, and felt that no retailer would be able to translate the text from English to Polish. To increase the title's popularity in Poland, CD Projekt added items to the game's packaging and hired well-known Polish actors to voice its characters. Their first attempt was successful, with 18,000 units shipped on the game's release day (higher than the average shipments of other games at the time). The company continued to work with Interplay after the release of Baldur's Gate, collaborating on a PC port for the sequel Baldur's Gate: Dark Alliance. To develop the port, CD Projekt hired Sebastian Zieliński (who had developed Mortyr 2093-1944) and Adam Badowski. Six months after development began, Interplay experienced financial problems and cancelled the PC version. CD Projekt continued to localize other games after Dark Alliances cancellation, and received Business Gazelle awards in 2003 and 2004. CD Projekt Red Enthusiasm for game distribution ebbed, and CD Projekt's founders wondered if the company should continue as a distributor or a game developer after Dark Alliances cancellation. With the game cancelled and its code owned by CD Projekt, the company planned to use them to develop their first original game. They intended to develop a game series based on Andrzej Sapkowski's Wiedźmin books (which were popular in Poland) and the author accepted the company's development proposal. The franchise rights had been sold to Metropolis Software in 1997 and a playable version of the first chapter was made, but then left abandoned. CD Projekt acquired the rights to the Wiedźmin franchise in 2002. According to Iwiński, he and Kiciński had no idea how to develop a video game at that time. To develop the game, the company formed a video-game development studio (CD Projekt Red Sp. z o.o., headed by Sebastian Zieliński) in Łódź in 2002. The studio made a demonstration game, which Adam Badowski called "a piece of crap" in retrospect. The demo was a role-playing game with a top-down perspective, similar to Dark Alliance and Diablo, and used the game engine which powered Mortyr. Iwiński and Kiciński pitched the demo to a number of publishers, without success. The Łódź office closed and the staff, except for Zieliński, moved to the Warsaw headquarters. Zieliński left the company, and Kiciński headed the project. Although the game's development continued, the demo was abandoned. According to CD Projekt, the development team had different ideas for the game and lacked overall direction; as a result, it was returned to the drawing board in 2003. The team, unfamiliar with video-game development, spent nearly two years organising production. They received assistance from BioWare, who helped promote the game at the 2004 Electronic Entertainment Expo by offering CD Projekt space in their booth next to Jade Empire. BioWare also licensed their Aurora game engine to the company. The game's budget exceeded expectations. 
The original 15-person development team expanded to about 100, at a cost of 20 million złoty. According to Iwiński, content was removed from the game for budgetary reasons but the characters' personalities were retained; however, there was difficulty in translating the game's Polish text into English. Atari agreed to publish the game. After five years of development, the game brought Wiedźmin to an international audience, and so the company adopted the English name, The Witcher, coined by Adrian Chmielarz. The Witcher was released in 2007 to generally positive reviews. Sales were satisfactory, and the development of sequels began almost immediately after The Witcher's release. The team began the design work for The Witcher 2: Assassins of Kings, and experimented with consoles to develop a new engine for The Witcher 3. Work on the sequels was halted when the team began work on The Witcher: White Wolf, a console version of The Witcher. Although they collaborated with French studio Widescreen Games for the console port, it entered development limbo. Widescreen demanded more manpower, money and time to develop the title, complaining that they were not being paid; according to Iwiński, CD Projekt paid them more than their own staff members. The team cancelled the project. Unhappy with the decision, Atari demanded that CD Projekt repay them for funding the console port's development, and Iwiński agreed that Atari would be the North American publisher of the sequel, The Witcher 2. CD Projekt acquired Metropolis Software in 2008. The dispute over White Wolf was costly; the company faced bankruptcy, with the financial crisis of 2007–08 as a contributing factor. To stay afloat, the team decided to focus on The Witcher 2, using the engine intended for The Witcher 3. When the engine (known as Red Engine) was finished, the game could be ported to other consoles. To develop The Witcher 2, the company suspended development of Metropolis' first-person shooter, titled They. After three-and-a-half years of development, The Witcher 2: Assassins of Kings was released in 2011 to critical praise and sales of more than 1.7 million copies. After The Witcher 2, CD Projekt wanted to develop an open-world game of a quality similar to their other games, and the company wanted to add features to avoid criticism that it was Witcher 2.5. They wanted to push the game's graphics boundaries, releasing it only for the PC and eighth-generation consoles. This triggered debate within the team, some of whom wanted to release the game for older consoles to maximise profit. The Witcher 3: Wild Hunt took three-and-a-half years to develop and cost over $81 million. A report alleged that the team had to crunch extensively for a year in order to meet release date deadlines. After multiple delays, it was released in May 2015 to critical praise. Wild Hunt was commercially successful, selling six million copies in its first six weeks and giving the studio a profit of 236 million złoty ($62.5 million) in the first half of 2015. The team released 16 free content downloads and two paid expansions, Hearts of Stone and Blood and Wine. The team decided that The Witcher 3: Wild Hunt would be the final game in the series with Geralt. Regarding the future of the Witcher series, Konrad Tomaszkiewicz, game director of The Witcher 3, stated in May 2016 that he hoped to continue working on the series sometime in the future, but had nothing planned at the time. As of 2017, the series had sold over 33 million copies. 
A spin-off of the series, Gwent: The Witcher Card Game, based on the popular card game in The Witcher 3, was released in 2018. The success of The Witcher 3 enabled CD Projekt to expand. In March 2016, the company announced that they had another role-playing game in development, and that the title was scheduled for release between 2017 and 2021. They also announced plans for expansion, under which the Red division would double in size. It also listed itself on the Warsaw Stock Exchange, riding on the success of The Witcher 3. In March 2018, the opening of a new studio in Wrocław was announced. Acquired from a studio called Strange New Things, it is headed by former Techland COO Paweł Zawodny and composed of other ex-Techland, IO Interactive, and CD Projekt Red employees. In August 2018, CD Projekt established Spokko, a development studio focused on mobile gaming. The Witcher 3's success, as well as CD Projekt Red's customer-friendly policies during that period, enabled the studio to earn a lot of goodwill within the gaming community. However, the studio's working conditions were questioned after disgruntled employees flooded the company's profile at Glassdoor with negative reviews. Iwiński later responded by saying that the studio's approach to making games "is not for everyone". Following the successful release of The Witcher 3: Wild Hunt, Cyberpunk 2077, the studio's next title, became one of the most anticipated video games of all time. It is an open-world role-playing game based on the Cyberpunk 2020 tabletop system created by Mike Pondsmith. The game was first announced in May 2012. The hype for the title, alongside the release of The Witcher TV series on Netflix, enabled CD Projekt to become the most valuable video game company in Europe in May 2020, surpassing Ubisoft. The game suffered multiple delays, with the team stressing that they would not release the game until it was ready. While management introduced a "non-obligatory crunch" model for the team to lessen the effects of game development on their personal lives, management broke this promise and forced all developers to crunch and work six days a week. The game was released in December 2020. The PC version received generally positive reviews and became one of the biggest video game launches for PC. The development cost was fully recouped by pre-order sales alone. However, the console versions were plagued with technical issues and software bugs, with some players reporting that these versions were unplayable. The studio was accused of hiding the poor state of the console versions from its customers during the game's marketing. On 18 December 2020, the game was removed from the PlayStation online store. Kiciński acknowledged that the company's approach to marketing the console versions eroded players' trust in the studio, and promised to release patches for the game. In early February 2021, CD Projekt Red was hit by a ransomware attack, with the attackers able to acquire the source code to several of the studio's games, including Gwent, The Witcher 3 and Cyberpunk 2077, as well as administrative files. The attackers demanded CD Projekt Red pay them a large sum of money within a few days under threat of leaking or selling the stolen code and files. 
CD Projekt refused to negotiate with the attackers, stating to the press that "We will not give in to the demands or negotiate with the actor", affirming that no personal information was obtained in the attack and that they were working with law enforcement to track down the attackers. Security analysts saw the code being auctioned on the dark web for a minimum price of ; the auction subsequently closed, with the attackers stating they had received an offer that satisfied them. Within a week of these auctions, the code was being shared online via social media, and CD Projekt began using DMCA takedown notices to remove postings of its code. In March 2021, CD Projekt Red acquired Vancouver, Canada-based Digital Scapes Studios and rebranded the studio as CD Projekt Red Vancouver. In May 2021, it was reported that Tomaszkiewicz had resigned from the studio following workplace bullying allegations. In October 2021, CD Projekt Red acquired Boston-based independent studio The Molasses Flood, the developer of The Flame in the Flood. REDengine REDengine is a game engine developed by CD Projekt Red exclusively for their nonlinear role-playing video games. It replaced the Aurora Engine that CD Projekt Red had previously licensed from BioWare for the development of The Witcher. REDengine is portable across 32- and 64-bit software platforms and runs under Microsoft Windows. REDengine was first used in The Witcher 2: Assassins of Kings for Microsoft Windows. REDengine 2, an updated version of the engine used in The Witcher 2, also runs under Xbox 360, OS X and Linux; however, these ports were made using eON, a compatibility layer similar to Wine. REDengine 3 was designed exclusively for a 64-bit software platform, and also runs under PlayStation 4, Xbox One, and Nintendo Switch. REDengine 2 utilized middleware such as Havok for physics, Scaleform GFx for the user interface, and FMOD for audio. The engine was used for the Xbox 360 port of The Witcher 2. CD Projekt Red created REDengine 3 for the purpose of developing open-world video game environments, such as those of The Witcher 3: Wild Hunt. It introduces improvements to facial and other animations. Lighting effects no longer suffer from reduced contrast ratio. REDengine 3 also supports volumetric effects enabling advanced rendering of clouds, mist, fog, smoke, and other particle effects. There is also support for high-resolution textures and mapping, as well as dynamic physics and an advanced dialogue lip-syncing system. However, due to texture-streaming limitations, high-resolution textures may not always be displayed. REDengine 3 has a flexible renderer prepared for deferred or forward+ rendering pipelines. The result is a wide array of cinematic effects, including bokeh depth-of-field, color grading and lens flares associated with multiple light sources. The terrain system in REDengine 3 uses tessellation and layers of varying materials, which can then be easily blended. Cyberpunk 2077 uses REDengine 4, the next iteration of the REDengine, which introduces support for ray-traced global illumination and other effects. Game distribution In 2008, the company introduced Good Old Games, a distribution service with a digital rights management-free strategy. The service aims to help players find "good old games" and to preserve them. 
To do so, the team needed to unravel licensing issues for defunct developers or negotiate with publishers for distribution rights. To recover old code for conversion to modern platforms, they had to use retail versions or second-hand games. CD Projekt partnered with small developers and large publishers, including Activision, Electronic Arts and Ubisoft, to broaden the service's portfolio of games to triple-A and independent video games. Despite suspicions that it was a "doomed project", according to managing director Guillaume Rambourg, it has expanded since its introduction. Indeed, GOG.com had seen 690,000 units of CD Projekt Red's game The Witcher 3: Wild Hunt redeemed through the service, more than the second-largest digital seller, Steam (approx. 580,000 units), and all other PC digital distribution services combined. As of 8 July 2019, every third Cyberpunk 2077 digital pre-order was sold on GOG.com. Income from GOG.com (known internally as CD Projekt Blue) accrues to CD Projekt Red. Subsidiaries Games developed Business views The company claims to focus on certain aspects and features of a game at a time, in order to ensure quality. The company focuses on the development of role-playing games, with the team working on established franchises with a fan base and introducing lesser-known franchises to a wide audience. The studio has been praised for prioritising quest design over the size of the game world in its open-world games. CD Projekt Red opposes the inclusion of digital-rights-management technology in video games and software. The company believes that DRM is ineffective in halting software piracy, based on data from sales of The Witcher 2: Assassins of Kings. CD Projekt Red found that their initial release (which included DRM technology) was pirated over 4.5 million times; their DRM-free re-release was pirated far less. The Witcher 3: Wild Hunt and Cyberpunk 2077 were released without DRM technology. The team believes that free downloadable content should be an industry standard. The company demonstrated this by releasing several pieces of free DLC for Wild Hunt. According to Studio Head Adam Badowski, CD Projekt Red avoided becoming a subsidiary of another company in order to preserve their financial and creative freedom and ownership of their projects. In 2015, Electronic Arts was rumoured to be attempting to acquire CD Projekt, but this was denied by Iwiński, who said that maintaining the company's independence was something he would be fighting for. Financial details on development, marketing and release costs are freely published, with the company citing being "open in communication" as one of its core values. The company used to follow Rockstar Games' business model, in which a large team works on a single project and the company avoids working on multiple projects at the same time. In March 2021, the company changed its strategy; from 2022, it would work on multiple AAA games in parallel. Controversy In December 2021, CD Projekt agreed to pay $1.85 million to settle a class-action lawsuit over the problematic release of Cyberpunk 2077, a game with several technical issues that many users experienced. References External links Polish companies established in 1994 Companies based in Warsaw Companies listed on the Warsaw Stock Exchange Multinational companies headquartered in Poland Polish brands Video game companies established in 1994 Video game companies of Poland Video game development companies Video game publishers
32707651
https://en.wikipedia.org/wiki/Cape%20Coast%20Technical%20University
Cape Coast Technical University
Cape Coast Technical University, formerly Cape Coast Polytechnic, is a public tertiary institution in the Central Region of Ghana. Cape Coast Polytechnic came into existence in 1984 as a second-cycle institution. It began operating under the Ghana Education Service in 1986. It was then allowed to offer intermediate courses and to award non-tertiary certificates. After the enactment of PNDCL 321 in 1992, the polytechnic was upgraded to tertiary status, which allowed it to run programmes leading to the award of the Higher National Diploma. Courses Cape Coast Technical University has twelve (12) academic departments and three schools. The programmes run by Cape Coast Technical University span the arts, business, applied sciences and engineering, and include Mechanical Engineering, Procurement & Supply Chain Management, Building Technology, Bachelor of Science in Statistics, Accounting with Computing, Civil Engineering, Secretaryship & Management Studies, Marketing and Telecommunication Engineering. Addition of Cyber Security Unit In October 2020, a cyber security department and forensic lab were commissioned. This was part of the National Initiative for Cyber Security Engineering Science and Technology and Education Programme (NICESTEP). It is the brainchild of Cyber Ghana in conjunction with the Royal Academy of Engineering and the Lloyd's Register Foundation, both based in the UK. The initiative was intended to provide skills training and job creation and to curb cybercrime in Ghana. References Polytechnics in Ghana Central Region (Ghana)
19336369
https://en.wikipedia.org/wiki/Web%20threat
Web threat
A web threat is any threat that uses the World Wide Web to facilitate cybercrime. Web threats use multiple types of malware and fraud, all of which utilize HTTP or HTTPS protocols, but may also employ other protocols and components, such as links in email or IM, or malware attachments on servers that access the Web. They benefit cybercriminals by stealing information for subsequent sale and help absorb infected PCs into botnets. Web threats pose a broad range of risks, including financial damages, identity theft, loss of confidential information/data, theft of network resources, damaged brand/personal reputation, and erosion of consumer confidence in e-commerce and online banking. It is a type of threat related to information technology (IT). IT risk, i.e. risk affecting IT systems, has gained an increasing impact on society due to the spread of IT processes. Reaching path Web threats can be divided into two primary categories, based on delivery method – push and pull. Push-based threats use spam, phishing, or other fraudulent means to lure a user to a malicious (often spoofed) website which then collects information and/or injects malware. Push attacks use phishing, DNS poisoning (or pharming), and other means to appear to originate from a trusted source. Precisely targeted push-based web threats are often referred to as spear phishing to reflect the focus of their data-gathering attack. Spear phishing typically targets specific individuals and groups for financial gain. In other push-based web threats, malware authors use social engineering such as enticing subject lines that reference holidays, popular personalities, sports, pornography, world events and other hot topics to persuade recipients to open the email and follow links to malicious websites or open attachments with malware that accesses the Web. Pull-based web threats are often referred to as “drive-by” threats by experts (and more commonly as “drive-by downloads” by journalists and the general public), since they can affect any website visitor. Cybercriminals infect legitimate websites, which unknowingly transmit malware to visitors or alter search results to take users to malicious websites. Upon loading the page, the user's browser passively runs a malware downloader in a hidden HTML frame (IFRAME) without any user interaction. Growth of web threats Giorgio Maone wrote in 2008 that "if today’s malware mostly runs on Windows because it’s the commonest executable platform, tomorrow’s will likely run on the Web, for the very same reason. Because, like it or not, the Web is already a huge executable platform, and we should start thinking of it this way, from a security perspective." The growth of web threats is a result of the popularity of the Web – a relatively unprotected, widely and consistently used medium that is crucial to business productivity, online banking, and e-commerce as well as the everyday lives of people worldwide. The appeal of Web 2.0 applications and websites increases the vulnerability of the Web. Most Web 2.0 applications make use of AJAX, a group of web development programming tools used for creating interactive web applications or rich Internet applications. While users benefit from greater interactivity and more dynamic websites, they are also exposed to the greater security risks inherent in browser client processing. Examples In September 2008, malicious hackers broke into several sections of BusinessWeek.com to redirect visitors to malware-hosting websites. 
Hundreds of pages were compromised with malicious JavaScript pointing to third-party servers. In August 2008, popular social networking sites were hit by a worm using social engineering techniques to get users to install a piece of malware. The worm installs comments on the sites with links to a fake site. If users follow the link, they are told they need to update their Flash Player. The installer then installs malware rather than the Flash Player. The malware then downloads a rogue anti-spyware application, AntiSpy Spider. Another attack compromised humanitarian, government and news sites in the UK, Israel and Asia; in this attack, the compromised websites led, through a variety of redirects, to the download of a Trojan. In September 2017, visitors to TV network Showtime's website found that the website included Coinhive code that automatically began mining for Monero cryptocurrency without user consent. The Coinhive software was throttled to use only twenty percent of a visiting computer's CPU to avoid detection. Shortly after this discovery was publicized on social media, the Coinhive code was removed. Showtime declined to comment for multiple news articles. It is unknown whether Showtime inserted this code into its website intentionally or whether the addition of cryptomining code was the result of a website compromise. Coinhive offers code for websites that requires user consent prior to execution, but less than 2 percent of Coinhive implementations use this code. German researchers have defined cryptojacking as websites executing cryptomining on visiting users' computers without prior consent. With one out of every five hundred websites hosting a cryptomining script, cryptojacking is a persistent web threat. Prevention and detection Conventional approaches have failed to fully protect consumers and businesses from web threats. The most viable approach is to implement multi-layered protection—protection in the cloud, at the Internet gateway, across network servers and on the client (a minimal illustrative code sketch of such layered checks appears below). See also Asset (computing) Attack (computing) Botnets Browser security Countermeasure (computer) Cybercrime Cyberwarfare Denial-of-service attack High Orbit Ion Cannon IT risk Internet safety Internet security Low Orbit Ion Cannon Man-in-the-browser rich Internet applications Threat (computer) Vulnerability (computing) Web applications Web development web threat solution References Internet security Web security exploits Cybercrime Cyberwarfare
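The layered approach described in the prevention section above can be pictured with a small, hypothetical gateway-side filter. The following Python sketch is illustrative only and is not drawn from the article: the blocklist file name, the function names and the regular-expression heuristics (a local domain blocklist, a check for hidden IFRAMEs of the kind used in drive-by downloads, and a check for references to known cryptomining scripts such as Coinhive) are all assumptions chosen for the example; real products rely on reputation services, sandboxing and behavioural analysis rather than simple pattern matching.

import re

def load_blocklist(path="blocklist.txt"):
    # Hypothetical local file with one known-malicious domain per line.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def domain_blocked(url, blocklist):
    # Crude host extraction; a production gateway would use a real URL parser
    # and an online reputation service instead of a static list.
    host = re.sub(r"^https?://", "", url).split("/")[0].lower()
    return host in blocklist

# Heuristics for the pull-based ("drive-by") and cryptojacking patterns
# described above: hidden IFRAMEs and references to known mining scripts.
HIDDEN_IFRAME = re.compile(
    r"<iframe[^>]*(width\s*=\s*[\"']?0|height\s*=\s*[\"']?0|display\s*:\s*none)",
    re.IGNORECASE)
MINER_HINT = re.compile(r"coinhive|cryptonight", re.IGNORECASE)

def scan_page(url, html, blocklist):
    # Returns a list of findings; an empty list means no heuristic fired.
    findings = []
    if domain_blocked(url, blocklist):
        findings.append("URL matches local blocklist")
    if HIDDEN_IFRAME.search(html):
        findings.append("hidden IFRAME (possible drive-by loader)")
    if MINER_HINT.search(html):
        findings.append("cryptomining script reference (possible cryptojacking)")
    return findings

# Example with hypothetical values: a gateway could call
#   scan_page("http://example.test/promo", fetched_html, load_blocklist())
# and warn the user or quarantine the response if any finding is returned.

Each check corresponds to one layer; combining such checks at the gateway with client-side and cloud-based protection reflects the multi-layered strategy described above.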
47900503
https://en.wikipedia.org/wiki/BlackBerry%20Priv
BlackBerry Priv
The BlackBerry Priv is a slider smartphone developed by BlackBerry Limited. Following a series of leaks, it was officially announced by BlackBerry CEO John Chen on September 25, 2015, with pre-orders opening on October 23, 2015, for a release on November 6, 2015. The Priv is the first BlackBerry-branded smartphone that does not run the company's proprietary BlackBerry OS or BlackBerry 10 (BB10) platforms. It instead uses Android, customized with features inspired by those on BlackBerry phones, and security enhancements. With its use of Android—one of two smartphone platforms that significantly impacted BlackBerry's early dominance in the smartphone industry—the company sought to leverage access to the larger ecosystem of software available through the Google Play Store (as opposed to BlackBerry 10 devices, which were limited to native BB10 apps from BlackBerry World and Android apps from the third-party Amazon Appstore running in a compatibility subsystem), in combination with a slide-out physical keyboard and privacy-focused features. The BlackBerry Priv received mixed reviews. Critics praised the Priv's user experience for incorporating BlackBerry's traditional, productivity-oriented features on top of the standard Android experience, including a notifications feed and custom e-mail client. Some critics felt that the device's physical keyboard did not perform as well as those on previous BlackBerry devices, and that the Priv's performance was not up to par with other devices using the same system-on-chip. The Priv was also criticized for being more expensive than similarly equipped devices in its class. Development While BlackBerry was dominant in the early smartphone market, partially due to its strength in the enterprise and governmental markets, the company had struggled in recent years against the worldwide dominance of Android smartphones (led by Samsung Electronics, the biggest maker of Android devices) and of Apple Inc.'s iPhone line. By June 2015, the company's market share in the U.S. consumer market had fallen to 1.2%. Facing a struggling ecosystem for native, third-party software on BlackBerry 10, BlackBerry added a compatibility layer for Android software to the OS, and allowed developers to repackage their Android apps for distribution on BlackBerry World. Later versions added the ability for users to manually install Android app packages. Beginning with the BlackBerry Passport, the Amazon Appstore was bundled with BlackBerry 10 to provide an additional source of third-party Android software. BlackBerry CEO John S. Chen hoped that Amazon's own smartphone, the Fire Phone, would bolster the adoption of the Amazon store and attract more major developers to it, and in turn, to BlackBerry's ecosystem. However, the Fire Phone was a commercial failure, which led to BlackBerry's decision to develop an Android phone of its own. In early 2014, BlackBerry's device head Ron Louks proposed that the company construct an Android device. Company officials, including Chen, showed concerns over the project, as they believed the platform was not secure enough. However, Louks gained support for the project after outlining plans for hardware-based security. At Mobile World Congress 2015, Louks briefly presented a non-functioning prototype of a new BlackBerry 10 phone that featured a sliding keyboard and a screen curved across both sides, similar to the Samsung Galaxy S6 Edge that was unveiled during the same convention. 
In July 2015, new images of the curved device leaked under the codename "Venice"; unlike the version presented at MWC, it was now shown to be running Android 5.0 "Lollipop" rather than BlackBerry 10. Information about the device's software leaked in August 2015, showing a "stock" Android experience augmented with ports of features and apps from BB10, such as BlackBerry Hub. In response to the leaks, Chen officially confirmed during a September 2015 earnings call that BlackBerry would release a high-end Android-based device, now known as the Priv (standing for both "privilege" and "privacy"), in late 2015. Chen felt that the decision to produce an Android phone would help BlackBerry's device business sustain itself, saying that "we have some really committed diehards. I respect that there's a lot of heritage here, a lot of pride. If the math doesn't add up, the math doesn't add up. We could keep the pride and die hungry or we can eat well and not so proud, maybe. So I chose to eat well. It's good for the company to continue to have a shot at building handsets." He also argued that the decision was meant to "[take] advantage of what the industry can offer", whilst continuing to leverage BlackBerry's "core strength". BlackBerry promoted certain security enhancements made to the build of Android bundled with the Priv, which include utilizing features of its SoC to embed unchangeable cryptographic keys in the device hardware which are used to validate critical boot components, thus establishing a "root of trust" designed to foil attempts to tamper with the OS. Additional kernel security enhancements, provided by grsecurity, are also included. BlackBerry also promoted that storage encryption would be enabled by default on the Priv, as well as a general company commitment to timely patch releases for known Android security vulnerabilities, subject to carrier approval. The company stated that BlackBerry 10 devices would continue to co-exist alongside Android-based devices; BlackBerry COO Marty Beard explained that BlackBerry 10 is able to meet "very high-end security needs" that cannot currently be met by Android, while Chen reported that the platform has seen adoption in enterprise and governmental markets. Chen stated that he would consider dropping BlackBerry 10 if his company were able to port all of its security features to Android. However, BlackBerry has not released any new BlackBerry 10-powered devices since, and discontinued its BlackBerry Classic in July 2016. Specifications Hardware The Priv features a , 1440p AMOLED display, which is slightly curved around the horizontal sides of the device. The rear of the device is coated in a "glass weave" material. The screen can be slid up to reveal a hardware keyboard; similar to the BlackBerry Passport, the keyboard is touch-sensitive and can register sliding gestures across its keys for scrolling, text selection, and autocomplete suggestions. A button on the left edge acts as the power button, while two buttons on the right act as separate volume up/down controls. Unlike the Passport, the shorter, shallower key can only be used as a mute button (on the Passport, this key called up the assistant). The Priv utilizes a hexa-core Snapdragon 808 system-on-chip (four low-power Cortex-A53 cores and two Cortex-A57 cores) with 3 GB of RAM, and includes a non-removable 3420 mAh battery which BlackBerry claims can last for 22-and-a-half hours of "mixed use". 
The Priv includes 32 GB of internal storage, with the option to expand the amount of available storage using a microSD card up to 2 TB in size. The device features an 18-megapixel rear-facing camera with phase detection autofocus and optical image stabilization, and a 2-megapixel front camera. Software The Priv shipped with Android 5.1.1 "Lollipop" a month after Android Marshmallow was launched, using a "stock" user experience customized with additional features and BlackBerry-developed apps. BlackBerry Hub (which originates from BlackBerry 10) aggregates notifications and content from multiple sources and allows for granular management of messages and "snoozing" based on time, location or network availability. Hub can also be accessed alongside BlackBerry Search and Google Search options when swiping from the bottom of the screen. The "Productivity Edge" feature allows a tab to be shown on either the left or right curve of the display, which can be dragged out to display an agenda screen. A progress bar can also be displayed on an edge when the device is charging. An application's home screen widget can be made available from its respective shortcut icon by swiping, which displays the widget as a pop-up window. The DTEK app allows users to view an overview of the security and privacy status of the device based on best practices, and provides notifications when apps attempt to access sensitive information or permissions. The Priv also integrates with the pre-existing Android for Work suite, which allows personal and work-oriented data on a device to be segregated (similarly to the BlackBerry Balance features on BB10). In late April 2016, BlackBerry began to release an upgrade to Android 6.0 "Marshmallow"; along with features added to the core Android platform (which include a new permissions system and systems to reduce background activity when the device is not being physically handled, to conserve battery power), it adds S/MIME, Slack, Skype, and Pinterest support to BlackBerry Hub, slide input on the physical keyboard, faster autofocus, and 24 fps and 120 fps video recording modes. The Priv was not updated to Android 7.0 "Nougat". Regular security updates ended in December 2017, although out-of-band updates would still be released in case of critical vulnerabilities. Reception The Priv received mixed reviews. The Verge felt that the Priv's design was "quite good", noting that the thinness of the device's two halves prevented it from feeling "top-heavy" when its keyboard was in use. The Wall Street Journal described the frame and curved screen as being "a pleasure to hold and look at." Ars Technica was critical of the device's overall build quality, with its back cover described as feeling "spongy" and its sliding mechanism as "scratchy" and "friction-filled", while its curved screen was considered a "rather useless gimmick" that was inappropriate for the device's target market. Ars Technica also panned the hardware mute button for being "counterintuitive", as it does not mute the device unless it is already playing audio, otherwise triggering the system volume controls. The hardware keyboard received mixed reviews. The Wall Street Journal described the keyboard as being the "smartphone equivalent of a Colonial butter churn", noting that, although it was faster to type on in comparison to virtual keyboards, the BlackBerry Classic had a larger and wider keyboard. Ars Technica felt it was "unpleasant" to use due to its size, flat keys and how it interacts with the OS. 
PC Magazine described its display as being "beautiful", while Ars Technica felt that it had "grainy" color reproduction. In regard to performance, PC Magazine described the Priv as having "[benchmarked] like [an LG G4] that's been throttled down after some gaming", noting that its AnTuTu scores were lower than those of the Nexus 6P and Samsung Galaxy S6. It also noted that while it wasn't "technically" unresponsive, "there are some complex animations and missed touch or typing inputs that might make you feel like it is". The Wall Street Journal found that the Priv felt "inexcusably slow" at times, reporting instances of slow or unresponsive apps. The battery life was praised, with The Wall Street Journal remarking that it "outran" the Nexus 5X, 6P and iPhone 6S, while PC Magazine credited its aggressive suppression of background activity as improving its standby battery life, stating that it survived a weekend of use with 25% capacity still remaining. In contrast, The Verge and Ars Technica claimed they were unable to reach the device's advertised battery life. PC Magazine praised the Priv's call quality, noting that it had a clear microphone and "delivers very loud maximum speakerphone and earpiece volume with zero distortion or wobble." The Priv's rear-facing camera was criticized for having autofocus issues and for producing washed-out images with poor contrast and low-light performance. Ars Technica approved of the phone's software for staying close to Android's default user experience while providing optional enhancements, but noted that it did not ship with the latest version of Android. BlackBerry Hub was also praised by The Verge for allowing users to "filter all of [their notifications] in a million different ways to get super productive views of what you need to get done really quickly", but criticized for not supporting all services and for not allowing users to archive messages from Gmail within the interface. PC Magazine noted that some of the features BlackBerry added to Android could be accomplished with widgets on other devices. The Wall Street Journal felt that most of the enhancements to the OS were useful and that the email interface was "[trumping] all others when it comes to formatting options", although it was slower than the Gmail app. The focus on user privacy was described as "not running very deep"; although the DTEK app and its privacy suggestions were well received, it was noted that most of the improvements to privacy and security were not exclusive to the Priv and that the hardened kernel, while making the phone more secure in theory, had not been externally audited. PC Magazine similarly questioned the security features, noting that Android "Marshmallow" allows app permissions to be revoked individually (a function not implemented by DTEK) and supports device encryption, and that using Google services requires users to agree to data collection by the company to begin with. The Priv was also criticized for its high price in comparison to other recent "flagship" phones with better specifications. In conclusion, while commending BlackBerry for being in "way better shape with the Priv than it was with any of its BB10 devices", Ars Technica ultimately described the Priv as "passable" and recommended the Nexus 6P as a cheaper alternative. The Wall Street Journal felt that the Priv was "a really good phone for people who want a [hardware] keyboard and a more secure Android experience", but that it "isn't going to put BlackBerry back on top again". 
The Verge felt that the Priv was a "remarkable" debut for Android on BlackBerry hardware, albeit marred by its performance issues. Sales BlackBerry did not provide specific sales numbers for the Priv, only stating in April 2016 that it had sold a total of 650,000 devices during the fiscal quarter ending February 29, 2016, and that these numbers were down from its original projection of 800,000. An unnamed AT&T executive stated to CNET that a large number of customers were returning the device, and that the company believed BlackBerry had priced the Priv too high. See also BlackBerry Torch, a previous line of slider smartphones produced by BlackBerry Limited References External links Android (operating system) devices Priv Mobile phones with an integrated hardware keyboard Mobile phones with 4K video recording Mobile phones introduced in 2015 Discontinued smartphones Slider phones
69832671
https://en.wikipedia.org/wiki/Monorail%20Inc.
Monorail Inc.
Monorail Inc., later the Monorail Computer Corporation, was an American computer company founded in 1995 in Marietta, Georgia, by Doug Johns. The company's Monorail PC, which was an all-in-one computer with a flat-panel LCD, prefigured the similarly designed iMac G4 by over five years. According to Bloomberg Businessweek, the company "helped spawn a revolution in personal computers ... [selling] machines for as little as $999 in an effort to woo new price-sensitive users", after which larger manufacturers like Compaq and Packard Bell NEC followed suit. History Foundation (1995–1996) Founder Doug Johns (born 1948) was previously the president of Compaq's personal computer division. He left in 1993, citing long hours, and took a sabbatical while holding his stock options in Compaq. Johns watched the computer industry from a distance and observed that the companies earning the most in sales were intelligent with their packaging and had their logistics in order. Johns felt that Compaq was the opposite in the latter regard: "It seemed like we always had monitors in Singapore when we needed them in Europe or too many computers in Germany when we needed them in Italy". Johns felt that he could compete with Dell and Gateway 2000 in the build-to-order market and sold $2 million worth of Compaq shares to put into the formation of Monorail in 1995. Johns recruited colleagues from Compaq and other technology companies to manage the company and design its products. The California-based NameLab was hired to conjure the Monorail name. As Johns felt that packaging optimization was crucial, he and his colleagues built the prototype for an all-in-one computer that fit within a volume of , per FedEx's specification for the optimal box size for a package weighing between . As a box of that size and weight capacity could not remotely fit a cathode-ray tube monitor—a display technology common for the time—Johns fit the computer with a flat panel liquid-crystal display borrowed from a laptop. The result was the Monorail PC, a system roughly 80 percent the size of a desktop computer but twice as heavy as a laptop. Phelps Technologies and FedEx were hired for manufacturing and distribution respectively. Meanwhile, James Clarke, a former SunTrust Banks executive whom Johns hired as chief financial officer, negotiated a deal with his former employer to handle accounts receivable. Shortly before the computer's release, CompUSA agreed to sell the Monorail PC in fall 1996; merchandising chief Larry Mondry was attracted by the computer's form factor, while Monorail promised to minimize inventory. Phelps, a metal press based in Kansas City, entered the computer manufacturing business in the early 1990s assembling Compaq's Prolinea systems, later earning contract work from Dell, Xerox, AT&T, and IBM. The Monorail PC was the first computer Phelps manufactured from the ground up. Phelps in turn hired Complex Plastics of Boulder, Colorado, for injection molded PC/ABS plastic parts and Colorado Electronic Hardware for electronic components. Monorail, meanwhile, sourced the LCD, a 10.4-in passive-matrix panel, from Sharp. Phelps assembled the Monorail at their 200,000 sq ft factory in Kansas City, where they employed between 500 and 600 people. As production ramped up, Phelps added a production line for the Monorail PC in their 60,000 sq ft factory in Houston. Phelps' combined workforce on the Monorail peaked at 1,200 in 1996. In contrast, Monorail's Marietta headquarters were 40 people strong by October that year. 
Market introduction (1996–1997) The Monorail PC was released in late November 1996 at a suggested retail price of $999. A successor model upgrading the Monorail PC's processor from a 75 MHz AMD K5 to a 200 MHz Intel Pentium was released in December 1996. As the company's profit margin off the computer was slim, any production stoppage or mishap put the company at risk of insolvency. Purchasers who peeled or punctured a label in order to disassemble the Monorail PC voided their one-year warranty, only the first 90 days of which were free of labor charges. Doubling the stock 8 MB of memory and adding 512 KB of video memory cost $199 plus shipping. Stephen Manes of the New York Times opined that these surcharges led to the user "paying a huge premium over conventional upgrades". The computer sold well at CompUSA, becoming one of the top five best-selling computers within two months. Some of CompUSA's 122 locations hiked the price up by as much as $100, which CompUSA's marketing VP Andrew Watson said was a consequence of the company's flexible pricing agreement. Monorail later gained Circuit City, MicroWarehouse, and Egghead Computer as retail partners and Ingram Micro, SED International, and Tech Data as enterprise distributors. While the Monorail PC initially seemed to be performing well, stiff competition from computer companies introducing sub-$1000 desktops in 1997 led sales to flounder. According to Roy Edwards of Colorado Electronic, "the same time the Monorail product hit the market, the market fell apart and there seemed to be increased competition". Production was marred by a number of setbacks, the first occurring when the die used to stamp out the steel case of the computer broke in early 1997, leading to a week-long shortage until a subcontractor successfully built a new die in five days. More urgently, Phelps found itself millions of dollars in debt and stalled manufacturing of the Monorail PC amid layoffs, leading to an order backlog of 90 days toward the end of 1997. In January 1998, Monorail fired Phelps and turned to Synnex. Phelps filed for Chapter 11 bankruptcy the following month. Reorganization and decline (1997–2003) In late December 1997, Monorail introduced three standard mini-towers with processors ranging from an AMD K6 clocked at 166 MHz to a Pentium MMX at 200 MHz. Monorail switched manufacturers again in March 1998, hiring SCI Systems Inc. of Huntsville, Alabama, to make its computer towers. The all-in-one Monorail PC was discontinued around the same time. In the fourth quarter of that year, the company began selling network-enhanced computers for businesses under the Monorail NPC brand. These computers were manufactured by SCI. They received lukewarm praise from technology journalists but sold fairly well, with 110 million units of NPC products sold in 1999 alone. Johns left Monorail in 2002 to become senior VP of worldwide operations at Internet Security Systems. In 2009, he was named chief executive officer of Nivis, a developer and integrator of wireless sensor networks based in Atlanta. The website for Monorail Inc. went dark sometime in 2003. Citations References External links 1995 establishments in Georgia (U.S. state) 2003 disestablishments in Georgia (U.S. state) American companies established in 1995 American companies disestablished in 2003 Computer companies established in 1995 Computer companies disestablished in 2003 Defunct computer companies of the United States Defunct computer hardware companies
491379
https://en.wikipedia.org/wiki/American%20Computer%20Science%20League
American Computer Science League
ACSL, or the American Computer Science League, is an international computer science competition among more than 300 schools. Originally founded in 1978 as the Rhode Island Computer Science League, it then became the New England Computer Science League. With countrywide and worldwide participants, it became the American Computer Science League. It has been in continuous existence since 1978. Each yearly competition consists of four contests. All students at each school may compete but the team score is the sum of the best 3 or 5 top scores. Each contest consists of two parts: a written section (called "shorts") and a programming section. Written topics tested include "what does this program do?", digital electronics, Boolean algebra, computer numbering systems, recursive functions, data structures (primarily dealing with heaps, binary search trees, stacks, and queues), lisp programming, regular expressions and Finite State Automata, bit string flicking, graph theory, assembly programming and prefix/postfix/infix notation. Divisions There are five divisions in ACSL: Elementary, Classroom, Junior, Intermediate, and Senior. The Elementary Division is a non-programming competition for grades 3 - 6. It tests one topic per contest. The Classroom Division is a non-programming competition for all grades and consists of a 10 question test on 4 topics each contest. Junior Division is recommended for middle school students (no students above the ninth grade may compete in it). Intermediate and Senior Divisions are for secondary school students, Intermediate being easier and Senior being more difficult. At the All-Star Contest, the Junior teams consist of 5 members each while the Senior and Intermediate teams can consist of 3 or 5 members. Each team competes against other same-sized teams in its division. Regular season The Regular Season, in which individual students compete to get their school team qualified for the All-Star Contest, consists of four rounds. These rounds consist of a programming part and a written part. In the programming part, students have 72 hours to complete a program in any computer language to perform the given task. In the written part, students have a total of 30 minutes to answer 5 questions based on given topics. Students then receive a score of up to 10 points (5 for written and 5 for programming). For the Classroom Division, students receive 45 minutes to solve 10 written problems. For the Elementary Division, students have 30 minutes to solve 5 written problems. Prizes are awarded to top scoring teams and students based upon cumulative scores after the fourth contest. All-Star Contest The All-Star Contest is held at a different location every year. Teams are given 4 hours to earn up to 60 (40 for Junior Division) points by successfully completing various programs. Individuals are then given 1 hour (45 minutes for Junior Division) to take a 12 (8 for Junior Division) question multiple choice test based on the categories of the written questions in the Regular Season rounds. The scores of the programming and the team's individual scores are added together to determine the winners. Prizes are given to teams with the highest scores and individuals based on their performance on the multiple choice test. See also List of computer science awards References External links ACSL web site including past winners Computer science competitions
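To give a flavour of one of the written topics listed above, the short Python sketch below evaluates an expression given in postfix notation, the kind of conversion and evaluation exercise that falls under the prefix/postfix/infix category. It is illustrative only and not official ACSL material; the operator set and the whitespace-separated input format are assumptions made for this example.

def eval_postfix(tokens):
    # Evaluate a whitespace-tokenised postfix (reverse Polish) expression.
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # the right operand is popped first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# The infix expression 3 + 4 * 2 corresponds to the postfix form 3 4 2 * +:
print(eval_postfix("3 4 2 * +".split()))   # prints 11.0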
17211581
https://en.wikipedia.org/wiki/FlagShip
FlagShip
FlagShip is both an object-oriented and procedural programming language, based on the xBase language dialect and conventions. FlagShip is available for, and is cross-compatible across, different computer platforms, such as Linux, Unix and Microsoft Windows. As a true compiler, it translates xBase source code to native 32-bit or 64-bit executables, using the same source code and databases. Recent history The first FlagShip version was introduced by multisoft Datentechnik GmbH in 1992 to port Clipper, dBASE III+, FoxBase and FoxPro applications to different operating systems, e.g. OpenServer, AIX, Solaris, HP-UX, SINIX and many other Unix systems. Linux ports also became available in 1995. In 2002, Visual FlagShip (abbreviated as VFS) was announced for Linux, and in 2004 additionally for 32/64-bit Windows operating systems. The current VFS product line covers all common 32-bit and 64-bit operating systems (Windows NT, 2000, XP, Vista, 7, Server 2008). Programming FlagShip is a programming and development tool designed mainly for professional software developers. Visual FlagShip makes a GUI-based application from textual xBase code, which can then be modified using object-oriented programming or procedural programming. The same source and the same application support GUI, textual and stream mode (e.g. for Web or background use). The i/o mode is either detected automatically from the current environment (heterogeneous application), or can be specified at compile time or at run-time using a command-line switch. Example For example, these few statements, stored in the text file address.prg
USE address ALIAS adr SHARED NEW
SET COLOR TO "W+/B,GR+/R,W/B,W/B,GR+/BG"
SET GUICOLOR OFF
cls
@ 1, 0 SAY "Id No. " GET adr->IdNum PICT "999999" VALID IdNum > 0
@ 3, 0 SAY "Company" GET adr->Company
@ 3,35 SAY "Branch" GET adr->Branch WHEN !empty(adr->Company)
@ 4, 0 SAY "Name " GET adr->Name VALID !empty(adr->Name)
@ 4,35 SAY "First " GET adr->First
@ 6, 0 SAY "Country" GET adr->Country PICTURE "!" + repli("x",24)
@ 8, 0 SAY "Zip " GET adr->Zip PICT "@!" VALID !empty(adr->Zip)
@ 9, 0 SAY "City " GET adr->City
@ 10, 0 SAY "Street " GET adr->Street
@ 6,35,11.4,47 GET adr->Type RADIOGROUP {"Male","Female","Company","None"}
@ 7,50 GET adr->Interest CHECKBOX CAPTION "Interested party"
@ 8,50 GET adr->Customer CHECKBOX CAPTION "Customer"
@ 9,50 GET adr->Reseller CHECKBOX CAPTION "Reseller"
@ 10,50 GET adr->Distrib CHECKBOX CAPTION "Distributor"
READ
are compiled by:
>FlagShip address.prg -o address
which creates an executable (e.g. address.exe on Windows). See also xBase Clipper (programming language) Harbour (software) External links FlagShip (multisoft) home page VFS screenshots and specs Fourth-generation programming languages XBase programming language family Query languages
603183
https://en.wikipedia.org/wiki/Psi%20%28instant%20messaging%20client%29
Psi (instant messaging client)
Psi is a free instant messaging client for the XMPP protocol (including such services as Google Talk) which uses the Qt toolkit. It runs on Linux (and other Unix-like operating systems), Windows, macOS and OS/2 (including eComStation and ArcaOS). The program's user interface is highly customizable; for example, there are "multi windows" and "all in one" modes, as well as support for different iconsets and themes. Ready-to-install deb and RPM packages are available for many Linux distributions. Successful ports of Psi were reported for the Haiku, FreeBSD and Sun Solaris operating systems. Due to Psi's free/open-source nature, several forks have appeared, which occasionally contain features that may appear in future official Psi versions. Project name 'Psi' is the twenty-third letter of the Greek alphabet (Ψ), which is used as the software's logo. Mission statement The goal of the Psi project is to create a powerful, yet easy-to-use XMPP client that tries to strictly adhere to the XMPP drafts and XMPP XEPs. This means that in most cases, Psi will not implement a feature unless there is an accepted standard for it in the XMPP community. Doing so ensures that Psi will be compatible, stable, and predictable. History The application was created by Justin Karneges and began as a side project. At various points during its existence, Karneges was paid to develop the codebase, and Psi flourished during those periods. Typically, however, the release cycle of Psi is relatively slow, but the client has always been seen by its fans as a very stable and powerful instant messaging client. Karneges left the project in late 2004 to pursue other endeavors. In 2002, Michail Pishchagin started hacking on Qt code which later became the libpsi library. Pishchagin joined the team in March 2003 and is responsible for many large chunks of the Psi code. In November 2004, maintenance was taken over by Kevin Smith, a long-time contributor to the project. In 2009, Smith handed maintenance back to Karneges, who also maintains Iris, the Qt/C++ XMPP library upon which Psi is based. Remko Tronçon started writing his custom patches for Psi in 2003, and became an official developer in May 2005. In 2009, a Psi fork named Psi+ was started. The project's purposes were the implementation of new features and the writing of patches and plugins for transfer upstream. As of 2017, all active Psi+ developers have become official Psi developers, and Psi+ is now a development branch of Psi with a rolling-release development model. Users who want to receive new features and bug fixes quickly may use Psi+ on a daily basis. Users who do not care about new trends and prefer stability may choose Psi, as it uses a classical development model and its releases are quite rare. Features Because XMPP allows gateways to other services, which many servers support, it can also connect to Yahoo!, AIM, Gadu-Gadu, ICQ and Microsoft networks. Other services available using gateway servers include RSS and Atom news feeds, sending SMS messages to cellular networks, and weather reports. As of 2012, Psi has language packs for 20 languages, with more being created. Emoticon packs are supported using the jisp format. Many jisp emoticon packs are available, including ones from AIM, iChat, and Trillian. Psi supports file transfers with other XMPP clients, and it is possible to send files to or receive files from other IM networks, if the user's servers support this. Psi supports Contact Is Typing Notification (which works with Yahoo!, MSN, and AIM contacts). 
Version 0.10, released in January 2006, brought an automatically resizing contact list and composing window in chat dialogs, tabbed chats, support for the Growl notification system on Mac OS X, window transparency and many other changes. Audio and video calls via Jingle are implemented in Psi through the officially supported PsiMedia plugin. Encryption Security is also a major consideration, and Psi provides it for both client-to-server connections (TLS) and client-to-client communication (OpenPGP, OTR, and OMEMO) via appropriate plugins. Encryption of messages in group chats is supported only via the OMEMO plugin. See also Comparison of instant messaging clients References External links Official website Interview with Justin Karneges 2001 software Free instant messaging clients Free XMPP clients Instant messaging clients for Linux MacOS instant messaging clients Windows instant messaging clients Portable software Software that uses Qt
21466479
https://en.wikipedia.org/wiki/University%20of%20Ioannina
University of Ioannina
The University of Ioannina (UoI; Greek: Πανεπιστήμιο Ιωαννίνων, Panepistimio Ioanninon) is a university located 5 km southwest of Ioannina, Greece. The university was founded in 1964, as a charter of the Aristotle University of Thessaloniki and became an independent university in 1970. As of 2017, there is a student population of 25,000 enrolled at the university (21,900 at the undergraduate level and 3,200 at the postgraduate level) and 580 faculty members, while teaching is further supplemented by 171 Teaching Fellows and 132 Technical Laboratory staff. The university Administrative Services are staffed with 420 employees. University of Ioannina is one of the leading academic institutions in Greece. History The efforts for the establishment of a University in Ioannina and in the wider area were apparent in the last years before the revolution. At that time, prominent Epirote intellectuals had attempted to establish University Schools in the Epirus region. Since the 1950s there was a great need for the establishment of a university in the area that would validate the region's cultural significance and history. In 1962 a committee was established in Athens under the name "Central Committee for the establishment of a University in Ioannina" that fought for the particular goal. In a proclamation that they published in July 1962, they called for every citizen in the region to also fight for the cause. The change of government, which took place in 1964, passed a new law that changed the organization and administration of the faculties of the University of Ioannina. Within the framework of the educational reform, on May 8, 1964, the establishment of a Department of Philosophy in Ioannina was announced by the daily press, as a branch of the Aristotle University of Thessaloniki. The Department was founded and started its operation in the academic year 1964-65 with two hundred students. Campus The campus is located 6 km from the centre of Ioannina and is one of the largest university campuses in Greece. It is linked to the town by Greek National Road 5 and can be reached from the city either by public transportation or by car. It covers an area of almost with many green open spaces which surround the four main building complexes. The buildings cover an area of 170,000 m2, consisting of lecture halls, offices, laboratories, libraries, amphitheaters, etc. Large classes are held in auditoriums, while scientific meetings and exhibitions are held in the Conference Centre located in the Medical Sciences complex. Two buildings accommodate the Student Residence Halls, both being close to the Student Union building; a multi-purpose building that houses the student restaurant and a large Hall of Ceremonies together with "Phegos"; a restaurant where the academic community and visitors can eat or celebrate special occasions, such as the graduation day. One of the most attractive places on the campus is the old Monastery of Dourouti, an 18th-century building, which is being renovated to serve as a guest house for visitors to the university. Schools and departments As of 2017, the University of Ioannina consists of seven schools and fifteen departments. Academic excellence and rankings The University is ranked 501st-600th in The Times Higher Education (THE) annual list. According to the 2017 Leiden rankings the University of Ioannina is ranked #2 overall in Greece. 
According to the 2014 Leiden ranking of Greek Universities, the University of Ioannina is ranked #1 in Medicine and Life Sciences, #2 in Physics, #2 in Mathematics, #2 in Computer Science, #3 in Earth Sciences and #2 overall. According to 2010 rankings published in Springer's journal Scientometrics, the Physics department at the University of Ioannina is ranked #2 in Greece, the Materials Science department #3, and the Chemistry department #4. State academic evaluation In 2015 the external evaluation committee gave the University of Ioannina a positive evaluation. An external evaluation of all academic departments in Greek universities was conducted by the Hellenic Quality Assurance and Accreditation Agency (HQAA). The following Ioannina departments have been evaluated: Department of Physics (2010) Department of Mathematics (2011) Department of Chemistry (2011) Department of Computer Science and Engineering (2011) Department of Materials Science and Engineering (2011) Department of Biological Applications and Technologies (2011) Department of Medicine (2013) Department of Primary School Education (2013) Department of Pre-School Education (2013) Department of Philosophy, Education & Archaeology (2013) Department of Philology (2014) Department of History and Archaeology (2014) Department of Economics (2014) Department of Plastic Arts and Arts Sciences (2014) Research institutes Research indicators refer to any work published by the University of Ioannina across all scientific disciplines and to research projects launched through the UoI Special Research Fund. Ioannina Biomedical Research Institute The Biomedical Research Institute (BRI) was founded in 1998. It operated as an autonomous institute until 2001, under the name Ioannina Biomedical Research Institute (IBRI). In September 2001, the institute merged with the Foundation for Research and Technology-Hellas (FORTH) and was renamed Foundation for Research and Technology-Hellas/Biomedical Research Institute. BRI has three research directions: Molecular Medicine, Biomedical Technology, and Molecular Epidemiology. To fulfill its research aims, the institute has established collaborations with the Medical School of the University of Ioannina, as well as with other departments of the university, the University Hospital, and the other institutes of FORTH. Institute of Transportation & Telecommunications The University of Ioannina has allied with the regional local authorities and industrial partners to establish an Institute of Transportation & Telecommunication in Igoumenitsa. The institute's expertise in transportation and telecommunication will be concentrated on research, consultancy, and regional development. University Library and Information Centre The University of Ioannina has the largest single library in Greece in terms of effective surface area (14,500 sq. m. divided into six storeys), which has been named the University of Ioannina Library and Information Centre. The equipment of the library includes 31 workstations, 504 reading stations, a 120-seat auditorium, an Art Gallery Exhibition Room, a Seminar Room with a seating capacity of 20 people, and 12 carrels. Users can check availability on the online library catalogue (OPAC) and use photocopying and reservation services. The professionally qualified librarians run training sessions on information handling techniques and assist the users with their inquiries. 
Other library services include book loans from the library's own collection or from other libraries in Greece (Interlibrary Loan), reading rooms, access to international information sources via the Internet, access to digitised material of the university, access to electronic catalogue databases, and access to Greek and foreign library catalogues. Library users include all members of the academic community of the university and the public. The library covers the fields of: Medicine, Chemistry, Physics, Biological Applications and Technologies, Materials Science and Engineering, Economics, Computer Science, Plastic Arts and Art Sciences, Primary and Pre-School Education, Philosophy, Education, and Psychology. The Student Collection has multiple copies of books related to the course lectures delivered in all departments. The library collection includes scientific books, textbooks, journals, newspapers, digital and audiovisual material, encyclopedias, dictionaries and indexes, books that can be borrowed by students, a collection in Braille together with equipment that can be used by visually impaired people, the scientific journals and databases of the Hellenic Academic Libraries Link (HEALLINK), which can be accessed via the Internet, and digitised documents (journal articles, books, etc.) produced by the librarians. Student life The university today has more than 25,000 students. Among them, there are approximately 21,900 undergraduate students. A number of organised postgraduate study programmes are on offer that combine taught and research elements at both Master's and Doctoral level. Approximately 1,500 students are studying full-time for a master's degree, while more than 1,700 students are pursuing their studies at Doctoral level. Catering The Student Refectory provides meals to all undergraduate and postgraduate students on a full-board (breakfast, lunch, dinner) daily basis. The refectory is located on campus and covers an area of 4,500 sq. m. Accommodation The Halls house almost 650 students on two sites, while a number of rooms have been adapted for disabled students. All accommodation is mixed, with a number of standard rooms (with shared bathroom facilities) and en suite rooms. The majority of rooms are single study-bedrooms, although there are some shared rooms (two or three people). Certain residences are reserved mainly for exchange students (i.e. Erasmus). Erasmus programme The University of Ioannina has exchange agreements with universities in mainland Europe through the Erasmus+ programme of the European Commission. The University of Ioannina welcomes foreign students who wish to expand their academic horizons by spending a semester or a year studying in Ioannina. 
See also List of universities in Greece Aristotle University of Thessaloniki Balkan Universities Network References External links University of Ioannina University of Ioannina Internal Quality Assurance Unit University Library & Information Centre University of Ioannina DASTA Office (Career Office & Innovation Unit) University of Ioannina - Science and Technology Park University of Ioannina - "Odyssey" Innovation competition 2012-2013 University of Ioannina - Science and Technology Park hosted companies Hellenic Quality Assurance and Accreditation Agency (HQAA) Department of Physics, HQAA Final Report, 2010 Department of Mathematics, HQAA Final Report, 2011 Department of Chemistry, HQAA Final Report, 2011 Department of Computer Science and Engineering, HQAA Final Report, 2011 Department of Biological Applications and Technologies, HQAA Final Report, 2011 Department of Materials Science and Engineering, HQAA Final Report, 2011 Department of Medicine, HQAA Final Report, 2013 Department of Primary School Education, HQAA Final Report, 2013 Department of Pre-School Education, HQAA Final Report, 2013 Department of Philosophy, Education and Archaeology, HQAA Final Report, 2013 Department of Philology, HQAA Final Report, 2014 Department of History and Archaeology, HQAA Final Report, 2014 Department of Economics, HQAA Final Report, 2014 Department of Plastic Arts and Arts Sciences, HQAA Final Report, 2014 Greek Research & Technology Network (GRNET) okeanos (GRNET's cloud service) synnefo - Open Source Cloud Software (GRNET) Universities in Greece Educational institutions established in 1964 Public universities Education in Ioannina Buildings and structures in Ioannina 1964 establishments in Greece
7666616
https://en.wikipedia.org/wiki/List%20of%20computer%20hardware%20manufacturers
List of computer hardware manufacturers
Current notable computer hardware manufacturers: Cases List of computer case manufacturers: Aigo AMAX Information Technologies Antec AOpen ASRock Asus be quiet! CaseLabs (defunct) Chassis Plans Cooler Master Corsair Dell Deepcool DFI ECS EVGA Corporation Foxconn Fractal Design Gigabyte Technology IBall Lian Li MSI MiTAC NZXT Phanteks Razer Rosewill Seasonic Shuttle SilverStone Technology Thermaltake XFX Zalman Rack-mount computer cases AMAX Information Technologies Antec AOpen Laptop computer cases Clevo MSI Motherboards Top motherboard manufacturers: ASRock Asus Biostar EVGA Corporation Gigabyte Technology MSI (Micro-Star International) Intel List of motherboard manufacturers: Acer ACube Systems Albatron AMAX Information Technologies AOpen Chassis Plans DFI (industrial motherboards), stopped producing LanParty motherboards in 2009 ECS (Elitegroup Computer Systems) EPoX (partially defunct) First International Computer Foxconn Fujitsu Gumstix Intel (NUC and server motherboards) Lanner Inc (industrial motherboards) Leadtek Lite-On NZXT Pegatron PNY Technologies Powercolor Sapphire Technology Shuttle Inc. Simmtronics Supermicro Tyan VIA Technologies Vigor Gaming ZOTAC Defunct: BFG Technologies Chaintech Soyo Group Inc Universal Abit (formerly ABIT) Chipsets for motherboards AMD Redpine Signals Intel Nvidia ServerWorks Silicon Integrated Systems VIA Technologies Central processing units (CPUs) Note: most of these companies only make designs, and do not manufacture their own designs. Top x86 CPU manufacturers: AMD Intel List of CPU manufacturers (most of the companies sell ARM-based CPUs, assumed if nothing else stated): Arm Ltd. (sells designs only) Apple Inc. (ARM-based CPUs) Broadcom Inc. (ARM-based, e.g. for Raspberry Pi) Fujitsu (its ARM-based CPU used in top supercomputer, still also sells its SPARC-based servers) Hitachi (its own designs and ARM) Hygon (x86-based) HiSilicon (acquired by Huawei), stopped making its ARM-based design IBM (now only designs two architectures) Ingenic Semiconductor (MIPS-based) Marvell (its ThunderX3 ARM-based) MCST (its own designs and SPARC) MediaTek (ARM chips, and MIPS chips) Nvidia (sells ARM-based, and bought the ARM company) Qualcomm (ARM-based) Rockchip (ARM-based) Amlogic (ARM-based) Allwinner (ARM-based) Samsung (ARM-based) SiFive (RISC-V-based, e.g. HiFive Unleashed) Texas Instruments (its own designs and ARM) Via (formerly Centaur Technology division), its own x86-based design Wave Computing (previously MIPS Technologies), licenses MIPS CPU design Zhaoxin (its own x86 design based on Via's) Acquired or defunct: Cyrix (its own x86-based) Freescale (acquired by NXP Semiconductors), PowerPC-based Motorola (its own designs) NexGen (acquired by AMD) Oracle (previously Sun Microsystems), both made SPARC CPUs, but Oracle laid off that department Rise Technology SigmaTel (acquired by Freescale) Tilera (its own design) Transmeta (its own x86-based) WinChip Hard disk drives (HDDs) Internal List of current hard disk drive manufacturers: Seagate Technology Toshiba Western Digital External Note: the HDDs internal to these devices are manufactured only by the internal HDD manufacturers listed above. 
List of external hard disk drive manufacturers: ADATA Buffalo Technology Freecom G-Technology brand of Western Digital Hyundai IoSafe-Hard drive safes LaCie (brand of Seagate) LG Maxtor (brand of Seagate) Promise Technology Samsung Seagate Technology Silicon Power Sony Toshiba Transcend Information TrekStor Verbatim Corporation Western Digital Drive controller and RAID cards 3ware Adaptec Asus Areca Technology ATTO Technology Dell Hewlett Packard Enterprise Intel LG LSI PNY Promise Technology StarTech.com Solid-state drives (SSDs) Many companies manufacture SSDs but only six companies actually manufacture the Nand Flash devices that are the storage element in most SSDs. Optical disc drives (ODDs) List of optical disc drive manufacturers: Asus Hitachi-LG Data Storage (HLDS) LG Electronics Panasonic Philips & Lite-on Digital Solutions Corporation Optiarc Pioneer Sony Corporation TEAC Toshiba Samsung Storage Technology Fans Aigo Antec Arctic be quiet! Corsair Cooler Master Deepcool Delta Electronics Ebm-papst Inventec Minebea (NMB) Nidec Noctua Scythe Thermaltake Zalman Fan controllers Asus (bundled with top of the range ROG motherboards) Corsair GELID Solutions NZXT Scythe Thermaltake Zalman Cooler Master Computer cooling systems List of computer cooling system manufacturers: Aigo AMD Antec Arctic Asetek Asus be quiet! Cooler Master Corsair Deepcool ebm-papst Fractal Design Foxconn GELID Solutions Gigabyte Technology Hama Photo Intel Nidec Noctua NZXT Saint-Gobain (tubing system) SilverStone Technology Thermalright Thermaltake Vigor Gaming Zalman Non-refillable liquid cooling (AiO) List of non-refillable liquid cooling manufacturers: Cooler Master "Seidon Series" Corsair "H-Series" Deepcool "CAPTAIN Series" "MAELSTROM Series" EKWB EVGA Corporation Fractal Design "Kelvin Series" Lian Li NZXT "Kraken Series" SilverStone Technology "Tundra Series" Thermaltake "Water2.0 Series" Zalman "SKADI series" "Reserator 3 Max" "LQ series" "Reserator 3 Max Dual" Zotac (stopped producing water coolers) Refillable liquid cooling kits List of refillable liquid cooling kits manufacturers: Thermaltake Water block List of water block manufacturers: Corsair EKWB EVGA Corporation Thermaltake Zalman Video-card cooling List of graphics card cooling manufacturers: Arctic Cooler Master Corsair Deepcool "v series" EVGA Corporation GELID Solutions Zotac Computer monitors List of companies that are actively manufacturing and selling computer monitors: Alienware Apple Acer AOC Monitors Asus AOpen BenQ Chassis Plans Dell Eizo Fujitsu Hewlett-Packard Iiyama (company) Gateway HannStar Lenovo LG MSI NEC Philips Planar Systems Samsung Sharp Sony Tatung Company ViewSonic Video cards (graphics cards) List of video card manufacturers: Asrock Asus AMD Biostar Chaintech Club 3D Diamond Multimedia ECS ELSA Technology EVGA Corporation Foxconn Gainward Gigabyte Technology HIS Hercules Computer Technology, Inc. 
Leadtek Matrox Nvidia MSI Palit PNY Point of View PowerColor S3 Graphics Sapphire Technology SPARKLE XFX Zotac BFG (defunct) EPoX (defunct) Oak Technology (defunct) 3dfx Interactive (defunct) Graphics processing units (GPUs) Advanced Micro Devices ARM Holdings (Mali GPUs, first designed by acquired Falanx) ATI Technologies (Acquired by Advanced Micro Devices) Broadcom Limited Imagination Technologies (PowerVR) Intel Matrox Nvidia Qualcomm Via (S3 Graphics division) Vivante Corporation ZiiLABS Tseng Labs(acquired by ATI) XGI 3dfx Interactive (Bankrupt) Keyboards List of keyboard manufacturers: A4Tech Alps Amkette Arctic Behavior Tech Computer (BTC) Chassis Plans Cherry Chicony Electronics Corsair Cooler Master CTI Electronics Corporation Das Keyboard Fujitsu–Siemens Gigabyte Technology G.Skill Hama Photo HyperX IBall intex Kensington Computer Products Group Key Tronic Lite-On Logitech Microsoft Razer Saitek Samsung SteelSeries Targus Terabyte Thermaltake Trust TypeMatrix Umax Unicomp Happy Hacking Keyboard Drop (company) Mouse List of mouse manufacturers: A4Tech Acer Alienware Arctic Asus Behavior Tech Computer (BTC) Belkin Cooler Master Corsair Creative Technology CTI Electronics Corporation Fellowes, Inc. Flextronics General Electric Gigabyte Technology Hama Photo IBall intex TVS Electronics Kensington Computer Products Group Key Tronic Labtec Lite-On Logitech Mad Catz Microsoft Mitsumi OCZ Technology Razer Saitek Samsung SilverStone Technology Sony SteelSeries Targus Terabyte Toshiba Trust Umax Verbatim Corporation Zalman Mouse pads List of mouse pad manufacturers: A4Tech Acer Alienware Corsair Logitech Razer SteelSeries Targus Trust Verbatim Corporation Joysticks List of Joystick manufacturers: Saitek Logitech CTI Electronics Corporation Microsoft Thrustmaster Sony Speakers List of computer speaker manufacturers: Altec Lansing AOpen (stopped making speakers) Auzentech Behringer Bose Corporation Cemex Cerwin-Vega Corsair Creative Technology Edifier General Electric Gigabyte Technology Hama Photo Harman International Industries (acquisition) (division: Harman Kardon, JBL) Hercules IBall Intex Klipsch Logic Logitech M-Audio MartinLogan Philips Plantronics (acquisitions) Razer Shuttle Inc. Sonodyne Sony SteelSeries Teufel Trust Yamaha Modems List of modem manufacturers: 3Com Agere Systems Alcatel Aopen Arris Group Asus AVM GmbH Belkin International, Inc. Coolpad D-Link Cisco Huawei JCG Linksys Microcom Motorola Netgear Netopia Telebit TP-Link USRobotics Zhone Technologies Zoom Telephonics ZyXEL Network interface cards (NICs) List of network card manufacturers: 3Com Asus Atheros Belkin Chelsio Communications Cisco CNet D-Link Gigabyte Technology Hewlett Packard Enterprise IBM Intel JCG Linksys Ralink Mellanox Netgear Raza Microelectronics Solarflare StarTech.com TP-Link USRobotics Zoom Chipsets for network cards ASIX Atheros Aquantia Broadcom Emulex Fujitsu Hewlett Packard Enterprise Intel LSI Corporation Nvidia Marvell Technology Group Mellanox Proxim Qlogic Qualcomm Ralink Realtek Solarflare VIA Technologies Winbond There are a number of other companies (AMD, Microchip, Altera, etc) making specialized chipsets as part of other ICs, and they are not often found in PC hardware (laptop, desktop or server). There are also a number of now defunct companies (like 3com, DEC, SGI) that produced network related chipsets for us in general computers. Power supply units (PSUs) List of power supply unit (PSU) manufacturers: ADATA Antec Arctic be quiet! 
Cooler Master Corsair Deepcool Delta Electronics Dynex EVGA Corporation Fractal Design Foxconn FSP Group Gigabyte Technology Lian-Li Lite-On Maplin NZXT OCZ Technology PC Power and Cooling Seasonic Seventeam SilverStone StarTech.com Super Flower Thermaltake Trust XFX Xilence Zalman Random-access memory (RAM) modules Note that the actual memory chips are manufactured by a small number of DRAM manufacturers. List of memory module manufacturers: ADATA Apacer Asus Axiom Buffalo Technology Chaintech Corsair Dataram Fujitsu G.Skill GeIL HyperX IBM Infineon Kingston Technology Lenovo Crucial Mushkin Netlist PNY Rambus Ramtron International Rendition Renesas Technology Samsung Semiconductor Sandisk Sea Sonic SK Hynix Silicon Power Super Talent Toshiba Transcend Virtium Wilk Elektronik Winbond Wintec Industries Inc. Random-access memory (RAM) chips List of current DRAM manufacturers: Micron Technology Samsung Semiconductor SK hynix ChangXin Memory Technologies Nanya Technology Powerchip Semiconductor (as a foundry) Winbond (specialty and mobile DRAM) List of former or defunct DRAM manufacturers: NEC, Hitachi, later Elpida Memory (went bankrupt, bought by Micron) Mitsubishi, later Elpida Siemens, spun off Infineon Technologies, spun off Qimonda (went bankrupt, IP bought by Micron and others) Inotera, bought by Micron Intel (Intel 1103) Mostek Mosel Vitelic Inc (ProMOS Technologies spun off from Mosel Vitelic) Toshiba (DRAM business sold to Micron) Vanguard International Semiconductor Corporation List of fabless DRAM companies: Rambus In addition, other semiconductor manufacturers include SRAM or eDRAM embedded in larger chips. Headphones List of headphone manufacturers: AKG Acoustics Altec Lansing Amkette Andrea Electronics Asus Audio-Technica Beats Electronics Beyerdynamic Biostar Bose Corporation Bush (brand) Corsair Creative Technology Edifier Fostex Grado Labs Hercules IHome JBL JLab Audio JVC (brand of JVCKenwood) Klipsch Audio Technologies Koss Corporation Meze Headphones Microsoft Monster Cable Panasonic Philips Plantronics Plantronics Gamecom Razer Roccat Samsung Sennheiser Shure Skullcandy SMS Audio Sonodyne Sony Stax Earspeakers SteelSeries Thermaltake Technics (brand) Thinksound Thrustmaster Turtle Beach Systems Ultrasone V-Moda Yamaha Image scanners List of image scanner manufacturers: Brother Canon Fujitsu Kodak Lexmark Microtek Mustek Systems Panasonic Plustek Ricoh Seiko Epson Umax Visioneer XEROX Sound cards List of sound card manufacturers: Ad Lib, Inc. 
Gravis Analog Devices Asus Aureal Semiconductor Auzentech C-Media Conrad Creative Technology Diamond Multimedia Avid Audio E-MU Systems Ensoniq ESS Technology Focusrite Hercules HT Omega Korg Lexicon M-Audio MOTU PreSonus Razer Realtek Roland Speedlink StarTech.com Silicon Integrated Systems TerraTec Turtle Beach VIA Technologies Yamaha TV tuner cards List of TV tuner card manufacturers: AVerMedia Asus Diamond Multimedia EVGA Corporation EyeTV Gigabyte Technology Hauppauge Computer Works KWorld Leadtek Micro-Star International Pinnacle Systems Plextor Powercolor TerraTec Umax USB flash drives List of USB flash drive manufacturers: ADATA Aigo Apacer ATP Electronics Corsair Crucial Technology Imation IronKey Kingston Technology Konami Lexar Maxell Netac OCZ PNY Quantum Corporation Ritek Super Talent Samsung SanDisk Seagate Silicon Power Sony Strontium Technology Toshiba Transcend TrekStor Umax Verbatim VisiOn Wilk Elektronik Webcams List of webcam manufacturers: A4Tech Behavior Tech Computer Canon Creative Technology D-Link FaceVsion General Electric Hama Photo Hewlett-Packard iMicro Intel Labtec Lenovo Logitech Kodak Microsoft Philips Samsung Silicon Power Trust TP-Link See also List of computer system manufacturers List of laptop brands and manufacturers List of flash memory controller manufacturers List of solid-state drive manufacturers Market share of personal computer vendors List of computer hardware manufacturers in the Soviet Union References Computing-related lists Lists of information technology companies Lists of manufacturers
5648093
https://en.wikipedia.org/wiki/Audience%20response
Audience response
Audience response is a type of interaction associated with the use of audience response systems, to create interactivity between a presenter and their audience. Systems for co-located audiences combine wireless hardware with presentation software, and systems for remote audiences may use telephones or web polls for audiences watching through television or the Internet. Various names are used for this technology, including real-time response, the worm, dial testing, and audience response meters. In educational settings, such systems are often called "student response systems" or "personal response systems." The hand-held remote control that students use to convey their responses to questions is often called a "clicker." More recent entrants into the market do not require specialized hardware. There are commercial and open-source, cloud-based tools that allow responses from the audience using a range of personal computing devices such as cell phones, smartphones, and laptops. These systems have added new functionality as well, such as free-text responses that are aggregated into sortable word clouds, alongside the more traditional true/false and multiple-choice style questions. This type of system also mitigates some of the concerns articulated in the "Challenges" section below. Co-located audiences Hardware-Based Audience Response: The presenter uses a computer and a video projector to project a presentation for the audience to see. In the most common use of such audience response systems, presentation slides built with the audience response software display questions with several possible answers, more commonly referred to as multiple choice questions. The audience participates by selecting the answer they believe to be correct and pushing the corresponding key on their individual wireless keypad. Their answer is then sent to a base station – or receiver – that is also attached to the presenter's computer. The audience response software collects the results, and the aggregate data is graphically displayed within the presentation for all to see. Some clickers also have additional keys, allowing the presenter to ask (and audience members to answer) True/False questions or even questions calling for particular numerical answers. Depending on the presenter's requirements, the data can either be collected anonymously (e.g., in the case of voting) or it can be traced to individual participants in circumstances where tracking is required (e.g., classroom quizzes, homework, or questions that ultimately count towards a student's course grade). Incoming data may also be stored in a database that resides on the host computer, and data reports can be created after the presentation for further analysis. Software/Cloud-Based Audience Response: The presenter uses a computer to create the questions, sometimes called polls. In this case, however, the questions can be open-ended, dial-testing, or votable open-ended questions as well as multiple choice. Those questions are then downloaded into the presenter's presentation program of choice. During the presentation, the questions automatically display within the presentation program, or from a web browser, and can in some cases even be displayed only on the participant's tablet computer or smartphone. Results are instantly tabulated via the internet, and presented on screen in real time, including grading the "correct" answer if desired. 
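The tabulation step that such systems perform when they "collect the results" is conceptually simple. The following Python sketch is illustrative only: the function name, keypad identifiers, and output format are invented for the example and do not correspond to any particular vendor's software. It shows how a batch of keypad or browser responses to one multiple-choice question might be reduced to the percentage summary that is displayed back to the audience.

```python
from collections import Counter

def tally_responses(responses, options=("A", "B", "C", "D"), correct=None):
    """Aggregate one question's multiple-choice responses into percentages.

    responses: iterable of (participant_id, choice) pairs, e.g. as received
    from wireless keypads or a web form. Only each participant's most recent
    submission is counted.
    """
    last_vote = {}
    for participant_id, choice in responses:
        if choice in options:
            last_vote[participant_id] = choice

    counts = Counter(last_vote.values())
    total = sum(counts.values()) or 1          # avoid division by zero
    summary = {opt: round(100 * counts.get(opt, 0) / total, 1) for opt in options}

    # A simple text "bar chart" of the kind projected back to the audience.
    for opt in options:
        bar = "#" * int(summary[opt] / 2)
        flag = " (correct)" if opt == correct else ""
        print(f"{opt}: {summary[opt]:5.1f}% {bar}{flag}")
    return summary

# Hypothetical example: four keypads answer; one participant changes their mind.
votes = [("kp-101", "B"), ("kp-102", "C"), ("kp-103", "B"), ("kp-104", "A"), ("kp-102", "B")]
tally_responses(votes, correct="B")
```

Real systems add persistence (the database or "gradebook" described below), per-participant identification, and graphical charting, but the core of the polling step is this kind of aggregation.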
Some services offer presenters real-time moderation of open-ended responses or questions prior to displaying them on screen. Depending on the presenter's requirements, the data can be collected anonymously, or it can be traced to individual participants who have created accounts in advance of the poll. This method is commonly used in corporate training where attendance must be verified, and in classrooms, where grades must be assigned. Data from both methods can be saved and analyzed by the presenter and loaded manually or via API into learning management systems. Distributed, virtual, or hybrid Only software- or cloud-based audience response systems can accommodate distributed audiences, due to the inconvenience and cost of hardware devices. Benefits There are many reasons for the use of audience response systems (ARS). The tendency to answer based on crowd psychology is reduced because, unlike hand raising, it is difficult to see which selection others are making. The ARS also allows for faster tabulation of answers for large groups than manual methods. Additionally, many college professors use ARS to take attendance or grade answers in large lecture halls, which would be highly time-consuming without the system. Audience response offers many potential benefits to those who use it in group settings. Improve attentiveness: In a study done at four University of Wisconsin campuses (University of Wisconsin–Milwaukee, University of Wisconsin–Eau Claire, University of Wisconsin–Oshkosh, and University of Wisconsin–Whitewater), faculty members and students in courses using clickers were given a survey that assessed their attitudes about clicker use in Fall 2005 and its effect on teaching and learning. Of the 27 faculty members who responded to the survey, 94 percent either agreed or strongly agreed with the claim "Clickers increased student engagement in the classroom," with the remaining six percent responding that they were neutral about that claim. (None of the faculty respondents disagreed or strongly disagreed with the claim.) Similarly, 69 percent of the 2,684 student respondents agreed or strongly agreed with the claim "Clickers led me to become engaged in class," with only 13 percent disagreeing or strongly disagreeing with that claim. Increase knowledge retention: In the same University of Wisconsin study, 74 percent of the faculty respondents agreed or strongly agreed with the claim "Clickers have been beneficial to my students' learning," with the remaining 26 percent choosing a "neutral" response. (No faculty respondent disagreed or strongly disagreed with the claim.) Similarly, 53 percent of the student respondents agreed or strongly agreed with the claim "Clickers have been beneficial to my learning," with only 19 percent disagreeing or strongly disagreeing with that claim. In a different but related study, Catherine Crouch and Eric Mazur more directly measured the effects of Peer Instruction and "ConcepTests" on student learning and retention of information at the end of a semester. Faculty members using this "Peer Instruction" pedagogical technique present information to students, then ask the students a question that tests their understanding of a key concept. Students indicate their answer to the instructor using an audience response system, and then they discuss with their fellow students why they chose a particular answer, trying to explain to one another their underlying thinking. The instructor then asks the question again to see the new student results. 
The study authors used scanned forms and hand-raising for audience response in the initial year of the study, and then they switched to a computer-based audience response system in the following years. The "clicker" use was only part of a multi-pronged attempt to introduce peer instruction, but overall they found that "the students taught with P[eer] I[instruction] (Spring 2000, N = 155) significantly outperformed the students taught traditionally (Spring 1999, N = 178)" on two standard tests – the "Force Concept Inventory and the Mechanics Baseline Test" – and on traditional course exams as well. A Johns Hopkins study on the use of audience response systems in Continuing Medical Education (CME) for physicians and other health personnel found no significant difference in knowledge scores between ARS and non-ARS participants in a clinical round table trial involving 42 programs across the United States. Poll anonymously: Unlike a show of hands or a raising of cards with letters on them, sending responses by hand-held remotes is much more anonymous. Except perhaps for a student (or audience member) who watches what the person next to him/her submits, the other students (or audience members) cannot really see what response his/her fellow audience members are giving, and the software that summarizes the results aggregates the responses, listing what percent of respondents chose a particular answer, but not what individual respondents said. With some audience response systems, the software allows the presenter to ask questions in truly anonymous mode, so that the database (or "gradebook") does not even associate answers with individual respondents. Track individual responses: The "clickers" that audience members use to send their responses to the receiver (and thus to the presenter's computer) are often registered to a particular user, with some kind of identifying number. When a user sends his/her response, the information is stored in a database (sometimes called the "Gradebook" in academic models of audience response systems) associated with each particular number, and presenters have access to that information after the end of the interactive session. Audience response systems can often be linked to a Learning management system, which increases the ability to keep track of individual student performance in an academic setting. Display polling results immediately: The audience response system includes software running on the presenter's computer that records and tabulates the responses by audience members. Generally, once a question has ended (polling from the audience has ceased), the software displays a bar chart indicating what percent of audience members chose the various possible responses. For questions with right/wrong answers, audience members can get immediate feedback about whether they chose the correct answer, since it can be indicated on the bar chart. For survey-type polling questions, audience members can see from the summary how many other audience members chose the same response, along with how many audience members (or what percent of the audience) chose different responses. Create an interactive and fun learning environment: Clickers are in many ways novel devices, so the novelty itself can add interest to the learning environment. More important, though, is the interactive nature of audience response systems. Having been asked a particular question about a concept or opinion, students are genuinely interested in seeing the results. 
They want to learn if they answered the question correctly, and they want to see how their response compares to the responses of their fellow audience members. The increased student engagement cited in the University of Wisconsin study (see footnote 1 below) attests to the ability of audience response systems to improve the learning environment. Confirm audience understanding of key points immediately: In the University of Wisconsin study previously cited, faculty members were unanimous in their recognition of this key advantage of audience response systems. In other words, 100% of the faculty respondents either agreed or strongly agreed with the claim "Clickers allowed me to assess student knowledge on a particular concept." Students also recognized this benefit for their own self-assessment. 75% of student respondents agreed or strongly agreed with the claim, "Clickers helped me get instant feedback on what I knew and didn't know." In a published article, a member of the University of Massachusetts Amherst Physics Education Research Group (UMPERG) articulated this advantage in more detail, using the term "Classroom Communication System (CCS)" for what we have been calling an audience response system: By providing feedback to an instructor about students' background knowledge and preconceptions, CCS-based pedagogy can help the instructor design learning and experiences appropriate to student's state of knowledge and explicitly confront and resolve misconceptions. By providing frequent feedback about students' ongoing learning and confusions, it can help an instructor dynamically adjust her teaching to students' real, immediate, changing needs. Gather data for reporting and analysis: Unlike other forms of audience participation (such as a show of hands or holding up of response cards), audience response systems use software to record audience responses, and those responses are stored in a database. Database entries are linked to a particular user, based on some ID number entered into the handheld remote device or based on a registration between the user and the company that manufactures the handheld device. Answers can be analyzed over time, and the data can be used for educational research or other forms of analysis. Challenges Audience response systems may present some difficulties in both their deployment and use. The per-unit purchase price of ARS devices, typically 10 times the cost of a software-only solution The maintenance and repair of devices when owned by a central unit or organization The configuration, troubleshooting and support of the related presentation software (which may or may not work well with ARS devices) The reliability and performance of the devices under non-optimal conditions of the room in which the devices are used For hardware-only applications: a lack of open-ended questions, dial-testing capabilities, and other non-standard question formats. Applications Audience response is utilized across a broad range of industries and organizations. A few examples include: Political campaigns Political news events Corporate training Control self-assessment Delegate voting Public participation in municipal or environmental planning Market research Decision support Game shows, e.g. Ask the audience on Who Wants to be a Millionaire? 
Conferences and events Executive decision making Continuing medical education ROI measurement and assessment Sales effectiveness training Hospital patient exit surveys Audience response systems An audience response system (ARS), or personal response system (PRS), allows large groups of people to vote on a topic or answer a question. Depending on the solution chosen, each person has a device with which selections can be made, or a mobile device that they can use to respond. In a hardware solution, each remote communicates with a computer via receivers located around the room or via a single receiver connected to the presenter's computer using a USB connector. In a software solution, each device submits its responses via SMS or the internet. After a set time – or after all participants have answered – the system ends the polling for that particular question and tabulates the results. Typically, the results are instantly made available to the participants via a bar graph displayed on the projector but can also be viewed in a web browser for some systems. In situations where tracking is required, the serial number of each remote control or the student's identity number is entered beforehand in the control computer's database. In this way the answer of each individual can later be identified. In addition to the presenter's computer and projector, the typical audience response system has the following components: a base station (receiver) – for hardware-based solutions only; wireless keypads (one for each participant) – or mobile devices for software/cloud-based solutions; and audience response system software. History Since the 1960s, a number of companies have offered response systems, several of which are now defunct or have changed their business model. Circa 1966, Audience Studies Institute of Hollywood, California developed a proprietary analog ARS system for evaluating the response of a theater audience to unreleased motion pictures, television shows and commercials. This early ARS was used by ASI's clients – major motion picture and television studios and advertising agencies – to evaluate the effectiveness of whatever it was they wanted to accomplish: for example, selling more products, increasing movie ticket sales, and achieving a higher fee per commercial slot. Often, a client would show different versions to different audiences, e.g. different movie endings, to gauge their relative effectiveness. ASI would give out free tickets on the street to bring people into the theater, called the "Preview House," for particular showings where each attendee would fill out a questionnaire and then be placed in a seat with a "dial" handset outfitted with a single knob that each attendee would turn to a position to indicate his or her level of interest. The knob could be turned all the way to the left for "dull" or all the way to the right for "great." In 1976, ASI upgraded its system to be fully digital, with Yes/No buttons and, in some cases, numeric keys for entering numbers, choices and monetary amounts. Another of the industry’s very earliest systems was the Consensor. In the late 1960s and early 1970s, William W. (Bill) Simmons, an IBM executive, reflected on how unproductive most meetings were. Simmons had become essentially a nonacademic futurist in building up IBM's long-range planning operations. He was one of the pioneers of applied futures studies in the private sector, that is, futures studies applied to corporate planning. Through this work he had met Theodore J. 
(Ted) Gordon of The Futures Group (now part of Palladium International). Gordon had conceived and partially developed what would today be called an audience response system, and Simmons immediately saw practical applications for it in large corporate meetings, to allow people to air their true opinions in anonymous fashion, so that each individual's Likert scale answer value for a question would remain secret, but the group's average, weighted with weighting factors, would be instantly displayed. Thus (something approximating) the group's true consensus would be known, even though individual middle managers or aspiring junior executives would not have to jeopardize their conformity to effect this result. (IBM's organizational culture was famous for its valuing of conformity; and this was common at other firms, too.) Simmons retired from IBM in January 1972, and soon after he formed a startup company with Gordon, called Applied Futures, Inc., to develop and market the system, which they called the Consensor [connoting consensus + sensor]. Applied Futures was one of the first audience response companies. In 1972, while Gordon and his assistant Harold S. (Hal) Becker were still working on development, Applied Futures filed for a patent (), which was granted in 1973 with Gordon and Becker as inventors. Another patent, filed for in 1974 and granted in 1976 (), lists Simmons and James A. Marquis. Sales began in 1974. The Consensor was a system of dials, wires, and three lights; red, yellow, and green. A question was asked verbally and people would turn their dials anywhere from 0 to 10. If the majority agreed, the green lamp would light. If not, either the yellow or red lamp would light, depending on the level of disagreement. Although business was strong for this fledgling company, the command-and-control management style of the day proved a formidable opponent to this new tool, which promoted consensus building. In his memoir Simmons describes how junior-executive sales prospects tended to like the idea, imagining themselves heroically speaking truth to power (but not paying any price for being a maverick), while their senior-executive bosses tended to see the Consensor as "a blatant attempt to impose democratic procedures into a corporate hierarchy that is anything but democratic." Simmons observed that "A majority of corporations are run as fiefdoms, with the CEO playing the role of Supreme Power; he may be a benevolent dictator, but nonetheless still a dictator." He described this type of senior executives, with ironic tone, as "secure in the knowledge of their own infallibility." Nonetheless, Applied Futures sold plenty of units to business firms and government agencies. In October 1984, it became a subsidiary of Brooks International Corporation, a management consulting firm. One of the early educational uses of an audience response system occurred at Rice University. Students in a computer-equipped classroom were able to rate how well they understood portions of a lecture, answer multiple choice questions, and answer short essay questions. Results could be tallied and displayed to the class. Audience response technology has evolved over time, moving away from hardware that required extensive wiring towards hand held wireless devices and small, portable receivers. In the 1980s, the Consensor product line evolved toward peripherals that could be plugged into a PC, and a software application to run thereon. Wireless LANs allow today's peripherals to be cordless. 
Another example of this is Microsoft's Mouse Mischief, a PowerPoint add-in, which has made it easier for teachers, professors, and office professionals to integrate audience response into their presentations. The advent of smartphones has made possible systems in which audience members download an app (or run it as SaaS in their web browser) which then communicates with the audience response system (which is itself just software running on someone's device, whether desktop, laptop, tablet, or phone) via the local wireless network, the cellular telephone network, or both. In this model, the entire audience response system is a software product; all of the hardware is what the users brought with them. Experts Two books have been written specifically about audience response systems by people who are considered experts in the use of audience response technology. In 2009, Derek Bruff, a professor at Vanderbilt University, published Teaching with Classroom Response Systems: Creating Active Learning Environments. In 2015, David Campt, a meeting strategist and civic engagement consultant, published Read the Room for Real: How a Simple Technology Creates Better Meetings; this book focused on using audience response technology in non-academic environments. Hardware The majority of current audience response systems use wireless hardware. Two primary technologies exist to transmit data from the keypads to the base stations: infrared (IR) and radio frequency (RF). A few companies also offer Web-based software that routes the data over the Internet (sometimes in a unified system with IR and RF equipment). Cell phone-based systems are also becoming available. Infrared The oldest of these technologies, IR audience response systems are better suited for smaller groups. IR uses the same technology as a TV remote, and is therefore the only one of the four technologies that requires line-of-sight between the keypad and receiver. This works well for a single keypad but can fail due to interference when signals from multiple keypads arrive simultaneously at the receiver. IR systems are typically more affordable than RF systems, but do not provide information back to the keypad. Use in educational settings Audience response systems can be used as a way of incorporating active learning in a lecture or other classroom-type setting, for example by quizzing students, taking a quick survey, etc. They can also be used for taking attendance. They can be used effectively by students as young as 9 or 10, depending on their maturity level. An educator is able to generate worksheets and let students enter their answer choices at their own pace. After each question, the educator is able to instantly show the results of any quiz, for example in the form of a histogram, thus creating rapid two-way feedback about how well learners are doing. The fact that students can send responses anonymously means that sensitive topics can be included more readily than would otherwise be the case. An example of this is in helping students to learn about plagiarism. Audience response systems can also be used in classroom settings to simulate randomized controlled trials (RCT) such as 'Live the Trial', a mock RCT used to teach the concepts of clinical research. The mock trial answered the question 'Do red smarties make you happier?'. Radio frequency (RF) Ideal for large group environments, RF systems can accommodate hundreds of voters on a single base station. 
Using some systems, multiple base stations can be linked together in order to handle audiences that number in the thousands. Other systems allow over a thousand participants on just one base station. Because the data travels via radio frequency, the participant merely needs to be within range of the base station (300–500 feet). Some advanced models can accommodate additional features, such as short word answers, user log-in capabilities, and even multi-site polling. Internet Web-based audience response systems work with the participants' existing computing devices. These include notebook computers, smartphones and PDAs, which are typically connected to the Internet via Wi-Fi, as well as classroom desktop computers. If the facilitator's computer is also Wi-Fi-enabled, they can even create their own IP network, allowing a closed system that does not depend on a separate base station. The web server resides on or is accessible to the facilitator's computer, letting them control a set of web pages presenting questions. Participants log into the server using web browsers and see the questions with forms to input their responses. The summarized responses are available on a different set of pages, which can be displayed through the projector and also on each participant's device. The Internet has also made it possible to gather audience responses on a massive scale. Various implementations of the concept exist. For example, Microsoft featured Bing Pulse during the 2013 State of the Union (US) address by President Barack Obama. The system allowed registered users to input their responses (positive, negative, neutral) to the address and visualized the results as a trending graph in real time. Bing Pulse has since been used to cast over 35 million votes during national news broadcasts and other live meetings. Over 10,000 viewers powered the iPowow Viewer Vote, which tracked live viewer emotional response for Channel 7 during the 2013 Australian Federal Election debates and was displayed as a live "worm" graph on the broadcast screen. For advertising and media research, online "dial testing" using an onscreen scale slider that is controlled by a mouse (or finger swipe on a touchscreen) is being used in conjunction with surveys and online communities to gather continuous feedback on video or audio files. Cell phone The familiarity and widespread use of cell phones and text messaging has now given rise to systems that collect SMS responses and display them through a web page. These solutions do not require specialized voting hardware, but they do require telecom hardware (such as a mobile phone) and software, along with a web server, and therefore tend to be operated by dedicated vendors selling usage. They are typically favored by traveling speaking professionals and large conference halls that do not want to distribute, rent, or purchase proprietary ARS hardware. Computing devices with web browsers can also use these services through SMS gateways, if a separate web interface isn't provided. Cell phone-enabled response systems, such as SMS Response System, are able to take text inputs from the audience and receive multiple responses to questions per SMS. This allows a new pedagogical approach to teaching and learning, such as the work by Derek Bruff and an initiative on SMSRS. 
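To illustrate the kind of processing an SMS-based system performs, the following Python sketch parses a handful of hypothetical text messages into votes and optional free-text reasons (in the spirit of the MCQ-Reasoning feature described below). The message format, phone numbers, and function names are invented for the example and are not drawn from any actual SMS response product.

```python
import re
from collections import Counter, defaultdict

# Hypothetical (sender, text) pairs as an SMS gateway might deliver them.
inbox = [
    ("+4470000001", "VOTE B"),
    ("+4470000002", "vote c the control group seemed too small"),
    ("+4470000003", "B"),
    ("+4470000001", "vote a"),   # a later message overrides the earlier one
]

# Accept "A".."D", optionally prefixed by "vote", optionally followed by a reason.
VOTE_PATTERN = re.compile(r"^\s*(?:vote\s+)?([a-d])\b\s*(.*)$", re.IGNORECASE)

def parse_sms_votes(messages):
    """Return vote counts and any free-text reasons, keyed by choice."""
    latest_vote, reasons = {}, defaultdict(list)
    for sender, text in messages:
        match = VOTE_PATTERN.match(text)
        if not match:
            continue                     # ignore messages that are not votes
        choice, reason = match.group(1).upper(), match.group(2).strip()
        latest_vote[sender] = choice     # only the sender's last vote counts
        if reason:
            reasons[choice].append(reason)
    return Counter(latest_vote.values()), dict(reasons)

counts, reasons = parse_sms_votes(inbox)
print(counts)    # one vote each for A, B and C
print(reasons)   # {'C': ['the control group seemed too small']}
```

A production system would sit behind an SMS gateway, store responses for later analysis, and push the tallies to a web page for display, but the parsing and tallying step is essentially this.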
The advantages of such an SMS-based response system are not limited to the logistical benefit of the presenter keeping no device inventory. It also brings a range of pedagogical advantages, such as agile learning and peer instruction (as is possible with all types of response systems), and it affords additional educational features like MCQ-Reasoning – a feature developed in an SMSRS system in Singapore that allows respondents to tag a reason to their choice of options in an MCQ, thus reducing the potential for "guessing-the-correct-answer" syndrome – and text mining of SMS responses (to provide the gist of the messages collectively in a visual map). Interactive SMS Forum is another feature that is proprietary to SMS-type response systems, where audiences not only post their questions but can also answer the questions posted by others via SMS. Smartphone / HTTP voting With increasing penetration of smartphones with permanent internet connections, live audience response/voting can be achieved over the HTTP protocol. SMS is still a solid solution because of its penetration and stability, but it does not easily allow multi-voting support and might cause problems with multi-country audiences. The issue with SMS not supporting multi-country audiences is projected to be solved with SMS hubbing. In classrooms and conferences with Wi-Fi support or anywhere with GPRS coverage, software systems can be used for live audience feedback, mood measurement or live polling. These systems frequently support voting through both mobile apps and mobile browsers. These apps use available local area networks (LANs), are free of charge to use, and remove the need for dedicated hardware. With mobile app and browser-enabled voting, there are no hardware setup costs, since the audience members use their own phones as voting devices and the results are often presented in any browser controlled by the lecturer. With a standard mobile browser solution, these are click-and-go systems that require no additional installation. Therefore, live audiences can be reached, and smartphone voting can be used – as with SMS – in any number of different locations. With the GPRS solution the audience does not necessarily need to be in the same area as the lecturer, as it does with radio frequency, infrared or Bluetooth-based response systems. Software Audience response software enables the presenter to collect participant data, display graphical polling results, and export the data to be used in reporting and analysis. Usually the presenter can create and deliver their entire presentation with the ARS software, either as a stand-alone presentation platform or as a plug-in to PowerPoint or Keynote. See also Interactive whiteboard Presentation software Public speaking Learning management system References Bibliography Audience measurement Polling terms Learning methods Promotion and marketing communications de:Audience Response System
12953246
https://en.wikipedia.org/wiki/IEEE%20Computer%20Society
IEEE Computer Society
IEEE Computer Society (sometimes abbreviated the Computer Society or CS) is a professional society of the Institute of Electrical and Electronics Engineers (IEEE). Its purpose and scope is "to advance the theory, practice, and application of computer and information processing science and technology" and the "professional standing of its members". The CS is the largest of 39 technical societies organized under the IEEE Technical Activities Board. The IEEE Computer Society sponsors workshops and conferences, publishes a variety of peer-reviewed literature, operates technical committees, and develops IEEE computing standards. It supports more than 200 chapters worldwide and participates in educational activities at all levels of the profession, including distance learning, accreditation of higher education programs in computer science, and professional certification in software engineering. History The IEEE Computer Society traces its origins to the Subcommittee on Large-Scale Computing, established in 1946 by the American Institute of Electrical Engineers (AIEE), and to the Professional Group on Electronic Computers (PGEC), established in 1951 by the Institute of Radio Engineers (IRE). When the AIEE merged with the IRE in 1963 to form the Institute of Electrical and Electronics Engineers (IEEE), these two committees became the IEEE Computer Group. The group established its own constitution and bylaws in 1971 to become the IEEE Computer Society. The CS maintains its headquarters in Washington, D.C. and additional offices in California, China, and Japan. Main activities The IEEE Computer Society maintains volunteer boards in six program areas: education, membership, professional activities, publications, standards, and technical and conference activities. In addition, 12 standing boards and committees administer activities such as the CS elections and its awards programs to recognize professional excellence. Education and professional development The IEEE Computer Society participates in ongoing development of college computing curricula, jointly with the Association for Computing Machinery (ACM). Other educational activities include software development certification programs and online access to e-learning courseware and books. Publications The IEEE Computer Society is a leading publisher of technical material in computing. Its publications include 12 peer-reviewed technical magazines and 25 scholarly journals called Transactions, as well as conference proceedings, books, and a variety of digital products. The Computer Society Digital Library (CSDL) is the primary repository of the Computer Society's digital assets and provides subscriber access to all CS publications, as well as conference proceedings and other papers, amounting to more than 810,000 pieces of content. In 2014, the IEEE Computer Society launched the complementary monthly digest Computing Edge magazine, which consists of curated articles from its magazines. Technical conferences and activities The IEEE Computer Society sponsors more than 200 technical conferences each year and coordinates the operation of several technical committees, councils, and task forces. The IEEE Computer Society maintains 12 standards committees to develop IEEE standards in various areas of computer and software engineering (e.g., the Design Automation Standards Committee and the IEEE 802 LAN/MAN Standards Committee). 
In 2010, the IEEE Computer Society introduced Special Technical Communities (STCs) as a new way for members to develop communities focusing on selected technical areas. Current topics include broadening participation, cloud computing, education, eGov, haptics, multicore, operating systems, smart grids, social networking, sustainable computing, systems engineering, and wearable and ubiquitous technologies. Technical Communities The IEEE Computer Society currently has 31 technical communities. A technical community (TC) is an international network of professionals with common interests in computer hardware, software, its applications, and interdisciplinary fields within the umbrella of the IEEE Computer Society. A TC serves as the focal point of the various technical activities within a technical discipline, which influences the standards development, conferences, publications, and educational activities of the IEEE Computer Society. Following are the current technical communities: Technical Community on Business Informatics and Systems (TCBIS) Technical Community on Computer Architecture (TCCA) Technical Community on Cloud Computing (TCCLD) Technical Community on Computational Life Sciences (TCCLS) Technical Community on Computer Communications (TCCC) Technical Community on Data Engineering (TCDE) Technical Community on Dependable Computing and Fault Tolerance (TCFT) Technical Community on Distributed Processing (TCDP) Technical Community on Intelligent Informatics (TCII) Technical Community on Internet (TCI) Technical Community on Learning Technology (TCLT) Technical Community on Mathematical Foundations of Computing (TCMF) Technical Community on Microprocessors and Microcomputers (TCMM) Technical Community on Microprogramming and Microarchitecture (TCuARCH) Technical Community on Multimedia Computing (TCMC) Technical Community on Multiple-Valued Logic (TCMVL) Technical Community on Pattern Analysis and Machine Intelligence (TCPAMI) Technical Community on Parallel Processing (TCPP) Technical Community on Real-Time Systems (TCRTS) Technical Community on Scalable Computing (TCSC) Technical Community on Security and Privacy (TCSP) Technical Community on Semantic Computing (TCSEM) Technical Community on Services Computing (TCSVC) Technical Community on Simulation (TCSIM) Technical Community on Visualization and Graphics (VGTC) Technical Community on VLSI (TCVLSI) Technical Community on Software Engineering (TCSE) Technical Community on Test Technology (TTTC) Technical Community on VLSI Technical Community on VLSI (TCVLSI) is a constituency of IEEE Computer Society (IEEE-CS) that oversees various technical activities related to computer hardware, integrated circuit design, and software for computer hardware design. TCVLSI is one of the technical communities of IEEE-CS that covers various specializations of the computer science and computer engineering disciplines. IEEE-CS is the largest of the 39 societies of the Institute of Electrical and Electronics Engineers (IEEE). The technical scope of TCVLSI covers computer-aided design (CAD), or electronic design automation (EDA), techniques to facilitate the very-large-scale integration (VLSI) design process. VLSI may include various types of circuits and systems, such as digital circuits and systems, analog circuits, and mixed-signal circuits and systems. The emphasis of TCVLSI broadly covers integrating the design, CAD, fabrication, application, and business aspects of VLSI, encompassing both hardware and software. 
Membership in TCVLSI is open and free of charge to researchers, practitioners, and students; prospective members are not required to be members of IEEE or the IEEE Computer Society. However, to serve on the executive committee, a member must belong to the IEEE Computer Society. The Chair of TCVLSI is elected by the voting members of TCVLSI, and the other executive members are appointed by the Chair. TCVLSI sponsors conferences, special sessions, and workshops for the IEEE-CS. It also publishes the VLSI Circuits and Systems Letter (VCAL) three times a year, which includes a selective set of short papers, TCVLSI member news, upcoming conferences and workshops, calls for papers, and funding opportunities of interest to TCVLSI members; the letter provides timely updates on science, engineering, and technology, as well as education and opportunities related to VLSI circuits and systems. The Editors-in-Chief are Anirban Sengupta, Indian Institute of Technology Indore, India, and Saraju P. Mohanty, University of North Texas, United States. TCVLSI provides several student travel grants and sponsors best paper awards for its sponsored conferences. The current Chair is Anirban Sengupta, Indian Institute of Technology Indore, India. Past Chairs 2014–2018: Saraju P. Mohanty, Professor, University of North Texas 2002–2014: Joseph Cavallaro, Professor, Rice University 2000–2002: Vijaykrishnan Narayanan, Professor, Pennsylvania State University 1996–2000: Nagarajan "Ranga" Ranganathan, Professor, University of South Florida 1984–1986: Amar Mukherjee, Professor, University of Central Florida Awards In 2018, TCVLSI introduced the following awards: IEEE-CS-TCVLSI Best Ph.D. 
Dissertation/Thesis Award IEEE-CS TCVLSI Mid-Career Research Achievement Award IEEE-CS TCVLSI Distinguished Research Award IEEE-CS TCVLSI Distinguished Leadership Award IEEE-CS-TCVLSI Life-Time Achievement Award IEEE-CS TCVLSI Outstanding Editor Award TCVLSI sister conferences ARITH, IEEE Symposium on Computer Arithmetic ARITH 2021: June 13–16, 2021, Torino, Italy ARITH 2020: June 7–10, 2020, Portland, Oregon, USA ARITH 2019: June 10–12, 2019, Kyoto, Japan ASAP, IEEE International Conference on Application-specific Systems, Architectures and Processors ASAP 2020: July 6–8, 2020, Manchester, UK ASAP 2019: July 15–17, 2019, Cornell Tech, New York, USA ASAP 2018: July 10–12, 2018, Milan, Italy ASYNC, IEEE International Symposium on Asynchronous Circuits and Systems ASYNC 2020: May 17–20, 2020, Snowbird, Utah, USA ASYNC 2019: May 12–15, 2019, Aomori, Japan iSES, IEEE International Symposium on Smart Electronic Systems (formerly iNIS) iSES 2020: December 14–16, 2020, Chennai, India iSES 2019: December 16–18, 2019, Rourkela, India ISVLSI, IEEE Computer Society Annual Symposium on VLSI ISVLSI 2021: July 6–8, 2021, Tampa, Florida ISVLSI 2020: July 6–8, 2020, Limassol, Cyprus ISVLSI 2019: July 8–10, 2019, Miami, FL, USA IWLS, IEEE International Workshop on Logic & Synthesis IWLS 2020: July 18–19, 2020, San Francisco, CA, USA IWLS 2019: June 21–23, 2019, Lausanne, Switzerland IWLS 2018: June 23–24, 2018, San Francisco, CA, USA MSE, IEEE International Conference on Microelectronic Systems Education MSE 2017: May 11–12, 2017, Banff, Canada SLIP, ACM/IEEE System Level Interconnect Prediction SLIP 2019: June 2, 2019, Las Vegas Convention Center, Las Vegas, NV SLIP 2018: June 23, 2018, Moscone Center West, San Francisco, CA, USA ECMSM, IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics ECMSM 2019: June 24–26, 2019, Toulouse, France Technically co-sponsored conferences ACSD, International Conference on Application of Concurrency to System Design ACSD 2018: June 24–29, 2018, Bratislava, Slovakia VLSID, International Conference on VLSI Design VLSID 2021: January 2–7, 2021, IIT Guwahati, Assam, India VLSID 2019: January 5–9, 2019, New Delhi, India Technical Community on Visualization and Graphics The Technical Community on Visualization and Graphics (VGTC) is a constituency of the IEEE Computer Society (IEEE-CS) that oversees various technical activities related to visualization, computer graphics, virtual and augmented reality, and interaction. VGTC is one of the technical communities and councils of IEEE-CS that cover various specializations of computer science and computer engineering. The VGTC has two flagship annual conferences, IEEE Visualization (VIS) and IEEE Virtual Reality (VR); the annual executive committee meeting is held during the same week as IEEE Visualization. IEEE Visualization Academy: The IEEE Visualization Academy (Vis Academy for short) was established in 2018 by the IEEE VGTC Executive Committee, with an inaugural "class" of inductees comprising all Visualization Career Award and Visualization Technical Achievement Award recipients from 2004 to 2019, a total of 30 unique inductees. Induction into the Vis Academy is among the most prestigious honors in the field of visualization. IEEE-CS Awards The IEEE Computer Society recognizes outstanding work by computer professionals who advance the field in three areas of achievement: Technical Awards (e.g., the IEEE Women of the ENIAC Computer Pioneer Award or the W. Wallace McDowell Award), Education Awards (e.g., the Taylor L. 
Booth Education Award), and Service Awards (e.g., the Richard E. Merwin Distinguished Service Award). In 2018, the organization won first place in the Los Angeles Press Club's annual Southern California Journalism Awards for "Untold Stories: Setting the Record Straight on Tech's Racial History" in the online minority/immigration reporting category. A record number of entries was submitted that year from major publishing, broadcasting, and online media outlets around the world. See also Association for Computing Machinery Association of Information Technology Professionals Australian Computer Society British Computer Society Canadian Information Processing Society China Computer Federation IEEE Technical Activities Board Institute of Electrical and Electronics Engineers Institution of Analysts and Programmers ISCA Influential Paper Award New Zealand Computer Society References External links IEEE societies Computer science organizations
6225827
https://en.wikipedia.org/wiki/Molinux
Molinux
Molinux was an Ubuntu-based operating system sponsored by the autonomous community of Castilla-La Mancha and the Fundación Ínsula Barataria. The name "Molinux" derives from the Spanish word molino, meaning "mill" or "windmill". Each version of Molinux was named after a character from the classic Spanish novel Don Quixote, by Miguel de Cervantes. Project information Molinux was an initiative begun in 2005 by the government of Castilla-La Mancha to bring the Castilla-La Mancha community to the forefront of the Information Society. The project aimed to address the digital divide by reducing the cost of software and offering an easy-to-use operating system. The sponsoring regional government's commitment to the open-source philosophy extended to a pledge not to impose the use of Molinux: "The advantage is that the software is free to compete with anyone, and the user can choose between using this or any other software." Latest version Molinux 6.2 (codename "Merlín") was launched on 24 December 2010. It was based on Ubuntu 10.10. Main features Based on Ubuntu "Lucid" 10.04 Linux kernel 2.6.32 GNOME 2.30 OpenOffice.org 3.2 Mozilla Firefox 3.6 X.Org Server 1.7 The distribution's artistic team delivered new desktop backgrounds depicting images from the autonomous community and some abstract designs, as well as new icons for the panels, menus and desktop. The release also introduced a backup manager that automates backing up data to external devices or over the local network. References External links Official site Fundación Ínsula Barataria Educational operating systems Spanish-language Linux distributions State-sponsored Linux distributions Ubuntu derivatives Linux distributions
2057536
https://en.wikipedia.org/wiki/Service%20Availability%20Forum
Service Availability Forum
The Service Availability Forum (SAF or SA Forum) is a consortium that develops, publishes, educates on and promotes open specifications for carrier-grade and mission-critical systems. Formed in 2001, it promotes the development and deployment of commercial off-the-shelf (COTS) technology. Description Service availability is an extension of high availability, referring to services that remain available regardless of hardware, software, or user faults. Key principles of service availability: Redundancy – "backup" capability in case a failover is needed due to a fault Stateful and seamless recovery from failures Minimization of mean time to repair (MTTR) – the time to restore service after an outage Fault prediction & avoidance – take action before something fails (an illustrative availability calculation appears at the end of this entry) The traditional definitions of high availability have their roots in hardware systems, where redundancy of equipment was the primary mechanism for achieving uptime over a specific period. As software has come to dominate the landscape, the probability of failure is often much higher for applications than it is for hardware, and so these concepts have been extended to encompass an overall view of service availability in which downtime, irrespective of its cause, is an exceptionally rare event. Services and applications should always be available, whether during abnormal system operation, scheduled maintenance, or software upgrades, for example. The SA Forum supports commercial off-the-shelf (COTS) technology for uninterrupted service availability, application portability and seamless integration. Collaborating industry organizations include the following: CP-TA (Communications Platforms Trade Association): ensures interoperability on xTCA platforms. PICMG (PCI Industrial Computer Manufacturers Group): develops open specifications that adapt PCI technology for use in high-performance telecommunications and industrial computing applications. SCOPE Alliance: enables and promotes the availability of open carrier-grade base platforms based on COTS hardware/software and free and open-source software (FOSS) building blocks, and promotes interoperability between such components. The Linux Foundation: promotes, protects, and standardizes Linux by providing unified resources and services needed for open source to successfully compete with closed platforms. Specifications Specifications for carrier-grade service availability include: Hardware Platform Interface (HPI) Application Interface Specification (AIS) Mapping Specifications Java Mapping Specifications HPI-to-AdvancedTCA Mapping Specifications Educational resources The SA Forum's free educational materials enable self-guided training on the SA Forum specifications: Application Webcasts Tutorials Whitepapers See also High availability Commercial off-the-shelf Advanced Telecommunications Computing Architecture (ATCA) Communications Platforms Trade Association PICMG SCOPE Alliance SAFplus The Linux Foundation OpenSAF OpenHPI Hardware Platform Interface Application Interface Specification References External links Technology consortia
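The mean-time metrics listed under the key principles above lend themselves to a short back-of-the-envelope calculation. The following Python sketch uses hypothetical figures — the MTBF and MTTR values, and the assumption of independent failures with instantaneous failover, are illustrative and are not taken from any SA Forum specification such as HPI or AIS — to show how steady-state availability follows from MTBF and MTTR and how redundancy reduces expected downtime:

```python
# Illustrative only: standard availability arithmetic with made-up numbers.
# This does not use or represent any SA Forum (HPI/AIS) API.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_availability(single: float, replicas: int) -> float:
    """Availability of N redundant replicas, assuming independent failures
    and instantaneous failover: the service is down only when all replicas
    are down at the same time."""
    return 1.0 - (1.0 - single) ** replicas

single = availability(mtbf_hours=2000.0, mttr_hours=4.0)  # one instance
pair = redundant_availability(single, replicas=2)         # 1+1 redundancy

HOURS_PER_YEAR = 24 * 365
print(f"single instance: {single:.5f} "
      f"(~{(1 - single) * HOURS_PER_YEAR:.1f} h downtime/year)")
print(f"1+1 redundant:   {pair:.7f} "
      f"(~{(1 - pair) * HOURS_PER_YEAR * 60:.1f} min downtime/year)")
```

With these figures a single instance reaches roughly 99.8% availability (about 17 hours of downtime per year), while the 1+1 redundant pair cuts expected downtime to around two minutes per year — which is why the principles above pair redundancy and stateful recovery with minimizing MTTR.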
31072730
https://en.wikipedia.org/wiki/Sri%20Manakula%20Vinayagar%20Engineering%20College
Sri Manakula Vinayagar Engineering College
Sri Manakula Vinayagar Engineering College (SMVEC) is an engineering college in Puducherry, a union territory of India. SMVEC was established in 1999. Location and access The college is situated on the Puducherry–Villupuram road (National Highway 45A) in Madagadipet, Pondicherry, South India. Blocks The blocks (buildings) on the college campus are Administration Block Science and Humanities Block School of Architecture Block Mechanical Block EEE Block ECE Block University Block College Canteen Workshop Fluid Mechanics Laboratory Machinery Laboratory Boys Hostel Girls Hostel Courses B.Tech – Electronics and Communication Engineering B.Tech – Electrical and Electronics Engineering B.Tech – Computer Science and Engineering B.Tech – Information Technology B.Tech – Instrumentation and Control Engineering B.Tech – Mechanical Engineering B.Tech – Civil Engineering B.Tech – Bio Medical Engineering B.Tech – Mechatronics Engineering B.Tech – Computer Science and Business Systems B.Tech – Fashion Technology B.Tech – Artificial Intelligence and Data Science B.Tech – Computer and Communication Engineering B.Arch – School of Architecture M.Tech – Electronics and Communication Engineering M.Tech – VLSI & Embedded Systems M.Tech – Power Electronics and Drives M.Tech – Computer Science and Engineering M.Tech – Networking M.Tech – Manufacturing Engineering MBA – Master of Business Administration MCA – Master of Computer Applications Ph.D – Mechanical Engineering Admission B.Tech courses: A pass in the higher secondary examination of the (10+2) curriculum (academic stream) prescribed by the Government of Tamil Nadu, or any other examination equivalent thereto, with a minimum of 45% marks (40% marks for reserved category) in aggregate in Mathematics, Physics and any one of the following optional subjects: Chemistry / Biotechnology / Computer Science / Biology (Botany & Zoology); or an examination of any university or authority recognized by the Executive Council of Pondicherry University as equivalent thereto. Number of attempts: Candidates should have passed the qualifying examination within a maximum of two attempts for the general category, three attempts for candidates belonging to the SC / ST communities, and two attempts for candidates belonging to the OBC / BCM / MBC / BTC communities. Age limit: The upper age limit is 21 years on 1 July of the academic year (relaxed by up to three years for candidates belonging to the OBC / BCM / MBC / EBC and BT communities and by five years for candidates belonging to the SC / ST communities). B.Tech courses (lateral entry): The minimum qualification for admission is a pass in a three-year diploma or four-year sandwich diploma course in engineering / technology with a minimum of 50% marks (45% marks for reserved category) in aggregate in the subjects covered from the 3rd to final semester, or a pass in any B.Sc. course from a recognized university as defined by the UGC, with a minimum of 50% marks (45% marks for reserved category) and a pass in the XII standard with Mathematics as a subject; lateral-entry admissions are limited to a maximum of 20% of the sanctioned intake, which is over and above, and supernumerary to, the approved intake. Alumni The main aim of the alumni association is to maintain a link between the college and its alumni and to share details of mutual growth, achievement and advancement in their respective fields. 
The goals and objectives of the alumni association are set out in its constitution, adopted in 2012. Library The college has a library with 77,782 volumes, along with Indian and foreign journals, periodicals, newsletters and magazines. An open-access system is followed, and a separate reference section and reading hall are also available. Transport The institution operates a fleet of 55 buses and a van for students and staff commuting from towns across Puducherry, Cuddalore, Neyveli, Tindivanam and Villupuram. More than 80% of students use the transport service. Trust Sri Manakula Vinayaga Educational Trust (SMVE Trust) was formed with the objective of imparting quality medical and technical education, especially to the rural sections of society. SMVE Trust was established in 1998 under the chairmanship of its founder, Shri. N. Kesavan, and functions under the guidance of Shri. M. Dhanasekaran, chairman and managing director, Shri. S.V. Sugumaran, vice chairman, and Dr. K. Narayanasamy, secretary. The trust established Mailam Engineering College in 1998 and Sri Manakula Vinayagar Engineering College in 1999. See also Pondicherry Engineering College National Institute of Technology Puducherry Perunthalaivar Kamarajar Institute of Engineering and Technology References Engineering colleges in Puducherry Universities and colleges in Pondicherry (city)