6098316
https://en.wikipedia.org/wiki/Meanings%20of%20minor%20planet%20names%3A%20134001%E2%80%93135000
Meanings of minor planet names: 134001–135000
134001–134100
134003 Ingridgalinsky – Ingrid Galinsky (born 1962), the Science Processing and Operation Center Test Lead for the OSIRIS-REx asteroid sample-return mission.
134008 Davidhammond – Dave Hammond (born 1983), the Science Processing Lead at the Science Processing and Operations Center for the OSIRIS-REx asteroid sample-return mission.
134019 Nathanmogk – Nathan Mogk (born 1989), a systems engineer at the Science Processing and Operations Center for the OSIRIS-REx asteroid sample-return mission. His previous planetary science work included making DTMs of Mars from HiRISE data and research on three-body-problem dynamics.
134027 Deanbooher – Daniel "Dean" Booher (born 1971) was the Quality Assurance Manager for OCAMS, the OSIRIS-REx Camera Suite, on the OSIRIS-REx asteroid sample-return mission.
134028 Mikefitzgibbon – Mike Fitzgibbon (born 1962), an OCAMS Software Engineer for the OSIRIS-REx asteroid sample-return mission and for the Space Shuttle missions with the AIS, GLO and UVSTAR instruments, and for a number of planetary missions including Mars Polar Lander, Mars Odyssey, Phoenix, Lunar Reconnaissance Orbiter, MESSENGER and MSL.
134034 Bloomenthal – H. Philip Bloomenthal (born 1981) worked as a Systems Administrator at the University of Arizona Science Processing and Operations Center for the OSIRIS-REx asteroid sample-return mission. He helped keep the little green lights blinking.
134036 Austincummings – Austin Cummings (born 1995), a software developer at the Science Processing and Operations Center for the OSIRIS-REx asteroid sample-return mission.
134039 Stephaniebarnes – Stephanie Barnes (born 1984), the SPOC Science Operations Engineer for the OSIRIS-REx asteroid sample-return mission.
134040 Beaubierhaus – Beau Bierhaus (born 1972) was the science team interface to the spacecraft design and development activities for the OSIRIS-REx asteroid sample-return mission.
134044 Chrisshinohara – Chris Shinohara (born 1965) was the Science Processing and Operations Center Manager for the OSIRIS-REx asteroid sample-return mission at the University of Arizona. He also worked on the Phoenix and Mars Reconnaissance Orbiter missions.
134050 Rebeccaghent – Rebecca Ghent (born 1971), a Co-I for the OSIRIS-REx asteroid sample-return mission. She is also a Co-I for the Diviner thermal radiometer on the Lunar Reconnaissance Orbiter mission, and has contributed to the body of knowledge of planetary impacts, regolith development and tectonics.
134063 Damianhammond – Damian Hammond (born 1972) was the Software Engineering Lead for telemetry processing at the Science Processing and Operations Center for the OSIRIS-REx asteroid sample-return mission.
134069 Miyo – Miyo Itagaki (1921–2011), mother of Japanese astronomer Koichi Itagaki, who discovered this minor planet.
134072 Sharonhooven – Sharon Hooven (born 1958), the Senior Business Manager for the OSIRIS-REx asteroid sample-return mission at the University of Arizona.
134081 Johnmarshall – John Marshall (born 1948), the Asteroid Scientist – Regolith for the OSIRIS-REx asteroid sample-return mission.
134087 Symeonplatts – Symeon Platts (born 1991), the Senior Videographer for the OSIRIS-REx asteroid sample-return mission at the University of Arizona.
134088 Brettperkins – Brett Perkins (born 1967), the Launch Site Integration Manager for the OSIRIS-REx asteroid sample-return mission. He served in a similar capacity for the JUNO Jupiter mission and multiple TDRSS missions. During the Space Shuttle Program he served as a test engineer and a NASA Test Director for missions from 1990 through 2011.
134091 Jaysoncowley – Jayson Cowley (born 1959) is the United Launch Alliance Mission Manager for the OSIRIS-REx asteroid sample-return mission. He has supported NASA with Titan-IV, Delta-II and Atlas-V launch services for the MAVEN, LDCM, WISE, STSS-D and Cassini missions.
134092 Lindaleematthias – Linda Lee Matthias (born 1968), the KSC/LSP Contamination Control Engineer for the OSIRIS-REx asteroid sample-return mission. Since 1988 she has supported over 70 successful Titan IV and NASA Launch Services Program missions as the Planetary Protection and Contamination Control Engineer.
134099 Rexengelhardt – Rex Engelhardt (born 1959), the KSC Launch Services Program Mission Manager for the OSIRIS-REx asteroid sample-return mission. As a Mission Manager, he has led more than 10 missions since LSP was established in 1998. He supported many payload support jobs for NASA Kennedy Space Center and the Air Force.

134101–134200
134105 Josephfust – Joseph Fust (born 1958), the United Launch Alliance spacecraft integration engineer for the OSIRIS-REx asteroid sample-return mission. He was the spacecraft integration engineer for the MAVEN mission to Mars. He also serves as spacecraft integrator for various United States Air Force and National Security missions.
134109 Britneyburch – Britney Burch (born 1982) is a structural dynamics analyst with the NASA Launch Services Program and is the primary loads analyst for the OSIRIS-REx asteroid sample-return mission. She has previously served as an analyst with the Mars MAVEN mission and the Pegasus/IRIS mission.
134112 Jeremyralph – Jeremy Ralph (born 1983) is the United Launch Alliance Flight Design Engineer for the Atlas V rocket, launching the OSIRIS-REx asteroid sample-return mission to 101955 Bennu. He also assisted on the SDO, MSL and LDCM missions.
134124 Subirachs (2005 AM) – Josep Maria Subirachs (1927–2014), Catalan sculptor and painter.
134125 Shaundaly – Shaun Daly (born 1979) is the KSC Launch Services Program Integration Engineer for the OSIRIS-REx asteroid sample-return mission. He has served in the aerospace industry since 1997, both for the USAF as a Crew Chief during Operation Enduring Freedom and for NASA as an avionics engineer on 25 missions.
134127 Basher – Benjamin Asher (born 1990) is an Embry-Riddle Aeronautical University alumnus and a member of the flight design team at a.i. solutions, Inc. in support of NASA's Launch Services Program for the OSIRIS-REx asteroid sample-return mission. He is also a member of the flight design team in support of the TESS mission.
134130 Apáczai – János Apáczai Csere (1625–1659), Hungarian polyglot and mathematician.
134131 Skipowens – Skip Owens (born 1975) is the NASA LSP Integration Engineer for the OSIRIS-REx asteroid sample-return mission. He was also an LSP Flight Design Engineer for over a dozen NASA missions. Before starting with LSP in 2001, he worked on spacecraft mission design at Goddard Space Flight Center for the EO-1 and WMAP missions.
134134 Kristoferdrozd – Kristofer Drozd (born 1993), a systems engineering graduate student at the University of Arizona.
134135 Steigerwald – William Steigerwald (born 1967) worked on the OSIRIS-REx asteroid sample-return mission as a science writer. He has worked for over 19 years as a writer for a wide range of NASA missions in planetary science, astrobiology, astrophysics and heliophysics.
134138 Laurabayley – Laura C. Bayley (born 1988) is a student engineer at MIT responsible for test planning and assembly of the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134146 Pronoybiswas – Pronoy K. Biswas (born 1992) is a student engineer at MIT responsible for designing and implementing the avionics system for the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134150 Bralower – Harrison L. Bralower (born 1988) worked as a student engineer at MIT where he designed the detector assembly mount for the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134160 Pluis – Aina Vandenabeele (8 June – 1 December 2004) was the niece of Belgian astronomer Peter De Cat, who discovered this minor planet. Aina, nicknamed "Pluis", died of leukemia. The naming also honors all children with cancer.
134169 Davidcarte – David B. Carte (born 1991) worked as a student engineer at MIT where he was responsible for the structural design and testing of the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134174 Jameschen – Shuyi James Chen (born 1988) worked as a student engineer at MIT where he was the lead avionics and software engineer in the development of the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134178 Markchodas – Mark A. Chodas (born 1990) is a student engineer at MIT working as the Lead Systems Engineer ensuring that all system components meet science requirements for the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.
134180 Nirajinamdar – Niraj K. Inamdar (born 1986) worked as a student engineer and scientist at MIT where he conducted the performance modeling in the development of the student-built Regolith X-ray Imaging Spectrometer aboard the OSIRIS-REx asteroid sample-return mission.

134201–134300
134244 De Young – Mike De Young (born 1954), American teacher, who ran the Rehoboth Christian School observatory and is the local liaison for the Calvin-Rehoboth Robotic Observatory.
134292 Edwardhall – Edward Hall (1942–2020) earned a Ph.D. at Northwestern University. He was instrumental in the development of silicon-based sensors and gallium arsenide devices at Motorola. Hall later was director of the Arizona State University nanofabrication laboratory and executive associate dean of their School of Engineering.

134301–134400
134329 Cycnos (2377 T-3) – Cycnus (or Cycnos), from Greek mythology. He was one of the many sons of Poseidon by a sea nymph. In the Trojan War he was an ally of King Priam and was strangled by Achilles.
134340 Pluto – Pluto, Roman god of the Underworld, similar to the Greek Hades (see also (134340) Pluto I Charon, (134340) Pluto II Nix, and (134340) Pluto III Hydra).
134346 Pinatubo – Mount Pinatubo, active volcano on Luzon island in the Philippines.
134348 Klemperer – Victor Klemperer (1881–1960), German son of a rabbi, who kept a diary of life under Nazi tyranny, starting in 1933.
134369 Sahara (1995 QE) – The Sahara desert is the world's largest hot desert. Located in North Africa, it stretches from the Red Sea to the Atlantic Ocean.

134401–134500
134402 Ieshimatoshiaki (1997 RG) – Toshiaki Ieshima (born 1955), Japanese amateur astronomer and member of the Matsue Astronomy Club. He is an observing partner of Hiroshi Abe, who discovered this minor planet.
134419 Hippothous – Hippothous, from Greek mythology. The Trojan prince and his brothers were cursed by their father, King Priam, for their disgraceful behavior after Hector's death during the Trojan War.

134501–134600 (no named minor planets)
134601–134700 (no named minor planets)
134701–134800 (no named minor planets)
134801–134900 (no named minor planets)
134901–135000 (no named minor planets)
References 134001-135000
33069395
https://en.wikipedia.org/wiki/6WIND
6WIND
6WIND is a virtual networking software company delivering disaggregated and cloud-native solutions to communications service providers (CSPs) and enterprises globally. The company is privately held and headquartered west of Paris in Montigny-le-Bretonneux, with offices in the US and APAC. The company provides virtualized networking software that is deployed on bare metal or in virtual machines on COTS servers in public and private clouds. Its solutions are disaggregated and containerized, following a cloud-native architecture. History 6WIND was founded in 2000 as a spin-out from Thales Group (previously Thomson-CSF), a provider of electronics for aerospace, defense and security. A 3.75 million euro investment from Sofinnova Partners and others was announced in 2004, and a further 5 million euros in 2004. Partners include Red Hat, VMware and Wind River Systems. Equipment vendors that provide boards and systems utilizing 6WIND software include Emerson Network Power. Other partners include Kalray for data centers, Hewlett-Packard for acceleration technology on ProLiant servers, Dell, Canonical, and Alcatel-Lucent for Red Hat Enterprise Linux. In April 2013, the company announced it would support an open-source software project for the Data Plane Development Kit (DPDK) from Intel. In early 2012, 6WIND introduced a mobile edition and a cloud edition of 6WINDGate, for 4G mobile phone companies and cloud computing. The company announced its Speed Series of packaged software in late 2014, marketed for network function virtualization (NFV). A product called 6WIND Virtual Accelerator allows hypervisor scaling. A venture capital investment from Cisco Systems was announced in 2014. In 2015, the company announced its Turbo Router and Turbo IPsec software. In 2016 Radware said that its Alteon NG VA product used a product of the company along with OpenStack. That same year Mirantis announced integration with 6WIND for data centers and NFV. The company promotes its performance by publishing performance tests. In 2015 Light Reading mentioned that 6WIND software allowed Italian service provider NGI to build a router marketed for software-defined networking. In August 2017, 6WIND announced a "replacement program" for Brocade vRouter users. The 6WIND vRouter is based on DPDK: traffic runs in the fast path outside of the Linux kernel, avoiding potential kernel processing bottlenecks. The announcement was followed by two articles, from The Register and SDxCentral, comparing 6WIND with dedicated equipment and explaining how the vRouter solution helped a Spanish ISP become SDN-ready. See also Packet processing References Software companies of France
64688333
https://en.wikipedia.org/wiki/Ukrainian%20Cyber%20Alliance
Ukrainian Cyber Alliance
The Ukrainian Cyber Alliance (UCA, ukr. Український кіберальянс, УКА) is a community of Ukrainian cyber activists from various cities in Ukraine and around the world. The alliance emerged in the spring of 2016 from the merger of two groups of cyber activists, FalconsFlame and Trinity, and was later joined by the group RUH8 and individual cyber activists from the CyberHunt group. The hacktivists united to counter Russian aggression in Ukraine. Participation in the Russian-Ukrainian cyber war The hacktivists began to apply their knowledge to protect Ukraine in cyberspace in the spring of 2014. Over time it became clear that the war could not be won through individual attacks, and the hacktivists began to conduct joint operations. Gradually, some hacker groups united in the Ukrainian Cyber Alliance (UCA), in accordance with Article 17 of the Constitution of Ukraine, to defend the independence of their country and its territorial integrity, which is the duty of every citizen. The Ukrainian Cyber Alliance transmits extracted data exclusively to the international intelligence community Inform Napalm for analysis, reconnaissance and publication, as well as to the law enforcement agencies of Ukraine. Notable operations Operation #opDonbasLeaks In the spring of 2016, the UCA conducted about one hundred successful hacks of websites and mailboxes of militants, propagandists, their curators, and terrorist organizations operating in the occupied territories. Among the targets was the mailbox of the Russian organization "Union of Volunteers of Donbass". From it were obtained passport data and photo documents of citizens of Italy, Spain, India and Finland who were fighting in the ranks of the Prizrak Brigade, and for whom Russia issues and, if necessary, extends visas. It was found that Russian terrorists who were wounded during the fighting in eastern Ukraine were being treated in military hospitals of the Ministry of Defense. Hacking of the propaganda site ANNA News On April 29, 2016, the Inform Napalm website, citing the UCA, reported on the hacking and defacement of the Abkhazian Network News Agency (ANNA News), a propagandist news agency. As a result of the hacking, the site did not work for more than 5 days. The hacktivists posted their first video message on the site's pages, in which they used the Lviv Metro meme. The message stated (translation): Operation #OpMay9 On May 9, 2016, the UCA conducted operation #OpMay9. Nine sites of Donetsk People's Republic (DNR) terrorists, propagandists, and Russian private military companies (RPMCs) were hacked. The defaced sites were left with the hashtags #OpMay9 and #oп9Травня and three short videos about World War II and Ukrainian contributions to the victory over Nazism – what UCA called the "serum of truth". The hacktivists also posted their new video message on the terrorist sites. The video stated: Operation #opMay18 On May 18, 2016, on the day of remembrance of the deportation of the Crimean Tatars in 1944, the UCA conducted Operation #opMay18. It targeted the website of the so-called chairman of the council of ministers of the Republic of Crimea, Sergey Aksyonov, putting in his voice a fraudulent message: Channel One hacking The UCA hacked the website of Pervy Kanal (Channel One Russia), according to hacktivists, as part of a project to force Russia to deoccupy Donbass and fulfill its obligations under the Minsk agreements.
Details of Pervy Kanal propagandist Serhiy Zenin's cooperation with Russian state-owned propaganda network Russia Today were also revealed, along with documentation of Zenin's salary and lavish lifestyle. In Zenin's cloud storage were found 25 videos of DNR members shooting in the settlement of Nikishine. Operation #opDay28 In 2016, on the eve of Constitution Day, the UCA conducted operation #opDay28. 17 resources of Russian terrorists were hacked, and the hacked sites played another Lviv Metro video which purported to be from the leader of DNR, O. Zakharchenko: Hacking of the Russian Ministry of Defense In July 2016, the UCA hacked the document management server of the Department of the Ministry of Defense of the Russian Federation, and made public defense contracts executed during 2015. The success of the operation was largely determined by the negligence of Russian Rear Admiral Vernigora Andrei Petrovich. At the end of November 2016, the UCA broke into the Ministry server a second time and obtained confidential data on the provision of the state defense order of 2015–2016. According to analysts of Inform Napalm, the documents show that Russia is developing a doctrine of air superiority in the event of full-scale hostilities with Ukraine, citing the amount allocated for maintenance, modernization and creation of new aircraft. Operation #op256thDay Before Programmer's Day, UCA conducted operation #op256thDay, in which more than 30 sites of Russian foreign aggression were destroyed. On many propaganda resources, the hacktivists embedded an Inform Napalm video demonstrating evidence of Russia's military aggression against Ukraine. Operation #OpKomendant The activists gained access to the postal addresses of 13 regional branches of the "military commandant's office" of the DNR in operation #OpKomendant. For six months, the data from the boxes was passed for analysis by Inform Napalm volunteers, employees of the Peacemaker Center, the Security Service of Ukraine and the Special Operations Forces of Ukraine. Hacking of Aleksey Mozgovoy In October 2016, UCA obtained 240 pages of e-mail correspondence of the leader of Prizrak Brigade, Aleksey Mozgovoy. Judging by the correspondence, Mozghovyi was completely under the control of an unknown agent with the codename "Diva". Hacking of Arsen Pavlov The UCA obtained data from the gadgets of Arsen "Motorola" Pavlov, leader of the Sparta Battalion, and his wife Olena Pavlova (Kolienkina). In the weeks leading up to his death, Pavlov was alarmed by the conflict with Russian curators. SurkovLeaks operation In October 2016, the UCA accessed the mailboxes of Vladislav Surkov, Vladimir Putin's political adviser on relations with Ukraine. Acquired emails were published by Inform Napalm in late October and early November (SurkovLeaks). The emails revealed plans to destabilize and federalize Ukraine, and with other materials demonstrated high-level Russian involvement from the start of the war in eastern Ukraine. A US official told NBC News that the emails corroborated information that the US had previously provided. The authenticity of the emails was confirmed by Atlantic Council and Bellingcat, and published by numerous Western news sources. In the aftermath of the leaks, Surkov's chief of staff resigned. 
Additional emails belonging to people from Surkov's environs were published in early November, detailing Russia's financing of the "soft federalization" of the Ukraine, recruiting in the Odesa region, and evidence of funding election campaigns in the Kharkiv region. The emails stated that Yuriy Rabotin, the head of the Odessa branch of the Union of Journalists of Ukraine, received payment from the Kremlin for his anti-Ukrainian activities. On April 19, 2018, the British newspaper The Times published an article stating that the SurkovLeaks documents exposed Russia's use of misinformation about the downing of Malaysia Airlines Flight 17 in order to accuse Ukraine. Hacking of the DNR Ministry of Coal and Energy In November 2016, the UCA obtained emails from the DNR's "Ministry of Coal and Energy", including a certificate prepared by the Ministry of Energy of the Russian Federation in January 2016, which detail the plans of the occupiers for the Donbass coal industry. FrolovLeaks Operation FrolovLeaks was conducted in December 2016, and produced correspondence of Kyrylo Frolov, the Deputy Director of the CIS Institute (Commonwealth of Independent States) and Press Secretary of the Union of Orthodox Citizens, for the period 1997–2016. The correspondence contains evidence of Russia's preparation for aggression against Ukraine (long before 2014). It also revealed Frolov's close ties with Sergey Glazyev, the Russian president's adviser on regional economic integration, Moscow Patriarch Vladimir Gundyaev, and Konstantin Zatulin, a member of the Foreign and Defense Policy Council, an illegitimate member of the Russian State Duma and director of the CIS Institute. The letters mention hundreds of others connected with the subversive activities of Russia's fifth column organizations in Ukraine. Hacking of Luhansk intelligence chief For some time, UCA activists monitored the computer of the Chief of Intelligence 2 AK (Luhansk, Ukraine) of the Russian Armed Forces. This officer sent reports with intelligence obtained with the help of regular Russian unmanned aerial vehicles (UAVs) – Orlan, Forpost and Takhion – which were also used to adjust fire artillery. Documents have also been published proving the existence of the Russian ground reconnaissance station PSNR-8 "Credo-M1" (1L120) in the occupied territory. In July 2017, on the basis of the obtained data, additional reconnaissance was conducted on social networks and the service of the Russian UAV Takhion (servicemen of the 138th OMSBR of the RF Armed Forces Private Laptev Denis Alexandrovich and Corporal Angalev Artem Ivanovich). The surveillance provided evidence of troop movements to the Ukraine border in August 2014. A list of these soldiers, their personal numbers, ranks, exact job titles, and information on awards for military service in peacetime were published. The operation also determined the timeline of the invasion of the Russian artillery unit of the 136th OMSBR in the summer of 2014, from the moment of loading equipment to fortifying in the occupied territory of Ukraine in Novosvitlivka, Samsonivka, and Sorokine (formerly Krasnodon). Hacking of Oleksandr Usovskyi In February and March 2017, the UCA exposed the correspondence of Belarus citizen Alexander Usovsky, a publicist whose articles were often published on the website of Ukrainian Choice, an anti-Ukrainian backed by oligarch Viktor Medvedchuk. 
Inform Napalm analysts conducted a study of the emails and published two articles on how the Kremlin financed anti-Ukrainian actions in Poland and other Eastern European countries. The published materials caused outrage in Poland, the Czech Republic and Ukraine. In an interview with Fronda.pl, Polish General Roman Polko, the founder of the Polish Special Operations Forces, stated his conviction that the anti-Ukrainian actions in Poland and the desecration of Polish monuments in Ukraine were inspired by the Kremlin. Polko said that the information war posed a threat to the whole of Europe, and that the Polish radicals were useful idiots manipulated by Russia. Hacking of CIS Institute An analysis of hacked emails from the CIS Institute (Commonwealth of Independent States) revealed that the NGO is financed by the Russian state company Gazprom. Gazprom allocated $2 million annually to finance the anti-Ukrainian activities of the CIS Institute. The head of the institute, State Duma deputy Konstantin Zatulin, helped terrorists and former Berkut members who fled to Russia to obtain Russian passports. Hacking of Russian Foundation for Public Diplomacy Access to the mail of O. M. Gorchakovan, an employee of the Russian Foundation for Public Diplomacy, provided insight into the forms of Russia's foreign policy strategy. On the eve of the war, funding for a six-month propaganda plan in Ukraine reached a quarter of a million dollars. Under the guise of humanitarian projects, subversive activities were carried out in Ukraine, Serbia, Bosnia and Herzegovina, Bulgaria, Moldova, and the Baltic States. Hacking of Oleksandr Aksinenko UCA activists gained access to the mailbox of Oleksandr Aksinenko, a citizen of Russia and Israel who made anonymous telephone bomb threats (a "telephone miner"). The correspondence indicates that Aksinenko's terrorist activities are supported by the Russian Federal Security Service (FSB), which advised him to "work in the same spirit". Aksinenko also sent anonymous letters to the Security Service of Ukraine (SBU) and other structures in Ukraine. #FuckResponsibleDisclosure flashmob At the end of 2017, the UCA and other IT specialists held a two-month action to assess the level of protection of Ukrainian public resources and to check whether officials handled information security responsibly. Many vulnerabilities were uncovered in the information systems of government agencies. The activists openly reported the vulnerabilities they identified to those who could influence the situation. The activists noted the effectiveness of publicly shaming government agencies. For example, it was found that a computer of the Main Directorate of the National Police in Kyiv region could be accessed without a password, exposing a network drive with 150 GB of information, including passwords, plans, protocols, and personal data of police officers. It was also found that the Bila Tserkva police website had been hacked for a long time, and only after the volunteers noticed did the situation improve. The SCFM had not updated its servers for 10 years. Activists also found that the website of the Judiciary of Ukraine kept reports of the courts in the public domain. The Kherson Regional Council had left a shared disk openly accessible. The CERT-UA website (Ukraine's computer emergency response team) posted a password for one of its email accounts. One of the capital's taxi services was found to keep information about clients openly accessible, including dates, phone numbers, and departure and destination addresses.
Vulnerabilities were also revealed in Kropyvnytskyi's Vodokanal, Energoatom, Kyivenerhoremont, NAPC, Kropyvnytskyi Employment Center, Nikopol Pension Fund, and the Ministry of Internal Affairs (declarations of employees, including special units, were made public). The police opened a criminal case against "Dmitry Orlov", the pseudonym of the activist who publicized the vulnerabilities in a flash mob. They also allegedly tried to hack the Orlov website, leaving a message which threatened physical violence if he continued his activities. The activist deleted the website as it had fulfilled its function. List-1097 UCA activists obtained records of orders to provide food for servicemen of 18 separate motorized rifle brigades of the Russian Armed Forces, who were sent on combat missions during the Russian occupation of Crimea. Inform Napalm volunteers searched open sources of information for the social network profiles of servicemen named in the orders, and discovered photo evidence of their participation in the occupation of Crimea. Records also revealed how troops had been transferred to the Crimea, at Voinka. On January 31, 2017, the central German state TV channel ARD aired a story about the cyber war between Ukraine and Russia. The story documented the repeated cyber attacks by Russian hackers on the civilian infrastructure of Ukraine and efforts to counter Russian aggression in cyberspace, in particular the Surkov leaks. Representatives of the UCA were portrayed as the heroes of the story. Former State Duma deputy Denis Voronenkov (who received Ukrainian citizenship) made statements that Surkov was categorically against the annexation of Crimea. In response, the UCA released photos and audio recordings of the congress of the Union of Donbas Volunteers, from May 2016 in annexed Crimea and November 2016 in Moscow, at which Surkov was the guest of honor. Volunteers of the Inform Napalm community created a film about UCA's activities called Cyberwar: a review of successful operations of the Ukrainian Cyber Alliance in 2016. References Organizations based in Ukraine Hacker groups 2016 establishments in Ukraine
46471245
https://en.wikipedia.org/wiki/Data%20scraping
Data scraping
Data scraping is a technique where a computer program extracts data from human-readable output coming from another program. Description Normally, data transfer between programs is accomplished using data structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well-documented, easily parsed, and minimize ambiguity. Very often, these transmissions are not human-readable at all. Thus, the key element that distinguishes data scraping from regular parsing is that the output being scraped is intended for display to an end-user, rather than as an input to another program. It is therefore usually neither documented nor structured for convenient parsing. Data scraping often involves ignoring binary data (usually images or multimedia data), display formatting, redundant labels, superfluous commentary, and other information which is either irrelevant or hinders automated processing. Data scraping is most often done either to interface to a legacy system, which has no other mechanism which is compatible with current hardware, or to interface to a third-party system which does not provide a more convenient API. In the second case, the operator of the third-party system will often see screen scraping as unwanted, due to reasons such as increased system load, the loss of advertisement revenue, or the loss of control of the information content. Data scraping is generally considered an ad hoc, inelegant technique, often used only as a "last resort" when no other mechanism for data interchange is available. Aside from the higher programming and processing overhead, output displays intended for human consumption often change structure frequently. Humans can cope with this easily, but a computer program will fail. Depending on the quality and the extent of error handling logic present in the computer, this failure can result in error messages, corrupted output or even program crashes. Technical variants Although the use of physical "dumb terminal" IBM 3270s is slowly diminishing, as more and more mainframe applications acquire Web interfaces, some Web applications merely continue to use the technique of screen scraping to capture old screens and transfer the data to modern front-ends. Screen scraping is normally associated with the programmatic collection of visual data from a source, instead of parsing data as in web scraping. Originally, screen scraping referred to the practice of reading text data from a computer display terminal's screen. This was generally done by reading the terminal's memory through its auxiliary port, or by connecting the terminal output port of one computer system to an input port on another. The term screen scraping is also commonly used to refer to the bidirectional exchange of data. This could be the simple cases where the controlling program navigates through the user interface, or more complex scenarios where the controlling program is entering data into an interface meant to be used by a human. As a concrete example of a classic screen scraper, consider a hypothetical legacy system dating from the 1960s—the dawn of computerized data processing. Computer to user interfaces from that era were often simply text-based dumb terminals which were not much more than virtual teleprinters (such systems are still in use , for various reasons). The desire to interface such a system to more modern systems is common. 
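To make the distinction concrete, here is a minimal sketch (in Python, with an invented screen layout) of what scraping display-formatted output looks like in code: the program recovers fields by position and pattern from text that was written for a human operator, not from a documented interchange format.

```python
# A toy "screen scrape": the input is display-formatted text meant for a
# human operator, so the program recovers fields by pattern and alignment
# rather than through a documented interchange format. The layout below
# is invented purely for illustration.
import re

SCREEN = """\
ACME ORDER INQUIRY                 PAGE 001
CUST NO  NAME                AMOUNT   DUE
00417    SMITH, J.           1,250.00 04/12
00982    NGUYEN, T.            310.50 04/19
"""

def scrape_orders(screen: str):
    records = []
    for line in screen.splitlines():
        # Match only the data rows, skipping headings and labels.
        m = re.match(r"(\d{5})\s+(.+?)\s{2,}([\d,]+\.\d{2})\s+(\d{2}/\d{2})", line)
        if m:
            cust, name, amount, due = m.groups()
            records.append({
                "customer": cust,
                "name": name,
                "amount": float(amount.replace(",", "")),
                "due": due,
            })
    return records

print(scrape_orders(SCREEN))
```

In the legacy-terminal case introduced above and continued below, the same idea is applied to the screen output of the old system rather than to a text file.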
A robust solution will often require things no longer available, such as source code, system documentation, APIs, or programmers with experience in a 50-year-old computer system. In such cases, the only feasible solution may be to write a screen scraper that "pretends" to be a user at a terminal. The screen scraper might connect to the legacy system via Telnet, emulate the keystrokes needed to navigate the old user interface, process the resulting display output, extract the desired data, and pass it on to the modern system. A sophisticated and resilient implementation of this kind, built on a platform providing the governance and control required by a major enterprise—e.g. change control, security, user management, data protection, operational audit, load balancing, and queue management, etc.—could be said to be an example of robotic process automation software, called RPA or RPAAI for self-guided RPA 2.0 based on artificial intelligence. In the 1980s, financial data providers such as Reuters, Telerate, and Quotron displayed data in 24×80 format intended for a human reader. Users of this data, particularly investment banks, wrote applications to capture and convert this character data as numeric data for inclusion into calculations for trading decisions without re-keying the data. The common term for this practice, especially in the United Kingdom, was page shredding, since the results could be imagined to have passed through a paper shredder. Internally Reuters used the term 'logicized' for this conversion process, running a sophisticated computer system on VAX/VMS called the Logicizer. More modern screen scraping techniques include capturing the bitmap data from the screen and running it through an OCR engine, or for some specialised automated testing systems, matching the screen's bitmap data against expected results. This can be combined in the case of GUI applications, with querying the graphical controls by programmatically obtaining references to their underlying programming objects. A sequence of screens is automatically captured and converted into a database. Another modern adaptation to these techniques is to use, instead of a sequence of screens as input, a set of images or PDF files, so there are some overlaps with generic "document scraping" and report mining techniques. There are many tools that can be used for screen scraping. Web scraping Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human end-users and not for ease of automated use. Because of this, tool kits that scrape web content were created. A web scraper is an API or tool to extract data from a web site. Companies like Amazon AWS and Google provide web scraping tools, services, and public data available free of cost to end-users. Newer forms of web scraping involve listening to data feeds from web servers. For example, JSON is commonly used as a transport storage mechanism between the client and the webserver. Recently, companies have developed web scraping systems that rely on using techniques in DOM parsing, computer vision and natural language processing to simulate the human processing that occurs when viewing a webpage to automatically extract useful information. Large websites usually use defensive algorithms to protect their data from web scrapers and to limit the number of requests an IP or IP network may send. 
This has caused an ongoing battle between website developers and scraping developers. Report mining is the extraction of data from human-readable computer reports. Conventional data extraction requires a connection to a working source system, suitable connectivity standards or an API, and usually complex querying. By using the source system's standard reporting options, and directing the output to a spool file instead of to a printer, static reports can be generated suitable for offline analysis via report mining. This approach can avoid intensive CPU usage during business hours, can minimise end-user licence costs for ERP customers, and can offer very rapid prototyping and development of custom reports. Whereas data scraping and web scraping involve interacting with dynamic output, report mining involves extracting data from files in a human-readable format, such as HTML, PDF, or text. These can be easily generated from almost any system by intercepting the data feed to a printer. This approach can provide a quick and simple route to obtaining data without the need to program an API to the source system. See also Comparison of feed aggregators Data cleansing Data munging Importer (computing) Information extraction Open data Mashup (web application hybrid) Metadata Web scraping Search engine scraping References Further reading Hemenway, Kevin and Calishain, Tara. Spidering Hacks. Cambridge, Massachusetts: O'Reilly, 2003. . Data processing
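As an illustration of the web-scraping variant discussed above, the following sketch uses only the Python standard library to pull link targets and anchor text out of markup written for display. The URL is a placeholder; real scrapers typically use dedicated toolkits and must respect the target site's terms and rate limits.

```python
# A small web-scraping sketch using only the standard library: fetch a page
# and extract link targets and their visible text. The URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

if __name__ == "__main__":
    html = urlopen("https://example.com/").read().decode("utf-8", "replace")
    parser = LinkScraper()
    parser.feed(html)
    for href, text in parser.links:
        print(href, "->", text)
```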
31775114
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Oman
List of moths of Oman
Moths of Oman represent about 190 known moth species. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera. This is a list of moth species which have been recorded in Oman. Arctiidae Lepista arabica (Rebel, 1907) Siccia dudai Ivinskis & Saldaitis, 2008 Teracotona murtafaa Wiltshire, 1980 Autostichidae Turatia iranica Gozmány, 2000 Coleophoridae Coleophora aegyptiacae Walsingham, 1907 Coleophora arachnias Meyrick, 1922 Coleophora aularia Meyrick, 1924 Coleophora eilatica Baldizzone, 1994 Coleophora jerusalemella Toll, 1942 Coleophora microalbella Amsel, 1935 Coleophora niphomesta Meyrick, 1917 Coleophora omanica Baldizzone, 2007 Coleophora sogdianae Baldizzone, 1994 Coleophora teheranella Baldizzone, 1994 Coleophora versurella Zeller, 1849 Cossidae Azygophleps larseni Yakovlev & Saldaitis, 2011 Meharia acuta Wiltshire, 1982 Meharia philbyi Bradley, 1952 Meharia semilactea (Warren et Rothschild, 1905) Mormogystia proleuca (Hampson in Walsingham et Hampson, 1896) Crambidae Heliothela ophideresana (Walker, 1863) Geometridae Brachyglossina sciasmatica Brandt, 1941 Cleora nana Hausmann & Skou, 2008 Eupithecia mekrana Brandt, 1941 Eupithecia ultimaria Boisduval, 1840 Hemithea punctifimbria Warren, 1896 Idaea eremica (Brandt, 1941) Idaea gallagheri Wiltshire, 1983 Idaea granulosa (Warren & Rothschild, 1905) Idaea illustris (Brandt, 1941) Palaeaspilates sublutearia (Wiltshire, 1977) Pasiphila palaearctica (Brandt, 1938) Scopula caesaria (Walker, 1861) Traminda mundissima (Walker, 1861) Xanthorhoe rhodoides (Brandt, 1941) Xanthorhoe wiltshirei (Brandt, 1941) Gracillariidae Phyllonorycter jabalshamsi de Prins, 2012 Lasiocampidae Sena augustasi Zolotuhin, Saldaitis & Ivinskis, 2009 Limacodidae Deltoptera omana Wiltshire, 1976 Metarbelidae Salagena guichardi Wiltshire, 1980 Micronoctuidae Micronola wadicola Amsel, 1935 Nepticulidae Stigmella birgittae Gustafsson, 1985 Noctuidae Acantholipes circumdata (Walker, 1858) Achaea catella Guenée, 1852 Acontia akbar Wiltshire, 1985 Acontia asbenensis (Rothschild, 1921) Acontia basifera Walker, 1857 Acontia binominata (Butler, 1892) Acontia imitatrix Wallengren, 1856 Acontia minuscula Hacker, Legrain & Fibiger, 2010 Acontia peksi Hacker, Legrain & Fibiger, 2008 Acontia philbyi Wiltshire, 1988 Acontia porphyrea (Butler, 1898) Acontia saldaitis Hacker, Legrain & Fibiger, 2010 Acontia solitaria Hacker, Legrain & Fibiger, 2008 Acontia tabberti Hacker, Legrain & Fibiger, 2010 Adisura callima Bethune-Baker, 1911 Aegocera brevivitta Hampson, 1901 Agrotis biconica Kollar, 1844 Agrotis ipsilon (Hufnagel, 1766) Amyna axis Guenée, 1852 Amyna punctum (Fabricius, 1794) Anarta trifolii (Hufnagel, 1766) Androlymnia clavata Hampson, 1910 Anoba triangularis (Warnecke, 1938) Anomis flava (Fabricius, 1775) Anomis sabulifera (Guenée, 1852) Antarchaea conicephala (Staudinger, 1870) Antarchaea erubescens (Bang-Haas, 1910) Antarchaea fragilis (Butler, 1875) Anumeta spilota (Erschoff, 1874) Argyrogramma signata (Fabricius, 1775) Armada gallagheri Wiltshire, 1985 Athetis pigra (Guenée, 1852) Brevipecten hypocornuta Hacker & Fibiger, 2007 Callopistria latreillei (Duponchel, 1827) Caradrina eremicola (Plante, 1997) Caradrina soudanensis (Hampson, 1918) Caranilla uvarovi (Wiltshire, 1949) Chrysodeixis acuta (Walker, [1858]) Chrysodeixis chalcites (Esper, 1789) Clytie infrequens (Swinhoe, 1884) Clytie tropicalis Rungs, 1975 Condica capensis (Guenée, 1852) Condica illecta Walker, 1865 Condica viscosa (Freyer, 1831) Ctenoplusia fracta 
(Walker, 1857) Ctenoplusia limbirena (Guenée, 1852) Drasteria kabylaria (Bang-Haas, 1906) Dysgonia angularis (Boisduval, 1833) Dysgonia torrida (Guenée, 1852) Eublemma anachoresis (Wallengren, 1863) Eublemma apicipunctalis (Brandt, 1939) Eublemma khonoides Wiltshire, 1980 Eublemma odontophora Hampson, 1910 Eublemma parva (Hübner, [1808]) Eublemma roseana (Moore, 1881) Eublemma seminivea Hampson, 1896 Eublemma siticuosa (Lederer, 1858) Eulocastra alfierii Wiltshire, 1948 Eutelia adulatrix (Hübner, 1813) Feliniopsis consummata (Walker, 1857) Feliniopsis talhouki (Wiltshire, 1983) Gnamptonyx innexa (Walker, 1858) Hadjina tyriobaphes Wiltshire, 1983 Haplocestra similis Aurivillius, 1910 Helicoverpa armigera (Hübner, [1808]) Heliocheilus confertissima (Walker, 1865) Heliothis nubigera Herrich-Schäffer, 1851 Heliothis peltigera ([Denis & Schiffermüller], 1775) Heteropalpia acrosticta (Püngeler, 1904) Heteropalpia exarata (Mabille, 1890) Heteropalpia robusta Wiltshire, 1988 Heteropalpia vetusta (Walker, 1865) Hypena abyssinialis Guenée, 1854 Hypena laceratalis Walker, 1859 Hypena lividalis (Hübner, 1790) Hypena obacerralis Walker, [1859] Hypena obsitalis (Hübner, [1813]) Hypotacha isthmigera Wiltshire, 1968 Hypotacha ochribasalis (Hampson, 1896) Hypotacha raffaldii Berio, 1939 Iambiodes postpallida Wiltshire, 1977 Leucania loreyi (Duponchel, 1827) Lophoptera arabica Hacker & Fibiger, 2006 Lyncestoides unilinea (Swinhoe, 1885) Marathyssa cuneata (Saalmüller, 1891) Masalia galatheae (Wallengren, 1856) Metopoceras kneuckeri (Rebel, 1903) Mimasura dhofarica Wiltshire, 1985 Mimasura larseni Wiltshire, 1985 Mocis proverai Zilli, 2000 Nagia natalensis (Hampson, 1902) Nimasia brachyura Wiltshire, 1982 Oedicodia jarsisi Wiltshire, 1985 Ophiusa dianaris (Guenée, 1852) Ophiusa mejanesi (Guenée, 1852) Oraesia emarginata (Fabricius, 1794) Oraesia intrusa (Krüger, 1939) Ozarba adducta Berio, 1940 Ozarba atrifera Hampson, 1910 Ozarba mesozonata Hampson, 1916 Ozarba nyanza (Felder & Rogenhofer, 1874) Ozarba phlebitis Hampson, 1910 Ozarba socotrana Hampson, 1910 Pandesma robusta (Walker, 1858) Pericyma mendax (Walker, 1858) Pericyma metaleuca Hampson, 1913 Polydesma umbricola Boisduval, 1833 Polytela cliens (Felder & Rogenhofer, 1874) Prionofrontia ochrosia Hampson, 1926 Pseudozarba mesozona (Hampson, 1896) Rhynchina albiscripta Hampson, 1916 Rhynchina coniodes Vári, 1962 Rhynchodontodes larseni Wiltshire, 1983 Sideridis chersotoides Wiltshire, 1956 Sphingomorpha chlorea (Cramer, 1777) Spodoptera cilium Guenée, 1852 Spodoptera exigua (Hübner, 1808) Spodoptera littoralis (Boisduval, 1833) Spodoptera mauritia (Boisduval, 1833) Stenosticta grisea Hampson, 1912 Stenosticta sibensis Wiltshire, 1977 Tathorhynchus exsiccata (Lederer, 1855) Thiacidas postica Walker, 1855 Thysanoplusia exquisita (Felder & Rogenhofer, 1874) Trichoplusia ni (Hübner, [1803]) Trichoplusia orichalcea (Fabricius, 1775) Tytroca balnearia (Distant, 1898) Ulotrichopus stertzi (Püngeler, 1907) Ulotrichopus tinctipennis (Hampson, 1902) Vittaplusia vittata (Wallengren, 1856) Zethesides bettoni (Butler, 1898) Nolidae Bryophilopsis tarachoides Mabille, 1900 Churia gallagheri Wiltshire, 1985 Earias insulana (Boisduval, 1833) Giaura dakkaki Wiltshire, 1986 Neaxestis aviuncis Wiltshire, 1985 Xanthodes albago (Fabricius, 1794) Pterophoridae Agdistis arabica Amsel, 1958 Agdistis nanodes Meyrick, 1906 Agdistis olei Arenberger, 1976 Agdistis omani Arenberger, 2008 Arcoptilia gizan Arenberger, 1985 Deuterocopus socotranus Rebel, 1907 Diacrotricha lanceatus 
(Arenberger, 1986) Megalorhipida leptomeres (Meyrick, 1886) Megalorhipida leucodactylus (Fabricius, 1794) Pterophorus ischnodactyla (Treitschke, 1833) Tischeriidae Tischeria omani Puplesis & Diškus, 2003 Tortricidae Age onychistica Diakonoff, 1982 Ancylis sederana Chrétien, 1915 Bactra venosana (Zeller, 1847) Dasodis cladographa Diakonoff, 1983 Fulcrifera refrigescens (Meyrick, 1924) Xyloryctidae Eretmocera impactella (Walker, 1864) Scythris alhamrae Bengtsson, 2002 Scythris amplexella Bengtsson, 2002 Scythris cucullella Bengtsson, 2002 Scythris elachistoides Bengtsson, 2002 Scythris fissurella Bengtsson, 1997 Scythris kebirella Amsel, 1935 Scythris nipholecta Meyrick, 1924 Scythris pangalactis Meyrick, 1933 Scythris pollicella Bengtsson, 2002 Scythris valgella Bengtsson, 2002 External links AfroMoths Lists of moths by country Lists of moths of Asia Moths Moths
20189085
https://en.wikipedia.org/wiki/Redmine
Redmine
Redmine is a free and open source, web-based project management and issue tracking tool. It allows users to manage multiple projects and associated subprojects. It features per project wikis and forums, time tracking, and flexible, role-based access control. It includes a calendar and Gantt charts to aid visual representation of projects and their deadlines. Redmine integrates with various version control systems and includes a repository browser and diff viewer. The design of Redmine is significantly influenced by Trac, a software package with some similar features. Redmine is written using the Ruby on Rails framework. It is cross-platform and cross-database and supports 49 languages. Features Redmine's features include the following: Allows tracking of multiple projects Supports flexible role-based access control Includes an issue tracking system Features a Gantt chart and calendar Integrates News, documents and files management Allows Web feeds and e-mail notifications. Supports a per-project wiki and per-project forums Allows simple time tracking Includes custom fields for issues, time-entries, projects and users Supports a range of SCM integration, including (SVN, CVS, Git, Mercurial, Bazaar and Darcs) Supports multiple LDAP authentication Allows user self-registration Supports 49 languages Allows multiple databases Allows for plugins Provides a REST API Adoption , there were more than 80 major Redmine installations worldwide. Among the users of Redmine is Ruby. In 2015, Redmine was the most popular open source project planning tool. Forks Following concerns with the way the feedback and patches from the Redmine community were being handled a group of Redmine developers created a fork of the project in February 2011. The fork was initially named Bluemine, but changed to ChiliProject. After the leader of the fork moved on from ChiliProject in 2012 and development and maintenance had been announced to shut down, the project was officially discontinued in February 2015. Another fork of ChiliProject called OpenProject was active in 2015. See also Comparison of issue-tracking systems Comparison of project management software Comparison of time-tracking software Software configuration management References Sources External links Task management software Bug and issue tracking software Free project management software Free wiki software Cross-platform free software Free software programmed in Ruby 2006 software
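The feature list above mentions a REST API. As a rough sketch of what client code might look like, the snippet below lists open issues for a project. The host, project identifier and API key are placeholders; the /issues.json endpoint and X-Redmine-API-Key header follow Redmine's documented REST API, but should be checked against the documentation of the version actually deployed.

```python
# Minimal sketch of querying a Redmine REST API for open issues.
# REDMINE_URL, API_KEY and the project identifier are placeholders.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

REDMINE_URL = "https://redmine.example.org"   # placeholder host
API_KEY = "your-api-key-here"                 # placeholder key

def list_open_issues(project_id: str, limit: int = 25):
    query = urlencode({"project_id": project_id, "status_id": "open", "limit": limit})
    req = Request(
        f"{REDMINE_URL}/issues.json?{query}",
        headers={"X-Redmine-API-Key": API_KEY},
    )
    with urlopen(req) as resp:
        payload = json.load(resp)
    return payload.get("issues", [])

if __name__ == "__main__":
    for issue in list_open_issues("my-project"):
        print(issue["id"], issue["subject"], issue["status"]["name"])
```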
4146322
https://en.wikipedia.org/wiki/House%20Energy%20Rating
House Energy Rating
A House Energy Rating is the index of a building's thermal performance (i.e. heating and cooling requirements) for residential homes in Australia. The Australian Building Codes Board introduced energy efficiency measures for houses into the Building Code of Australia (BCA) on 1 January 2003. It has been adopted by all Australian states and territories which did not already have an equivalent system in place. Victoria and South Australia have gone beyond the standard, and mandated, instead of 4-stars, a 5-star rating (enacted July 2004) – all new homes and apartments built in Victoria must since 2010 comply with the 6 Star standard. This means it is compulsory for new houses to have: 6 Star energy rating for the building fabric, and A rainwater tank for toilet flushing or a solar hot water system, and Water efficient shower heads and tapware. During 2006, requirements for 5-star energy ratings were introduced for new homes through the BCA in Western Australia and the Australian Capital Territory. As of mid-2007 Tasmania and the Northern Territory have not adopted 5 star requirements for new homes. As of 2010, Queensland has adopted 6-star requirements for new homes. New South Wales has not adopted requirements under the BCA and operates its own Building Sustainability Index or BASIX. Victorian consumers and building practitioners can find out more about the 5-Star energy ratings by visiting Make Your Home Green – Building Commission Ratings 6-Star rating A 6-Star rating indicates that a building achieves a higher level of thermal energy performance than, say a 5 star rating. As of November 2011, 6-Star equivalence is the current minimum requirement in most of Australia. 5-Star rating A 5-Star rating indicates that a building achieves a high level of thermal energy performance, and will require minimum levels of heating and cooling to be comfortable in winter and summer. Houses which achieve a 5 star rating, compared to the average 2 star home, should be more comfortable to live in, have lower energy bills, and costs to install heating and cooling equipment should also be lower. Energy assessments take into account different climatic conditions in different parts of the country and are benchmarked according to average household energy consumption particular to a given climatic region. The house energy rating does not currently include the efficiency of any appliances fitted or used within the house. There are also no physical testing requirements, so air tightness testing is not required as it is with the regulations in the UK. State Government initiatives ACT House Energy Rating Scheme (ACTHERS), requires new or previously lived in residential homes to have an Energy Efficiency Rating (EER) Statement, prepared by an accredited ACTHERS assessor, if they are to be sold. As of the February 2006, the required software used in assessment is FirstRate, Version 3.1 or Version 4. In Victoria all new homes built since 2005 are required to achieve a 5 Star rating. Rating can be performed using any software approved by NatHERS. In South Australia, all new homes (and alterations to existing homes) are required to achieve a 6 star rating. This requirement was introduced on 1 September 2010. Western Australia: in 2007 the WA Government introduced further energy and water usage regulatory requirements. 
5 Star Plus consists of two codes: the Energy Use in Houses Code, which requires a minimum standard of energy performance for a hot water system; and the Water Use in Houses Code, which includes provisions for alternative water supplies, efficient fixtures and fittings, and grey water diversion. In Queensland it is proposed that, from either 1 January 2009 or when the Building Code of Australia 2009 update is released in May 2009, all new homes built in Queensland will be required to achieve a 5 star energy equivalent rating. Currently the minimum requirement is 3.5 stars. Software One of the best ways to achieve an energy rating on a proposed house is by using House Energy Rating Software (HERS). This kind of software will simulate a home and provide estimates for the energy needed to heat and cool that home over the course of the year. The Nationwide House Energy Rating Scheme (NatHERS) is a framework by which this kind of software is assessed, compared and accredited for use in Australia. The First Generation of accreditation included the FirstRate 4, BERS 3.2 and NatHERS software packages, allowing accredited assessors to use this software to provide energy ratings. The Second Generation of accreditation was tightened and improved, meaning that software had to become more precise, accurate and powerful in response. The Second Generation of software must take into account more features and realistically model elements such as natural ventilation, the cooling effects of ceiling fans, under-floor heating and the effects of attached dwellings such as apartments. The First Generation has been phased out for the Second Generation of software to take over. In Victoria, the First Generation of software was no longer acceptable for energy ratings after 30 April 2009. FirstRate is the software package developed by Sustainability Victoria. Over the course of 2008, Sustainability Victoria produced a new version, FirstRate 5. FirstRate 5 received provisional Second Generation accreditation on 31 August 2007. This means that it can be used for house energy ratings now. From May 2009, it will be the only version of FirstRate accredited for use in Victoria. FirstRate 5 uses the AccuRate calculation engine with a graphic interface. It includes the ability to zone the house according to how each room will be used, the ability to rate up to 10 stars, and the full range of AccuRate climate zones (69 in Australia). Other Second Generation software acceptable for use in Victoria now and after the 30 April deadline includes BERS Professional and AccuRate. AccuRate shares a calculation engine with FirstRate 5 but has a more complete data input method, which allows for more precise energy ratings, but lacks FirstRate 5's graphic interface. Each software package will be appropriate in different circumstances. Controversies The rating system does not consider factors such as sustainable materials, embodied energy, electricity sources, rainwater capture, local vegetation and access to public transportation. Some say that carbon emissions would be a better metric to use than raw megajoules of energy consumption. Inaccurate representation of the building in the software can result in an inaccurate assessment of the building's thermal performance. An unskilled or inexperienced assessor can easily make incorrect assumptions.
Some states require in-depth training & accreditation for all assessors before they are considered qualified to use the software, while other states merely highly recommend (but not require) in-depth training & accreditation. While not a fault of the software itself, a building is often not constructed as drawn on the plans and some items (weather sealing, insulation values, window coverings, paint colours, etc.) are often different from what was originally simulated. In other unintentional cases, items are installed but are installed incorrectly in a way that compromises their thermal performance (such as foil-based insulation without an air gap). In rare cases, certain styles of buildings in certain climates have become very difficult to comply without using expensive materials (such as a house on the side of a hill with solid glass walls along three sides, to provide ample viewing of the natural scenery). While most buildings can still find a solution without any major design alterations, sometimes this is only possible with expensive materials and this drastically increases the price of the house. This is usually an indication that the building was designed with aesthetics taking a prominent position over the comfort & livability of the occupants. The rating system does not deal with problems beyond a single household such as urban sprawl, city planning, etc. Comparative energy audits of high-rise accommodation and free-standing homes would enable planning authorities to better understand what form of accommodation is more economic and energy efficient. The rating system does not account for air-tight buildings with or without heat exchange units and is predominantly aimed toward the use of traditional building materials and does not open doors for newer higher quality building materials. History The Five Star Design Rating (FSDR) was an award developed in the 1980s for "high efficiency through excellence in design and construction" which assisted builders in marketing energy efficient home designs. The certification was developed by the Glass, Mass and Insulation Council of Australia (GMI Council) together with CSIRO Division of Building Research. The GMI Council was funded by Federal and State governments (NSW, SA, Tasmania, Victoria) and by private investors. Under FSDR, the basic elements of glass, mass and insulation were the basis of the design principles of a five star home. The building industry did not widely accept the system due to its simple pass/fail rating and its restrictive guidelines. In the 1990s, individual states developed their own schemes. The Victorian scheme, based on a computer program, was eventually accepted as the most effective. However, it worked poorly in warm humid climates such as found in Queensland. The development of a nationwide House Energy Rating Scheme (NatHERS) began in 1993, based on the Victorian scheme, using the CHEETAH / CHEENATH engine developed at CSIRO. Software products NatHERS, FirstRate and Quick Rate, BERS, Q Rate and ACTHERS are based on this engine. NatHERS and BERS run the engine directly, while others use correlations based on the engine. See also Green Star (Australia) ABSA (Australia) BASIX, (NSW) (Canada) (UK) (United States) Energy conservation Environmental economics Green building Zero-energy building Low-energy house Passive house References External links Queensland’s implementation of energy efficiency requirements from 1 May 2010 Building energy rating Energy conservation in Australia
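The software section above describes HERS tools as estimating the annual energy needed to heat and cool a dwelling. As a deliberately simplified illustration of the kind of figure involved, here is a steady-state degree-day estimate in Python; this is not the NatHERS/AccuRate dynamic simulation engine, and every number in it is a made-up placeholder rather than an accredited method.

```python
# Simplified, steady-state illustration of an annual conditioning demand
# figure in MJ per square metre of floor area. Real schemes such as NatHERS
# use hourly dynamic simulation (CHEENATH/AccuRate); all inputs here are
# placeholders chosen only to show the shape of the calculation.

def annual_conditioning_mj_per_m2(ua_watts_per_k: float,
                                  heating_degree_days: float,
                                  cooling_degree_days: float,
                                  floor_area_m2: float) -> float:
    """Whole-house heat loss coefficient (UA, W/K) times degree days,
    converted from watt-days to megajoules and normalised by floor area."""
    seconds_per_day = 86_400
    heating_mj = ua_watts_per_k * heating_degree_days * seconds_per_day / 1e6
    cooling_mj = ua_watts_per_k * cooling_degree_days * seconds_per_day / 1e6
    return (heating_mj + cooling_mj) / floor_area_m2

# Placeholder example: a 180 m2 house with UA = 250 W/K in a climate with
# 1200 heating and 400 cooling degree days.
demand = annual_conditioning_mj_per_m2(250, 1200, 400, 180)
print(f"{demand:.0f} MJ/m2 per year")
```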
983302
https://en.wikipedia.org/wiki/FILE%20ID.DIZ
FILE ID.DIZ
FILE_ID.DIZ is a plain-text file containing a brief description of the content of the archive to which it belongs. Such files were originally used in archives distributed through bulletin board systems (BBSes) and are still used in the warez scene. "FILE_ID" stands for "file identification"; ".DIZ" stands for "description in zipfile". Traditionally, a FILE_ID.DIZ should be "up to 10 lines of text, each line being no more than 45 characters long", according to v.1.9 of the specification. The concept of .DIZ files was to allow a concise description of uploaded files to be applied automatically. Advertisements and "high-ASCII" artwork, common in .nfo files, were specifically prohibited. History Bulletin boards commonly accepted uploaded files from their users. The BBS software would prompt the user to supply a description for the uploaded file, but these descriptions were often less than useful. BBS system operators spent many hours going over the uploads, correcting and editing the descriptions. The inclusion of FILE_ID.DIZ in archives was designed to address this problem. Clark Development and the Association of Shareware Professionals (ASP) supported the idea of this becoming a standard for file descriptions. Clark rewrote the PCBDescribe program and included it with their PCBoard BBS software. The ASP urged their members to use this description file format in their distributions. Michael Leavitt, an employee of Clark Development, released the file specification and his PCBDescribe program source code to the public domain and urged other BBS software companies to support the DIZ file. SysOps could add a common third-party script written in PPL, called "DIZ/2-PCB", that would process, rewrite, verify, and format DIZ files from archives as they were uploaded to a BBS. The software would extract the archive, examine the contents, compile a report, import the DIZ description file and then format it to the sysop's liking. During this time, it was usual practice to add additional lines to the description, such as advertisements naming the BBS the upload came from. Even after the decline of the dial-up bulletin board system, FILE_ID.DIZ files are still utilized by the warez scene in their releases of unlicensed software. They are commonly bundled as part of the complete packaging by self-described pirate groups, and indicate the number of disks and other basic information. Along with the NFO file, it is essential to the release. Especially in terms of unlicensed software ("warez"), it was common for each file in a sequential compressed archive (an archive intentionally split into multiple parts at creation so that the parts could be downloaded individually over slower connections such as dial-up; for example, .rar, .r00, .r01, .r02, etc.) to contain this file. This probably contributed to its continued popularity after the decline of the bulletin board system in the late 1990s and early 2000s, since even casual consumers of unlicensed software would have stumbled upon it due to its abundance. Formal structure While real-world use among BBSs varied, with the NPD world and even different BBS brands coming up with expanded versions, the official format is: Plain, 7-bit ASCII text, each line no more than 45 characters wide. Program/file name: Ideally, all uppercase and followed by one space. Carriage returns are ignored in this file. Version number: In the format "v1.123", followed by a space. ASP number: Only if an actual ASP member, otherwise ignored. Description separator: A single short hyphen "-".
Description: The description of the file. The first two lines should be the short summary, as older boards cut off the rest. Anything beyond that should be an extended description, for up to eight lines, the official cut-off size. Additional text could be included beyond that but might not be included by the board. Many archives would stick strictly to the 45-character plain ASCII format for the first 8 lines, then contain an appended 80-character wide 8-bit ASCII or ANSI graphic page with better-formatted documentation after that. See also .nfo — another standard for description files README Portable Application Description — a newer and more verbose alternative Standard (warez) SAUCE — an architecture or protocol created in 1994 for attaching metadata or comments to files. In use today as the de facto standard within the ANSI art community. DESC.SDI — a similar filename that had fairly wide support, including PCBoard. It tended to be limited to a single line (smaller than a FILE_ID.DIZ file). DESCRIPT.ION — a text file containing line-by-line file (and directory) descriptions (and optional metadata), originally introduced by JP Software in 1989 References Further reading External links FILE_ID.DIZ Specification v1.9 by Richard Holler. Public Service Announcement: file_id.diz Bulletin board systems Filenames Third-party DOS files Warez
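The limits quoted above (plain ASCII, 45-character lines, roughly 10 lines, name and version first) are easy to check mechanically. The following is a small sketch, not part of any official FILE_ID.DIZ tooling; the program name, version and description used are invented.

# Sketch: wrap a description into a FILE_ID.DIZ honouring the v1.9 limits
# (plain ASCII, lines no wider than 45 characters, at most 10 lines), and
# report any violations in an existing description.

import textwrap

MAX_WIDTH = 45
MAX_LINES = 10

def make_file_id_diz(name, version, description):
    header = f"{name.upper()} {version} -"        # name (uppercase), version, separator
    body = textwrap.wrap(description, width=MAX_WIDTH)
    lines = [header[:MAX_WIDTH]] + body
    return "\n".join(lines[:MAX_LINES])

def check_file_id_diz(text):
    """Return a list of spec violations; an empty list means it looks compliant."""
    problems = []
    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (more than {MAX_LINES})")
    for i, line in enumerate(lines, 1):
        if len(line) > MAX_WIDTH:
            problems.append(f"line {i} is {len(line)} characters wide")
        if not line.isascii():
            problems.append(f"line {i} contains non-ASCII characters")
    return problems

diz = make_file_id_diz("MyUtil", "v1.02",
                       "Disk cataloguer for DOS. Scans drives and "
                       "prints a sorted report of every archive found.")
print(diz)
print(check_file_id_diz(diz) or "OK")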
29539307
https://en.wikipedia.org/wiki/Gosu%20%28programming%20language%29
Gosu (programming language)
Gosu is a statically-typed general-purpose programming language that runs on the Java Virtual Machine. Its influences include Java, C#, and ECMAScript. Development of Gosu began in 2002 internally at Guidewire Software, and the language saw its first community release in 2010 under the Apache 2 license. Gosu can serve as a scripting language, having free-form Program types (.gsp files) for scripting as well as statically verified Template files (.gst files). Gosu can optionally execute these and all other types directly from source without precompilation, which also distinguishes it from other static languages. History Gosu began in 2002 as a scripting language called GScript at Guidewire Software. It has been described as a Java variant that attempts to make useful improvements while retaining the fundamental utility and compatibility with Java. It was used to configure business logic in Guidewire's applications and was more of a simple rule definition language. In its original incarnation it followed ECMAScript guidelines. Guidewire enhanced the scripting language over the next 8 years, and released Gosu 0.7 beta to the community in November 2010. The 0.8 beta was released in December 2010, and the 0.8.6 beta was released in mid-2011 with additional typeloaders, making Gosu capable of loading XML schema definition files and XML documents as native Gosu types. The latest version is 1.10, released in January 2016, along with a new IntelliJ IDEA editor plugin. Guidewire continues to support and use Gosu extensively within InsuranceSuite applications. Guidewire has decided to freeze the development of new Gosu programming language constructs at this time. Guidewire continues to evolve InsuranceSuite through RESTful APIs and Integration Frameworks that can be accessed using Java. Philosophy Gosu language creator and development lead Scott McKinney emphasizes pragmatism, found in readability and discoverability, as the overriding principle that guides the language's design. For instance, Gosu's rich static type system is a necessary ingredient toward best-of-breed tooling via static programming analysis, rich parser feedback, code completion, deterministic refactoring, usage analysis, navigation, and the like. Syntax and semantics Gosu follows a syntax resembling a combination of other languages. For instance, declarations follow more along the lines of Pascal with name-first grammar. Gosu classes can have functions, fields, properties, and inner classes as members. Nominal inheritance and composition via delegation are built into the type system, as well as structural typing similar to the Go programming language. Gosu supports several file types: Class (.gs files), Program (.gsp files), Enhancement (.gsx files) and Template (.gst files). In addition to standard class types, Gosu supports enums, interfaces, structures, and annotations. Program files facilitate Gosu as a scripting language. For example, Gosu's Hello, World! is a simple one-line program:

print("Hello, World!")

Gosu classes are also executable a la Java:

class Main {
  static function main(args: String[]) {
    print("Hello, World!")
  }
}

Data types A unique feature of Gosu is its Open Type System, which allows the language to be easily extended to provide compile-time checking and IDE awareness of information that is typically checked only at runtime in most other languages. Enhancements let you add additional functions and properties to other types, including built-in Java types such as String, List, etc.
This example demonstrates adding a print() function to java.lang.String:

enhancement MyStringEnhancement : String {
  function print() {
    print(this)
  }
}

Now you can tell a String to print itself:

"Echo".print()

The combination of closures and enhancements provides a powerful way of coding with Collections. The overhead of Java streams is unnecessary with Gosu:

var list = {1, 2, 3}
var result = list.where(\ elem -> elem >= 2)
print(result)

Uses This general-purpose programming language is used primarily in Guidewire Software's commercial products. References Further reading Video External links Official website Source code repository Programming languages Object-oriented programming languages Java programming language family JVM programming languages Software using the Apache license Programming languages created in 2002 2002 software High-level programming languages Cross-platform free software Free compilers and interpreters
20560969
https://en.wikipedia.org/wiki/Spacecraft%20Planet%20Instrument%20C-matrix%20Events
Spacecraft Planet Instrument C-matrix Events
SPICE is a NASA ancillary information system used to compute the geometric information needed in planning and analyzing science observations obtained from robotic spacecraft. It is also used in planning missions and conducting numerous engineering functions needed to carry out those missions. SPICE was developed at NASA's Navigation and Ancillary Information Facility (NAIF), located at the Jet Propulsion Laboratory. It has become the de facto standard for handling much of the so-called observation geometry information on NASA's planetary missions, and it is now widely used in support of science data analysis on planetary missions of other space agencies as well. Some SPICE capabilities are also used on a variety of astrophysics, solar physics and earth science missions. Data SPICE data files are usually referred to as "kernels." These files provide information such as spacecraft trajectory and orientation; target body ephemeris, size and shape; instrument field-of-view size, shape and orientation; specifications for reference frames; and tabulations of time system conversion coefficients. SPICE data are archived in a national archive center such as the NASA Planetary Data System archives. Software The SPICE system includes software referred to as The SPICE Toolkit, used for reading the SPICE data files and computing geometric parameters based on data from those files. These tools are provided as subroutine libraries in C, FORTRAN, IDL and MATLAB, with a Java interface available via the Java Native Interface. Third parties offer Python and Ruby interfaces to the C-language Toolkit. The Toolkits also include a number of utility and application programs. The SPICE Toolkits are available for most popular computing platforms, operating systems and compilers. Extensive documentation accompanies each Toolkit. Those unable to write their own SPICE-based program may try using WebGeocalc, a browser interface to a SPICE-based geometry engine running on the NAIF server. Using WebGeocalc is much easier than writing one's own program, but it still requires considerable knowledge about SPICE data and solar system geometry, and it does not offer the full range of computations available when using the Toolkit software in one's own program. The NAIF Group also offers a 3-D mission visualization program named SPICE-Enhanced Cosmographia. This program runs in the macOS, Windows and Linux environments. Visual representations of mission SPICE data are controlled using an assortment of menus and GUI controls. A scripting interface is also available. Tutorials and programming lessons A set of tutorials is available to help users understand the SPICE data and software. Some "open book" programming lessons useful in learning how to program using Toolkit subroutines are also available. Availability The SPICE data, Toolkit software, tutorials and programming lessons are all freely available from the NAIF website. There are no licensing or export restrictions. Prospective users are cautioned that it takes some effort to learn to use this software: it is primarily provided for professionals in the space exploration business. Prospective users should carefully read the "Rules" page available at the NAIF website. External links NAIF Website References NASA online Jet Propulsion Laboratory
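As a concrete illustration of the Toolkit workflow described above, here is a minimal sketch using spiceypy, one of the third-party Python interfaces to the C-language Toolkit. It assumes the relevant kernels have already been downloaded from NAIF and listed in a meta-kernel file; the file name "my_kernels.tm" is a placeholder, not a real NAIF product.

# Minimal sketch using spiceypy (third-party Python wrapper around the C Toolkit).
# "my_kernels.tm" is a hypothetical meta-kernel listing the leapseconds and
# planetary ephemeris kernels you have fetched from the NAIF site.

import spiceypy as spice

spice.furnsh("my_kernels.tm")              # load the kernels named in the meta-kernel

et = spice.str2et("2020-01-01T00:00:00")   # UTC string -> ephemeris time (TDB seconds)

# Position of Mars as seen from Earth in the J2000 frame, corrected for light time.
pos, light_time = spice.spkpos("MARS", et, "J2000", "LT", "EARTH")
print("Mars position (km):", pos)
print("one-way light time (s):", light_time)

spice.kclear()                             # unload all kernels when finished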
58148875
https://en.wikipedia.org/wiki/The%20Trojan%20Women%20Set%20Fire%20to%20their%20Fleet%20%28Claude%20Lorrain%29
The Trojan Women Set Fire to their Fleet (Claude Lorrain)
The Trojan Women Set Fire to their Fleet is a mid-17th century painting by French artist Claude Lorrain. Done in oil on canvas, the painting is currently in the collection of the Metropolitan Museum of Art. Description Claude Lorrain painted The Trojan Women Set Fire to their Fleet around 1643 at the behest of Cardinal Girolamo Farnese. The scene is Lorrain's take on a famed event in Book 5 of the Aeneid in which the exiled women of Troy, spurred on by the goddess Juno, burn the Trojan fleet to force their men to stop roaming and settle in Sicily. However, Aeneas prays to the god Jupiter to save the ships from the flames by summoning a rainstorm; this is alluded to by Lorrain via his inclusion of dark clouds in the top right of the painting. Lorrain's choice of scene carries additional subtext, as his patron commissioned the painting after returning to Rome from an extensive period of work abroad, with Trojan Women thus evoking thoughts of an end to wandering. According to the Met, the painting later inspired British maritime artist J. M. W. Turner. References 1643 paintings Paintings in the collection of the Metropolitan Museum of Art Paintings by Claude Lorrain Maritime paintings Paintings based on the Aeneid
419859
https://en.wikipedia.org/wiki/Classical%20test%20theory
Classical test theory
Classical test theory (CTT) is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. It is a theory of testing based on the idea that a person's observed or obtained score on a test is the sum of a true score (error-free score) and an error score. Generally speaking, the aim of classical test theory is to understand and improve the reliability of psychological tests. Classical test theory may be regarded as roughly synonymous with true score theory. The term "classical" refers not only to the chronology of these models but also contrasts with the more recent psychometric theories, generally referred to collectively as item response theory, which sometimes bear the appellation "modern" as in "modern latent trait theory". Classical test theory as we know it today was codified by Novick (1966) and described in classic texts such as Lord & Novick (1968) and Allen & Yen (1979/2002). The description of classical test theory below follows these seminal publications. History Classical test theory was born only after the following three achievements or ideas were conceptualized: 1. a recognition of the presence of errors in measurements, 2. a conception of that error as a random variable, 3. a conception of correlation and how to index it. In 1904, Charles Spearman was responsible for figuring out how to correct a correlation coefficient for attenuation due to measurement error and how to obtain the index of reliability needed in making the correction. Spearman's finding is thought by some to be the beginning of Classical Test Theory (Traub, 1997). Others who influenced the Classical Test Theory framework include George Udny Yule, Truman Lee Kelley, Fritz Kuder and Marion Richardson (involved in making the Kuder–Richardson Formulas), Louis Guttman, and, most recently, Melvin Novick, not to mention others over the next quarter century after Spearman's initial findings. Definitions Classical test theory assumes that each person has a true score, T, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an observed score, X. It is assumed that the observed score equals the true score plus some error: X = T + E, where X is the observed score, T the true score and E the error score. Classical test theory is concerned with the relations between the three variables X, T, and E in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of reliability. The reliability of the observed test scores X, which is denoted as ρ_XT², is defined as the ratio of the true score variance σ_T² to the observed score variance σ_X²: ρ_XT² = σ_T² / σ_X². Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, this is equivalent to ρ_XT² = σ_T² / (σ_T² + σ_E²). This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the absolute value of the correlation between true and observed scores.
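The defining ratio of true score variance to observed score variance can be checked numerically with a small simulation. The following sketch uses arbitrary normal distributions and a made-up sample size; it is only meant to make the definitions above concrete, not to model any real test.

# Sketch: simulate true scores T and errors E, then verify that reliability,
# defined as var(T) / var(X), matches the squared correlation between T and X.
# All distributions and variances here are arbitrary illustrative choices.

import random
import statistics

random.seed(1)
N = 100_000
true_scores = [random.gauss(50, 10) for _ in range(N)]      # var(T) ~ 100
errors      = [random.gauss(0, 5)  for _ in range(N)]       # var(E) ~ 25
observed    = [t + e for t, e in zip(true_scores, errors)]   # X = T + E

reliability = statistics.variance(true_scores) / statistics.variance(observed)
corr_tx = statistics.correlation(true_scores, observed)      # requires Python 3.10+

print(f"var(T)/var(X) = {reliability:.3f}")   # about 100 / 125 = 0.8
print(f"corr(T, X)^2  = {corr_tx ** 2:.3f}")  # should be nearly the same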
Evaluating tests and scores: Reliability Reliability cannot be estimated directly since that would require one to know the true scores, which according to classical test theory is impossible. However, estimates of reliability can be acquired by diverse means. One way of estimating reliability is by constructing a so-called parallel test. The fundamental property of a parallel test is that it yields the same true score and the same observed score variance as the original test for every individual. If we have parallel tests x and x', this means that they have equal true scores and equal error variances for every individual. Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof). Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's α. Consider a test consisting of k items U_j, j = 1, ..., k. The total test score for individual i is defined as the sum of the individual item scores, X_i = U_i1 + U_i2 + ... + U_ik. Then Cronbach's alpha equals α = (k / (k − 1)) × (1 − Σ_j σ²(U_j) / σ_X²). Cronbach's α can be shown to provide a lower bound for reliability under rather mild assumptions. Thus, the reliability of test scores in a population is always higher than the value of Cronbach's α in that population. Thus, this method is empirically feasible and, as a result, it is very popular among researchers. Calculation of Cronbach's α is included in many standard statistical packages such as SPSS and SAS. As has been noted above, the entire exercise of classical test theory is done to arrive at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question. The general idea is that, the higher reliability is, the better. Classical test theory does not say how high reliability is supposed to be. Too high a value for α, say over .9, indicates redundancy of items. Around .8 is recommended for personality research, while .9+ is desirable for individual high-stakes testing. These 'criteria' are not based on formal arguments, but rather are the result of convention and professional practice. The extent to which they can be mapped to formal principles of statistical inference is unclear. Evaluating items: P and item-total correlations Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (proportion) and the item-total correlation (point-biserial correlation coefficient). The P-value represents the proportion of examinees responding in the keyed direction, and is typically referred to as item difficulty. The item-total correlation provides an index of the discrimination or differentiating power of the item, and is typically referred to as item discrimination. In addition, these statistics are calculated for each response of the oft-used multiple choice item, and are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such valuable analysis is provided by specially designed psychometric software. Alternatives Classical test theory is an influential theory of test scores in the social sciences. In psychometrics, the theory has been superseded by the more sophisticated models in item response theory (IRT) and generalizability theory (G-theory).
However, IRT is not included in standard statistical packages like SPSS, although SAS can estimate IRT models via PROC IRT and PROC MCMC, and there are IRT packages for the open source statistical programming language R (e.g., CTT). While commercial packages routinely provide estimates of Cronbach's α, specialized psychometric software may be preferred for IRT or G-theory. However, general statistical packages often do not provide a complete classical analysis (Cronbach's α is only one of many important statistics), and in many cases, specialized software for classical analysis is also necessary. Shortcomings One of the most important or well-known shortcomings of classical test theory is that examinee characteristics and test characteristics cannot be separated: each can only be interpreted in the context of the other. Another shortcoming lies in the definition of reliability that exists in classical test theory, which states that reliability is "the correlation between test scores on parallel forms of a test". The problem with this is that there are differing opinions of what parallel tests are. Various reliability coefficients provide either lower bound estimates of reliability or reliability estimates with unknown biases. A third shortcoming involves the standard error of measurement. The problem here is that, according to classical test theory, the standard error of measurement is assumed to be the same for all examinees. However, as Hambleton explains in his book, scores on any test are unequally precise measures for examinees of different ability, thus making the assumption of equal errors of measurement for all examinees implausible (Hambleton, Swaminathan, Rogers, 1991, p. 4). A fourth, and final, shortcoming of classical test theory is that it is test oriented, rather than item oriented. In other words, classical test theory cannot help us make predictions of how well an individual or even a group of examinees might do on a test item. See also Educational psychology Standardized test Notes References Allen, M.J., & Yen, W. M. (2002). Introduction to Measurement Theory. Long Grove, IL: Waveland Press. Novick, M.R. (1966). The axioms and principal results of classical test theory. Journal of Mathematical Psychology, 3(1), 1–18. Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley Publishing Company. Further reading External links International Test Commission article on Classical Test Theory TAP: free software for Classical Test Theory Iteman: software for visual reporting with Classical Test Theory Lertap: Excel-based software for Classical Test Theory CITAS: Excel-based software for Classical Test Theory jMetrik: Software for Classical Test Theory Psychometrics Statistical theory Comparison of assessments Industrial and organizational psychology Statistical reliability
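Referring back to the formulas above, Cronbach's α and the classical item statistics (P-values and item-total correlations) can be computed directly from an item-score matrix. The following sketch uses a tiny, made-up 0/1 data set purely for illustration; it is not a substitute for the specialized psychometric software mentioned above.

# Sketch: Cronbach's alpha, item P-values (difficulty) and uncorrected item-total
# correlations for a small invented 0/1 item-score matrix.
# Rows are examinees, columns are items.

import statistics

scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
]

k = len(scores[0])
totals = [sum(row) for row in scores]

# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
item_vars = [statistics.variance([row[j] for row in scores]) for j in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))

p_values = [statistics.mean([row[j] for row in scores]) for j in range(k)]
item_total = [statistics.correlation([row[j] for row in scores], totals)  # Python 3.10+
              for j in range(k)]

print(f"Cronbach's alpha = {alpha:.3f}")
print("item difficulties (P):", [round(p, 2) for p in p_values])
print("item-total correlations:", [round(r, 2) for r in item_total])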
333053
https://en.wikipedia.org/wiki/David%20Bradley%20%28engineer%29
David Bradley (engineer)
David J. Bradley (born 4 January 1949) is one of the twelve engineers who worked on the original IBM PC, developing the computer's ROM BIOS code. Bradley is credited for implementing the "Control-Alt-Delete" (Ctrl-Alt-Del) key combination that was used to reboot the computer. Bradley joined IBM in 1975 after earning his doctorate in electrical engineering from Purdue University with a dissertation on computer architectures. Education Bachelors, Electrical Engineering, University of Dayton (Ohio), 1971. Master of Science, Electrical Engineering, Purdue University, 1972. PhD, Electrical Engineering, Purdue University, 1975. Control-Alt-Delete According to Bradley, Control-Alt-Delete was not intended to be used by end users, originally—it was meant to be used by people writing programs or documentation, so that they could reboot their computers without powering them down. This was useful since after a computer was powered down, it was necessary to wait a few seconds before powering it up again to avoid potential damage to the power supply and hard drive. Since software developers and technical writers would need to restart a computer many times, this key combination was a big time-saver. David Bradley and Mel Hallerman chose this key combination because it is practically impossible to accidentally press this combination of keys on a standard original IBM PC keyboard. However, the key combination was described in IBM's technical reference documentation and thereby revealed to the general public. At the 20th anniversary of the IBM PC on August 8, 2001 at The Tech Museum, while on a panel with Bill Gates, Bradley said, "I have to share the credit. I may have invented it [Control-Alt-Delete], but I think Bill made it famous." Multiple-key reboot had been introduced by Exidy, Inc., in 1978, for its Sorcerer Z80 computer. It provided two Reset buttons, which must be pressed simultaneously to achieve reboot. In March 1980, the multiple-key reboot concept had been introduced for the Apple II by Videx in its VideoTerm display card add-on, requiring Control-Reset, rather than Reset alone, to reboot the machine. The innovation was noted and well received at the time. Other accomplishments Bradley is the author of Assembly Language Programming for the IBM Personal Computer (Simon & Schuster, , January 1984), also released in French as Assembleur sur IBM PC (Dunod, ), Russian ("Radio" Publishing House, Moscow), and Bulgarian ("Technica" Publishing house, 1989). Bradley holds seven U.S. patents. Bradley has been adjunct professor of electrical and computer engineering at Florida Atlantic University and at North Carolina State University. Much of Bradley's career has been at IBM. Bradley received a B.E.E. degree in 1971 from the University of Dayton, (Ohio). He went on to Purdue University in West Lafayette, Indiana, where he completed an M.S. degree in 1972 and Ph.D. in 1975, both in electrical engineering. Upon graduation he went to work for IBM in Boca Raton, Florida, as senior associate engineer. He worked on the Series/1 system. In 1978 he developed the I/O system for the System/23 Datamaster. In 1980 Bradley was one of twelve engineers developing the first IBM Personal Computer. Bradley developed the ROM BIOS. That got him promoted to manage the BIOS and diagnostics for the IBM PC/XT. In 1983 Bradley formed the Personal Systems Architecture Department. In 1984 he helped manage development of the Personal System/2 Model 30. In November 1987 Bradley became manager of advanced processor design. 
His group developed the 486/25 Power Platform and the PS/2 Models 90 and 95. In 1991 he became manager of systems architecture for the Entry Systems Technology group. In 1992 he became the architecture manager for the group that developed a personal computer using the PowerPC RISC microprocessor. In 1993 he returned to be the manager of architecture in the PC group. On January 30, 2004, Bradley retired from IBM. Bradley wrote about the development of the IBM PC, including Control-Alt-Delete, in the August 2011 issue of the IEEE's Computer magazine. References External links The History of IBM - about.com TechTV Summer PC, Geraldo Rivera, IBM Anniversary Living people 1949 births IBM employees North Carolina State University faculty University of Dayton alumni
27262460
https://en.wikipedia.org/wiki/Pat%20Curran%20%28fighter%29
Pat Curran (fighter)
Pat Curran (born August 31, 1987) is a retired American mixed martial artist, and a former two-time Bellator Featherweight World Champion. He is the cousin of World Extreme Cagefighting veteran Jeff Curran and fought primarily with Xtreme Fighting Organization (XFO) before signing with Bellator, where he is the winner of Bellator Season Two Lightweight Tournament and the Bellator 2011 Summer Series. From July 2011 till April 2014 and January 2018 till July 2018, Curran was ranked in the top #10 Featherweights in the world by Fight Matrix, rising to as high as #2 from July 2012 till April 2013. Background Curran went to Olympic Heights Community High School where he was a standout wrestler. Mixed martial arts career Curran spent much of his youth crafting his skills on the playground with boxing gloves. Curran was a Florida High School wrestling stand-out who went on to study Brazilian jiu-jitsu with his cousin Jeff Curran at the young age of 17. It was during this summer of training that Curran decided to pursue mixed martial arts as a career. Curran made his debut in 2008, against Tony Hervey, a future King of the Cage Lightweight Champion. Curran won the bout. Curran had his second professional bout against Lazar Stojadinovic, who previously had a dominating performance over Curran's teammate Ben Miller. The bout was featured on the Tapout reality series on the Versus Channel, giving Curran his first mainstream performance. TapouT Curran was featured on the Tapout reality series on the TV channel Versus. Curran trains under his cousin, former WEC Featherweight title challenger Jeff Curran at his gym in Crystal Lake, Illinois. On the show, between training sessions, Pat got his first tattoo and he, Jeff and the crew did an autograph session at a Chicago clothing store. At the official weigh-in, Pat gets his first look at his opponent, Lazar Stojadinovic. Lazar has fought another of Jeff's students, so Pat reviews footage of that fight. The fighters were scheduled to meet at XFO 23 in Crystal Lake. Pat went on to dominate much of the first round with his grappling and ground and pound. He similarly dominated rounds two and three and won a unanimous decision from the judges. Post-TapouT Curran returned to the XFO for his third professional fight, where he defeated Amir Khillah by Decision (Split). His next two fights took place outside of the XFO banner, where he went 1-1 (the loss coming at the hands of UFC competitor Darren Elkins), before returning to XFO to defeat Daniel Mason-Straus by KO (Punches) in the second. After winning another with XFO, he lost again competing outside of the promotion, this time to Charles Diaz by Decision (Split). Once again, returning to XFO, Curran next faced Luke Gwaltney, winning by TKO in the first. On October 10, 2009 he faced Jay Ellis in an XFO event, where he defeated his opponent in under a minute, submitting him with a Guillotine Choke. He was defeated in his next fight with XFO, at the hands of Travis Perzynski, losing by rear naked choke in the second round. Curran took part in a Trojan MMA event, where he defeated former Cage Rage British Lightweight Championship contender Robbie Olivier on February 27, 2010 via unanimous decision. Bellator MMA It was announced that Curran would be a participant in the Bellator Season Two Lightweight Tournament. His first round match-up was announced to be UFC Welterweight Champion Georges St-Pierre's protégé, Mike Ricci and the fight took place at Bellator 14. 
A crowd favorite, the Chicago-area fighter remained patient through a few strategic minutes in which the fighters traded low kicks and jabs while finding their range. However, a little more than midway through the round, Curran connected on a powerful right hook that sent opponent Mike Ricci crashing to the mat where he stayed unconscious for a few uncomfortable minutes after a few follow up punches. The bout was called, and Curran won by knockout by 3:01 in the first round. His second bout took place at Bellator 17 against former UFC veteran Roger Huerta. Huerta was heavily favored going in to the fight, but Curran impressed over three rounds and went home with a unanimous decision, winning 29-28 on all three judges scorecards. With that victory, Curran moved on to face Toby Imada, who was also victorious that night, in the Season 2 Lightweight Tournament Final. At Bellator 21 Curran defeated Imada via split decision in a close fight, becoming the Bellator Season Two Lightweight Tournament Champion. A match-up against Bellator Season One Champion Eddie Alvarez was anticipated for Season Three, but Curran was forced out of the contest due to a shoulder injury. Fighting in his place, Roger Huerta took on Alvarez in a non-title bout at Bellator 33 and lost via TKO due to a doctor stoppage. Curran's bout with Alvarez took place on April 2, 2011 at Bellator 39. He lost the fight via unanimous decision, with the judges scoring it 49-46, 50-45 and 50-45 in favor of Alvarez. Curran dropped to his original fight weight of 145 lbs to enter the Bellator 2011 Summer Series Featherweight Tournament. In his quarterfinal bout Curran submitted Luis Palomino via Peruvian necktie in the first round. Curran faced Ronnie Mann in the Featherweight Tournament Semifinal at Bellator 47. He won the fight via unanimous decision, advancing him to the Bellator Featherweight Tournament Finals. Curran faced former Sengoku Featherweight Champion and Pancrase Featherweight Champion Marlon Sandro at Bellator 48 for the Bellator Featherweight Tournament Final. Curran defeated Sandro four minutes into the second round via a highlight reel head-kick KO, in the process becoming the first person to win Bellator tournaments in two different weight classes. Curran faced Joe Warren at Bellator 60. Curran defeated Warren by a brutal KO in round three to win the Bellator Featherweight Championship. In his first title defense, Curran faced Patricio Freire at Bellator 85 on January 17, 2013. He successfully defended his title for the first time, winning the fight via split decision. Curran was expected to defend his title against Bellator Season Six Featherweight Tournament winner, Daniel Mason-Straus, at Bellator 95. However, Straus broke his hand and was forced out of the bout. Straus was replaced by Bellator Season Seven Featherweight Tournament winner, Shahbulat Shamhalaev. Curran was successful in his second title defense, defeating Shamalaev via first-round submission. Curran put his Bellator Featherweight Championship on the line on November 2, 2013 at Bellator 106 in a rematch against Daniel Mason-Straus. He was unsuccessful with his third title defense, losing the fight by unanimous decision. Curran fought current champ Daniel Mason-Straus for the third time at Bellator 112 in March. 
An instant rematch drew criticism for Bellator from MMA pundits and fans, as many felt that Curran, who had previously lost his last match to Straus and not won a tournament for a rematch, had not done enough to earn a title shot over waiting tournament winners Patricio Freire and Magomedrasul Khasbulaev. He won the bout via rear-naked choke submission in the fifth round thus, ending their trilogy and winning the Bellator Featherweight Championship for the second time. Curran was scheduled to make the first defense of his new title in a rematch with Patricio Freire on June 6, 2014 at Bellator 121. However, on May 21, it was announced that Curran had pulled out of the bout due to a calf injury. The rematch eventually took place at Bellator 123 on September 5, 2014. Curran lost the bout to Freire by unanimous decision. Curran faced Daniel Weichel on February 13, 2015 at Bellator 133. He lost the fight via split decision. Curran was expected to face Goiti Yamauchi at Bellator 139 on June 26, 2015. However, Yamauchi pulled out of the fight due to injury. Curran instead faced Emmanuel Sanchez at the event. He won the fight via unanimous decision. Curran was expected to face Justin Lawrence at Bellator 145 on November 6, 2015. Curran pulled out of the bout due to a knee injury. Curran faced Georgi Karakhanyan at Bellator 155 on May 20, 2016. In the first round, Curran knocked down his opponent with a left hand. Karakhanyan managed to recover and fight on, but in the end Curran won via unanimous decision. Curran was expected to face John Teixeira at Bellator 167 on December 3, 2016, however, Curran pulled out of the bout due to an injury and was replaced by Justin Lawrence. The bout with Teixeira was rescheduled for Bellator 184 on October 6, 2017. Curran won the fight by unanimous decision. After a year and a half long layoff, Curran returned to face rising Featherweight prospect A.J. McKee at Bellator 221 on May 11, 2019. He lost the fight via unanimous decision. In the heels of the defeat, Curran signed a contract extension with Bellator. In the first round of the tournament, Curran got Ádám Borics as his opponent. The fight was held on 7 September 2019 on Bellator 226. At the end of round 2, a flying knee made an impact again and sent Curran to the floor. At the end of a huge amount of punches on the ground, the referee stopped the fight in the last second of the 2nd round. Curran announced on October 27, 2020 that he was retiring from MMA. Championships and awards Bellator MMA Bellator Featherweight World Championship (Two times) Two successful title defenses Bellator Season Two Lightweight Tournament Winner Bellator 2011 Summer Series Featherweight Tournament Winner First Fighter to win tournaments in multiple weight classes Two successful title defenses Inside Fights Knockout of the Year (2012) Sherdog 2011 All-Violence Second Team MMAJunkie.com 2014 March Submission of the Month vs. Daniel Mason-Straus Mixed martial arts record |- |Loss |align=center| 23–9 |Ádám Borics |TKO (punches) |Bellator 226 | |align=center|2 |align=center|4:59 |San Jose, California, United States | |- |Loss |align=center|23–8 |A. J. 
McKee |Decision (unanimous) |Bellator 221 | |align=center|3 |align=center|5:00 |Rosemont, Illinois, United States | |- |Win |align=center|23–7 |John Macapá |Decision (unanimous) |Bellator 184 | |align=center|3 |align=center|5:00 |Thackerville, Oklahoma, United States | |- | Win | align=center| 22–7 | Georgi Karakhanyan | Decision (unanimous) | Bellator 155 | | align=center| 3 | align=center| 5:00 | Boise, Idaho, United States | |- |Win |align=center|21–7 |Emmanuel Sanchez |Decision (unanimous) |Bellator 139 | |align=center|3 |align=center|5:00 |Mulvane, Kansas, United States | |- |Loss |align=center|20–7 |Daniel Weichel |Decision (split) |Bellator 133 | |align=center|3 |align=center|5:00 |Fresno, California, United States | |- |Loss |align=center|20–6 |Patrício Pitbull |Decision (unanimous) |Bellator 123 | |align=center|5 |align=center|5:00 |Uncasville, Connecticut, United States | |- |Win |align=center|20–5 | Daniel Straus |Submission (rear-naked choke) |Bellator 112 | |align=center|5 |align=center|4:46 |Hammond, Indiana, United States | |- |Loss |align=center| 19–5 |Daniel Straus |Decision (unanimous) |Bellator 106 | |align=center|5 |align=center|5:00 |Long Beach, California, United States | |- |Win |align=center|19–4 |Shahbulat Shamhalaev |Technical Submission (guillotine choke) |Bellator 95 | |align=center|1 |align=center|2:38 |Atlantic City, New Jersey, United States | |- |Win |align=center|18–4 |Patrício Pitbull |Decision (split) |Bellator 85 | |align=center|5 |align=center|5:00 |Irvine, California, United States | |- |Win |align=center|17–4 |Joe Warren |KO (knees and punches) |Bellator 60 | |align=center|3 |align=center|1:25 |Hammond, Indiana, United States | |- |Win |align=center|16–4 |Marlon Sandro |KO (head kick) |Bellator 48 | |align=center|2 |align=center|4:00 |Uncasville, Connecticut, United States | |- |Win |align=center|15–4 |Ronnie Mann |Decision (unanimous) |Bellator 47 | |align=center|3 |align=center|5:00 |Rama, Ontario, Canada | |- |Win |align=center|14–4 |Luis Palomino |Submission (Peruvian necktie) |Bellator 46 | |align=center|1 |align=center|3:49 |Hollywood, Florida, United States | |- |Loss |align=center|13–4 |Eddie Alvarez |Decision (unanimous) |Bellator 39 | |align=center|5 |align=center|5:00 |Uncasville, Connecticut, United States | |- |Win |align=center|13–3 |Toby Imada |Decision (split) |Bellator 21 | |align=center|3 |align=center|5:00 |Hollywood, Florida, United States | |- |Win |align=center|12–3 |Roger Huerta |Decision (unanimous) |Bellator 17 | |align=center|3 |align=center|5:00 |Boston, Massachusetts, United States | |- |Win |align=center|11–3 |Mike Ricci |KO (punch) |Bellator 14 | |align=center|1 |align=center|3:01 |Chicago, Illinois, United States | |- |Win |align=center|10–3 |Robbie Olivier |Decision (unanimous) |Trojan MMA: Trojan Warfare | |align=center|3 |align=center|5:00 |Exeter, England, United Kingdom | |- |Loss |align=center|9–3 |Travis Perzynski |Submission (rear-naked choke) |XFO 34: Curran vs. 
Hori | |align=center|2 |align=center|4:38 |Lakemoor, Illinois, United States | |- |Win |align=center|9–2 |Jay Ellis |Submission (guillotine choke) |XFO 32 | |align=center|1 |align=center|0:58 |New Munster, Wisconsin, United States | |- |Win |align=center|8–2 |Lucas Gwaltney |TKO (punches) |XFO 31: Outdoor War 5 | |align=center|1 |align=center|1:32 |Island Lake, Illinois, United States | |- |Loss |align=center|7–2 |Charles Diaz |Decision (split) |Elite Fighting Challenge 4 | |align=center|3 |align=center|5:00 |Norfolk, Virginia, United States | |- |Win |align=center|7–1 |Mike Pickett |Submission (rear-naked choke) |XFO 30 | |align=center|1 |align=center|1:56 |New Munster, Wisconsin, United States | |- |Win |align=center|6–1 |Daniel Straus |KO (punches) |XFO 29 | |align=center|2 |align=center|1:31 |Lakemoor, Illinois, United States | |- |Win |align=center|5–1 |Ramiro Hernandez |Decision (unanimous) |Adrenaline MMA 2: Miletich vs. Denny | |align=center|3 |align=center|5:00 |Moline, Illinois, United States | |- |Loss |align=center|4–1 |Darren Elkins |Decision (unanimous) |C3: Domination | |align=center|3 |align=center|5:00 |Hammond, Indiana, United States | |- |Win |align=center|4–0 |Jay Ellis |Submission (rear-naked choke) |XFO 25 | |align=center|1 |align=center|0:51 |Island Lake, Illinois, United States | |- |Win |align=center|3–0 |Amir Khillah |Decision (unanimous) |XFO 25: Outdoor War 4 | |align=center|3 |align=center|5:00 |Island Lake, Illinois, United States | |- |Win |align=center|2–0 |Lazar Stojadinovic |Decision (unanimous) |XFO 23: Title Night | |align=center|3 |align=center|5:00 |Lakemoor, Illinois, United States | |- |Win |align=center|1–0 |Tony Hervey |Submission (rear-naked choke) |XFO 22: Rising Star | |align=center|1 |align=center|1:24 |Crystal Lake, Illinois, United States | |- See also List of mixed martial artists References External links Bellator Profile 1987 births Living people Featherweight mixed martial artists Lightweight mixed martial artists Mixed martial artists utilizing wrestling American male mixed martial artists Bellator MMA champions
45370424
https://en.wikipedia.org/wiki/Affinity%20Designer
Affinity Designer
Affinity Designer is a vector graphics editor developed by Serif for macOS, iPadOS, and Microsoft Windows. It is part of the "Affinity trinity" alongside Affinity Photo and Affinity Publisher. Affinity Designer is available for purchase directly from the company website and in the Mac App Store, iOS App Store, and the Microsoft Store. Functionality Affinity Designer serves as a successor to Serif's own DrawPlus software, which the company discontinued in August 2017 in order to focus on the Affinity product range. It has been described as an Adobe Illustrator alternative, and is compatible with common graphics file formats, including Adobe Illustrator (AI), Scalable Vector Graphics (SVG), Adobe Photoshop (PSD), Portable Document Format (PDF), and Encapsulated PostScript (EPS) formats. The application can also import data from some Adobe FreeHand files (specifically versions 10 & MX). Affinity Designer's core functions include vector pen and shape-drawing tools, support for custom vector and raster brushes (including the ability to import Adobe Photoshop (ABR) brushes), dynamic symbols, stroke stabilization, text style management, and vector/pixel export options. Affinity Designer provides non-destructive editing features across unlimited layers, with pan and zoom at 60fps, and real-time views for effects and transformations. It supports the RGB, RGB Hex, LAB, CMYK and Grayscale color models, along with PANTONE color swatches and an end-to-end CMYK workflow with ICC color management, and 16-bit per channel editing. Development Affinity Designer began as a vector graphics editor solely for macOS. It was developed entirely from scratch for this operating system, allowing it to leverage core native technologies such as OpenGL, Grand Central Dispatch, and Core Graphics. The first version was released in October 2014, making it the first of the Affinity apps to be released by Serif (and their first macOS release). At that time, Serif's vector graphics application for Windows was DrawPlus; however, following the release of Affinity Designer for Windows, this product has now been discontinued. Version 1.2, released in April 2015, introduced new tools and features, such as a corner tool and a pixel-alignment mode for GUI design tasks. In December 2015, version 1.4 then introduced new features for managing artboards and printing. With version 1.5 in October 2016, the application received multiple new features, including symbols, constraints, asset management and text styles. The application began branching out to other platforms in November 2016, when it first launched for Microsoft Windows. Version 1.6 was released in November 2017, introducing performance improvements and alternative GUI display mode. The first release of a separate iPad version of Affinity Designer took place in July 2018. Version 1.7 was released in June 2019 adding some key features such as HDR support, unlimited strokes and fills to a single shape, new point transform tool, new transform mode in Node tool, Lasso selection of nodes, new sculpt mode added to pencil, and also some big performance improvements. Version 1.8, released in February 2020, added the ability for users to define their own document templates and keyboard shortcuts, and a built-in panel for adding stock images. Reception Affinity Designer was selected as a runner-up in Apple's "Best of 2014" list of Mac App Store and iTunes Store content in the macOS app category. It also was one of the winners of the 2015 Apple Design Award. 
In 2018, the Windows version of Affinity Designer won 'Application Creator of the Year' at the Windows Developer Awards (part of Microsoft Build 2018). See also Comparison of vector graphics editors References Further reading Affinity Designer Workbook. Nottingham: Serif Europe Ltd., 2016. External links Vector graphics editors MacOS graphics software Macintosh graphics software Windows graphics-related software 2014 software
10058578
https://en.wikipedia.org/wiki/NMEA%202000
NMEA 2000
NMEA 2000, abbreviated to NMEA2k or N2K and standardised as IEC 61162-3, is a plug-and-play communications standard used for connecting marine sensors and display units within ships and boats. Communication runs at 250 kilobits-per-second and allows any sensor to talk to any display unit or other device compatible with NMEA 2000 protocols. Electrically, NMEA 2000 is compatible with the Controller Area Network ("CAN Bus") used on road vehicles and fuel engines. The higher-level protocol format is based on SAE J1939, with specific messages for the marine environment. Raymarine SeaTalk 2, Raymarine SeaTalkNG, Simrad Simnet, and Furuno CAN are rebranded implementations of NMEA 2000, though may use physical connectors different from the standardised DeviceNet 5-pin A-coded M12 screw connector, all of which are electrically compatible and can be directly connected. The protocol is used to create a network of electronic devices—chiefly marine instruments—on a boat. Various instruments that meet the NMEA 2000 standard are connected to one central cable, known as a backbone. The backbone powers each instrument and relays data among all of the instruments on the network. This allows one display unit to show many different types of information. It also allows the instruments to work together, since they share data. NMEA 2000 is meant to be "plug and play" to allow devices made by different manufacturers to communicate with each other. Examples of marine electronics devices to include in a network are GPS receivers, auto pilots, wind instruments, depth sounders, navigation instruments, engine instruments, and nautical chart plotters. The interconnectivity among instruments in the network allows, for example, the GPS receiver to correct the course that the autopilot is steering. History The NMEA 2000 standard was defined by, and is controlled by, the US-based National Marine Electronics Association (NMEA). Although the NMEA divulges some information regarding the standard, it claims copyright over the standard and thus its full contents are not publicly available. For example, the NMEA publicizes which messages exist and which fields they contain, but they do not disclose how to interpret the values contained in those fields. However, enthusiasts are slowly making progress in discovering these PGN definitions. Functionality NMEA 2000 connects devices using Controller Area Network (CAN) technology originally developed for the auto industry. NMEA 2000 is based on the SAE J1939 high-level protocol, but defines its own messages. NMEA 2000 devices and J1939 devices can be made to co-exist on the same physical network. NMEA 2000 (IEC 61162-3) can be considered a successor to the NMEA 0183 (IEC 61162-1) serial data bus standard. It has a significantly higher data rate (250k bits/second vs. 4800 bits/second for NMEA 0183). It uses a compact binary message format as opposed to the ASCII serial communications protocol used by NMEA 0183. Another improvement is that NMEA 2000 supports a disciplined multiple-talker, multiple-listener data network whereas NMEA 0183 requires a single-talker, multiple-listener (simplex) serial communications protocol. Network construction The NMEA 2000 network, like the SAE J1939 network on which it is based, is organized around a bus topology, and requires a single 120Ω termination resistor at each end of the bus. (The resistors are in parallel, so a properly terminated bus should have a total resistance of 60Ω). The maximum distance for any device from the bus is six metres. 
The maximum backbone cable length is 250 meters (820 feet) with a Mini cable backbone or 100 meters (328 feet) with a Micro cable backbone. Cabling and interconnect The only cabling standard approved by the NMEA for use with NMEA 2000 networks is the DeviceNet cabling standard, which is controlled by the Open DeviceNet Vendors Association. Such cabling systems are permitted to be labeled "NMEA 2000 Approved". The DeviceNet standard defines levels of shielding, conductor size, weather resistance, and flexibility which are not necessarily met by other cabling solutions marketed as "NMEA 2000" compatible. There are two sizes of cabling defined by the DeviceNet/NMEA 2000 standard. The larger of the two sizes is denoted as "Mini" (or alternatively, "Thick") cable, and is rated to carry up to 8 amperes of power supply current. The smaller of the two sizes is denoted as "Micro" (or alternatively, "Thin") cable using the M12 5-pin barrel connector specified in IEC 61076-2-101, and is rated to carry up to 3 amperes of power supply current. Mini cable is primarily used as a "backbone" (or "trunk") for networks on larger vessels (typically with lengths of 20 m and above), with Micro cable used for connections between the network backbone and the individual components. Networks on smaller vessels often are constructed entirely of Micro cable and connectors. An NMEA 2000 network is not electrically compatible with an NMEA 0183 network, and so an interface device is required to send messages between devices on the different types of network. An adapter is also required if NMEA 2000 messages are to be received by or transmitted from a PC. Message format and parameter group numbers (PGNs) In accordance with the SAE J1939 protocol, NMEA 2000 messages are sent as packets that consist of a header followed by (typically) 8 bytes of data. The header for a message specifies the transmitting device, the device to which the message was sent (which may be all devices), the message priority, and the PGN (Parameter Group Number). The PGN indicates which message is being sent, and thus how the data bytes should be interpreted to determine the values of the data fields that the message contains. A parameter group definition may describe a data record that consists of more data than can be contained within a single CAN frame. NMEA 2000 transfer methods include transmitting single-frame parameter groups and two methods of transmitting multi-frame parameter groups. These transfer methods are compared below: The Multi-Packet protocol specified in ISO 11783-3 provides for the transmission of multi-frame parameter groups up to 1,785 bytes. The protocol encapsulates the parameter group in a transport protocol, either globally or to a specific address. In the case of address-specific transfer (RTS/CTS), the receiving device can control the data flow in accordance with its available resources. In both cases (RTS/CTS versus BAM), the message being transferred is announced in the first message. In the case of RTS/CTS, the receiver can refuse the message; in the case of a BAM, the message can simply be ignored. The Fast Packet protocol defined in NMEA 2000 provides a means to stream up to 223 bytes of data, with the advantage that each frame retains the parameter group identity and priority. The first frame transmitted uses 2 bytes to identify sequential Fast Packet parameter groups and sequential frames within a single parameter group transmission.
The first byte contains a sequence counter to distinguish consecutive transmissions of the same parameter group and a frame counter set to frame zero. The second byte in the first frame identifies the total size of the parameter group to follow. Successive frames use just a single data byte for the sequence counter and the frame counter. Because many of the NMEA 2000 parameter groups exceed 8 bytes but do not require the 1,785-byte capacity of multi-packet, the default method of transmitting multi-frame parameter groups in NMEA 2000 is the Fast Packet protocol. Regardless of which protocol is used, multi-frame parameter groups are sent on a frame-by-frame basis and may be interspersed with other higher-priority parameter groups using either protocol, or even single-frame parameter groups. Each device is responsible for reassembling the parameter group once all the frames for the parameter group are transmitted. Device certification Devices go through a certification process overseen by the NMEA, and are permitted to display the "NMEA 2000 Certified" logo once they have completed the certification process. The certification process does not guarantee data content; that is the responsibility of the manufacturers. However, the certification process does assure that products from different manufacturers exchange data in a compatible way and that they can coexist on a network. NMEA 2000 and proprietary networks Several manufacturers, including Simrad, Raymarine, Stowe, and BEP, have their own proprietary networks that are compatible with or akin to NMEA 2000. Simrad's is called SimNet, Raymarine's is called SeaTalk NG, Stowe's is called Dataline 2000, and BEP's is called CZone. Some of these, such as SimNet and SeaTalk NG, are standard NMEA 2000 networks but use non-standard connectors and cabling; adapters are available to convert to standard NMEA 2000 connectors, or the user can simply remove the connector and make a direct connection. Trademarks The term "NMEA 2000" is a registered trademark of the National Marine Electronics Association. Devices which are not "NMEA 2000 Certified" may not legally use the NMEA 2000 trademark in their advertising. Manufacturers The following are some of the companies that have registered with the NMEA for the purpose of producing NMEA 2000 certified products: MarineCraft SAMYUNG ENC Carling Technologies Amphenol LTW Actisense Airmar Empirbus Furuno Garmin GME Standard Communications Honda Humminbird Quark-elec (UK) Icom Incorporated Lowrance Electronics Molex Maretron Navico Raymarine Simrad Yachting SeaStar Solutions (formerly Teleflex Marine) Tohatsu VeeThree Yacht Devices Yamaha Marine Hemisphere GNSS Warwick Control Technologies See also GPS Exchange Format Related standards NMEA 0183 NMEA OneNet, a future standard based on Ethernet Safety Standards using NMEA 2000 Automatic Identification System References External links Official NMEA 2000 Web Page List of NMEA 2000 Certified Products NMEA 2000 Parameter Group Numbers and Brief Description NMEA 2000 Parameter Group Descriptions (Messages) with (Longer) Field Description ODVA Planning and Installation Manual: DeviceNet Cable System - network wiring for DeviceNet networks, much of which applies to NMEA 2000 networks. Luft LA, Anderson L, Cassidy F. "NMEA 2000: A Digital Interface for the 21st Century" 2002-01-30 Global Positioning System Computer buses Marine electronics
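The Fast Packet framing described above can be made concrete with a small reassembly sketch. The byte layout used here (upper three bits of the first byte as the sequence counter, lower five bits as the frame counter) follows the commonly published reading of the protocol rather than the official NMEA document, and the example frames are invented values, not real parameter group data.

# Sketch of Fast Packet reassembly as described above: the first frame carries a
# sequence/frame counter byte and a total-length byte followed by 6 data bytes;
# later frames carry the counter byte followed by 7 data bytes each.

def reassemble_fast_packet(frames):
    """frames: list of 8-byte CAN payloads belonging to one parameter group."""
    total_len = None
    data = bytearray()
    for payload in frames:
        counter = payload[0]
        frame_no = counter & 0x1F          # lower 5 bits: frame counter
        if frame_no == 0:                  # first frame of the group
            total_len = payload[1]         # total size of the parameter group
            data.extend(payload[2:8])      # 6 data bytes
        else:                              # successive frames
            data.extend(payload[1:8])      # 7 data bytes each
    if total_len is None:
        raise ValueError("first frame (frame counter 0) missing")
    return bytes(data[:total_len])         # trim the padding in the last frame

# Example: a 16-byte parameter group split over three frames (values made up)
frames = [
    bytes([0x20, 16, 1, 2, 3, 4, 5, 6]),
    bytes([0x21, 7, 8, 9, 10, 11, 12, 13]),
    bytes([0x22, 14, 15, 16, 0xFF, 0xFF, 0xFF, 0xFF]),
]
print(reassemble_fast_packet(frames).hex())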
22515069
https://en.wikipedia.org/wiki/Smuxi
Smuxi
Smuxi is a cross-platform IRC client for the GNOME desktop inspired by Irssi. It pioneered, within a single graphical application, the separation of the frontend client from the backend engine that manages connections to IRC servers. Architecture Smuxi is based on the client–server model: The core application exists in the Smuxi back-end server which is connected to the Internet around-the-clock. The user interacts with one or more Smuxi front-end clients which are connected to the Smuxi back-end server. This way, the Smuxi back-end server can maintain connections to IRC servers even when all Smuxi front-end clients have been closed. The combination of screen and Irssi served as an example of this architecture. The Quassel IRC client has a similar design. Smuxi also supports the regular single application mode. This behaves like a typical IRC client with no separation of back-end and front-end. It utilizes a local IRC engine that is used by the local front-end client. Features Smuxi supports nick colors which are identical across channels and networks, a Caret Mode as seen in Firefox that allows users to navigate through the messages using the keyboard, theming with colors and fonts, configurable tray-icon support, optional stripping of colours and formatting, and convenience features like CTCP support, channel search and nickname completion. It has a tabbed document interface and support for multiple servers. Smuxi can attach to a local backend engine or a remote engine utilizing the Engine drop-down menu (similar to screen used with irssi). It also includes, in client-server operation, a visual marker showing the user's last activity in an open session, and ignore filtering. Distribution Smuxi can be found in many major free operating systems such as: Debian GNU/Linux (including Debian GNU/kFreeBSD), Ubuntu, Gentoo Linux, Arch Linux, openSUSE Community Repository, Frugalware Linux, Slackware, and FreeBSD. Smuxi is also available for Microsoft Windows XP, Vista, 7, 8.x and 10 (32-bit and 64-bit architectures). Smuxi is available for Mac OS X starting with the 0.8.9 release. Reception Smuxi was selected in "Hot Picks" by Linux Format Magazine in March 2009. It was also reviewed by TuxRadar, by Adam Overa in Tom's Hardware, and by Joe Brockmeier in LinuxToday. See also Comparison of IRC clients References External links Internet Relay Chat clients Free Internet Relay Chat clients Windows Internet Relay Chat clients Unix Internet Relay Chat clients MacOS Internet Relay Chat clients Cross-platform software Instant messaging clients that use GTK
11125049
https://en.wikipedia.org/wiki/JavaFX
JavaFX
JavaFX is a software platform for creating and delivering desktop applications, as well as rich web applications that can run across a wide variety of devices. JavaFX has support for desktop computers and web browsers on Microsoft Windows, Linux, and macOS, as well as mobile devices running iOS and Android. On desktops, JavaFX supports Windows Vista, Windows 7, Windows 8, Windows 10, macOS and Linux operating systems. Beginning with JavaFX 1.2, Oracle has released beta versions for OpenSolaris. On mobile, JavaFX Mobile 1.x is capable of running on multiple mobile operating systems, including Symbian OS, Windows Mobile, and proprietary real-time operating systems. JavaFX was intended to replace Swing as the standard GUI library for Java SE, but it has been dropped from new Standard Editions while Swing and AWT remain included, supposedly because JavaFX's marketshare has been "eroded by the rise of 'mobile first' and 'web first applications." With the release of JDK 11 in 2018, Oracle made JavaFX part of the OpenJDK under the OpenJFX project, in order to increase the pace of its development. Oracle support for JavaFX is also available for Java JDK 8 through March 2025. Open-source JavaFXPorts works for iOS (iPhone and iPad) and Android and embedded (Raspberry Pi); and the related commercial software created under the name "Gluon" supports the same mobile platforms with additional features plus desktop. This allows a single source code base to create applications for the desktop, iOS, and Android devices. Features JavaFX 1.1 was based on the concept of a "common profile" that is intended to span across all devices supported by JavaFX. This approach makes it possible for developers to use a common programming model while building an application targeted for both desktop and mobile devices and to share much of the code, graphics assets and content between desktop and mobile versions. To address the need for tuning applications on a specific class of devices, the JavaFX 1.1 platform includes APIs that are desktop or mobile-specific. For example, the JavaFX Desktop profile includes Swing and advanced visual effects. For the end user, the "Drag-to-Install" feature enables them to drag a JavaFX widget - an application residing in a website - and drop it onto their desktop. The application will not lose its state or context even after the browser is closed. An application can also be re-launched by clicking on a shortcut that gets created automatically on the user's desktop. This behavior is enabled out-of-the-box by the Java applet mechanism since the Java 6u10 update, and is leveraged by JavaFX from the underlying Java layer. Sun touts "Drag-to-Install" as opening up of a new distribution model and allowing developers to "break away from the browser". JavaFX 1.x included a set of plug-ins for Adobe Photoshop and Illustrator that enable advanced graphics to be integrated directly into JavaFX applications. The plug-ins generate JavaFX Script code that preserves the layers and structure of the graphics. Developers can then add animation or effects to the static graphics imported. There is also an SVG graphics converter tool (also known as Media Factory) that allows for importing graphics and previewing assets after the conversion to JavaFX format. Before version 2.0 of JavaFX, developers used a statically typed, declarative language called JavaFX Script to build JavaFX applications. Because JavaFX Script was compiled to Java bytecode, programmers could also use Java code instead. 
JavaFX applications could run on any desktop that could run Java SE. JavaFX 2.0 and later is implemented as a "native" Java library, and applications using JavaFX are written in "native" Java code. JavaFX Script has been scrapped by Oracle, but development is being continued in the Visage project. JavaFX 2.x does not support the Solaris operating system or mobile phones; however, Oracle plans to integrate JavaFX to Java SE Embedded 8, and Java FX for ARM processors is in developer preview phase. Sun Microsystems licensed a custom typeface called Amble for use on JavaFX-powered devices. The font family was designed by mobile user interface design specialists Punchcut and is available as part of the JavaFX SDK 1.3 Release. WebView WebView, the embedded browser component, supports the following HTML5 features: Canvas Media playback Form controls (except for <input type="color"> ) Editable content History maintenance Support for the <meter> and <progress> tags Support for the <details> and <summary> tags DOM MathML SVG CSS JavaScript Support for domain names written in national languages JavaFX Mobile JavaFX Mobile was the implementation of the JavaFX platform for rich web applications aimed at mobile devices. JavaFX Mobile 1.x applications can be developed in the same language, JavaFX Script, as JavaFX 1.x applications for browser or desktop, and using the same tools: JavaFX SDK and the JavaFX Production Suite. This concept makes it possible to share code-base and graphics assets for desktop and mobile applications. Through integration with Java ME, the JavaFX applications have access to capabilities of the underlying handset, such as the filesystem, camera, GPS, bluetooth or accelerometer. An independent application platform built on Java, JavaFX Mobile is capable of running on multiple mobile operating systems, including Android, Windows Mobile, and proprietary real-time operating systems. JavaFX Mobile was publicly available as part of the JavaFX 1.1 release announced by Sun Microsystems on February 12, 2009. Sun planned to enable out-of-the-box support of JavaFX on the devices by working with handset manufacturers and mobile operators to preload the JavaFX Mobile runtime on the handsets. JavaFX Mobile running on an Android was demonstrated at JavaOne 2008 and selected partnerships (incl. LG Electronics, Sony Ericsson) were announced at the JavaFX Mobile launch in February, 2009. Components JavaFX 2.x platform includes the following components: The JavaFX SDK: runtime tools. Graphics, media web services, and rich text libraries. Java FX 1.x also included JavaFX compiler, which is now obsolete as JavaFX user code is written in Java. NetBeans IDE for JavaFX: NetBeans with drag-and-drop palette to add objects with transformations, effects and animations plus a set of samples and best practices. For JavaFX 2 support you need at least NetBeans 7.1.1. For Eclipse users there is a community-supported plugin hosted on e(fx)clipse. JavaFX scene builder: This was introduced for Java FX 2.1 and later. A user interface (UI) is created by dragging and dropping controls from a palette. This information is saved as an FXML file, a special XML format. Tools and plugins for creative tools (a.k.a. 
Production Suite): Plugins for Adobe Photoshop and Adobe Illustrator that can export graphics assets to JavaFX Script code, tools to convert SVG graphics into JavaFX Script code and preview assets converted to JavaFX from other tools (currently not supported in JavaFX 2.x versions) History Early releases JavaFX Script, the scripting component of JavaFX, began life as a project by Chris Oliver called F3. Sun Microsystems first announced JavaFX at the JavaOne Worldwide Java Developer conference on May 2007. In May 2008 Sun Microsystems announced plans to deliver JavaFX for the browser and desktop by the third quarter of 2008, and JavaFX for mobile devices in the second quarter of 2009. Sun also announced a multi-year agreement with On2 Technologies to bring comprehensive video capabilities to the JavaFX product family using the company's TrueMotion Video codec. Since end of July 2008, developers could download a preview of the JavaFX SDK for Windows and Macintosh, as well as the JavaFX plugin for NetBeans 6.1. Major releases since JavaFX 1.1 have a release name based on a street or neighborhood in San Francisco. Update releases typically do not have a release name. On December 4, 2008 Sun released JavaFX 1.0.2. JavaFX for mobile development was finally made available as part of the JavaFX 1.1 release (named Franca) announced officially on February 12, 2009. JavaFX 1.2 (named Marina) was released at JavaOne on June 2, 2009. This release introduced: Beta support for Linux and Solaris Built-in controls and layouts Skinnable CSS controls Built-in chart widgets JavaFX I/O management, masking differences between desktop and mobile devices Speed improvements Windows Mobile Runtime with Sun Java Wireless Client JavaFX 1.3 (named Soma) was released on April 22, 2010. This release introduced: Performance improvements Support of additional platforms Improved support for user interface controls JavaFX 1.3.1 was released on August 21, 2010. This release introduced: Quick startup time of JavaFX application Custom progress bar for application startup JavaFX 2.0 (named Presidio) was released on October 10, 2011. This release introduced: A new set of Java APIs opening JavaFX capabilities to all Java developers, without the need for them to learn a new scripting language. Java FX Script support was dropped permanently. Support for high performance lazy binding, binding expressions, bound sequence expressions, and partial bind re-evaluation. Dropping support for JavaFX Mobile. Oracle announcing its intent to open-source JavaFX. JavaFX runtime turning to be platform-specific, utilizing system capabilities, as video codec available on the system; instead of implementing only one cross-platform runtime as with JavaFX 1.x. Various improvements have been made within the JavaFX libraries for multithreading. The Task APIs have been updated to support much more concise threading capabilities (i.e. the JavaTaskBase class is no longer necessary since all the APIs are in Java, and the requirement to have a callback interface and Java implementation class are no longer necessary). In addition, the scene graph has been designed to allow scenes to be constructed on background threads and then attached to "live" scenes in a threadsafe manner. On May 26, 2011, Oracle released the JavaFX 2.0 Beta. The beta release was only made available for 32 and 64 bit versions of Microsoft Windows XP, Windows Vista and Windows 7. 
An Early Access version for Mac OS X was also available for members of the JavaFX Partner Program at the time, while Linux support was planned for a future release of JavaFX. JavaFX 2.0 was released with only Windows support. Mac OS X support was added with JavaFX 2.1. Linux support was added with JavaFX 2.2. JavaFX 2.0 makes use of a new declarative XML language called FXML. On April 27, 2012, Oracle released version 2.1 of JavaFX, which includes the following main features: First official version for OS X (desktop only) H.264/MPEG-4 AVC and Advanced Audio Coding support CoolType text UI enhancements including combo box controls, charts (stacked chart), and menu bars Webview component now allows JavaScript to make calls to Java methods On August 14, 2012, Oracle released version 2.2 of JavaFX, which includes the following main features: Linux support (including plugin and webstart) Canvas New controls: Color Picker, Pagination HTTP Live Streaming support Touch events and gestures Image manipulation API Native Packaging JavaFX 2.2 adds new packaging option called Native Packaging, allowing packaging of an application as a "native bundle". This gives users a way to install and run an application without any external dependencies on a system JRE or FX SDK. As of Oracle Java SE 7 update 6 and Java FX 2.2, JavaFX is bundled to be installed with Oracle Java SE platform. Releases after version bump JavaFX is now part of the JRE/JDK for Java 8 (released on March 18, 2014) and has the same numbering, i.e., JavaFX 8. JavaFX 8 adds several new features, including: Support for 3D graphics Sensor support MathML support, with JavaFX 8 Update 192 Printing and rich text support Generic dialog templates via inclusion of ControlsFX to replace JOptionPane as of JavaFX 8u40 JavaFX 9 features were centered on extracting some useful private APIs from the JavaFX code to make these APIs public: JEP 253: Prepare JavaFX UI Controls and CSS APIs for Modularization Oracle announced their intention to stop shipping JavaFX with JDK 11 and later. It is no longer bundled with the latest version. JavaFX 11 was first shipped in September 2018. JavaFX 11.0.2 is the latest public release of JavaFX 11. JavaFX 11.0.3 is the latest release of JavaFX 11 for those with a long-term support contract. MathML support, with JavaFX 11 FX Robot API JavaFX 12 was first shipped in March 2019. JavaFX 12.0.1. JavaFX 13 shipped in September 2019. JavaFX 14 was released in March 2020. JavaFX 15 was released in September 2020. JavaFX 16 was released in March 2021. JavaFX 17 was released in September 2021. Future work Oracle also announced in November 2012 the open sourcing of Decora, a DSL Shader language for JavaFX allowing to generate Shaders for OpenGL and Direct3D. Oracle wrote in its Client Support Roadmap that JavaFX new fixes will continue to be supported on Java SE 8 through March 2025. Previously, Oracle announced that they are "working with interested third parties to make it easier to build and maintain JavaFX as a separately distributable open-source module." JavaFX will continue to be supported in the future by the company Gluon as a downloadable module in addition to the JDK. Availability As of March 2014 JavaFX is deployed on Microsoft Windows, OS X, and Linux. Oracle has an internal port of JavaFX on iOS and Android. 
Support for ARM is available starting with JavaFX 8. On February 11, 2013, Richard Bair, chief architect of the Client Java Platform at Oracle, announced that Oracle would open-source the iOS and Android implementations of its JavaFX platform in the next two months. Starting with version 8u33 of JDK for ARM, support for JavaFX Embedded has been removed. Support will continue for x86-based architectures. A commercial port of JavaFX for Android and iOS has been created under the name "Gluon". License There are various licenses for the modules that compose the JavaFX runtime: Parts of the core JavaFX runtime are still proprietary software and their code has not yet been released to the public; however, developers and executives behind the technology are moving toward a full opening of the code, The JavaFX compiler and an older version of the 2D Scene graph are released under a GPL v2 license, The NetBeans plugin for JavaFX is dual licensed under GPL v2 and CDDL. During development, Sun explained that it would roll out its strategy for the JavaFX licensing model with the first JavaFX release. After the release in 2008, Jeet Kaul, Sun's Vice president for Client Software, explained that they would soon publish a specification for JavaFX and its associated file formats, and would continue to open-source the JavaFX runtime, and decouple this core from the proprietary parts licensed by external parties. At JavaOne 2011, Oracle Corporation announced that JavaFX 2.0 would become open-source. In December 2011, Oracle began to open-source the JavaFX code under the GPL+linking exception. In December 2012, new portions of the JavaFX source code were open-sourced by Oracle: the animations and timelines classes the event delivery mechanism and other various core classes the render tree interface, and the implementation of this interface the geometry and shapes implementation the Java part of the rendering engine used in the rendering pipeline the logging support See also Curl (programming language) JavaFX Script Standard Widget Toolkit References Bibliography External links JavaFX Tutorial Java (programming language) Oracle software Rich web application frameworks Sun Microsystems software Articles with example Java code
2783580
https://en.wikipedia.org/wiki/ArcGIS
ArcGIS
ArcGIS is a family of client software, server software, and online geographic information system (GIS) services developed and maintained by Esri. ArcGIS was first released in 1999; it originated from ARC/INFO, a command line based GIS system for manipulating data. ARC/INFO was later merged into ArcGIS Desktop, which was eventually superseded by ArcGIS Pro in 2015. ArcGIS Pro works in 2D and 3D for cartography and visualization, and includes Artificial Intelligence (AI). Esri also provides server side ArcGIS software for web maps, known as "ArcGIS Server". Product history Prior to the ArcGIS suite, Esri had focused its software development on the command line Arc/INFO workstation program and several Graphical User Interface-based products such as the ArcView GIS 3.x desktop program. Other Esri products included MapObjects, a programming library for developers, and ArcSDE as a relational database management system. The various products had branched out into multiple source trees and did not integrate well with one another. In January 1997, Esri decided to revamp its GIS software platform, creating a single integrated software architecture. ArcMap 8.0 In late 1999, Esri released ArcMap 8.0, which ran on the Microsoft Windows operating system. ArcGIS combined the visual user-interface aspect of the ArcView GIS 3.x interface with some of the power from the Arc/INFO version 7.2 workstation. This pairing resulted in a new software suite called ArcGIS, including the command-line ArcInfo workstation (v8.0) and a new graphical user interface application called ArcMap (v8.0). ArcMap incorporated some of the functionality of ArcInfo with a more intuitive interface; the suite also included a file management application called ArcCatalog (v8.0). The release of ArcMap constituted a major change in Esri's software offerings, aligning all their client and server products under one software architecture known as ArcGIS, developed using Microsoft Windows COM standards. While the interface and names of ArcMap 8.0 are similar to later versions of ArcGIS Desktop, they are different products. ArcGIS 8.1 replaced ArcMap 8.0 in the product line but was not an update to it. ArcGIS Desktop 8.1 to 8.3 ArcGIS 8.1 was unveiled at the Esri International User Conference in 2000. ArcGIS 8.1 was officially released on April 24, 2001. This new application included three extensions: 3D Analyst, Spatial Analyst, and GeoStatistical Analyst. These three extensions had become very powerful and popular in the ArcView GIS 3.x product line. ArcGIS 8.1 also added the ability to access data online, directly from the Geography Network site or other ArcIMS map services. ArcGIS 8.3 was introduced in 2002, adding topology to geodatabases, which was a feature originally available only with ArcInfo coverages. One major difference from the earlier products is the programming (scripting) languages available to customize or extend the software to suit particular user needs. In the transition to ArcGIS, Esri dropped support of its application-specific scripting languages, Avenue and the ARC Macro Language (AML), in favour of Visual Basic for Applications scripting and open access to ArcGIS components using the Microsoft COM standards. ArcGIS is designed to store data in a proprietary RDBMS format, known as a geodatabase. ArcGIS 8.x introduced other new features, including on-the-fly map projections, and annotation in the database. ArcGIS 9.x ArcGIS 9, released in May 2004, included ArcGIS Server and ArcGIS Engine for developers.
The ArcGIS 9 release includes a geoprocessing environment that allows execution of traditional GIS processing tools (such as clipping, overlay, and spatial analysis) interactively or from any scripting language that supports COM standards. Although the most popular of these is Python, others have been used, especially Perl and VBScript. ArcGIS 9 includes a visual programming environment, similar to ERDAS IMAGINE's Model Maker (released in 1994, v8.0.2). The Esri version is called ModelBuilder and as does the ERDAS IMAGINE version allows users to graphically link geoprocessing tools into new tools called models. These models can be executed directly or exported to scripting languages which can then execute in batch mode (launched from a command line), or they can undergo further editing to add branching or looping. On June 26, 2008, Esri released ArcGIS 9.3. The new version of ArcGIS Desktop has new modeling tools and geostatistical error tracking features, while ArcGIS Server has improved performance, and support for role-based security. There also are new JavaScript APIs that can be used to create mashups, and integrated with either Google Maps or Microsoft Virtual Earth. At the 2008 Esri Developers Summit, there was little emphasis on ArcIMS, except for one session on transitioning from ArcIMS to ArcGIS Server-based applications, indicating a change in focus for Esri with ArcGIS 9.3 for web-based mapping applications. In May 2009, Esri released ArcGIS 9.3.1, which improved the performance of dynamic map publishing and introduced better sharing of geographic information. ArcGIS 10.x In 2010, Esri announced that the prospective version 9.4 would become version 10 and would ship in the second quarter of 2010. The ArcGIS 10.3 release included ArcGIS Pro 1.0, which became available in January 2015. On October 21, 2020 Esri publicly announced that this would be the last release of ArcGIS Desktop. Its products, including ArcMap, will be supported until March 1, 2026. This announcement confirmed predictions that ArcGIS Pro (and related products) was planned to be a complete replacement for ArcMap. ArcGIS Pro ArcGIS Pro is a 64-bit GIS software that is the more modern version of ArcGIS Desktop. Unlike ArcGIS Desktop, the ArcCatalog and ArcMap functionalities are accessed through the same application, most commonly through the Catalog pane. The graphics requirements for ArcGIS Pro are considerably higher than for ArcGIS Desktop in order to support the upgraded visualization. ArcGIS Pro also supports streamlined workflows that involve publishing and consuming feature layers using ArcGIS Online. ArcGIS Pro 1.0 was released in January 2015. ArcGIS Pro 2.6 was released in July 2020. Noted features added included: Voxel layers are 3D representations of data over space and time and are saved in a netCDF file. Voxel layers are used to visualize complex layers such as atmospheric and oceanic data or space-time cubes. These layers are used to analyze spatial patterns of data in specific situations. Voxel layers generally encompass extensive areas and slices can be used to delineate areas of the layer that need further analysis. Voxels can be shown with other geospatial data to further visualize the study area. Trace networks are used to evaluate connectivity models like railroads. Edges and junctions along with network attributes are used to understand the movement of goods through the network. The connectedness of the network is established based on the concurrence of geometric features. 
Trace networks are used alongside network topology to make more tools available, such as trace and validation. Interactive suitability analysis using the new Suitability Modeler is a way to figure out an optimal location for a building project or other similar initiative. This is done by feeding the model with certain criteria to find areas that would be suitable for the project. The Suitability Modeler is an interactive way to visualize and assess the suitability model. The Suitability Modeler allows a user to see how each criterion changes the model and make a more educated decision for the project. Feedback is also given from the modeler to help the user understand the model better. Graphics layers store geometric features and do not need to be in a feature class to be visualized. Graphics layers go on top of other layers on a map to better illustrate the purpose of the map. Graphics layers are used to add extra information to a map, such as text, or to highlight important features. There can be multiple graphics layers in a map, and they can be grouped together. Parcel adjustment using least squares adjustment is a way to adjust a parcel fabric to find the optimal position for parcel fabric points. The parcel fabric is a network that measures the distance of lines and angles between points. There are two types of least squares adjustment for parcel fabric: free network adjustment and weighted/constrained adjustment. Free network adjustment uses no control points, and the layer is adjusted so the measurements are most optimal; weighted/constrained adjustment uses control points, and the layer is adjusted within the scope of the points. A least squares adjustment can be run after a new parcel fabric is created or new data is added to an existing parcel fabric. Link analysis develops a network of connected objects and determines the patterns that exist. Link analysis is done to find what patterns in a network are most important and to find new patterns that were previously unknown. Link analysis uses link charts to visualize the network. Link charts represent the objects in a network using nodes, and these nodes can be people, buildings, or devices. Objects are usually moving, such as people or vehicles, and link charts show how they interact with each other over both space and time. Link analysis is done to better understand the network. This is done by finding the shortest path between nodes, showing which nodes have the strongest connections, and finding the nodes that are nearest to each other. Project recovery is an automatic way of saving a project so work is not lost. When ArcGIS Pro is opened, it will prompt the user whether they want to keep all the unsaved changes that were backed up. The backups are also stored in the .backups folder in the project home. The interval of time at which the project saves automatically can be set in the backup settings. Functionality Data formats Older Esri products, including ArcView 3.x, worked with data in the shapefile format. ArcInfo Workstation handled coverages, which stored topology information about the spatial data. Coverages, which were introduced in 1981 when ArcInfo was first released, have limitations in how they handle types of features. Some features, such as roads with street intersections or overpasses and underpasses, should be handled differently from other types of features. ArcGIS is built around a geodatabase, which uses an object–relational database approach for storing spatial data.
A geodatabase is a "container" for holding datasets, tying together the spatial features with attributes. The geodatabase can also contain topology information, and can model behavior of features, such as road intersections, with rules on how features relate to one another. When working with geodatabases, it is important to understand feature classes, which are sets of features represented with points, lines, or polygons. With shapefiles, each file can only handle one type of feature. A geodatabase can store multiple feature classes or types of features within one file. Geodatabases in ArcGIS can be stored in three different ways – as a "file geodatabase", a "personal geodatabase", or an "enterprise geodatabase" (formerly known as an SDE or ArcSDE geodatabase). Introduced at 9.2, the file geodatabase stores information in a folder named with a .gdb extension. The insides look similar to those of a coverage, but it is not, in fact, a coverage. Similar to the personal geodatabase, the file geodatabase only supports a single editor. However, unlike the personal geodatabase, there is virtually no size limit. By default, any single table cannot exceed 1TB, but this can be changed. Personal geodatabases store data in Microsoft Access files, using a BLOB field to store the geometry data. The OGR library is able to handle this file type, to convert it to other file formats. Database administration tasks for personal geodatabases, such as managing users and creating backups, can be done through ArcCatalog and ArcGIS Pro. Personal geodatabases, which are based on Microsoft Access, run only on Microsoft Windows and have a 2 gigabyte size limit. Enterprise (multi-user) geodatabases sit on top of high-end DBMS such as PostgreSQL, Oracle, Microsoft SQL Server, DB2 and Informix to handle database management aspects, while ArcGIS deals with spatial data management. Enterprise level geodatabases support database replication, versioning and transaction management, and are cross-platform compatible, able to run on Linux, Windows, and Solaris. Also released at 9.2 was the personal SDE database, which operates with SQL Server Express. Personal SDE databases do not support multi-user editing, but do support versioning and disconnected editing. Microsoft limits SQL Server Express databases to 4GB. ArcGIS Pro (which is a 64-bit application) does not support the personal geodatabase format but can convert personal geodatabases into supported formats using geoprocessing tools.
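To give a concrete sense of how geodatabases and geoprocessing tools are typically driven from a script, the following Python sketch uses the arcpy site package that ships with ArcGIS (mentioned later in this article in connection with ArcGIS Pro). The geodatabase path, feature class names, and buffer distance are hypothetical examples, and the sketch assumes a licensed ArcGIS installation is available.

```python
# Minimal geoprocessing sketch using the arcpy site package shipped with ArcGIS.
# Paths, layer names, and the buffer distance below are hypothetical examples.
import arcpy

# Point the geoprocessing environment at a file geodatabase (a folder with a .gdb extension).
arcpy.env.workspace = r"C:\data\city.gdb"
arcpy.env.overwriteOutput = True

# A file geodatabase can hold many feature classes; list them with their geometry types.
for fc in arcpy.ListFeatureClasses():
    desc = arcpy.Describe(fc)
    print(fc, desc.shapeType)

# Run a standard geoprocessing tool: buffer a (hypothetical) roads feature class by 100 meters.
arcpy.analysis.Buffer("roads", "roads_buffer_100m", "100 Meters")
```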
It provides tools for the creation of map and spatial data used in GIS, including the ability of editing geodatabase files and data, multiuser geodatabase editing, versioning, raster data editing and vectorization, advanced vector data editing, managing coverages, coordinate geometry (COGO), and editing geometric networks. ArcEditor is not intended for advanced spatial analysis. ArcGIS Desktop Advanced, formerly known as ArcInfo, allows users the most flexibility and control in "all aspects of data building, modeling, analysis, and map display." ArcInfo includes increased capability in the areas of spatial analysis, geoprocessing, data management, and others. Other desktop GIS software include ArcGIS Explorer and ArcGIS Engine. ArcGIS Explorer is a GIS viewer which can work as a client for ArcGIS Server, ArcIMS, ArcWeb Services and Web Map Service (WMS). ArcGIS Online is a web application allowing sharing and search of geographic information, as well as content published by Esri, ArcGIS users, and other authoritative data providers. It allows users to create and join groups, and control access to items shared publicly or within groups. ArcGIS Web Mapping APIs are APIs for several languages, allowing users to build and deploy applications that include GIS functionality and Web services from ArcGIS Online and ArcGIS Server. Adobe Flex, JavaScript and Microsoft Silverlight are supported for applications that can be embedded in web pages or launched as stand-alone Web applications. Flex, Adobe Air and Windows Presentation Foundation (WPF) are supported for desktop applications. Components ArcGIS Desktop consists of several integrated applications, including ArcMap, ArcCatalog, ArcToolbox, ArcScene, ArcGlobe, and ArcGIS Pro. ArcCatalog is the data management application, used to browse datasets and files on one's computer, database, or other sources. In addition to showing what data is available, ArcCatalog also allows users to preview the data on a map. ArcCatalog also provides the ability to view and manage metadata for spatial datasets. ArcMap is the application used to view, edit and query geospatial data, and create maps. The ArcMap interface has two main sections, including a table of contents on the left and the data frames which display the map. Items in the table of contents correspond with layers on the map. ArcToolbox contains geoprocessing, data conversion, and analysis tools, along with much of the functionality in ArcInfo. It is also possible to use batch processing with ArcToolbox, for frequently repeated tasks. ArcScene is an application which allows the user to view their GIS data in 3-D and is available with the 3D Analyst License. In the layer properties of ArcScene there is an Extrusion function which allows the user to exaggerate features three dimension-ally. ArcGlobe is another one of ArcGIS's 3D visualization applications available with the 3D Analyst License. ArcGlobe is a 3D visualization application that allows you to view large amounts of GIS data on a globe surface. The ArcGIS Pro application was added to ArcGIS Desktop in 2015 February. It had the combined capabilities of the other integrated applications and was built as a fully 64-bit software application. ArcGIS Pro has ArcPy Python scripting for database programming. Extensions There are a number of software extensions that can be added to ArcGIS Desktop that provide added functionality, including 3D Analyst, Spatial Analyst, Network Analyst, Survey Analyst, Tracking Analyst, and Geostatistical Analyst. 
Advanced map labeling is available with the Maplex extension, which is an add-on to ArcView and ArcEditor and is bundled with ArcInfo. Numerous extensions have also been developed by third parties, such as the MapSpeller spell-checker, ST-Links PgMap, XTools Pro and MAP2PDF for creating georeferenced pdfs (GeoPDF), ERDAS' Image Analysis and Stereo Analyst for ArcGIS, and ISM's PurVIEW, which converts Arc desktops into precise stereo-viewing windows to work with geo-referenced stereoscopic image models for accurate geodatabase-direct editing or feature digitizing. Address locator An address locator is a dataset in ArcGIS that stores the address attributes, associated indexes, and rules that define the process for translating nonspatial descriptions of places, such as street addresses, into spatial data that can be displayed as features on a map. An address locator contains a snapshot of the reference data used for geocoding, and parameters for standardizing addresses, searching for match locations, and creating output. Address locator files have a .loc file extension. In ArcGIS 8.3 and previous versions, an address locator was called a geocoding service. Other products ArcGIS Mobile and ArcPad are products designed for mobile devices. ArcGIS Mobile is a software development kit for developers to use to create applications for mobile devices, such as smartphones or tablet PCs. If connected to the Internet, mobile applications can connect to ArcGIS Server to access or update data. ArcGIS Mobile is only available at the Enterprise level. Server GIS products include ArcIMS (web mapping server), ArcGIS Server and ArcGIS Image Server. As with ArcGIS Desktop, ArcGIS Server is available at different product levels, including Basic, Standard, and Advanced Editions. ArcGIS Server comes with SQL Server Express DBMS embedded and can work with enterprise DBMS such as SQL Server Enterprise and Oracle. The Esri Developer Network (EDN) includes ArcObjects and other tools for building custom software applications, and ArcGIS Engine provides a programming interface for developers. For non-commercial purposes, Esri offers a home use program with a lower annual license fee. ArcGIS Engine The ArcGIS Engine is an ArcGIS software engine, a developer product for creating custom GIS desktop applications. ArcGIS Engine provides application programming interfaces (APIs) for COM, .NET, Java, and C++ for the Windows, Linux, and Solaris platforms. The APIs include documentation and a series of high-level visual components to ease building ArcGIS applications. ArcGIS Engine includes the core set of components, ArcObjects, from which ArcGIS Desktop products are built. With ArcGIS Engine one can build stand-alone applications or extend existing applications for both GIS and non-GIS users. The ArcGIS Engine distribution additionally includes utilities, samples, and documentation. One ArcGIS Engine Runtime or ArcGIS Desktop license per computer is necessary. Sales ArcGIS Desktop products and ArcPad are available with a single-use license. Most products are also available with a concurrent-use license, while development server licenses and other types of software licenses are available for other products. Single-use products can be purchased online from the Esri Store, while all ArcGIS products are available through a sales representative or reseller. Annual software maintenance and support is also available for ArcGIS.
While there are alternative products available from vendors such as MapInfo, Maptitude, AutoCAD Map 3D and open-source QGIS, Esri has a dominant share of the GIS software market, estimated in 2015 at 43%. Criticisms Issues with ArcGIS include perceived high prices for the products, proprietary formats, and difficulties of porting data between Esri and other GIS software. Esri's transition to the ArcGIS platform, starting with the 1999 release of ArcGIS 8.0, rendered incompatible an extensive range of user-developed and third-party add-on software and scripts. A minority user base resists migrating to ArcGIS because of changes in scripting capability, functionality, operating system (Esri developed ArcGIS Desktop software exclusively for the Microsoft Windows operating system), as well as the significantly larger system resources required by the ArcGIS software. See also ArcView 3.x Covering the older version of ArcView ArcView The new entry level licensing level of ArcGIS GIS in environmental contamination List of Geographic Information Systems Software References External links ArcGIS official website – Esri Esri (2004) What is ArcGIS? – White paper Mapping the world and all its data, USA Today, August 3, 2004 Geodatabase at 9.2 with Craig Gillgrass – A VerySpatial Podcast, Episode 57, August 20, 2006 ST-Links SpatialKit, a tool to connect Spatial Database with ArcMap List of 3,500+ government ArcGIS server addresses Esri software GIS software
15921267
https://en.wikipedia.org/wiki/Animation%20Magic
Animation Magic
Animation Magic was a Russian-American animation studio created in Gaithersburg, Maryland, with offices in Cambridge, Massachusetts and a 100% owned subsidiary located in Saint Petersburg, Russia. It developed animations for CD-based software. The company was acquired in December 1994 by Capitol Multimedia. In 1994 it had 90 employees, including 12 software engineers and approximately 60 animators, computer graphics, background and sprite artists. Its products include Link: The Faces of Evil, Zelda: The Wand of Gamelon, Mutant Rampage: Bodyslam, Pyramid Adventures, I.M. Meen, Chill Manor, Hotel Mario, King's Quest VII: The Princeless Bride, Darby the Dragon and the cancelled Warcraft Adventures: Lord of the Clans. In April 1997, Animation Magic was acquired by Davidson & Associates, for whom the company had been developing Warcraft Adventures, a point-and-click graphical adventure game based on the Warcraft franchise. In being bought by Davidson & Associates, Animation Magic became part of CUC Software, and its founder and CEO, Igor Razboff, was made a Vice President of CUC Software. In 1998, CUC Software was renamed Cendant Software after parent CUC International merged with HFS, Inc. In December 1998, Cendant Software was sold to Havas, a subsidiary of Vivendi, and became Vivendi Universal Games. In 2001, Vivendi Universal Games closed Animation Magic. References External links Как создатели мемных спин-оффов «Зельды» изменили российскую индустрию игр и анимации (lit. How the creators of Zelda's meme spin-offs changed the Russian gaming and animation industry). DTF. November 25, 2021. Former Vivendi subsidiaries American animation studios Video game companies established in 1992 Mass media companies established in 1992 Video game companies disestablished in 2001 Mass media companies disestablished in 2001 Defunct video game companies of the United States Defunct companies based in Maryland Companies based in Gaithersburg, Maryland
16613623
https://en.wikipedia.org/wiki/K%20Desktop%20Environment%201
K Desktop Environment 1
K Desktop Environment 1 was the inaugural series of releases of the K Desktop Environment. There were two major releases in this series. Pre-release Development started right after Matthias Ettrich's announcement on 1996-10-14 of his intention to found the Kool Desktop Environment. The word Kool was dropped shortly afterward and the name became simply K Desktop Environment. In the beginning, all components were released to the developer community separately without any coordinated timeframe throughout the overall project. The project's first communication took place via a mailing list, [email protected]. The first coordinated release was Beta 1, almost exactly one year after the original announcement. Three additional betas followed. K Desktop Environment 1.0 On 12 July 1998, the finished version 1.0 of K Desktop Environment was released. This version received a mixed reception. Many criticized the use of the Qt software framework – back then under the Qt Free Edition License, which was claimed not to be compatible with free software – and advised the use of Motif or LessTif instead. Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions. K Desktop Environment 1.1 An update, K Desktop Environment 1.1, was faster, more stable and included many small improvements. It also included a new set of icons, backgrounds and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn consisting of the letter K in front of a gear, which is used in revised form to this day. Some components received more far-reaching updates, such as the Konqueror predecessor kfm, the application launcher kpanel, and the KWin predecessor kwm. Newly introduced were, for example, kab, a software library for address management, and a rewrite of KMail, called kmail2, which was installed as an alpha version in parallel with the classic KMail version. kmail2, however, never left the alpha state, and development was ended in favor of updating classic KMail. K Desktop Environment 1.1 was well received among critics. At the same time, Trolltech prepared version 2.0 of Qt, which was released as a beta on 1999-01-28. Consequently, no bigger upgrades for KDE 1 based on Qt 1 were developed. Instead, only bugfix releases followed: version 1.1.1 on 1999-05-03 and version 1.1.2 on 1999-09-13. A more profound upgrade, along with a port to Qt 2, was in development as K Desktop Environment 2. KDE Restoration Project To celebrate KDE's 20th birthday, KDE and Fedora contributor Helio Chissini de Castro re-released 1.1.2 on 2016-10-14. That re-release incorporates several changes required for compatibility with modern Linux variants. Work on that project started one month earlier at QtCon, a conference for Qt developers, in Berlin. There Castro showcased Qt 1.45 compiling on a modern Linux system. See also Linux on the desktop References 1998 software Desktop environments KDE Software Compilation
10178154
https://en.wikipedia.org/wiki/Basis%20Technology
Basis Technology
Basis Technology Corp. is a software company specializing in applying artificial intelligence techniques to understanding documents and unstructured data written in different languages. It has headquarters in Somerville, Massachusetts, and offices in San Francisco, Washington, D.C., London, and Tokyo. The company was founded in 1995 by graduates of the Massachusetts Institute of Technology to use artificial intelligence techniques to help understand the many different languages that humans use. Its software focuses on finding structure inside text so algorithms can do a better job understanding the meaning of the words. The tools identify different forms of names and phrases. The name of someone, say Albert P. Jones for instance, can appear in many different ways. Some texts will call him "Al Jones", others "Mr. Jones" and others "Albert Paul Jons". Basis Technology's software can match all of these instances (a simplified illustration of this kind of matching appears at the end of this article). Their software enhances parsing tools by classifying the role of words and providing that metadata to other algorithms. Software from Basis Technology will, for instance, identify the language of an incoming stream of characters and then identify the parts of each sentence, like the subject or the direct object. The company is best known for its Rosette Linguistics Platform, which uses Natural Language Processing techniques to improve information retrieval, text mining, search engines and other applications. The tool is used by major search engines and translators to create normalized forms of text. Basis Technology software is also used by forensic analysts to search through files for words, tokens, phrases or numbers that may be important to investigators. Rosette The Rosette Linguistics Platform consists of a component library for multilingual text retrieval and analysis. Rosette provides automatic language identification, linguistic analysis, entity extraction, and entity translation from unstructured text. It can be integrated into applications to help analyse volumes of unstructured text. The Rosette Linguistics Platform is composed of these modules: Rosette Language Identifier looks at the structural and statistical signature of the file to identify the language. The pre-configured software can recognize 55 different languages with 45 different encodings. Rosette Base Linguistics identifies the lemma or word stem after finding the tokens. Search is often faster and more accurate when words are grouped by their stem. Rosette Entity Extractor analyzes raw text and identifies the probable role that words and phrases play in the document, a key step that makes it possible for algorithms to distinguish between the various meanings that many words can have. Splitting the raw text into groups of words according to their role and then classifying their contribution to meaning is often called entity analysis. The Basis hybrid approach mixes statistical modeling with rules, regular expressions, and gazetteers (lists of special words) that can be tuned to the language and text to be analyzed. The tool is designed to work directly with varied alphabets and multiple languages, an advantage because foreign words are often transliterated in multiple ways. It is believed to be the first commercially available tool for analyzing Arabic text. Rosette Name Translator transliterates non-Latin alphabets like Arabic into a consistent Latin form.
Rosette Name Indexer enables simple search across name variations either by plugging into open source search engines or as a standalone service. Rosette Core Library for Unicode smooths the use of Unicode text. Rosette Chat Translator for Arabic converts words from the Arabic chat alphabet to Arabic. The Rosette Platform is used in both the United States government offices to support translation and by major Internet infrastructure firms like search engines. Digital forensics Basis Technology develops open-source digital forensics tools, The Sleuth Kit and Autopsy, to help identify and extract clues from data storage devices like hard disks or flash cards, as well as devices such as smart phones and iPods. The open-source licensing model allows them to be used as the foundation for larger projects like a Hadoop-based tool for massively parallel forensic analysis of very large data collections. The digital forensics tool set is used to perform analysis of file systems, new media types, new file types and file system metadata. The tools can search for particular patterns in the files allowing it to target significant files or usage profiles. It can, for instance, look for common files using hash functions and also deconstruct the data structures of the important operating system log files. The tools are designed to be customizable with an open plugin architecture. Basis Technology helps manage a large and diverse community of developers who use the tool in investigations. Highlight Highlight is transliteration software designed to assist linguists and analysts standardize names and places, allowing them to concentrate on "connecting the dots". Highlight is a plug-in to Microsoft Office Excel and Word. Key features include: Supports SEVEN languages: Arabic, Dari, Farsi, Pashto, Mandarin, Russian, and Korean. Intelligence Community (IC)-compliant entity standardization for people and places Record/review edits for quality control and enhanced analytics Highlight can: Resolve different spellings of foreign persons and places to standard forms. Translate name lists, telephone directories, and personnel databases from foreign languages into English. Connect place names appearing in reports with locations on maps. Access the CIA's Chiefs of State list Brochure for Highlight References External links Official website Software companies based in Massachusetts Privately held companies based in Massachusetts Software companies established in 1995
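The name-variant problem described at the top of this article ("Albert P. Jones" vs. "Al Jones" vs. "Mr. Jones") is typically attacked by normalizing each mention to a canonical form before comparison. The Python sketch below is a deliberately simplified stand-in for that idea: it is not the Rosette API, and the nickname table, honorific list, and matching rule are invented purely for illustration.

```python
# Toy illustration of name-variant matching; this is NOT the Rosette API.
# The nickname table, honorific list, and scoring rule are invented for this example.

NICKNAMES = {"al": "albert", "bob": "robert", "bill": "william"}
HONORIFICS = {"mr", "mrs", "ms", "dr"}

def normalize(name):
    """Lower-case a name, drop honorifics and initials, and expand known nicknames."""
    tokens = []
    for tok in name.lower().replace(".", "").split():
        if tok in HONORIFICS:
            continue
        if len(tok) == 1:          # treat single letters as initials and drop them
            continue
        tokens.append(NICKNAMES.get(tok, tok))
    return tokens

def same_person(a, b):
    """Crude match: the surname agrees and no other token conflicts."""
    ta, tb = normalize(a), normalize(b)
    if not ta or not tb or ta[-1] != tb[-1]:
        return False
    return set(ta[:-1]) <= set(tb[:-1]) or set(tb[:-1]) <= set(ta[:-1])

variants = ["Albert P. Jones", "Al Jones", "Mr. Jones"]
print(all(same_person(v, "Albert Paul Jones") for v in variants))  # True with this toy rule
```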
39834451
https://en.wikipedia.org/wiki/Viatron
Viatron
Viatron Computer Systems, or simply Viatron was an American computer company headquartered in Bedford, Massachusetts, and later Burlington, Massachusetts. Viatron coined the term "microprocessor" although it was not used in the sense in which the word microprocessor is used today. Viatron was founded in 1967 by engineers from Mitre Corporation led by Dr. Edward M. Bennett and Dr. Joseph Spiegel. In 1968 the company announced its System 21 small computer system together with its intention to lease the systems starting at a revolutionary price of $40 per month. The basic system included a microprocessor with 512 characters of read/write RAM memory, a keyboard, a CRT display and two cartridge tape drives. The system specifications, advanced for 1968 – five years before the advent of the first commercial personal computers – caused a lot of excitement in the computer industry. The System 21 was aimed, among others, at applications such as mathematical and statistical analysis, business data processing, data entry and media conversion, and educational/classroom use. The expectation was that the use of new large scale integrated circuit technology (LSI) and volume would enable Viatron to be successful at lower margins, however the prototype did not incorporate LSI technology. In 1968 Bennett claimed that by 1972 Viatron would have delivered more "digital machines" than had "previously been installed by all computer makers." He declared "We want to turn out computers like GM turns out Chevvies," The semiconductor industry was unable to produce circuits in the volumes required, forcing Viatron to sell fewer than the planned 5,000–6,000 systems per month. This raised the production costs per unit and prevented the company from ever achieving profitability. Bennet and Spiegel were fired in 1970, and the company declared Chapter XI bankruptcy in 1971. System 21 components As announced the System 21 line consisted of the following: System 21 Terminal. What would later be called an intelligent terminal, the System 21 terminal included either the 2101 or 2111 microprocessors, a CRT display formatted as four lines of 20 characters with optional color, a keyboard, a control panel, and attachability of up to two peripherals: The terminal was equipped with one of two microprocessors. 2101 – 512 16-bit words of read-only memory (ROM), 400 8-bit character read/write magnetic-core memory. 2111 – 1024 16-bit words of read-only memory (ROM), 400 8-bit character read/write magnetic-core memory. Printing robot – fit over the keyboard of a standard IBM Selectric typewriter and generated typed output at 12 characters per second. Card Reader-punch – despite its name this was actually an attachment for an IBM 129 keypunch to provide punched card input and output. Communications adapter – provided serial ASCII communications at 1200 bits per second. Tape channel attachments – provided for attachment of up to two "Viatape" cartridge recorders, capable of reading and writing 80 character records at 1200 bits per second. So-called Computer Compatible tape recorders, Magnetic tape units, could also be attached to the tape channel attachments to read and write mini-reels at either 9 track, 800 bpi or 7 track, 556/800 bpi. Foreign device attachment – provided parallel input/output in ASCII or Hollerith punch-card code. System 21 computers. Two computers were announced with the System 21: the 2140 and the 2150. Both employed a MOS LSI CPU and magnetic core memory. 
The systems included 2 µs core memory with 16-bit words and a high speed data channel. The 2140 included 4 KW of memory and could support up to 8 local or remote System 21 terminals. The 2150 included 8 KW of memory and could support up to 24 local or remote System 21 terminals. Software. The Viatron Programming System (VPS) came standard with: DDL-I (Distributed Data Language I) Assembler Subroutine library containing input/output, mathematical, arithmetic and conversion routines Utility program library containing load, dump, and a library manager A FORTRAN IV compiler came standard with the 2150. CPU The Viatron CPUs differed in memory size and interrupt levels – 2 on the 2140 and 4 on the 2150. They had the ability to operate on 8-bit, 16-bit, 32-bit, or 48-bit data. Three index registers were provided. The CPUs included two independent arithmetic units with different capabilities. Arithmetic unit I had three 16-bit registers called A, B, and C, and a 16-bit D register which functioned as a buffer. Arithmetic unit II performed both arithmetic and addressing operations. It had four registers: P was the program counter, R and E were special-purpose, and Q was used for 32-bit operations (with A as the high-order word) or 48-bit operations (with A and B). Q also served as the multiplier-quotient register for multiplication and division. The system had two instruction formats: Standard (16-bit) instructions and Extended (32-bit) instructions. Standard instructions had a 6-bit operation code, a two-bit index register identifier, and an 8-bit PC-relative address. Extended instructions had a 6-bit operation code, a two-bit index register identifier, an 8-bit operation code modifier, and a 16-bit memory address. Indirect addressing was allowed. There were 85 instructions, some of which had both standard and extended forms: Arithmetic – add, subtract, multiply and divide Logical – and, or, exclusive or Load and Store Shift and Rotate Modify memory word and skip on test Execute input/output Branching – skip or branch on condition, branch unconditional, branch and store program counter (conditional and unconditional), add to index register and skip on test Operate – increment/decrement register, ones complement register, negate (twos-complement) register, move register to register, move console switches to register, increment register and skip on test. All the above operate instructions used one or more of registers A, B, or C. There were also wait and no-operation operate instructions. References External links Defunct computer companies based in Massachusetts Early microcomputers
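As a rough illustration of the standard instruction layout described above (a 6-bit operation code, a 2-bit index register identifier, and an 8-bit PC-relative address packed into a 16-bit word), the following Python sketch unpacks such a word. The field widths come from the article, but the bit positions (opcode in the high bits, address in the low bits) and the signed interpretation of the displacement are assumptions made for the example; surviving documentation would be needed to confirm the actual encoding.

```python
# Sketch: unpack a 16-bit Viatron "standard" instruction word into its three fields.
# The 6/2/8-bit field widths come from the article; the bit positions (opcode high,
# address low) and the signed displacement are assumed purely for illustration.

def decode_standard(word):
    opcode = (word >> 10) & 0x3F      # 6-bit operation code (assumed high bits)
    index_reg = (word >> 8) & 0x03    # 2-bit index register identifier
    displacement = word & 0xFF        # 8-bit PC-relative address
    # Treat the displacement as signed so it can reach backwards from the program counter.
    if displacement >= 0x80:
        displacement -= 0x100
    return opcode, index_reg, displacement

# Example: an arbitrary bit pattern, not a documented Viatron opcode.
print(decode_standard(0b000101_10_11111100))  # -> (5, 2, -4)
```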
3582027
https://en.wikipedia.org/wiki/Percote
Percote
Percote or Perkote was a town or city of ancient Mysia on the southern (Asian) side of the Hellespont, to the northeast of Troy. Percote is mentioned a few times in Greek mythology, where it plays a very minor role each time. It was said to be the home of a notable seer named Merops, who was also its ruler. Merops was the father of Arisbe (the first wife of King Priam, and subsequently wife of King Hyrtacus), of Cleite (wife of King Cyzicus), and of two sons named Amphius and Adrastus, who fought during the Trojan War. As an ally of Troy, Percote sent a contingent to help King Priam during the Trojan War, though according to Homer's Iliad this contingent was led not by Merops's sons but by Asius, son of Hyrtacus. One native of Percote was wounded in the Trojan War by Antilochus, and two natives of Percote were killed in the war by Diomedes and Ulysses. The Meropidae (Amphius and Adrastus) instead led a contingent from nearby Adrastea. A nephew of Priam, named Melanippus, son of Hicetaon, herded cattle (oxen) at Percote, according to Homer.

It is mentioned by numerous ancient writers, including Herodotus, Arrian, Pliny the Elder, Apollonius of Rhodes, Stephanus of Byzantium, and in the Periplus of Pseudo-Scylax. According to Phanias of Eresus, Artaxerxes I of Persia gave Themistocles the city of Percote with bedding for his house (see: Percale). It was a member of the Delian League. Percote was no longer in existence during the time of Strabo, who mentions in his Geography that the exact location of Percote on the Hellespont shore is unknown. Strabo also claims that Percote was originally called Percope, and that it was part of the Troad. The inhabitants of Percote (and of neighboring places like Arisbe and Adrastea) were apparently neither Trojan nor Dardanian, and the origins of the Meropidae and Hyrtacidae are unclear. Its site is located east of Umurbey, Asiatic Turkey.

See also

List of ancient Greek cities

References

Locations in the Iliad
Ancient Greek archaeological sites in Turkey
Former populated places in Turkey
Populated places in ancient Mysia
Populated places in ancient Troad
Members of the Delian League
5506397
https://en.wikipedia.org/wiki/Naval%20Information%20Warfare%20Systems%20Command
Naval Information Warfare Systems Command
The Naval Information Warfare Systems Command (NAVWARSYSCOM), based in San Diego, is one of six SYSCOM Echelon II organizations within the United States Navy and is the Navy's technical authority and acquisition command for C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), business information technology and space systems. Echelon II means that the organization reports to someone who, in turn, reports directly to the Chief of Naval Operations on the military side. From a civilian perspective, NAVWARSYSCOM reports to the Assistant Secretary of the Navy (RDA). The command was formerly known as SPAWAR, or the Space and Naval Warfare Systems Command and was renamed in June 2019 to better align its identity with its mission. NAVWARSYSCOM supports over 150 programs managed by the Program Executive Office (PEO) for Command, Control, Communications, Computers and Intelligence (C4I), as well as the programs of PEO for Enterprise Information Systems (PEO EIS) and PEO Space Systems. These PEOs are located in the greater Washington, D.C. area. The Naval Information Warfare Center (NIWC) Atlantic is located in Charleston, SC, and also includes facilities in Norfolk, VA, New Orleans and Stuttgart, Germany. NIWC Pacific is located in San Diego, and includes facilities in Japan, Guam and Hawaii. Effective February 18, 2019, the names of the systems centers changed to Naval Information Warfare Center Atlantic and Pacific. History A number of mergers over the years have led to the current organization. Eighty percent of the Point Loma Military Reservation evolved into the Naval Electronics Laboratory Center (NELC) at the end of World War II. In the 1960s, NELC was tasked with 4C: Command, Control, Communications and Computers. In 1977 NELC was merged into the Naval Ocean Systems Center (NOSC) and eventually was merged into SPAWAR (now NAVWAR). The Naval Command, Control and Ocean Surveillance Center (NCCOSC) was SPAWAR's warfare center for command, control, communications, and ocean surveillance. NCCOSC's mission, as part of SPAWAR, was to develop, acquire, and support systems for information transfer and management to provide U.S. naval forces a decisive warfare advantage, and to be the Navy's center for research, development, test and evaluation, engineering, and fleet support for command, control, communications, and ocean surveillance systems. In June 2019, the Space and Naval Warfare Systems Command was renamed the Naval Information Warfare Systems Command. Site redevelopment In May 2021, the US Navy released an exposure draft of a proposal to re-develop the roughly NAVWAR site, to consist of: of total development, of which: for all-new Navy cybersecurity buildings and employee parking spots, with that portion to be built by a private partner within the first five years; the remaining approximately for a mixed-use development in buildings up to tall, dominated by housing, including: 10,000 units (and 14,400 parking stalls) for a neighborhood population of over 14,000 people, of commercial office space, of mostly ground-level stores, a transit center with 500 parking spots, and two hotels offering a total of 450 rooms. Responsibilities NAVWARSYSCOM designs and develops communications and information systems. They employ over 12,000 professionals located around the world and close to the United States Navy fleet. 
NAVWARSYSCOM is responsible for managing Air Traffic Control contractors in Afghanistan, including the Kabul en route air traffic control center, the Kabul, Kandahar, and Bagram approach control radar facilities, and respective control towers. NAVWARSYSCOM provides systems engineering and technical support for the development and maintenance of C4ISR (command, control, communications, computers, intelligence, surveillance and reconnaissance), business information technology and space capabilities. These are used in ships, aircraft and vehicles to connect individual platforms into integrated systems for the purpose of information sharing among Navy, Marine, joint forces, federal agencies and international allies. Command and Control: to organize, direct, coordinate, deploy and control forces to accomplish assigned missions Responsible for managing Air Traffic Control contractors in the Afghanistan theater of operations. Includes the Kabul en route air traffic control center, Kabul, Kandahar, and Bagram approach control radar facilities, as well as control towers at all three locations. Intelligence, Surveillance, Reconnaissance and Information Operations: to collect, process, exploit and disseminate information regarding an adversary's capability and intent Cyberspace Operations: to operate and protect communications and networks, while exploiting and disrupting adversary's command and control Business Information Technology (IT) and Enterprise Information Systems: to enable business processes and to ensure standard IT capabilities Enterprise Systems Engineering: to develop solutions based on capability needs, design considerations and constraints Space Systems: to procure and manage narrowband communication satellites in support of the Department of Defense and other government agencies Communications and Networks: to provide information through voice, video and data Program Executive Offices (PEO) NAVWAR's three affiliated Program Executive Offices (PEOs) are responsible for the prototyping, procurement, and fielding of C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), business information technology and space systems. Their mission is to develop, acquire, field and sustain affordable and integrated state of the art equipment for the Navy. PEOs report to the NAVWAR commander for planning and execution of in-service support, and to the Assistant Secretary of the Navy (Research, Development and Acquisition) for acquisition-related matters. The NAVWAR-affiliated PEOs are: Program Executive Office Command, Control, Communications, Computers and Intelligence (PEO C4I) and Space Systems Program Executive Office for Digital and Enterprise Services (PEO Digital) Program Executive Office Manpower, Logistics and Business Solutions (PEO MLB) See also Naval Information Warfare Center Pacific Naval Information Warfare Center Atlantic NAVWAR Space Field Activity U.S. Armed Forces systems commands Army Materiel Command Marine Corps Systems Command United States Navy systems commands Naval Sea Systems Command Naval Air Systems Command Naval Facilities Engineering Systems Command Naval Supply Systems Command Air Force Materiel Command Space Systems Command References External links SPAWAR Headquarters Facebook Page Shore commands of the United States Navy Military in San Diego Military units and formations established in 2012 Space warfare Military Superfund sites
1074611
https://en.wikipedia.org/wiki/Commodore%2064%20software
Commodore 64 software
The Commodore 64 amassed a large software library of nearly 10,000 commercial titles, covering most genres, from games to business applications and many others.

Applications, utility, and business software

While the 1541 disk drive's slow performance made the Commodore 64 mostly unsuitable as a business computer, it was still widely used for many important tasks, including computer graphics creation, desktop publishing, and word processing. Info 64, the first magazine produced with desktop publishing tools, was created on and dedicated to the Commodore platform. The best known art package was perhaps KoalaPainter, primarily because of its own custom graphics tablet user interface, the KoalaPad. Another popular drawing program for the C64 was Doodle!. A Commodore 64 version of The Print Shop existed, allowing users to generate signs and banners with a printer. "The Newsroom" was a desktop publishing suite. Lightpens and CAD drawing software, such as the Inkwell Lightpen and related tools, were also commercially produced.

There were many prepackaged word processors available for the Commodore 64, such as PaperClip and Vizawrite, but a popular DIY program was SpeedScript, which was available as a type-in program from Compute!'s Gazette. The MultiPlan spreadsheet application from Microsoft was ported to the Commodore 64, where it competed against established packages such as Calc Result. The first Lotus 1-2-3-like integrated software package for the 64 was Viza Software's Vizastar. A complete office suite arrived in the form of the British-made Mini Office II. In Germany and Scandinavia, many popular application programs were published by the German company Data Becker. The typical C64 spreadsheet could store 64 columns and 255 rows, or about 16,000 cells, but only 5–10% of them could be used at any one time, due to RAM limitations.

Serious Commodore 64 business users, however, were drawn to GEOS. Due to its speed, ease of use, and full suite of office applications and utility software, GEOS provided a work environment similar to that of an early Apple Macintosh. Arguably the best office applications appeared on GEOS because it was graphically advanced and not limited by the Commodore 64's 40-column screen area. Being a fully fledged OS, GEOS brought with it many add-on fonts, accessories, and applications. It also supported most Commodore 64 peripherals and models of third-party printers. KoalaPad and lightpen users could use GEOS too, which greatly increased the amount of clip art available for the platform. GEOS proved very popular because of the low price of the necessary hardware (and, of course, the capability of the OS). This was due in part to the aggressive pricing of the Commodore 64 as a games machine and home computer (with rebates, the C64 was going for as little as US$100 at the time). This compared to a typical PC for US$2000 (which required MS-DOS, and another $99 for Windows 1.0) or the venerable Mac 512K Enhanced, also $2000.

There were numerous sound editing tools for the Commodore 64. Commodore released music composition software which included a keyboard overlay suited for early-model Commodore 64s. Software titles such as the Music Construction Set were available for users to compose music with notes; however, the only tools which really pushed the C64's sonic capability to the full were mostly demoscene music tools, or pure assembly language. MIDI expansion cartridges and speech-synthesizing hardware were also available for more serious musicians.
The more recent Prophet64 cartridge features a suite of GUI-style applications for sequencing music, drum and rhythm synthesis, MIDI DIN-sync, and taking advantage of the SID chip in other ways, effectively turning the C64 into a musical instrument that anyone can use. There was also software which could be used to make the Commodore 64 speak, the best known being SAM.

The BASIC interpreter not only allows the user to write programs but is also used as the command prompt, so in order to load a program a BASIC command needs to be entered. KoalaPainter is an early paint program. It uses two screens: the first displays a menu and the second shows the picture being worked on. The program is controlled either by a joystick or with a graphics tablet that was also sold by Koala. Magic Desk is an application by Commodore that tries to resemble a real typewriter, though it contains only basic editing functions. Multiplan is a text-based spreadsheet application, written by Microsoft. Vizawrite is another text-based word processor for the C64, but looks more like the professional word processors of the early 80s. GEOS was a graphical user interface, first released in 1987. It was a small revolution at the time, because until then GUIs, other than Apple II Desktop/MouseDesk, were mostly available only for the much more powerful 16-bit machines. geoPaint is a paint program for GEOS. Despite the small resolution, it had all the capabilities of other GUI-based drawing programs of its time. geoWrite is a word processor for GEOS. It not only had a GUI but also supported many different styles and fonts on the WYSIWYG principle, unlike the other word processors on the C64. UIFLI (Underlay Interlace Flexible Line Interpreter) is a graphics mode on the Commodore 64 invented by DeeKay and Crossbow of Crest in 1995.

Games

By 1985, games were estimated to make up 60 to 70% of Commodore 64 software. Due in part to its advanced sound and graphics hardware, and to the quality and quantity of games written for it, the C64 became better known as a gaming and home entertainment platform than as a serious business computer. Its large installed user base encouraged commercial companies to flood the market with game software, even up until Commodore's demise in 1994. In total, over 23,000 unique game titles exist for the Commodore 64.

International Soccer was Commodore's best first-party game; otherwise "the normal standard for Commodore software is mediocrity", InfoWorld stated in 1984. The company did not publish many other games for the C64, instead releasing game cartridges primarily from the failed MAX Machine for the C64. Commodore included an "Ultimax" mode in the Commodore 64's hardware, which allowed the computer to emulate a MAX Machine for this purpose. However, aside from the initial Commodore cartridges, very few cartridge-based games were released for the Commodore. Most third-party game cartridges came from Llamasoft, Activision, and Atarisoft, though some of these games found their way into disk and tape versions too. Only later, when the failed C64GS console was produced, did cartridges make a brief comeback, including the production of a few more cartridge-only games. Crackers managed to port these games to disk later on.
While the 1541 floppy disk drive quickly became universal in the US, in Europe it was common for prepackaged commercial game software to come on either floppy disk or cassette tape, and sometimes both. Cassette-based games were usually cheaper than their disk-based counterparts; however, due to the Datasette's lack of speed and random access, many large games (such as role-playing video games) were never made for the cassette format. Despite this, a great deal of software was published only on cassette in Europe, including many "budget" games produced by companies like Mastertronic, Firebird, and Codemasters, which sold for a fraction of the price of full-price commercial software.

Whilst many commercial software companies produced prepackaged game software, an abundant supply of free software was also available. A notably large share of C64 games were programmed non-commercially by ordinary Commodore 64 users, helped by the editors included in some games, e.g. Boulder Dash Construction Kit, Pinball Construction Set, SEUCK, The Quill, GameMaker. Given the accessibility of BASIC on the Commodore 64, many BASIC games were created, and others were ported from other computer platforms and modified for the Commodore 64. In addition, many games were released as type-in programs in numerous magazines, especially European Commodore magazines. Many books and magazines were published containing listings for games, and public domain software was developed and released through both BBS systems and public domain libraries such as "Binary Zone" in the UK.

There were many classic, must-have games produced on the Commodore 64, perhaps too many to mention, including versions of classic video games. Of particular note, the smash hit Impossible Mission, produced by Epyx, was originally designed for the Commodore 64. Epyx's multievent games (Summer Games, Winter Games, World Games, and California Games) were very popular, as was perhaps the first driving game with split-screen dynamics, Pitstop II. Most of these games eventually made an appearance on the Commodore DTV joystick unit many years later. Other hit games such as Boulder Dash, The Sentinel, Archon, and Elite were all given Commodore 64 versions. Cassette users may remember titles such as Master of Magic, Rocketball, One Man and His Droid, and Spellbound on Mastertronic's budget labels. Other notable titles on the Commodore 64 include the Ultima and Bard's Tale role-playing game series. Hewson/Graftgold were responsible for several well-received C64 titles, including Paradroid and Uridium, made famous for their metallic bas-relief-styled graphic effects and addictive gameplay. System 3 produced The Last Ninja action adventure series originally on the C64. Armalyte, a groundbreaking shoot 'em up title from Thalamus Ltd, and Turrican I & II are among the highest-rated games for the Commodore 64 (according to Zzap!64, which awarded "Gold Medals" to these games). Notable game designers for the Commodore 64 include Paul Norman, Danielle Bunten Berry (who published as Dan Bunten), Andrew Braybrook, Stephen Landrum, Tim and Chris Stamper, Jeff Minter and Tony Gibson, to name a few. During the final mainstream commercial years of the Commodore 64, Issue 38 of Commodore Format magazine in November 1993 awarded the only 100% rating ever given to a Commodore 64 game in any major Commodore 64 publication.
As no game had ever received such a high rating before, and as the commercial Commodore 64 scene was winding down in the mid-1990s, the awarding of 100% was seen as somewhat controversial. The game, titled Mayhem in Monsterland, was developed to exploit a multitude of programming tricks and quirks in the Commodore 64's hardware to the maximum. The impressive use of non-standard colors and scrolling resulted in perhaps the most graphically stunning game ever produced for the Commodore 64. The gameplay itself is similar to that of Nintendo's Super Mario Bros. and SEGA's Sonic the Hedgehog. Whilst mainstream commercial activity for games no longer exists for the C64, many enthusiasts and hobbyists still write games for the platform. In addition, a few small publishers still sell game software. Commodore 64 games continue to inspire developers and gamers on modern platforms such as iOS with many games being produced using similar styles of game-play mechanics to those from the Commodore 64 era. Type-ins, bulletin boards, and disk magazines Besides prepackaged commercial software, the C64, like the VIC before it, had a large library of type-in programs. Numerous computer magazines offered type-in programs, usually written in BASIC or assembly language or a combination of the two. Because of its immense popularity, many general-purpose magazines that supported other computers offered C64 type-ins (Compute! was one of these), and at its peak, there were many magazines in North America (Ahoy!, Commodore Magazine, Compute!'s Gazette, Power/Play, RUN and Transactor ) dedicated to Commodore computers exclusively. These magazines sometimes had disk companion subscriptions available at extra cost with the programs stored on disk to avoid the need to type them in. The disk magazine Loadstar offered fairly elaborate ready-to-run programs, music, and graphics. Books of type-ins were also common, especially in the machine's early days. There were also many books publishing type-ins for the C-64, sometimes programs that had originally appeared in one of the magazines, but books containing original software were also available. A large library of public domain and freeware programs, distributed by online services such as Q-Link and CompuServe, BBSs, and user groups also emerged. Commodore also maintained an archive of public domain software, which it offered for sale on diskette. Despite limited RAM and disk capacity, the Commodore 64 was a popular platform for BBS hosting. Some of the most popular installations included the highly optimized and fast Blue Board program, and the Color64 BBS System, which allowed the use of color PETSCII graphics. Many BBS sysops used high-capacity floppy drives like the SFD-1001 or hard drives such as the Lt. Kernal. Software cracking The C64 software market had widespread problems with copyright infringement. There were many kinds of copy protection systems employed on both cassette and floppy disk, to prevent the unauthorized copying of commercial Commodore 64 software. Practically all of them were worked around or defeated by crackers and warez groups. The popularity of this activity has been attributed to the large Commodore 64 user base. Many BBSs offered cracked commercial software, sometimes requiring special access and usually requiring users to maintain an upload/download ratio. A large number of warez groups existed, including Fairlight, which continued to exist more than a decade after the C64's demise. 
Some members of these groups turned to telephone phreaking and credit card or calling card fraud to make long-distance calls, either to download new titles not yet available locally, or to upload newly cracked titles released by the group. Not all Commodore 64 users had modems, however. For these people, many warez group "swappers" maintained contacts throughout the world. These contacts would usually mass-mail cracked floppy disks through the postal service. Also, sneakernets existed at schools and businesses all over the world, as friends and colleagues would trade (and usually later copy) their software collections. At a time before the Internet was widespread, this was the only way for many users to amass huge software libraries. In addition, and particularly in Europe, groups of people would hold copy-parties explicitly to copy software, usually irrespective of software licence.

Several popular utilities were sold that contained custom routines to defeat most copy-protection schemes in commercial software (Fast Hack'em, probably the most popular example, was itself widely redistributed). Pirates Toolbox was another popular set of tools for copying disks and removing copy protection. Tapes could be copied with special software, but often it was simply done by dubbing the cassette in a dual-deck tape recorder, or by relying on an Action Replay cartridge to freeze the program in memory and save it to cassette. Cracked games could often be copied manually without any special tools. In Europe, some hardware devices, colloquially known as "black boxes", were available under the counter; these connected two C1530 tape decks together at the connection point to the C64, permitting a copy to be made while loading a game. This overcame the difficulty of directly dubbing later games that used the high-speed loaders developed to overcome the very long load times.

BASIC

Like most computers from the late 1970s and 1980s, the Commodore 64 came with a version of the BASIC programming language. It was used both for writing software and for performing the duties of an operating system, such as loading software and formatting disks. The onboard BASIC programming language offered no easy way to tap the machine's advanced graphics and sound capabilities. Making use of the advanced features required accessing the associated memory addresses with the PEEK and POKE commands, using third-party BASIC extensions such as Simons' BASIC, or programming in assembly language. Commodore had a better implementation of BASIC but chose to ship the C64 with the same BASIC 2.0 used in the VIC-20 to minimize cost. This, however, did not stop countless people from writing thousands of programs in the BASIC V2 language and taking their first steps in computer programming with it.

Music

The MOS Technology 6581 SID is the sound chip of the C64, for which many music software programs were written. One musical software tool for the C64 was the Kawasaki Synthesizer, created in 1983.

Development tools

Aside from games and office applications such as word processors, spreadsheets, and database programs, the C64 was well equipped with development tools from Commodore as well as third-party vendors. Various assembler solutions were available; the MIKRO assembler came in ROM cartridge form and integrated seamlessly with the standard BASIC screen editor. The PAL Assembler by Brad Templeton was also popular.
Several companies sold BASIC compilers, C compilers and Pascal compilers, and a subset of Ada, to mention but a few of the popular languages available for the machine. Probably the most popular entertainment-oriented development suite was the Shoot'Em-Up Construction Kit, affectionately known as SEUCK. SEUCK allowed those not skilled in programming to create original, professional-looking shooting games. Garry Kitchen's Gamemaker and the Arcade Game Construction Kit also allowed non-programmers to create simple games with little effort. Text adventure game tools included The Quill and Graphic Adventure Creator development suites. The Pinball Construction Set let users design their own pinball machines.

Modern-day development tools

Software development on the Commodore 64 never really stopped. There are many tools available today, including IDEs such as CBM prg Studio, Relaunch64, and WUDSN IDE, which is a plug-in for the open-source Eclipse IDE. Along with small C compilers such as cc65, there are many assemblers and cross assemblers to be used on modern-day PCs:
Turbo Assembler
Kick Assembler by Mads Nielsen
dasm
acme
ca65 (which is part of cc65)
c64asm
C64List by Jeff Hoag, both a cross assembler and a cross-platform BASIC editor/tokenizer that allows developers to write mixed BASIC/assembly programs in a text file on a PC and compile them into a single .prg file that can be executed on an actual C64 or an emulator.

Tools such as PuCrunch, an LZ77 data and executable self-extracting compression program, are also available, released under the GNU LGPL. Sprite editors like Sprite Pad allow designers to create C64 sprites and animations under Windows. GoatTracker allows musicians to write music on modern operating systems and uses the ReSID engine. Using CodeNet, it is possible to transfer programs to a C64 and execute them over a TCP/IP network cable from a PC, although this requires an Ethernet adapter on the C64 such as Individual Computers' RR-Net or an appropriate version of the 1541 Ultimate.

Retrocomputing efforts

The magnetic tapes and disks upon which home computer software was stored are decaying at an alarming rate. In order to preserve game software and information, efforts are underway to copy these degrading media onto fresh media, which will help ensure a long life for the software and make it available for emulation and archiving. In addition, there are other efforts to archive Commodore 64 documentation, software manuals, magazine articles, and other nostalgia (such as software packaging artwork, game screenshots, and Commodore 64 TV commercials). Commodore 64 game software has been remarkably well documented and preserved, a considerable feat considering the amount of software available for the platform. The GameBase 64 (GB64) organization has an online database of game information, which at version 7 holds information on 21,000 unique game titles. The database is still growing as new information comes to light. Besides the online database, a downloadable offline version exists. Using one of the front-ends, GameBase (Windows only) or jGameBase (platform-independent), users can conveniently browse the database entries and start them directly in an emulator. The GoodGB64 variant of Cowering's Good Tools allows users to audit their C64 game collections using the GameBase64 database. There are tools available to transfer original 1541 floppy disks to or from the PC.
The Star Commander is a DOS-based tool, cbm4linux is a Linux tool, and cbm4win is a Windows tool to transfer data from an original floppy drive to the PC, or vice versa, using a simple X-cable. There are also tools available, 64HDD, to allow your C64 to directly load D64 software stored on your PC using the same cables. The Individual Computers Catweasel allows PC users to use their own floppy drive to read C64 disks. In addition, there is now a growing number of emulators available, which allow the use of an emulated C64 on modern computing hardware. These include VICE, which is free and runs on most modern as well as some older platforms; CCS64, which is available for Windows and is written by Per Håkan Sundell; and Power64, which has versions for Mac OS X and OS 9. Also the Quantum Link service has been reconstructed as Quantum Link Reloaded. It can be accessed with a real Commodore 64, or through the VICE emulator. Special hardware has also been designed to aid the conservation of software, such as the IDE64 cartridge, which allows the user to connect a modern PC IDE ATA hard drive or a CompactFlash flashcard directly to the machine, giving the possibility to copy software onto the hard drive and use it from there, preventing wear on a decades-old floppy disk. Nintendo's Virtual Console service offers Commodore 64 games for download on the Wii console in North America and Europe. References
2127775
https://en.wikipedia.org/wiki/Sysfs
Sysfs
sysfs is a pseudo file system provided by the Linux kernel that exports information about various kernel subsystems, hardware devices, and associated device drivers from the kernel's device model to user space through virtual files. In addition to providing information about various devices and kernel subsystems, the exported virtual files are also used for their configuration. sysfs provides functionality similar to the sysctl mechanism found in BSD operating systems, with the difference that sysfs is implemented as a virtual file system instead of being a purpose-built kernel mechanism, and that, in Linux, sysctl configuration parameters are made available at /proc/sys/ as part of procfs, not sysfs, which is mounted at /sys/.

History

During the 2.5 development cycle, the Linux driver model was introduced to fix the following shortcomings of version 2.4:
No unified method of representing driver-device relationships existed.
There was no generic hotplug mechanism.
procfs was cluttered with non-process information.

Sysfs was designed to export the information present in the device tree, which would then no longer clutter up procfs. It was written by Patrick Mochel. Maneesh Soni later wrote the sysfs backing store patch to reduce memory usage on large systems. During the next year of 2.5 development, the infrastructural capabilities of the driver model and of driverfs (as sysfs was then called) began to prove useful to other subsystems. kobjects were developed to provide a central object management mechanism, and driverfs was renamed to sysfs to reflect its subsystem agnosticism.

Sysfs is mounted under the /sys mount point. If it is not mounted during initialization, it can be mounted manually with the command: "mount -t sysfs sysfs /sys"

Supported buses

ACPI – exports information about ACPI devices.
PCI – exports information about PCI and PCI Express devices.
PCI Express – exports information about PCI Express devices.
USB – exports information about USB devices.
SCSI – exports information about mass storage devices, including USB, SATA and NVMe interfaces.
S/390 buses – as the S/390 architecture contains devices not found elsewhere, special buses have been created:
css: contains subchannels (currently the only driver provided is for I/O subchannels).
ccw: contains channel-attached devices (driven by CCWs).
ccwgroup: artificial devices, created by the user and consisting of ccw devices; replaces some of the 2.4 chandev functionality.
iucv: artificial devices like netiucv devices, which use VM's IUCV interface.

Sysfs and userspace

Sysfs is used by several utilities, such as udev or HAL, to access information about hardware and its drivers (kernel modules). Scripts have been written to access information previously obtained via procfs, and some scripts configure device drivers and devices via their attributes.

See also

procfs
configfs
tmpfs
sysctl, an alternative way of exporting configuration, used in BSD systems

References

External links

Driver model overview from the LWN porting to 2.6 series
kobjects and sysfs from the LWN porting to 2.6 series
Ramfs
The sysfs Filesystem, OLS'05
Documentation/filesystems/sysfs.txt Linux kernel documentation for sysfs

Free special-purpose file systems
Interfaces of the Linux kernel
Linux kernel features
Pseudo file systems supported by the Linux kernel
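Because sysfs presents kernel objects as ordinary directories of small text files, user space can read it with plain file I/O and no special system calls. A minimal sketch in Python, assuming a Linux system with sysfs mounted at /sys; the class/net attribute names shown (address, operstate) are common on current kernels, but exactly which attributes exist depends on the kernel version and the drivers loaded:

```python
from pathlib import Path

SYSFS = Path("/sys")  # standard mount point; "mount -t sysfs sysfs /sys" if absent


def read_attr(path: Path) -> str:
    """sysfs attributes are small text files: read one and strip the newline."""
    return path.read_text().strip()


# Enumerate network interfaces exported under /sys/class/net and print a
# couple of their attributes (MAC address and operational state).
for iface in sorted((SYSFS / "class" / "net").iterdir()):
    print(iface.name,
          read_attr(iface / "address"),    # MAC address
          read_attr(iface / "operstate"))  # e.g. "up", "down", "unknown"
```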
40011866
https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SC%2037
ISO/IEC JTC 1/SC 37
ISO/IEC JTC 1/SC 37 Biometrics is a standardization subcommittee in the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), which develops and facilitates standards within the field of biometrics. The international secretariat of ISO/IEC JTC 1/SC 37 is the American National Standards Institute (ANSI), located in the United States. History ISO/IEC JTC 1/SC 37 was established in August 2002, after the approval of a proposal submitted by ANSI to ISO/IEC JTC 1 for the establishment of a new JTC 1 subcommittee on biometrics. The main purpose of the new subcommittee was to provide an international venue that would harmonize and accelerate formal international biometric standardization, resulting in better interoperability, reliability, usability, and security for future standards based systems and applications. With better interoperability between biometrics systems, the success of these applications would be much more likely. ISO/IEC JTC 1/SC 37 was created with the intent that it would create standards that could support the rapid deployment of open systems, standard-based security solutions for a number of purposes, such as prevention of ID theft and homeland defense. Standards developed by ISO/IEC JTC 1/SC 37 provide support to a wide range of systems and applications that provide accurate and reliable verification and identification of individuals. The subcommittee has published a number of standards pertaining to biometrics in the areas of technical interfaces, data interchange formats, performance testing and application profiles. Other topics within biometrics that have already, or are currently, being addressed by ISO/IEC JTC 1/SC 37 are performance and conformance testing methodology standards, sample quality standards, and standards and technical reports in support of technical implementation issues and cross jurisdictional issues related to the utilization of biometric technologies in commercial applications. Scope and mission The scope of ISO/IEC JTC 1/SC 37 is the "Standardization of generic biometric technologies pertaining to human beings to support interoperability and data interchange among applications and systems." Generic human biometric standards include: Common file frameworks Biometric application programming interfaces Biometric data interchange formats Related biometric profiles Application of evaluation criteria to biometric technologies Methodologies for performance testing and reporting and cross jurisdictional and societal aspects The mission of ISO/IEC JTC 1/SC 37 is to ensure a comprehensive and high priority, worldwide approach for the development and approval of international biometric standards. Work done by ISO/IEC JTC 1/SC 37 does not include: Work covered by ISO/IEC JTC 1/SC 17 for applying biometric technologies to cards and personal identification Work covered by ISO/IEC JTC 1/SC 27 for biometric data protections techniques, biometric security testing, evaluations, and evaluations methodologies ISO/IEC JTC 1/SC 37 Roadmap – 12 August 2015 Structure ISO/IEC JTC 1/SC 37 is made up of six working groups (WGs), each of which carries out specific tasks in standards development within the field of biometrics. The focus of each working group is described in the group's terms of reference. 
Working groups of ISO/IEC JTC 1/SC 37 are:
WG 1 – Harmonized biometric vocabulary
WG 2 – Biometric technical interfaces
WG 3 – Biometric data interchange formats
WG 4 – Technical implementation of biometric systems
WG 5 – Biometric performance testing and reporting
WG 6 – Cross-jurisdictional and societal aspects of biometrics

Collaborations

ISO/IEC JTC 1/SC 37 works in close collaboration with a number of other JTC 1 subcommittees, specifically ISO/IEC JTC 1/SC 17: Cards and Personal Identification, and ISO/IEC JTC 1/SC 27: IT Security Techniques. The standard ISO/IEC 7816-11:2004, Identification cards – Integrated circuit cards – Part 11: Personal verification through biometric methods, developed by ISO/IEC JTC 1/SC 17, includes an instantiation of a biometric data encapsulator developed by ISO/IEC JTC 1/SC 37. ISO/IEC JTC 1/SC 37 has also developed standards with external organizations, such as the International Labour Organization (ILO). External organizations, such as the International Civil Aviation Organization (ICAO), have also adopted many of the standards developed by ISO/IEC JTC 1/SC 37. In 2011, ICAO published the sixth edition of Document 9303 Part 1, which specifies requirements for passports (specifically, machine-readable passports) within the realms of physical security features, biometrics, and data storage media. Many of the specifications for biometrics developed and facilitated by ISO/IEC JTC 1/SC 37 were integrated into the document, specifically those pertaining to face, finger, and iris images.

Organizations internal to ISO or IEC that collaborate with, or are in liaison with, ISO/IEC JTC 1/SC 37 include:
ISO/IEC JTC 1
ISO/IEC JTC 1/SC 17
ISO/IEC JTC 1/SC 27
ISO/IEC JTC 1/SC 31
ISO/IEC JTC 1/SC 35
ISO/IEC JTC 1/SC 38
ISO/IEC JTC 1/SC 42
IEC TC 79
TC 272
IEC/SC 3C

Organizations external to ISO or IEC that collaborate with, or are in liaison with, ISO/IEC JTC 1/SC 37 include:
International Biometric Industry Association (IBIA)
International Telecommunication Union (ITU) (SG 17)
Organization for the Advancement of Structured Information Standards (OASIS)
FIDO Alliance
CEN TC 224 / WG 18
Frontex

Certain countries represented within ISO/IEC JTC 1/SC 37 have also adopted a number of the subcommittee's standards. Two official documents of Spain, the electronic national identity card (DNIe) and the Spanish ePassport, store biometric data outlined in ISO/IEC JTC 1/SC 37's standard data interchange format. In addition, the Planning Commission of the Unique Identification Authority of India (UIDAI) has also planned to use ISO/IEC JTC 1/SC 37's biometric series of standards for fingerprint (ISO/IEC 19794-4:2005, Information technology – Biometric data interchange formats – Part 4: Finger image data), face (ISO/IEC 19794-5, Information technology – Biometric data interchange formats – Part 5: Face image data) and iris (ISO/IEC 19794-6:2005, Information technology – Biometric data interchange formats – Part 6: Iris image data) data interchange formats for the organization's unique identity project. The UIDAI is currently developing the Aadhaar ("Foundation") system and also plans to incorporate a number of other standards developed and facilitated by ISO/IEC JTC 1/SC 37, including ISO/IEC 19785 CBEFF (Common Biometric Exchange Formats Framework), which provides the common structure, metadata, and security block for packaging the biometric data.

Member countries

Countries pay a fee to ISO to be members of subcommittees.
The 29 "P" (participating) members of ISO/IEC JTC 1/SC 37 are: Australia, Canada, China, Czech Republic, Denmark, Egypt, Finland, France, Germany, India, Israel, Italy, Japan, Republic of Korea, Malaysia, Netherlands, New Zealand, Norway, Poland, Portugal, Russian Federation, Singapore, South Africa, Spain, Sweden, Switzerland, Ukraine, United Kingdom, and United States of America. The 13 "O" (observing) members of ISO/IEC JTC 1/SC 37 are: Austria, Belgium, Bosnia and Herzegovina, Ghana, Hungary, Indonesia, Islamic Republic of Iran, Ireland, Kenya, Romania, Serbia, Thailand, and Turkey. Standards As of May 2017, ISO/IEC JTC 1/SC 37 has 118 published standards (including amendments) in biometrics. The types of standards within biometrics published by ISO/IEC JTC 1/SC 37, by working group, are: WG 1: Responsible for the development of a Harmonized Biometric Vocabulary: establish a systematic description of the concepts in the field of biometrics pertaining to recognition of human beings and reconcile variant terms in use in pre-existing biometric standards against the preferred terms, thereby clarifying the use of terms in this field (ISO/IEC 2382-37, Information technology – Vocabulary – Part 37: Biometrics, which is freely downloadable from the ISO website). WG 2: Technical interface standards: address necessary interfaces and interactions between biometric components and sub-systems, as well as the possible use of security mechanisms to protect stored data and data transferred between systems. Standards under development include BioAPI for Object-oriented programming languages Part 1: Architecture, Part 2: Java implementation, and Part 3: C# implementation. WG 3: Data interchange format standards: specify the content, meaning, and representation of biometric data formats which are specific to a particular biometric modality. Biometric sample quality standards and technical reports: specify terms and definitions that are useful in the specification, use and testing of image quality metrics and define the purpose, intent, and interpretation of image quality scores for a particular biometric modality. Conformance testing methodology standards: specify methods and procedures, assertion language definitions, test assertions, testing and reporting requirements, and other aspects of conformance testing methodologies. Amendments to the data interchange formats to specify XML encoding. Biometric presentation attack detection: the purpose of ISO/IEC 30107-1 is to provide a foundation for Presentation Attack Detection through defining terms and establishing a framework through which presentation attack events can be specified and detected so that they can be categorized, detailed and communicated for subsequent decision making and performance assessment activities. ISO/IEC 30107-1, Biometric presentation attack detection - Part 1: Framework, which is freely downloadable from the ISO website: http://standards.iso.org/ittf/PubliclyAvailableStandards/c053227_ISO_IEC_30107-1_2016.zip WG 4: Technical implementation of biometric systems standards and guidance: develop technical best practices, guidance, implementation requirements and biometric profiles that support the successful use and interoperability of biometric applications. 
Standards and technical reports under development include the ISO/IEC standard ISO/IEC 30124 Code of practice for the implementation of a biometric system and the Technical Reports ISO/IEC TR 29196 Guidance for Biometric Enrolment and ISO/IEC TR 30125 Use of Mobile Biometrics for Personalization and Authentication WG 5: Biometric performance testing and reporting methodology standards and technical reports: specify biometric performance metric definitions and calculations, approaches to test the performance, and requirements for reporting the test results. Standards under development include: Specifications for Machine-readable test data for biometric testing and reporting An evaluation methodology for environmental influence in biometric system performance WG 6: Standards and technical reports related to cross-jurisdictional and societal issues: address the applications of biometric technologies, specifically in respect to, accessibility, health and safety, and support of legal requirements. See also ISO/IEC JTC1 List of ISO standards American National Standards Institute International Organization for Standardization International Electrotechnical Commission FICV benchmark under ISO/IEC 19795-5 New Zealand Online Passport Renewal Note: New Zealand Passports accepts passport renewal applications including passport images online. The link above provides access to the New Zealand Online Photo Checker which can be used before completing the application form to independently check that the intended image meets New Zealand Passports photographic requirements. References External links ISO/IEC JTC 1/SC 37 page at ISO Identity management initiative 037 Biometrics
21447170
https://en.wikipedia.org/wiki/Cimatron
Cimatron
Cimatron is an Israeli software company that produces CAD/CAM software for manufacturing, toolmaking and CNC programming applications. The company was listed on the Nasdaq exchange under the symbol CIMT until its 2014 acquisition by 3D Systems. Prior to this, the company's major shareholder was DBSI, whose co-managing partner, Yossi Ben-Shalom, chaired the Cimatron board. Headquartered in Tel Aviv, the company had subsidiaries in the United States, Germany, Italy, China, South Korea, India and Brazil, as well as resellers in over 40 countries. Its main software products, CimatronE and GibbsCAM, continue to be used in over 40,000 installations worldwide. Its clients are largely from the automotive, aerospace, consumer electronics, toys, medical, optics and telecom industries. One of the company's major clients is China's Haier Mould, a subsidiary of the Haier Group.

History

The company was founded in 1982 as MicroCAD, releasing its first software products, Multicadd and Multicam, in 1984 for use by small- to medium-sized tool shops. In 1987 the company changed its name to Cimatron. In 1990, the company launched Cimatron IT, which it claimed was the world's first integrated CAD/CAM software. In March 1996, Cimatron began trading on the Nasdaq under the symbol CIMT. In 1999 Cimatron launched its product for Windows, CimatronE. In March 2011, Cimatron began trading on the Tel Aviv Stock Exchange, becoming a dual-listed company. However, in 2013 its board of directors voted to delist from the TASE.

In July 2005, Cimatron acquired an initial 27.5% interest in Microsystem Srl, its Italian distributor. By July 2008, Cimatron had completed the acquisition of 100% of Microsystem. In January 2008, Cimatron merged with the US CNC machining software company Gibbs and Associates. Former Gibbs head William Gibbs assumed the position of Cimatron President North America and Vice Chairman of Cimatron Ltd. and agreed to remain with the company for at least five years.

In 2010, Cimatron was listed by the PLM consulting firm CIMdata as one of the leading suppliers of CAM software, based on direct revenue from CAM software and services. CIMdata also predicted that Cimatron would be one of the five most rapidly growing CAM software companies in 2011. In the fourth quarter of 2010, Cimatron reported its highest-ever quarterly revenue of $11 million and an operating profit of $1.7 million.

Cimatron also collaborated with LEDAS (at the time the owner of LGS 3D) on a motion simulation application dedicated to mold, tool and die design, able to work with standard CAD shapes, i.e. canonical surfaces and NURBS. Collision detection was based on functions of the ACIS kernel, while the motion itself was computed by LGS 3D as a sequence of constraint satisfaction problems. As a result of the collaboration, Cimatron licensed LGS 3D, and the motion simulation application was developed and integrated into the CimatronE CAM system.

In 2011, the company was listed as one of Israel's fastest-growing technology companies in the Deloitte Fast 50 Awards. For 2012, Cimatron reported revenues of $42.3 million, with a record non-GAAP operating profit of $6.1 million. In February 2013, Cimatron CEO Danny Haran announced that the company had begun researching the additive manufacturing field. In March of that year Cimatron established a 3D Printing Advisory Board, naming 3D printing expert Terry Wohlers as its first member. In 2015, 3D Systems completed its acquisition of all shares of Cimatron Ltd. for approximately $97 million.
Products CimatronE CimatronE is an integrated CAD/CAM solution for mold, die and tool makers and manufacturers of discrete parts, providing associativity across the manufacturing process from quoting, through design and delivery. The solution's products include mold design, electrode design, die design, 2.5 to 5-axis numerical control (NC) programming and 5-axis discrete part production. In 2010 the CimatronE SuperBox was launched, a combined hardware-software device for the offloading and acceleration of toolpath calculations in NC programming. GibbsCAM GibbsCAM specializes in 2- through 5-axis milling, turning, mill/turning, multi-task simultaneous machining and wire-EDM. It also provides integrated manufacturing modeling, including 2D, 2.5D, 3D wireframe, surface and solid modeling. References External links Computer-aided design software Computer-aided manufacturing software Companies established in 1982 Product lifecycle management Software companies of Israel Companies formerly listed on the Nasdaq Engineering software companies Computer-aided engineering software 1982 establishments in Israel
311819
https://en.wikipedia.org/wiki/RAR%20%28file%20format%29
RAR (file format)
RAR is a proprietary archive file format that supports data compression, error recovery and file spanning. It was developed in 1993 by Russian software engineer Eugene Roshal, and the software is licensed by win.rar GmbH. The name RAR stands for Roshal Archive.

File format

The filename extensions used by RAR are .rar for the data volume set and .rev for the recovery volume set. Previous versions of RAR split large archives into several smaller files, creating a "multi-volume archive". Numbers were used in the file extensions of the smaller files to keep them in the proper sequence. The first file used the extension .rar, then .r00 for the second, and then .r01, .r02, etc.

RAR compression applications and libraries (including the GUI-based WinRAR application for Windows, the console rar utility for different OSes, and others) are proprietary software, to which Alexander L. Roshal, the elder brother of Eugene Roshal, owns the copyright. Version 3 of RAR is based on Lempel-Ziv (LZSS) and prediction by partial matching (PPM) compression, specifically the PPMd implementation of PPMII by Dmitry Shkarin.

The minimum size of a RAR file is 20 bytes. The maximum size of a RAR file is 9,223,372,036,854,775,807 (2⁶³−1) bytes, which is about 9,000 PB.

Versions

The RAR file format revision history:

1.3 – the first public version; does not have the "Rar!" signature.
1.5 – changes are not known.
2.0 – released with WinRAR 2.0 and Rar for MS-DOS 2.0; features the following changes:
Multimedia compression for true-color bitmap images and uncompressed audio.
Up to 1 MB compression dictionary.
Introduces an archive data recovery protection record.
2.9 – released in WinRAR version 3.00. Feature changes in this version include:
File naming for volumes changed from {volume name}.rar, {volume name}.r00, {volume name}.r01, etc. to {volume name}.part001.rar, {volume name}.part002.rar, etc.
Encryption of both file data and file headers.
Improved compression algorithm, using a 4 MB dictionary size and Dmitry Shkarin's PPMII algorithm for file data.
Optional creation of "recovery volumes" (.rev files) with redundancy data, which can be used to reconstruct missing files in a volume set.
Support for archive files larger than 9 GB.
Support for Unicode file names stored in UTF-16 little-endian format.
5.0 – supported by WinRAR 5.0 and later. Changes in this version:
Maximum compression dictionary size increased to 1 GB (the default for WinRAR 5.x is 32 MB, and 4 MB for WinRAR 4.x).
Maximum path length for files in RAR and ZIP archives increased to 2048 characters.
Support for Unicode file names stored in UTF-8 format.
Faster compression and decompression.
Multicore decompression support.
Greatly improved recovery.
Optional AES encryption increased from 128-bit to 256-bit.
Optional 256-bit BLAKE2 file hash instead of the default 32-bit CRC32 file checksum.
Optional duplicate file detection.
Optional NTFS hard and symbolic links.
Optional quick open record. RAR4 archives had to be parsed before opening, as file names were spread throughout the archive, slowing operation particularly with slower devices such as optical drives, and reducing the integrity of damaged archives. RAR5 can optionally create a "quick open record", a special archive block at the end of the file that contains the names of the files included, allowing archives to be opened faster.
Removal of the specialized compression algorithms for Itanium executables, text, raw audio (WAV), and raw image (BMP) files; consequently, some files of these types compress better in the older RAR (4) format with these options enabled than in RAR5.
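The two container layouts described above can be told apart by their leading signature bytes, and the two volume-naming schemes from the Versions section are easy to reproduce. The sketch below uses the widely documented "Rar!" magic values for the 1.5–4.x and 5.0 formats; the function names and everything else are illustrative only, not part of any official tooling.

```python
RAR4_MAGIC = b"Rar!\x1a\x07\x00"      # format versions 1.5 - 4.x
RAR5_MAGIC = b"Rar!\x1a\x07\x01\x00"  # format version 5.0


def rar_format(path: str) -> str:
    """Classify a file by its RAR signature (format 1.3 had no signature)."""
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(RAR5_MAGIC):
        return "RAR 5.0"
    if head.startswith(RAR4_MAGIC):
        return "RAR 1.5-4.x"
    return "no RAR signature found"


def volume_names(base: str, count: int, new_scheme: bool = True) -> list:
    """File names of a multi-volume set under the old and new naming schemes."""
    if new_scheme:                        # 2.9 and later: name.part001.rar, ...
        return [f"{base}.part{n:03d}.rar" for n in range(1, count + 1)]
    return [f"{base}.rar"] + [            # older scheme: name.rar, name.r00, ...
        f"{base}.r{n:02d}" for n in range(count - 1)]
```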
Notes

Software

Operating system support

Software is available for Microsoft Windows (named WinRAR), Linux, FreeBSD, macOS, and Android; archive extraction is supported natively in Chrome OS. WinRAR supports the Windows graphical user interface (GUI); other versions, named RAR, run as console commands.

Later versions are not compatible with some older operating systems previously supported:
WinRAR v6.10 supports Windows Vista and later.
WinRAR v6.02 is the last version that supports Windows XP.
WinRAR v4.11 is the last version that supports Windows 2000.
WinRAR v3.93 is the last version that supports Windows 95, 98, ME, and NT.
RAR v3.93 is the last version that supports MS-DOS and OS/2 on 32-bit x86 CPUs such as the 80386 and later. It supports long file names in a Windows DOS box (except Windows NT), and uses the RSX DPMI extender.
RAR v2.50 is the last version that supports MS-DOS and OS/2 on 16-bit x86 CPUs such as the Intel 8086, 8088, and 80286.

Creating RAR files

RAR files can be created only with the commercial software WinRAR (Windows), RAR for Android, command-line RAR (Windows, MS-DOS, macOS, Linux, and FreeBSD), and other software that has written permission from Alexander Roshal or uses copyrighted code under license from Roshal. The software license agreements forbid reverse engineering.

Third-party software for extracting RAR files

Several programs can unpack the file format.
RARLAB distributes the C++ source code and binaries for a command-line unrar program. The license permits its use to produce software capable of unpacking, but not creating, RAR archives, without having to pay a fee. It is not a free software license.
7-Zip, a free and open-source program, can unpack RAR5 archives starting from 7-Zip version 15.06 beta, using the RARLAB unrar code.
PeaZip is a free RAR unarchiver licensed under the LGPL; it runs as a RAR extractor on Linux, macOS, and Windows, with a GUI. PeaZip supports both pre-RAR5 .rar files and files in the new RAR5 format.
The Unarchiver is a proprietary software unarchiver for RAR and other formats. It runs on macOS, and the command-line version, , also runs on Windows and on Linux and is free software licensed under the LGPL. It supports all versions of the RAR archive format, including RAR3 and RAR5.
UNRARLIB (UniquE RAR File Library) was an obsolete free software unarchiving library called "unrarlib", licensed under the GPL. It could only decompress archives created by RAR versions prior to 2.9; archives created by RAR 2.9 and later use different formats not supported by this library. The original development team ended work on this library in 2007.
libarchive, a free and open-source library for reading and writing a variety of archive formats, supports all RAR versions including RAR5. The code was written from scratch using RAR's "technote.txt" format description.

Other uses of rar

The filename extension rar is also used by the unrelated Resource Adapter Archive file format.

See also

.cbr
List of archive formats
Comparison of archive formats
Comparison of file archivers
Data corruption, Bit rot, Disc rot

References

External links

RARLAB FTP download website, current and old versions of WinRAR and RAR
RAR 5.0 archive file format

Computer-related introductions in 1993
Archive formats
Russian inventions
968213
https://en.wikipedia.org/wiki/Individual%20Computers%20Catweasel
Individual Computers Catweasel
The Catweasel is a family of enhanced floppy-disk controllers from German company Individual Computers. These controllers are designed to allow more recent computers, such as PCs, to access a wide variety of older or non-native disk formats using standard floppy drives. Principle The floppy controller chip used in IBM PCs and compatibles was the NEC 765A. As technology progressed, descendants of these machines used what were essentially extensions to this chip. Many other computers, particularly ones from Commodore and early ones from Apple, write disks in formats which cannot be encoded or decoded by the 765A, even though the drive mechanisms are more or less identical to ones used on PCs. The Catweasel was therefore created to emulate the hardware necessary to produce these other low-level formats. The Catweasel provides a custom floppy drive interface in addition to any other floppy interfaces the computer is already equipped with. Industry standard floppy drives can be attached to the Catweasel, allowing the host computer to read many standard and custom formats by means of custom software drivers. Supported formats: Versions The initial version of the Catweasel was introduced in 1996 and has since undergone several revisions. The Catweasel Mk1 and Mk2, for the Commodore Amiga 1200 and Amiga 4000, sold out in October 2001. The Mk3 added PCI compatibility and sold out in mid-2004. It was succeeded by the Mk4. The Mk2 was re-released in 2006 as a special "Anniversary Edition". Mk1 The original version of the Catweasel was introduced in 1996 for the Amiga computer, and was available in two versions - one for the Commodore Amiga 1200 and one for the Amiga 4000. The Amiga 1200 version connected to the machine's clock port; the Amiga 4000 version connected to the machine's IDE port. A pass-through was provided on the Amiga 4000 version so that the IDE port could still be used for mass storage devices. ISA A version of the Catweasel controller was developed for use in a standard PC ISA slot as a means of reading custom non-PC floppy formats from MS-DOS. Custom DOS commands are required to use the interface. Official software and drivers are also available for Windows. Mk2 and Mk2 Anniversary Edition The Mk2 Catweasel was a redesign of the original Catweasel, merging the Amiga 1200 and Amiga 4000 versions into a single product that could be used on both computers, and providing a new PCB layout that allowed it to be more easily installed in a standard Amiga 1200 case. The continued popularity of the Catweasel Mk2 led to a special "Anniversary Edition" of this model being released in 2006. The PCB of the Anniversary Edition received minor updates, however it retained the same form factor and functionality as the Mk2. Z-II The Catweasel Z-II version was an Amiga Zorro-II expansion that combined the Catweasel Mk2 controller with another Individual Computers product, the Buddha, on a single board providing floppy and IDE interfaces to the host computer. Mk3 The Catweasel Mk3 was designed to interface with either a PCI slot, an Amiga Zorro II slot or the clock port of an Amiga 1200. In addition to the low-level access granted to floppy drives, it has a socket for a Commodore 64 SID sound chip, a port for an Amiga 2000 keyboard, and two 9-pin digital joysticks (Atari 2600 de facto standard). The Mk3 was succeeded by the Mk4. Mk4 and Mk4plus The Catweasel Mk4 was officially announced on 18 July 2004, with a wide array of new features planned. 
However, due to manufacturing delays and production backlogs, the Mk4 was not released until early February 2005. This version of the Catweasel makes heavy use of reconfigurable logic in the form of an Altera ACEX EP1K30TC144-3N FPGA chip, as well as an AMD MACH110 PLD and a PCI interface IC. The Mk4/Mk4+ driver uploads the FPGA microcode on start, which makes easy updates possible without having to replace hardware. Official software and drivers are available for Windows, and unofficial drivers and utilities are available for Linux. The Catweasel Mk4Plus appears to be no longer available. References External links Official wiki Catweasel product webpage, Catweasel MK4 announcement Catweasel Floppy Read/Write Tools for TRS-80 & DEC RX02 diskettes ACID 64 Player - C64 Music Player for all HardSID cards and the Catweasel MK series Karsten Scheibler's Linux kernel driver and command-line utility Michael Krause's Linux block-device kernel driver Arjuna floppy controller software homepage DrawBridge aka Arduino Amiga Floppy Disk Reader and Writer Greaseweazle USB Floppy Drive Reader Fluxengine USB Floppy Disk Interface AmigaOS & MorphOS driver development page Reading 1541 reverse sides Floppy disk drives Amiga MorphOS
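To give a flavour of the software-side decoding that flux-level controllers such as the Catweasel make possible, the following conceptual Python sketch classifies flux-transition intervals into MFM bit-cell lengths. This is not the Catweasel driver API; the nominal double-density timings and sample values are assumptions used purely for illustration.

# Conceptual sketch only: classify flux-transition intervals (in microseconds)
# into MFM bit-cell lengths, the kind of low-level decoding that custom software
# drivers perform on data sampled by flux-level controllers like the Catweasel.
# The nominal 4/6/8 microsecond spacings are typical double-density MFM values
# and are an assumption, not taken from the article.
def classify_interval(us):
    if us < 5.0:
        return 2   # short gap, "10" pattern
    if us < 7.0:
        return 3   # medium gap, "100" pattern
    return 4       # long gap, "1000" pattern

flux_intervals = [4.1, 6.0, 7.9, 4.2, 5.8]   # hypothetical sampled timings
cells = [classify_interval(t) for t in flux_intervals]
print(cells)  # e.g. [2, 3, 4, 2, 3]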
13357114
https://en.wikipedia.org/wiki/Acamas%20%28son%20of%20Theseus%29
Acamas (son of Theseus)
In Greek mythology, Acamas or Akamas (;Ancient Greek: , folk etymology: 'unwearying') was a character in the Trojan War. Family Acamas was the son of King Theseus of Athens and Phaedra, daughter of Minos. He was the brother or half brother to Demophon. Mythology After his father lost the throne of Athens, Acamas grew up an exile in Euboea with his brother under Elephenor. He and Diomedes were sent to negotiate the return of Helen before the start of the Trojan War, though Homer ascribes this embassy to Menelaus and Odysseus. During his stay at Troy he caught the eye of Priam's daughter Laodice, and fathered her son Munitus. The boy was raised by Aethra, Acamas' grandmother, who was living in Troy as one of Helen's slaves. Munitus later died of a snakebite while hunting at Olynthus in Thrace. In the war, Acamas fought on the side of the Greeks and was counted among the men inside the Trojan Horse. After the war, he rescued Aethra from her long captivity in Troy. Later mythological traditions describe the two brothers embarking on other adventures as well, including the capture of the Palladium. Some sources relate of Acamas the story which is more commonly told of his brother Demophon, namely the one of his relationship with Phyllis of Thrace. This might be a mistake. Acamas is not mentioned in Homer's Iliad, but later works, including Virgil's Aeneid, and almost certainly the Iliou persis, mention that Acamas was one of the men inside the Trojan horse. The dominant character trait of Acamas is his interest in faraway places. Eponyms and Acamas in art The promontory of Acamas in Cyprus, the town of Acamentium in Phrygia, and the Attic tribe Acamantis all derived their names from him. Notes References Diodorus Siculus, The Library of History translated by Charles Henry Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Vol. 3. Books 4.59–8. Online version at Bill Thayer's Web Site Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888-1890. Greek text available at the Perseus Digital Library. Euripides, Heracleidae with an English translation by David Kovacs. Cambridge. Harvard University Press. 1994. Online version at the Perseus Digital Library. Greek text available from the same website. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Graves, Robert, The Greek Myths, Harmondsworth, London, England, Penguin Books, 1960. Graves, Robert, The Greek Myths: The Complete and Definitive Edition. Penguin Books Limited. 2017. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. 
Pseudo-Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. . Online version at the Perseus Digital Library. Greek text available from the same website. Stephanus of Byzantium, Stephani Byzantii Ethnicorum quae supersunt, edited by August Meineike (1790-1870), published 1849. A few entries from this important ancient handbook of place names have been translated by Brady Kiesling. Online version at the Topos Text Project. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Children of Theseus Achaean Leaders People of the Trojan War cs:Akamás#Akamás z Athén ja:アカマース#テーセウスの子 Attican characters in Greek mythology Characters in Greek mythology
3588315
https://en.wikipedia.org/wiki/Nuke%20%28software%29
Nuke (software)
Nuke is a node-based digital compositing and visual effects application first developed by Digital Domain, and used for television and film post-production. Nuke is available for Microsoft Windows 7, OS X 10.9, Red Hat Enterprise Linux 5, and newer versions of these operating systems. Foundry has further developed the software since Nuke was sold in 2007. Nuke's users include Digital Domain, Walt Disney Animation Studios, Blizzard Entertainment, DreamWorks Animation, Illumination Mac Guff, Sony Pictures Imageworks, Sony Pictures Animation, Framestore, Weta Digital, Double Negative, and Industrial Light & Magic. History Nuke (the name deriving from 'New compositor') was originally developed by software engineer Phil Beffrey and later Bill Spitzak for in-house use at Digital Domain beginning in 1993. In addition to standard compositing, Nuke was used to render higher-resolution versions of composites from Autodesk Flame. Nuke version 2 introduced a GUI in 1994, built with FLTK – an in-house GUI toolkit developed at Digital Domain. FLTK was subsequently released under the GNU LGPL in 1998. Nuke won an Academy Award for Technical Achievement in 2001. In 2002, Nuke was publicly released by D2 Software. In 2005, Nuke 4.5 introduced a new 3D subsystem developed by Jonathan Egstad. In 2007, The Foundry, a London-based plug-in development company, took over development and marketing of Nuke from D2. The Foundry released Nuke 4.7 in June 2007, and Nuke 5 was released in early 2008, which replaced the interface with Qt and added Python scripting, and support for a stereoscopic workflow. In 2015, The Foundry released Nuke Non-commercial with some basic limitations. Nuke supports use of The Foundry plug-ins via its support for the OpenFX standard (several built-in nodes such as Keylight are OpenFX plugins). Similar products VSDC Free Video Editor Fusion – Blackmagic Design Boris RED – Boris FX Natron After Effects – Adobe While not intended for compositing, the free and open source Blender contains a limited node-based compositing feature which, among other things, is capable of basic keying and blurring effects. References External links Sourceforge site for the OpenFX effects plug-in standard Compositing software Visual effects software Proprietary software that uses Qt Software that uses FLTK Software that uses Qt
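To illustrate the Python scripting support added in Nuke 5 (noted above), here is a rough sketch of building a small node graph inside Nuke's bundled Python interpreter. The node and knob names are illustrative assumptions and should be checked against Foundry's documentation; the file paths are hypothetical.

# Rough sketch of Nuke's Python scripting: build a tiny Read -> Blur -> Write graph.
# Runs inside Nuke's own Python interpreter; exact knob names should be verified
# against the Foundry documentation. File paths are hypothetical.
import nuke

read = nuke.nodes.Read(file="plate.####.exr")   # hypothetical input image sequence
blur = nuke.nodes.Blur(size=5)
blur.setInput(0, read)

write = nuke.nodes.Write(file="out.####.exr")   # hypothetical output path
write.setInput(0, blur)

nuke.execute(write, 1, 10)                      # render frames 1 through 10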
3026705
https://en.wikipedia.org/wiki/Association%20for%20Competitive%20Technology
Association for Competitive Technology
The Association for Competitive Technology, now known as ACT | The App Association is a trade association representing over 5,000 application software developers and small and mid-sized technology companies in the United States and Europe. The App Association was founded in 1998 by independent software developers who were concerned that the Microsoft antitrust case would cause great disruption of the platform for which they wrote software. The organization represents app developers whose issues primarily involve: A competitive ecosystem in the mobile marketplace providing app developers with the best opportunities Strong support for intellectual property rights Limited government involvement in technology (such as antitrust actions or mandates to use free software / open source software instead of proprietary alternatives), and Concern that governance of global internet infrastructure maintain a balance of government and industry interests The App Association has played a prominent role in educating lawmakers and regulators on technology issues affecting app developers. The organization has testified multiple times before House and Senate Committees and briefed White House and administration officials about the challenges and concerns app developers face. The App Association participates at conferences across the country speaking to developers about privacy issues that confront this nascent marketplace. The group sponsors hackathons, developer camps, and has annual conferences to encourage app developers to meet and engage with their elected officials in Washington, D.C. In 2011, ACT | The App Association testified before the House Judiciary Committee on children's online privacy protections, before the Senate Judiciary Committee on privacy and location-based services, before the Senate Commerce Committee on mobile privacy issues, and before the House Administration Committee on improving Congress's use of tablets and other paperless communications measures. In 2010, ACT | The App Association testified before the House Judiciary Committee on competition in the mobile marketplace and before the Senate Finance Committee on international trade in the digital economy. In the past (circa 2005–2007), ACT| The App Association has lobbied against the Massachusetts endorsement of the OpenDocument standards. ACT | The App Association has large, independent sponsors such as Microsoft, Apple, eBay, Oracle, Intel and VeriSign. On March 9, 2006, then President of ACT | The App Association, Jonathan Zuck wrote an opinion piece which was published on news.com, criticizing the Free Software Foundation's plan to fight digital rights management (DRM) with the new 3.0 version of the GNU General Public License. Since 2015 ACT started lobbying activities in the EU area and began gathering important memberships in companies successfully working in EU software and the field of application production. Some of the main software houses which joined ACT in Europe are Brightec (UK), Egylis (FR), AppsGarden (PL), Andaman (BE), and Synethsia (the Italian software house which organizes Droidcon Turin). A draft of a European Commission strategy paper on open-source software with modifications by ACT's Jonathan Zuck was leaked (via WikiLeaks) in February 2009 showing, in the words of Linux Journal, "how lobbyists operate in their attempt to neuter threats to their constituencies through the shameless evisceration and outright inversion of content." 
References External links ACT official web site SourceWatch entry for ACT Discussion of ACT's lobbying in Massachusetts, with link to recording Information technology organizations Political advocacy groups in the United States Lobbying organizations in the United States
48720974
https://en.wikipedia.org/wiki/React%20Native
React Native
React Native is an open-source UI software framework created by Meta Platforms, Inc. It is used to develop applications for Android, Android TV, iOS, macOS, tvOS, Web, Windows and UWP by enabling developers to use the React framework along with native platform capabilities. It is also being used to develop virtual reality applications at Oculus.

History
In 2012 Mark Zuckerberg commented, "The biggest mistake we made as a company was betting too much on HTML as opposed to native". Using HTML5 for Facebook's mobile version resulted in an unstable application that retrieved data slowly. He promised Facebook would soon deliver a better mobile experience.
Inside Facebook, Jordan Walke found a way to generate UI elements for iOS from a background JavaScript thread, which became the basis for the React web framework. They decided to organize an internal Hackathon to perfect this prototype in order to be able to build native apps with this technology.
In 2015, after months of development, Facebook released the first version for the React JavaScript Configuration. During a technical talk, Christopher Chedeau explained that Facebook was already using React Native in production for their Group App and their Ads Manager App.

Implementation
The working principles of React Native are virtually identical to React except that React Native does not manipulate the DOM via the Virtual DOM. It runs in a background process (which interprets the JavaScript written by the developers) directly on the end-device and communicates with the native platform via serialized data over an asynchronous and batched bridge. React components wrap existing native code and interact with native APIs via React’s declarative UI paradigm and JavaScript. While React Native styling has a similar syntax to CSS, it does not use HTML or CSS. Instead, messages from the JavaScript thread are used to manipulate native views. React Native also allows developers to write native code in languages such as Java or Kotlin for Android, Objective-C or Swift for iOS, and C++/WinRT or C# for Windows 10, which makes it even more flexible. Microsoft builds and maintains React Native for Windows and React Native for macOS.

Hello World example
A Hello, World program in React Native looks like this:

import { AppRegistry, Text } from 'react-native';
import * as React from 'react';

const HelloWorldApp = () => {
  return <Text>Hello world!</Text>;
}

export default HelloWorldApp;
AppRegistry.registerComponent('HelloWorld', () => HelloWorldApp);

See also
Multiple phone web-based application framework
NativeScript
Xamarin
Appcelerator Titanium
Apache Cordova
Flutter (software)
Qt (software)
Codename One
React Native at Microsoft (Windows and macOS)

References

Mobile software development Software using the MIT license Facebook software Software development Cross-platform software
25469171
https://en.wikipedia.org/wiki/Advanced%20Digital%20Broadcast
Advanced Digital Broadcast
Advanced Digital Broadcast (ADB) is a company which provides software, system and services to pay-TV and telecommunication operators, content distributors and property owners around the world. The company specializes also in the development of digital connectivity devices such as set-top boxes and residential gateways. ADB's global headquarters is located in Bellevue, Switzerland. The company has research and development facilities in Poland and Italy and an Operations division in Taipei, Taiwan. ADB has local offices in several countries in Europe and the United States. Founded in 1995, ADB initially focused on developing and marketing software for digital TV processors and expanded its business to the design and manufacture of digital TV equipment in 1997. The company sold its first set-top box in 1997 and since then has been delivering a number of set-top boxes, and residential gateways, together with advanced software platforms. ADB has sold over 64 million devices worldwide to cable, satellite, IPTV and broadband operators. ADB employs over 350 people, of which 70% are in engineering functions. History In 1995, ADB was founded by Andrew Rybicki, Janusz C. Szajna, Krzysztof Kolbuszewski and Mariusz Walkowiak with an initial focus on developing and marketing software for advanced digital TV processors. In 1997, ADB started designing and manufacturing of digital TV equipment. ADB designed its first commercial set-to box, and established its dedicated R&D facility in Poland and corporate headquarters in Taiwan. Between 1998 and 2000 offices were opened in Australia, and Spain, and ADB sold its one millionth set-top box. In 2001, ADB announced the development of an open standard set-top box middleware based on the Multimedia Home Platform (MHP) specification and established its worldwide headquarters in Geneva, Switzerland. In 2002, the company opened its Americas headquarters in Chicago (since moved to Denver). and became the world's first set-top box provider to launch an MHP digital receiver, the i-CAN 3000, in Finland. In 2003, ADB delivered set-top boxes for the launch of digital terrestrial TV services in Italy, announced the world's first hybrid MHP Internet set-top box with terrestrial reception and became the first company to supply the television industry with MHP-compliant set-top boxes. The company received a German-Polish Innovation award and a Cable & Satellite International (CSI) Product of the Year award for its i-CAN 3000 set-top box. In 2005, ADB Group was floated on the Swiss Stock Exchange (SWX) and ADB secured its first major IPTV contract to supply set-top boxes to Telefónica in Spain. The company unveiled its first single-chip MPEG-4 AVC based set-top box, the ADB-7800TW. In the same year ADB established new R&D Centre in Kharkiv, Ukraine. In 2006 ADB introduced European-based manufacturing and shipped its eight millionth set-top box. The company partnered with UK broadcasters the BBC, Channel 4, Five and ITV.in for the first HDTV field trial on the UK's digital terrestrial network. This year ADB participated also in a pilot with Italian broadcaster RAI for the world's first HDTV transmission using MPEG-4 AVC video compression during coverage of the Winter Olympics from Turin, Italy. In 2007, ADB deployed first IPTV STB on North American Telco Network In 2008, ADB deployed its 12 millionth set-top box and introduced tru2way certified set-top boxes for the interactive cable TV market. 
The same year, ADB received Product of the Year award by IBC and IPTV World Series Awards. In 2009, ADB was announced TV Innovator of the Year & Best set-top box technology provider by IMS RESEARCH In 2010, ADB Group acquired Pirelli Broadband Solutions, thus entering the broadband market. In the same year ADB became Indonesia’s first HD interactive set-top solution provider. ADB received 2 Product of the Year awards by CSI at IBC. In 2015, ADB rebranded and changed the company logo. In 2016, ADB's personal TV platform graphyne2 won the Best of Show 2016 award and CSI Award for Best interactive TV technology or application during IBC in Amsterdam. In 2018, ADB implements Wi-Fi Mesh Technology to the company's portfolio. In 2021, ADB introduces a premium fiber optic access gateway with Wi-Fi 6 to the company's product portfolio. Products ADB is a global supplier of set-top boxes and residential gateways to cable, satellite, IP, terrestrial television operators, and broadband service providers. The company sells software for advanced pay-TV services, such as catch-up, start-over, multi-room video recording, VOD, interactive TV and internet applications, consumer devices, middleware, applications, head ends; broadband access gateways and home networking devices; remote management software; and life-cycle and integration services. References Television technology Interactive television Video on demand Broadband Companies based in the canton of Geneva
15036
https://en.wikipedia.org/wiki/Information%20security
Information security
Information security, sometimes shortened to InfoSec, is the practice of protecting information by mitigating information risks. It is part of information risk management. It typically involves preventing or reducing the probability of unauthorized/inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g. electronic or physical, tangible (e.g. paperwork) or intangible (e.g. knowledge). Information security's primary focus is the balanced protection of the confidentiality, integrity, and availability of data (also known as the CIA triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a structured risk management process that involves: identifying information and related assets, plus potential threats, vulnerabilities, and impacts; evaluating the risks; deciding how to address or treat the risks i.e. to avoid, mitigate, share or accept them; where risk mitigation is required, selecting or designing appropriate security controls and implementing them; monitoring the activities, making adjustments as necessary to address any issues, changes and improvement opportunities. To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on password, antivirus software, firewall, encryption software, legal liability, security awareness and training, and so forth. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred and destroyed. However, the implementation of any standards and guidance within an entity may have limited effect if a culture of continual improvement isn't adopted. Definition Various definitions of information security are suggested below, summarized from different sources: "Preservation of confidentiality, integrity and availability of information. Note: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved." (ISO/IEC 27000:2009) "The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability." (CNSS, 2010) "Ensures that only authorized users (confidentiality) have access to accurate and complete information (integrity) when required (availability)." (ISACA, 2008) "Information Security is the process of protecting the intellectual property of an organisation." (Pipkin, 2000) "...information security is a risk management discipline, whose job is to manage the cost of information risk to the business." (McDermott and Geer, 2001) "A well-informed sense of assurance that information risks and controls are in balance." (Anderson, J., 2003) "Information security is the protection of information and minimizes the risk of exposing information to unauthorized parties." 
(Venter and Eloff, 2003) "Information Security is a multidisciplinary area of study and professional activity which is concerned with the development and implementation of security mechanisms of all available types (technical, organizational, human-oriented and legal) in order to keep information in all its locations (within and outside the organization's perimeter) and, consequently, information systems, where information is created, processed, stored, transmitted and destroyed, free from threats. Threats to information and information systems may be categorized and a corresponding security goal may be defined for each category of threats. A set of security goals, identified as a result of a threat analysis, should be revised periodically to ensure its adequacy and conformance with the evolving environment. The currently relevant set of security goals may include: confidentiality, integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability and auditability." (Cherdantseva and Hilton, 2013) Information and information resource security using telecommunication system or devices means protecting information, information systems or books from unauthorized access, damage, theft, or destruction (Kurose and Ross, 2010). Overview At the core of information security is information assurance, the act of maintaining the confidentiality, integrity, and availability (CIA) of information, ensuring that information is not compromised in any way when critical issues arise. These issues include but are not limited to natural disasters, computer/server malfunction, and physical theft. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). It is worthwhile to note that a computer does not necessarily mean a home desktop. A computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators, to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that often attempt to acquire critical private information or gain control of the internal systems. The field of information security has grown and evolved significantly in recent years. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics. Information security professionals are very stable in their employment. more than 80 percent of professionals had no change in employer or employment over a period of a year, and the number of professionals is projected to continuously grow more than 11 percent annually from 2014 to 2019. Threats Information security threats come in many different forms. 
Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples of software attacks. The theft of intellectual property has also been an extensive issue for many businesses in the information technology (IT) field. Identity theft is the attempt to act as someone else usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering. Theft of equipment or information is becoming more prevalent today due to the fact that most devices today are mobile, are prone to theft and have also become far more desirable as the amount of data capacity increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property back to its owner, as with ransomware. There are many ways to help protect yourself from some of these attacks but one of the most functional precautions is conduct periodical user awareness. The number one threat to any organisation are users or internal employees, they are also called insider threats. Governments, military, corporations, financial institutions, hospitals, non-profit organisations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Should confidential information about a business' customers or finances or new product line fall into the hands of a competitor or a black hat hacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures. Responses to threats Possible responses to a security threat or risk are: reduce/mitigate – implement safeguards and countermeasures to eliminate vulnerabilities or block threats assign/transfer – place the cost of the threat onto another entity or organization such as purchasing insurance or outsourcing accept – evaluate if the cost of the countermeasure outweighs the possible cost of loss due to the threat History Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. However, for the most part protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box. 
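The Caesar cipher mentioned above illustrates the basic idea of protecting confidentiality by transforming a message: each letter is shifted a fixed number of places through the alphabet. A minimal Python sketch, assuming the traditional shift of three for illustration:

# Minimal sketch of the Caesar cipher: shift each letter by a fixed amount.
# The shift of 3 is the traditionally cited value and is assumed here.
def caesar(text, shift=3):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

print(caesar("ATTACK AT DAWN"))        # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))    # shifting back recovers the original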
As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653). In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to the degree of sensitivity. For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust. A public interest defense was soon added to defend disclosures in the interest of the state. A similar law was passed in India in 1889, The Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies. A newer version was passed in 1923 that extended to all matters of confidential or secret information for governance. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters. Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information. The establishment of computer security inaugurated the history of information security. The need for such appeared during World War II. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed. The Enigma Machine, which was employed by the Germans to encrypt the data of warfare and was successfully decrypted by Alan Turing, can be regarded as a striking example of creating and using secured information. Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570). Various Mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth by computer centers. As such, the Advanced Research Projects Agency (ARPA), of the United States Department of Defense, started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Dr. Larry Roberts, which would later evolve into what is known as the internet. In 1973, important elements of ARPANET security were found by internet pioneer Robert Metcalfe to have many flaws such as the: "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access. Hackers had effortless access to ARPANET, as phone numbers were known by the public. 
Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity". The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful, and less expensive computing equipment made electronic data processing within the reach of small business and home users. The establishment of Transfer Control Protocol/Internetwork Protocol (TCP/IP) in the early 1980s enabled different types of computers to communicate. These computers quickly became interconnected through the internet. The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit. The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems. Basic principles Key concepts The CIA triad of confidentiality, integrity, and availability is at the heart of information security. (The members of the classic InfoSec triad—confidentiality, integrity, and availability—are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) However, debate continues about whether or not this CIA triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts. The triad seems to have first been mentioned in a NIST publication in 1977. In 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed the nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment. Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles. From each of these derived guidelines and practices. In 1998, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals. In 2011, The Open Group published the information security management standard O-ISM3. This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). 
In 2009, DoD Software Protection Initiative released the Three Tenets of Cybersecurity which are System Susceptibility, Access to the Flaw, and Capability to Exploit the Flaw. Neither of these models are widely adopted. Confidentiality In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes." While similar to "privacy," the two words are not interchangeable. Rather, confidentiality is a component of privacy that implements to protect our data from unauthorized viewers. Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals. Integrity In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats. Multi-purpose and multi-user computer systems aim to compartmentalize the data and processing such that no user or process can adversely impact another: the controls may not succeed however, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches. More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance. Availability For any information system to serve its purpose, the information must be available when it is needed. This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down. In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program. Ultimately end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that an organization's stakeholders expect. This can involve topics such as proxy configurations, outside web access, the ability to access shared drives and the ability to send emails. Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management. A successful information security team involves many different key roles to mesh and align for the CIA triad to be provided effectively. 
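As an illustration of the integrity property described above, a cryptographic hash recorded when data is known to be good will no longer match if the data is later modified in an unauthorized or undetected manner. A minimal Python sketch using the standard library; the file name is hypothetical.

# Minimal sketch of an integrity check: compare a stored SHA-256 digest against
# a freshly computed one to detect modification. File name is hypothetical.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

recorded = sha256_of("payroll.csv")      # stored while the file is known-good
# ... later ...
if sha256_of("payroll.csv") != recorded:
    print("Integrity violation: file has been modified")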
Non-repudiation In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message, and nobody else could have altered it in transit (data integrity). The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are pre-requisites for non-repudiation). Risk management Broadly speaking, risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization." There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Furthermore, these processes have limitations as security breaches are generally rare and emerge in a specific context which may not be easily duplicated. Thus, any process and countermeasure should itself be evaluated for vulnerabilities. It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk." A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. 
The assessment may use a subjective qualitative analysis based on informed opinion, or where reliable dollar figures and historical information is available, the analysis may use quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human. The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment: security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development, and maintenance, information security incident management, business continuity management regulatory compliance. In broad terms, the risk management process consists of: Identification of assets and estimating their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies. Conduct a threat assessment. Include: Acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost effective protection without discernible loss of productivity. For any given risk, management can choose to accept the risk based upon the relative low value of the asset, the relative low frequency of occurrence, and the relative low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. Security controls Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels. Control selection should follow and should be based on the risk assessment. Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas. Organizations can implement additional controls according to requirement of the organization. ISO/IEC 27002 offers a guideline for organizational information security standards. Administrative Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards, and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. 
Some industry sectors have policies, procedures, standards, and guidelines that must be followed – the Payment Card Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls, which are of paramount importance. Logical Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. Passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are examples of logical controls. An important logical control that is frequently overlooked is the principle of least privilege, which requires that an individual, program or system process not be granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read email and surf the web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, employees are promoted to a new position, or employees are transferred to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate. Physical Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities and include doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and workplace into functional areas are also physical controls. An important physical control that is frequently overlooked is separation of duties, which ensures that an individual can not complete a critical task by himself. For example, an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator; these roles and responsibilities must be separated from one another. Defense in depth Information security must protect information throughout its lifespan, from the initial creation of the information on through to the final disposal of the information. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on, and overlapping of security measures is called "defense in depth." 
In contrast to a metal chain, which is famously only as strong as its weakest link, the defense in depth strategy aims at a structure where, should one defensive measure fail, other measures will continue to provide protection. Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense in depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security, and application security forming the outermost layers of the onion. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy. Classification An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification. Some factors that influence which classification information should be assigned include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serves as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed. The type of information security classification labels selected and used will depend on the nature of the organization, with examples being: In the business sector, labels such as: Public, Sensitive, Private, Confidential. In the government sector, labels such as: Unclassified, Unofficial, Protected, Confidential, Secret, Top Secret, and their non-English equivalents. In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red. All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification of a particular information asset that has been assigned should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are followed in their right procedures. Access control Access to protected information must be restricted to people who are authorized to access the information. 
The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Access control is generally considered in three steps: identification, authentication, and authorization.

Identification
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to".

Authentication
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity. The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to. There are three different types of information that can be used for authentication:
Something you know: things such as a PIN, a password, or your mother's maiden name
Something you have: a driver's license or a magnetic swipe card
Something you are: biometrics, including palm prints, fingerprints, voice prints, and retina (eye) scans
Strong authentication requires providing more than one type of authentication information (two-factor authentication). The username is the most common form of identification on computer systems today and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are increasingly inadequate. Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as Time-based One-time Password algorithms.

Authorization
After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms.
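Tying the identification and authentication steps above together, the sketch below stores only a salted hash of each password and compares hashes in constant time when a login is attempted. It uses only the Python standard library (hashlib, secrets, hmac); the username, example password, iteration count, and in-memory storage are illustrative assumptions rather than a prescribed implementation.

import hashlib, hmac, secrets

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a key from the password; only this derived value is stored, never the password itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

# Enrollment: record a salt and the derived hash for the claimed identity (the username).
users = {}
salt = secrets.token_bytes(16)
users["jdoe"] = (salt, hash_password("correct horse battery staple", salt))

def authenticate(username: str, password: str) -> bool:
    # Identification is the claimed username; authentication checks the proof offered for it.
    record = users.get(username)
    if record is None:
        return False
    salt, stored_hash = record
    candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking information through timing differences.
    return hmac.compare_digest(candidate, stored_hash)

print(authenticate("jdoe", "correct horse battery staple"))  # True
print(authenticate("jdoe", "wrong password"))                # False

A second factor, such as the Time-based One-time Password algorithms mentioned above, would be verified in addition to a check like this one, and authorization would then consult a policy such as the role-based schemes described next.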
The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration. The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource. Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail. Also, the need-to-know principle needs to be in effect when talking about access control. This principle gives access rights to a person to perform their job functions. This principle is used in the government when dealing with different clearances. Even though two employees in different departments have a top-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to. Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidentiality element of the triad.

Cryptography
Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange.
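The encrypt-then-decrypt round trip described above can be sketched in a few lines. The example below uses the Fernet construction from the third-party Python cryptography package (an authenticated symmetric scheme built on AES); it is an illustrative sketch rather than a recommendation of a particular product, and the message text is made up.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# The key is the secret that must be protected and managed (see the key-management
# discussion that follows); anyone holding it can decrypt the data.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"Quarterly results: not yet public")  # rendered unusable
plaintext = fernet.decrypt(ciphertext)                             # restored by a key holder

print(ciphertext != b"Quarterly results: not yet public")  # True: the stored or transmitted form is unreadable
print(plaintext)                                            # b'Quarterly results: not yet public'

The security of such a scheme rests entirely on how the key is generated, stored, rotated, and destroyed, which is exactly the key-management problem the following paragraphs discuss.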
Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management.

Process
The terms "reasonable and prudent person", "due care", and "due diligence" have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the business world, stockholders, customers, business partners, and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, ongoing) in their due care of the business. In the field of information security, Harris offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [Due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing. Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA) provides principles and practices for evaluating risk. It considers all parties that could be affected by those risks. DoCRA helps evaluate whether safeguards are appropriate for protecting others from harm while presenting a reasonable burden. With increased data breach litigation, companies must balance security controls, compliance, and their mission.

Security governance
The Software Engineering Institute at Carnegie Mellon University, in a publication titled Governing for Enterprise Security (GES) Implementation Guide, defines characteristics of effective security governance.
These include:
An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Reviewed and audited

Incident response plans
An incident response plan (IRP) is a group of policies that dictate an organization's reaction to a cyber attack. Once a security breach has been identified, the plan is initiated. It is important to note that there can be legal implications to a data breach. Knowing local and federal laws is critical. Every plan is unique to the needs of the organization, and it can involve skill sets that are not part of an IT team. For example, a lawyer may be included in the response plan to help navigate the legal implications of a data breach. As mentioned above, every plan is unique, but most plans will include the following:

Preparation
Good preparation includes the development of an Incident Response Team (IRT). Skills needed by this team include penetration testing, computer forensics, and network security. This team should also keep track of trends in cybersecurity and modern attack strategies. A training program for end users is important as well, as most modern attack strategies target users on the network.

Identification
This part of the incident response plan identifies whether there was a security event. When an end user reports information or an admin notices irregularities, an investigation is launched. An incident log is a crucial part of this step. All of the members of the team should be updating this log to ensure that information flows as fast as possible. If it has been identified that a security breach has occurred, the next step should be activated.

Containment
In this phase, the IRT works to isolate the areas in which the breach took place in order to limit the scope of the security event. During this phase it is important to preserve information forensically so it can be analyzed later in the process. Containment could be as simple as physically containing a server room or as complex as segmenting a network to prevent the spread of a virus.

Eradication
This is where the threat that was identified is removed from the affected systems. This could include deleting malicious files, terminating compromised accounts, or deleting other components. Some events do not require this step; however, it is important to fully understand the event before moving to this step. This will help to ensure that the threat is completely removed.

Recovery
This stage is where the systems are restored to original operation. This stage could include the recovery of data, changing user access information, or updating firewall rules or policies to prevent a breach in the future. Without executing this step, the system could still be vulnerable to future security threats.

Lessons Learned
In this step information that has been gathered during this process is used to make future decisions on security. This step is crucial to ensuring that future events are prevented. Using this information to further train admins is critical to the process. This step can also be used to process information that is distributed from other entities who have experienced a security event.

Change management
Change management is a formal process for directing and controlling alterations to the information processing environment.
This includes alterations to desktop computers, the network, servers, and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management. However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administrators, database administration, application developers, desktop support, and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure the organization's documented change management procedures are followed. The change management process is as follows:
Request: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
Approve: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change.
Plan: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing, and documenting both implementation and back-out plans. The criteria on which a decision to back out will be made also need to be defined.
Test: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The back-out plan must also be tested.
Schedule: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities.
Communicate: Once a change has been scheduled it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change.
Implement: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan, and a back-out plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria have been met, the back-out plan should be implemented.
Document: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing and back-out plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed, or was postponed.
Post-change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement.
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation, and communication. ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps, and ITIL all provide valuable guidance on implementing an efficient and effective change management program for information security.

Business continuity
Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects. BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual. The BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function. It encompasses:
Analysis of requirements, e.g., identifying critical business functions, dependencies and potential failure points, potential threats and hence incidents or risks of concern to the organization;
Specification, e.g., maximum tolerable outage periods; recovery point objectives (maximum acceptable periods of data loss);
Architecture and design, e.g., an appropriate combination of approaches including resilience (e.g.
engineering IT systems and processes for high availability, avoiding or preventing situations that might interrupt the business), incident and emergency management (e.g., evacuating premises, calling the emergency services, triage/situation assessment and invoking recovery plans), recovery (e.g., rebuilding) and contingency management (generic capabilities to deal positively with whatever occurs using whatever resources are available);
Implementation, e.g., configuring and scheduling backups, data transfers, etc., duplicating and strengthening critical elements; contracting with service and equipment suppliers;
Testing, e.g., business continuity exercises of various types, costs and assurance levels;
Management, e.g., defining strategies, setting objectives and goals; planning and directing the work; allocating funds, people and other resources; prioritization relative to other activities; team building, leadership, control, motivation and coordination with other business functions and activities (e.g., IT, facilities, human resources, risk management, information risk and security, operations); monitoring the situation, checking and updating the arrangements when things change; maturing the approach through continuous improvement, learning and appropriate investment;
Assurance, e.g., testing against specified requirements; measuring, analyzing, and reporting key parameters; conducting additional tests, reviews and audits for greater confidence that the arrangements will go to plan if invoked.
Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster. A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure. Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedures, and lastly implementing the plan.

Laws and regulations
Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have, a significant effect on data processing and information security. Important industry sector regulations have also been included when they have a significant impact on information security.
The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information.
The European Union Data Protection Directive (EUDPD) requires that all E.U. members adopt national regulations to standardize the protection of data privacy for citizens throughout the E.U.
The Computer Misuse Act 1990 is an Act of the U.K. Parliament making computer crime (e.g., hacking) a criminal offense. The act has become a model from which several other countries, including Canada and the Republic of Ireland, have drawn inspiration when subsequently drafting their own information security laws.
The E.U.'s Data Retention Directive (annulled) required internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years.
The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g; 34 CFR Part 99) is a U.S. Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.
The Federal Financial Institutions Examination Council's (FFIEC) security guidelines for auditors specify requirements for online banking security.
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Additionally, it requires health care providers, insurance providers and employers to safeguard the security and privacy of health data.
The Gramm–Leach–Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process.
Section 404 of the Sarbanes–Oxley Act of 2002 (SOX) requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy, and the reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage with independent auditors who must attest to, and report on, the validity of their assessments.
The Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council, including American Express, Discover Financial Services, JCB, MasterCard Worldwide, and Visa International, to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures.
State security breach notification laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen.
The Personal Information Protection and Electronic Documents Act (PIPEDA) of Canada supports and promotes electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.
Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 165/2011) establishes and describes the minimum information security controls that should be deployed by every company which provides electronic communication networks and/or services in Greece in order to protect customers' confidentiality. These include both managerial and technical controls (e.g., log records should be stored for two years).
Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 205/2013) concentrates on the protection of the integrity and availability of the services and data offered by Greek telecommunication companies. The law forces these and other related companies to build, deploy, and test appropriate business continuity plans and redundant infrastructures.

Culture
Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways. Cultural concepts can help different segments of the organization work effectively toward information security or work against it. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations:
Attitudes: Employees' feelings and emotions about the various activities that pertain to the organizational security of information.
Behaviors: Actual or intended activities and risk-taking actions of employees that have direct or indirect impact on information security.
Cognition: Employees' awareness, verifiable knowledge, and beliefs regarding practices, activities, and self-efficacy that are related to information security.
Communication: Ways employees communicate with each other, sense of belonging, support for security issues, and incident reporting.
Compliance: Adherence to organizational security policies, awareness of the existence of such policies and the ability to recall the substance of such policies.
Norms: Perceptions of security-related organizational conduct and practices that are informally deemed either normal or deviant by employees and their peers, e.g. hidden expectations regarding security behaviors and unwritten rules regarding uses of information-communication technologies.
Responsibilities: Employees' understanding of the roles and responsibilities they have as a critical factor in sustaining or endangering the security of information, and thereby the organization.
Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests. Research shows information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.
Pre-evaluation: to identify the awareness of information security within employees and to analyze the current security policy.
Strategic planning: to come up with a better awareness program, clear targets need to be set.
Clustering people is helpful to achieve this.
Operative planning: create a good security culture based on internal communication, management buy-in, security awareness, and training programs.
Implementation: should feature commitment of management, communication with organizational members, courses for all organizational members, and commitment of the employees.
Post-evaluation: to better gauge the effectiveness of the prior steps and build on continuous improvement.

Sources of standards
The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of standards. ISO 15443: "Information technology – Security techniques – A framework for IT security assurance", ISO/IEC 27002: "Information technology – Security techniques – Code of practice for information security management", ISO-20000: "Information technology – Service management", and ISO/IEC 27001: "Information technology – Security techniques – Information security management systems – Requirements" are of particular interest to information security professionals.
The US National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests, and validation programs as well as publishes standards and guidelines to increase secure IT planning, implementation, management, and operation. NIST is also the custodian of the U.S. Federal Information Processing Standard publications (FIPS).
The Internet Society is a professional membership society with more than 100 organizations and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the internet, and it is the organizational home for the groups responsible for internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs), which include the Official Internet Protocol Standards and the RFC 2196 Site Security Handbook.
The Information Security Forum (ISF) is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes research into information security practices and offers advice in its biannual Standard of Good Practice and more detailed advisories for members.
The Institute of Information Security Professionals (IISP) is an independent, non-profit body governed by its members, with the principal objective of advancing the professionalism of information security practitioners and thereby the professionalism of the industry as a whole. The institute developed the IISP Skills Framework. This framework describes the range of competencies expected of information security and information assurance professionals in the effective performance of their roles. It was developed through collaboration between both private and public sector organizations, world-renowned academics, and security leaders.
The German Federal Office for Information Security (in German, Bundesamt für Sicherheit in der Informationstechnik (BSI)) publishes the BSI-Standards 100-1 to 100-4, a set of recommendations including "methods, processes, procedures, approaches and measures relating to information security".
The BSI-Standard 100-2 IT-Grundschutz Methodology describes how information security management can be implemented and operated. The standard includes a very specific guide, the IT Baseline Protection Catalogs (also known as IT-Grundschutz Catalogs). Before 2005, the catalogs were known as the "IT Baseline Protection Manual". The Catalogs are a collection of documents useful for detecting and combating security-relevant weak points in the IT environment (IT cluster). As of September 2013, the collection encompasses over 4,400 pages, including the introduction and the catalogs. The IT-Grundschutz approach is aligned with the ISO/IEC 2700x family.
The European Telecommunications Standards Institute standardized a catalog of information security indicators, headed by the Industry Specification Group (ISG) ISI.

See also
Backup
Capability-based security
Data breach
Data-centric security
Enterprise information security architecture
Identity-based security
Information infrastructure
Information security audit
Information security indicators
Information security management
Information security standards
Information technology
Information technology security audit
IT risk
ITIL security management
Kill chain
List of computer security certifications
Mobile security
Network Security Services
Privacy engineering
Privacy software
Privacy-enhancing technologies
Security bug
Security convergence
Security information management
Security level management
Security of Information Act
Security service (telecommunication)
Single sign-on
Verification and validation

References

Further reading
Anderson, K., "IT Security Professionals Must Evolve for Changing Market", SC Magazine, October 12, 2006.
Aceituno, V., "On Information Security Paradigms", ISSA Journal, September 2005.
Dhillon, G., Principles of Information Systems Security: text and cases, John Wiley & Sons, 2007.
Easttom, C., Computer Security Fundamentals (2nd Edition), Pearson Education, 2011.
Lambo, T., "ISO/IEC 27001: The future of infosec certification", ISSA Journal, November 2006.
Dustin, D., "Awareness of How Your Data is Being Used and What to Do About It", CDR Blog, May 2017.

Bibliography

External links
DoD IA Policy Chart on the DoD Information Assurance Technology Analysis Center web site.
patterns & practices Security Engineering Explained
Open Security Architecture – Controls and patterns to secure IT systems
IWS – Information Security Chapter
Ross Anderson's book "Security Engineering"

Data security Security Crime prevention National security Cryptography Information governance
28394022
https://en.wikipedia.org/wiki/Scrumedge
Scrumedge
ScrumEdge is a collaborative web-based scrum tool that allows agile development teams, ScrumMasters, and stakeholders to manage the Scrum lifecycle at the product and sprint levels. ScrumEdge is programmed in PHP and is distributed under the Apache Software License 2.0. It supports Scrum teams, ScrumMasters and Product Owners in running and coordinating agile software development projects.

Framework
Scrum is an agile framework for completing complex projects. Scrum was originally formalized for software development projects, but works well for any complex, innovative scope of work. The Scrum framework is deceptively simple.

History
ScrumEdge was founded in 2008, and on April 23, 2009 it launched its project planning and management tool for scrum teams. ScrumEdge is based in Virginia, USA.

Features
ScrumEdge, while following the manifesto for agile software development, enables agile development teams to share information about their projects and manage their project deliverables. ScrumEdge allows users to manage their product backlogs, report their scrum-related issues and report time against their sprint backlogs. ScrumEdge also has a project-level dashboard that all users see when they sign in. This page displays all projects users are assigned to, along with a basic snapshot of the number of sprints, stories and users that are assigned to each project. Progress bars allow users to see project progress at a glance. Story and task effort level reports allow ScrumMasters and Product Owners to follow their sprints in more detail. ScrumEdge also generates burndown charts, budget charts and the scrum team's burn rate charts. ScrumEdge is intended to support various flavors of Agile such as XP, Scrum, and hybrid Agile approaches. Though primarily a project management tool, it also provides issue tracking, project reporting and program management.

References

External links
My Search For Scrum Tools, The Software Maven, February 25, 2010
ScrumEdge Review on Bright Hub
Online Collaboration Tool for Scrum Management by Craig Agranoff, 25 January 2010

Proprietary software Agile software development
10858709
https://en.wikipedia.org/wiki/Ceibal%20project
Ceibal project
The Plan Ceibal is a Uruguayan initiative to implement the "One laptop per child" model to introduce Information and Communication Technologies (ICT) in primary public education, and it is beginning to expand into secondary schools. In four years Plan Ceibal delivered 450,000 laptops to all students and teachers in the primary education system and provided no-cost internet access throughout the country. As of 2009, results include increased self-esteem in students, improved motivation of students and teachers, and active participation by parents (94% approve of the Plan according to a national survey performed in 2009). The success of Plan Ceibal is not only due to technological innovations, but also to achievements such as the creation of a training plan for teachers in primary education, the active inclusion of society and teachers in the project and the successful design and implementation of a monitoring and evaluation model to measure the impact nationally that serves as a guide to define future actions in the Plan. The Ceibal Project emerged as a result of the digital gap that existed in Uruguay between the people who had access to technology and those who did not. It was promoted during Tabaré Vázquez's term of office. Vázquez was the main proponent of this pioneering project, although it was inspired by Nicholas Negroponte's One Laptop per Child project. It rested on three principal values: to distribute technology, to promote knowledge and to generate social equity. The project was named "Ceibal" after the typical Uruguayan tree and flower called "ceibo", known in English as the cockspur coral tree. Ceibal also stands for "Conectividad Educativa de Informática Básica para el Aprendizaje en Línea" ("Educational Connectivity/Basic Computing for Online Learning" in English). The OLPC XO-1 computers used in the project are nicknamed "ceibalitas".

Goals
The Ceibal Project tries to promote digital inclusion and decrease the digital gap that exists among Uruguayans and between Uruguay and the rest of the world. However, this goal can be accomplished only if it is complemented by an educational plan for teachers, students and their families. The educational plan of the Ceibal Project covers the technological resources, the teachers' training, the creation of suitable content and social and family participation. The Ceibal Project has strategic principles: equity of knowledge, equal opportunities for children and youth, and the provision of tools to learn not only knowledge given by the school, but also knowledge that the child can learn by him or herself. The original expected outcomes explicitly defined the right to have an internet connection at school as well as at home ("1.2.3. EXPECTED RESULTS").

General
To improve the quality of education through the new technological system.
By providing computers to every student and teacher in public education, to promote equal opportunities for all.
To develop a culture based on collaboration among children, between child and teacher, among teachers, and between families and the school.
To promote a critical attitude toward technology in the pedagogical community.
To provide an internet connection at home as well as at school to all students.

Specific
To promote the laptops as a useful tool in the schools.
To offer teachers suitable technological and pedagogical training in the new technologies.
To produce educational resources with the new technological tools.
To inspire an innovative mindset in teachers.
To ensure the sound development of the project through a support system and technical assistance.
To involve the parents in the implementation of the project, not only in the schools but also in the students' homes.
To promote the provision of relevant information to all the people involved in the project, in order to improve it.

Stages
2007
In April the presidential decree 144/007 signals the kick-off to provide a laptop to each primary school child and his teacher in every public school, as well as training for its use, and the promotion of educational proposals.
In May the pilot phase starts in Villa Cardal (Departamento de Florida), in which 150 students and their teachers participate. Villa Cardal is a town of 1,290 inhabitants and has just one school of 150 students. For this pilot phase laptops were donated by OLPC.
In October, through an open tender, 100,000 OLPC laptops and 200 servers are awarded.
By the end of 2007, all children and teachers from Florida have their laptops.
2008
The event "The XO laptops: a tool to appropriate technology" took place in Colonia del Sacramento (Departamento de Colonia) on 5, 6, and 7 June 2008, in support of the Ceibal Project.
Before the end of the year, more than 175,000 laptops are delivered, completing the whole country with the exception of parts of Canelones, Montevideo and its metropolitan area.
In September the Ceibal Project and Teleton (Uruguay) signed an agreement that committed Ceibal to adapt its laptops to the needs of children with motor disabilities.
In December the educational portal Plan Ceibal is created.
2009
In April work is started with small companies in the interior of the country to provide decentralized technical support, within the framework of the Rayuela Project with the Inter-American Development Bank and DINAPYME.
In June on-line courses aimed at students in teacher education are launched in support of the National Administration of Public Education (ANEP).
In the same month, the first national monitoring and evaluation system is developed to work within the Ceibal Project.
In August laptops are delivered to private schools.
In the same month laptops for the visually impaired are delivered.
In October the plan reaches full coverage, including Canelones, Montevideo and its metropolitan area; all children and teachers in the country have their own laptops, with a coverage of more than 350,000 children and 16,000 teachers.
2010
In May private companies sponsor Ceibal classrooms in their companies as part of their social responsibility projects.
In November the pilot project in robotics is launched.
In October the Ceibal Project starts its second phase, delivering laptops to students in secondary schools and to students in technical schools.
2011
In March the Ceibal Project starts a new and ambitious phase, introducing laptops in nursery schools.
In August a television show called "¡Sabelo!" (Know it!) starts. It is a quiz show produced by the Ceibal Project for students in secondary schools.
2012
To celebrate the fifth anniversary of the Ceibal Project a ceremony takes place at Villa Cardal, where it all started.
2013
On 2 October, president José Mujica delivered the 1,000,000th computer at a ceremony at Escuela N° 177 (Yugoslavia 307) in Nuevo París, Montevideo.

Awards
In recognition of the Ceibal Project's achievements, it has received various national and international awards:
Bronze Medal for National Quality and Commitment in Public Management, for the Ceibal Project's work in connectivity and connectivity support, National Institute of Quality INACAL, Uruguay, October 2012.
Frida Award in the category "Access", awarded by LANIC, IDRC and ISCO, Buenos Aires, Argentina, October 2011.
First Prize in Development Capacity, awarded by the UNDP (United Nations Development Programme) at the "Knowledge Fair", Marrakech, Morocco, March 2010.
Prize for Public Management, awarded by RED GEALC (Network of e-Government Leaders of Latin America and the Caribbean), "ExcelGob08", Montevideo, Uruguay, March 2009.
Special Mention for Ceibal's Commitment with the Millennium Goals, awarded by RED GEALC (Network of e-Government Leaders of Latin America and the Caribbean), "ExcelGob08", Montevideo, Uruguay, March 2009.
Morosoli Golden Prize to Uruguayan Culture, awarded by the Lolita Rubial Foundation, Minas, Uruguay, December 2013.

English project
The project Ceibal en Inglés was conceived with the aim of addressing the lack of specialized teachers of English in primary schools in the Uruguayan public system of education, with the objective of universalizing the right of every Uruguayan child to acquire a second language through a means that would ensure both quality and sustainability.

Design
The project allows primary school children between fourth and sixth grades to have three 45-minute slots of English per week: one taught by a remote teacher, who acts as a model of the language and is in charge of introducing and explaining the linguistic content corresponding to each week through her or his remote presence via videoconference equipment; and two forty-five-minute slots with the classroom teacher, who, following the lesson plans sent to her every week, may review, accompany and guide her students in the learning of English. Lesson plans are associated with digital and non-digital materials, which contain songs, games, videos, etc., so that those contents already presented and explained by the remote teacher may be revised, reviewed and recycled under the guidance of the classroom teacher. Coherence between remote lessons and face-to-face lessons is ensured by a half-hour virtual coordination between the two teachers involved in the learning process, in which concerns and learning and teaching styles are discussed. Each classroom teacher decides whether she wants to participate in this project; she does not need to know any English, but she receives an on-line course of English so that she can be one step ahead of her students and give them the necessary support.

Pilot phase and expansion
This project was piloted in twenty schools in the country between June and November 2012; five in the south (three in Maldonado and two in Pando, Canelones), and fifteen in the north between Salto and Paysandú, with a total of 37 groups. The group in the south started on June 23, and the one in the north on August 23. The results of the pilot phase were highly encouraging; for this reason, Ceibal en Inglés expanded the project to one thousand groups in 2013.
Plan Ceibal, through an open international bid, chose the British Council, a British organization with recognized experience in the teaching of English as a foreign language internationally, as its partner for this project. On Thursday, 9 January 2014, the BBC Mundo website published an article entitled "Thousands of children in Uruguay learn English with distance teachers". The report highlights the work done by Plan Ceibal and explains that "this is the first time in the world where telepresence technology is used to teach English to large classes of primary school students in the state system", according to Paul Woods, representative of the British Council, the organization providing the teaching materials.

Computing device models
Since the project's launch and up to this day (2020), the computers that are part of this program keep being improved and changed (the models described in this list mostly lack official names due to scarce documentation).

On-launch Ceibalita
Also called "Marcianita", this version was very limited, with a very slow processor and a total of 300 MB of RAM; storage was around 1-2 GB, and it was mostly white with green details.

Hardware
This model was white, with a logo on the lid composed of a circle on top of an X, both inset into the lid and of assorted colors. It had a green membrane keyboard that was fairly similar to any other laptop keyboard, with the exception of having all the F1 through F12 keys replaced with symbols that indicated their new functions, as well as some other minor changes. It had a simple touchpad with two buttons, one marked with an X and one with an O; unlike most modern touchpads, it did not accept more than one input at once and could not perform gestures like tap-and-hold or two-finger swiping. It had two sets of four buttons, one at each side of the screen: the left one was a single-piece d-pad, and the right side had four buttons marked with an X, an O, a checkmark and a triangle. These were apparently made for playing games, but they were fairly unwieldy due to the screen's width and weight. It sported two speakers, one at each side of the screen above each set of buttons; the microphone was located right below the screen, and the earphone jack was located at the right border of the case, on the screen's side. The top left and right of the screen had two green antennas that, when withdrawn, fitted perfectly with the rest of the case; these antennas concealed a pair of USB ports. The computer could not be "closed" without hiding the antennas first.

Software
The laptop ran Sugar, an open-source OS based on Linux, which was fairly limited; programs were called activities, and only one could run at a time. The computer had four sections, which were accessed with F1, F2, F3 and F4. These keys had different logos to match, all represented with a black circle and green shapes: F1 (community view) had six green dots near the border, F2 (group view) had three green dots near the center, F3 (unknown), and F4 had a simple green rectangle. First it had the community view, which displayed other laptops of the same kind on the local network, each represented by the same logo as on the lid of the computer. This screen allowed users to form groups and perform other cooperative activities, especially inviting others to an activity that allowed more than one user, like the text editor.
Then it had the group view, which showed other computers in the same group; this was not so different from the community view, other than trimming the number of computers on screen. The "toolbox" view (real name unknown) had all of the installed activities ready to be launched. The activity view showed the currently launched activity; this did not work if no activity had been launched yet. Keys F5 to F8 had just black dots of increasing size, while F9 to F12 carried buttons to adjust volume and screen brightness.

Later models
As time went on, more improved versions of the ceibalita were made; this also included their hardware, which was severely upgraded to fit the new features.

Hardware
The keyboard was changed to a green plastic one and the mouse was improved to accept most gestures, eventually dropping the F1–F12 special keys; screen resolution was improved, and the processor, storage and RAM also had significant upgrades. Later models changed to a blue motif instead of the old green one.

Software
Sugar was upgraded to allow more activities to be open at once, among other features. The computer also came with Android installed and eventually also Linux, which could be switched by restarting; changing to Sugar did not require a restart but took a significant amount of time. Sugar was eventually dropped altogether from the latest models.

Tablets
Usually targeted towards pre-schoolers, these tablets came with Android and a special educational OS with parental restrictions, which required completing a simple mathematical operation to unlock.

Conventional laptops
Eventually, the original "Marcianita" model was discontinued and common laptop models were distributed in its stead. These had a significantly more fragile case, but their hardware had significant upgrades from the previous models, with storage being increased to 60 GB and 2.4 GHz dual-core CPUs. These computers ditched Android in favor of Windows 10, but kept Linux.

See also
One Laptop per Child#Uruguay

References

External links
Official Education Web site
Official Institucional Web site
Project page on the British Council Uruguay site
Miguel Brechner in TEDX – Plaza Cibeles
OLPC Wiki about the project in Uruguay
Miguel Brechner Frey of Plan CEIBAL speaks about the measurable results of the plan at TEDxBuenosAires, October 2010. Kids, parents and teachers from Escuela 5, Salto, Uruguay appear discussing the impact of the XO on their lives.

Science and technology in Uruguay Education in Uruguay
33130700
https://en.wikipedia.org/wiki/Cherokee%20Nation%20Businesses
Cherokee Nation Businesses
Cherokee Nation Businesses, LLC (CNB) is an American conglomerate holding company headquartered in Catoosa, Oklahoma, that oversees and manages a number of subsidiary companies. CNB is a wholly owned subsidiary of the Cherokee Nation, which is the largest Native American tribe by population in the United States. CNB operates in the following industries: aerospace and defense, hospitality and entertainment, environmental and construction services, information technology, healthcare, and security and safety.

History
Cherokee Nation Businesses was established on June 16, 2004. CNB is a wholly owned subsidiary of the Cherokee Nation. The tribe exerts control over the operations of CNB through the Board of Directors. Upon its establishment, CNB became responsible for providing "strategic direction" to all Cherokee Nation of Oklahoma-owned businesses, to diversify the Cherokee Nation business holdings, and to act as a holding company for some Cherokee Nation of Oklahoma business investments. CNB receives revenues from its subsidiaries in order to fund the expansion of existing firms and the acquisition of new ones. CNB was established to diversify the business interests of the Cherokee Nation of Oklahoma. At its establishment, pursuant to the Cherokee Nation of Oklahoma Corporation Reform Act of 2002, 25% of all CNB profits were to be paid to the Tribal Government as a dividend. Pursuant to the provisions of the Cherokee Nation Jobs Growth Act of 2005, CNB became the holding company for all business enterprises, including Cherokee Nation Entertainment (CNE) and Cherokee Nation Industries (CNI). CNE was transferred to CNB ownership in 2006 and CNI was transferred in 2008. Prior to these transfers, ownership of CNE and CNI was held directly by the Cherokee Nation of Oklahoma. At its establishment, CNB was governed by its own board, with both CNE and CNI governed by separate boards of their own. In 2010, the separate boards were dissolved and consolidated into a single CNB board. The same year, the Cherokee Nation of Oklahoma enacted the Dividend Act of 2005, which increased CNB's annual dividend payment to the Tribal Government from 25% to 30% of total profits. CNB established its Environmental and Construction Division in March 2005 with the establishment of Cherokee CRC. The Division provides environmental consulting, construction engineering and management services, environmental remediation services, and scientific research and development (including laboratory testing) for government agencies as well as for chemical, oil and gas, manufacturing and waste companies. The Division was expanded in 2008 with the establishment of Cherokee Nation Construction Services to offer construction management services. In February 2010, CNB announced the closure of the CNB Manufacturing Division's construction unit due to constant financial problems. The unit's assets were moved from CNB's Manufacturing Division into its Environmental Division. The CNB Manufacturing Division's defense contracting mission was expanded in August 2008 with the acquisition of Alabama-based Aerospace S.E., Inc., which provides aerospace product distribution and supply chain services for the United States Department of Defense. In August 2010, in order to implement the provisions of the Jobs Growth Act of 2005, CNB consolidated its various internal support and administrative service operations into a single structure located in Tulsa, Oklahoma.
The new office complex houses several of CNB's operating divisions as well as CNB-wide corporate support staff offices. CNB's Manufacturing Division remained headquartered in Stilwell, Oklahoma, with CNB's gaming operations and corporate headquarters remaining centered in Catoosa, Oklahoma. CNB established its Security and Defense Division in July 2010. The Division provides security services, including property surveillance and guard services. Additionally, through the acquisition of Kellyville, Oklahoma-based Cherokee Nation Red Wing, the Division serves as a defense contractor by manufacturing and assembling electronic parts for the United States Department of Defense and the aerospace industry. In July 2012, the Manufacturing and Distribution Division's telecommunications group expanded with the acquisition of a 143,000-square-foot building at the MidAmerica Industrial Park in Pryor, Oklahoma, establishing the CNB Distribution Center in the process. CNB completed the acquisition of Mobility Plus, a Muskogee, Oklahoma-based supplier of health care equipment, in November 2011, expanding CNB's Healthcare Division in the process. That same month, CNB expanded its Technology Division through the purchase of two Colorado-based companies: ETI Professionals Inc. (which was renamed Cherokee Nation Government Solutions), which offers strategic technology project management and staffing solutions, and ITX Inc. (which was consolidated into Cherokee Services Group), which provides full-service computer and information technology services to United States federal government and commercial entities. In November 2011, at the request of Bill John Baker, Principal Chief of the Cherokee Nation of Oklahoma, the Cherokee Nation of Oklahoma enacted the Corporate Health Dividend Act of 2011, which increased CNB's annual dividend payment to the Tribal Government from 30% to 35% of total profits. The additional 5% is earmarked to support the provision of healthcare services to Cherokee citizens. In March 2012, CNB sold its corporate airplane at the request of Chief Baker. Baker had promised the sale of the plane as part of his 2011 election campaign. The proceeds of the sale (approximately $1.5 million) were given to the Cherokee Nation of Oklahoma to supplement its funding of healthcare services for Cherokee Nation of Oklahoma members. CNB purchased a 298,000-square-foot building in Tahlequah, Oklahoma, in November 2012 in order to allow for future expansion of its manufacturing operations. Corporate affairs Board of Directors The Cherokee Nation Principal Chief, with the approval of the Cherokee Nation Tribal Council, appoints all members of the Board of Directors of CNB. The current Chairman of the Board is Sam Hart, having served in that position since February 14, 2012. As of August 2019, the Board is composed of the following members: Management Chuck Garrett has been the chief executive officer of CNB since August 2019, replacing Shawn Slaton, who had served as CEO since 2011. Garrett is a graduate of the University of Oklahoma and Harvard Law School. A native of Muskogee, with family ties in Adair County, and a Cherokee Nation citizen, Garrett worked in real estate investment, asset management and investment banking prior to returning to Oklahoma to join CNB in 2013. Finances For the fiscal year ending September 30, 2011, Cherokee Nation Businesses reported a net income of $91.5 million on $653.5 million of revenue (a 14% profit margin). 
Operating divisions Hospitality (Chief Operating Officer: Mark Fulton) CNB's Hospitality Division oversees the company's portfolio of retail, hotel, gaming, and entertainment assets. The Division operates through its primary subsidiary: Cherokee Nation Entertainment (CNE). CNE was formed in the late 1980s and is the holding company for all gambling, gaming, entertainment, and hospitality operations of the Tribe. CNE operates ten casinos, three hotels, a horse racing facility with electronic gaming machines, retail and convenience shops, entertainment venues, golf courses, and cultural tourism programs. CNE operates casinos in the following Oklahoma locations: Tulsa (Hard Rock Hotel and Casino Tulsa) Fort Gibson Roland (Hotel and Casino; Cherokee Casino Roland) Ramona South Coffeyville Sallisaw Tahlequah West Siloam Springs (Hotel and Casino) Claremore (Will Rogers Downs) Grove CNE is itself the holding company of Will Rogers Downs LLC and Cherokee Hotels. WRD owns and operates a horse racing and casino facility in Claremore while Cherokee Hotels is responsible for owning and operating hotels managed by CNE in Tulsa, Roland, and West Siloam Springs. CNE is regulated by the Indian Gaming Regulatory Act (IGRA). IGRA mandates that all Class III gaming operations can only be conducted on Tribal land held in trust for the tribe by the Bureau of Indian Affairs. Manufacturing and Distribution (Division President: Chris Moody) CNB's Manufacturing and Distribution Division provides products and services to the commercial and defense aerospace industry, leading telecommunications companies, and government and commercial clients in need of facility and office solutions. The Division's predecessor was initially formed in 1969 as the first business entity owned by the Cherokee Nation of Oklahoma. The Division is a defense contractor for the United States Department of Defense. The division is the holding company for several subsidiaries under the umbrella Cherokee Nation Industries (CNI) brand: Cherokee Nation Aerospace and Defense (CNAD) provides contract manufacturing, electromechanical assembly, and component integration for commercial and military aerospace needs Cherokee Nation CND is a contract manufacturer and integrator of electro-mechanical assemblies for commercial and military aerospace needs Cherokee Nation Metalworks (CNMW) manufactures fabricated details and assemblies for commercial and military aircraft as well as various military missile and unmanned aerial vehicles Cherokee Nation Office Solutions (CNOS) provides office and facility support products and services Cherokee Nation Telecommunications (CNT) sells and distributes communication products for government agencies and businesses Aerospace Products, S.E. (APSE) 75% stake acquired by CNB in 2008, defense contractor providing third party logistics and outsource procurement services to aerospace and defense firms and the United States Department of Defense Federal Solutions Cherokee Nation Businesses' Federal Solutions companies provide information technology, management, consulting, medical, professional, environmental, and construction services. The companies also offer management and support of programs, projects, professionals and technical staff. Cherokee Medical Services (CMS) provides a wide range of services including recruiting, credentialing and placement of clinical, technical, administrative, professional, engineering and housekeeping personnel for federal agencies and commercial clients. 
Cherokee Nation 3S (CN3S) provides a wide range of services including recruiting, credentialing and placement of clinical, technical, administrative, professional, engineering and housekeeping personnel for federal agencies and commercial clients. Cherokee Nation Assurance (CNA) delivers professional management and consulting services to defense, health, environmental and civilian agencies. The company provides management for government programs and disciplines including health information technology, research & science, program management, communications, correspondence and document management, governance and administrative support. Cherokee Nation Cherokee CRC (CCRC) is an environmental, construction and professional services company that provides the federal government with custom tailored services. Cherokee Nation Construction Resources (CNCR) is a construction management company specializing in preconstruction services such as design and planning, scheduling, budgeting, defining project roles and responsibilities, as well as constructability reviews. CNCR's team of construction managers also work to ensure the use of TERO subcontractors on CNB projects. Cherokee Nation Construction Services (CNCS) provides professional, technical and administrative support teams for government and commercial clients. CNCS helps manage construction projects through engineering, scheduling, safety and financial management controls. Cherokee Nation Diagnostic Innovations (CNDI) provides comprehensive lifecycle business consulting services by monetizing intellectual property assets and advancing assets through the value chain into a profitable target market. CNDI staff assists clients by developing strategies to integrate key stakeholders, leverage value-added business partners, negotiate contracts, secure approvals from applicable authorities, develop distribution channels and analyze competitive markets. Cherokee Nation Environmental Solutions (CNES) is an environmental-focused company based in Tulsa, Oklahoma. CNES provides soil testing, storm drainage/run off, site assessment, above & underground storage tanks, regulatory contracting, long-term monitoring, environmental consulting, hazardous waste collection, treatment and disposal, as well as other miscellaneous waste management services. Cherokee Nation Government Solutions (CNGS) provides technical services and project personnel to support and supplement the mission, expertise and skill sets of federal, state and local government. CNGS locates specific candidates for rapid response requests in areas including science, engineering, construction, information technology, research & development, facilities management, program management and mission support. Cherokee Nation Healthcare Services (CNHS) provides analysis and optimization services for federal health programs, improving access to care, medical readiness and program efficiency within federal projects. Cherokee Nation Integrated Health (CNIH) delivers a wide variety of professional management consulting services and manages intricate government programs and disciplines including health information technology, program management and administrative support. Cherokee Nation Management & Consulting (CNMC) provides advisory & assistance services in research, experimental development, technology implementation, and program management. 
CNMC has expertise in a wide range of technical disciplines including engineering, environmental, information and asset management, along with a variety of physical and life sciences. Cherokee Nation Mechanical (CNM) provides mechanical and plumbing solutions including cost management, constructability analysis, fabrication and systems installation, pre-construction services and 3-D modeling. Cherokee Nation Mission Solutions (CNMS) is a global service provider partnering with the U.S. federal government to provide diplomatic support services such as housing, transportation, shipping and facilities maintenance to assist the U.S. Department of Homeland Security, Department of Defense and other federal agencies and personnel, as well as their families, with transitioning overseas. As an established Outside the Continental United States (OCONUS) provider, CNB's international teams are continually adding new capabilities and experience in foreign markets. Cherokee Nation Operational Solutions (CNOS) provides medical equipment and supplies along with office products and services to businesses and health care facilities throughout North America. Cherokee Nation Strategic Programs (CNSP) provides technical services, including information technology, global vulnerability assessments, information assurance, intelligence operations, program management and professional services, throughout the U.S. and overseas. Cherokee Nation Security and Defense (CNSD) specializes in anti-terrorism and force protection and provides comprehensive services in securing facility perimeters. CNSD implements smart design, cutting-edge technology and proven mitigation strategies into command and control stations. Cherokee Nation System Solutions (CNSS) provides services, consultation and products, including application modernization, data utilization and advanced analytics, geospatial, GIS and remote sensing, information technology infrastructure, program professional services and scientific and research capabilities, to government agencies. Cherokee Nation Technologies (CNT) provides a full spectrum of unmanned systems expertise, IT services, technology solutions and geospatial information systems services, as well as management and support of programs, projects, professionals and technical staff. Cherokee Nation Technology Solutions (CNTS) provides technical support services and project support personnel to its defense and civilian agency clients. CNTS specializes in locating hard-to-find candidates for rapid response requests throughout the country. The company manages complex government programs and disciplines including medical, science, geospatial intelligence, engineering, construction, research and development, facilities management, information technology, program management and mission support. Cherokee Services Group (CSG) provides federal and commercial clients throughout the U.S. with award-winning IT solutions and business support services. CSG specializes in software and application services, network infrastructure services, and business process services. Headquartered in Tulsa, Oklahoma, Cherokee Services Group has a regional office in Fort Collins, CO, and 22 additional offices nationwide. Real Estate (Senior Real Estate Development Manager: Brian Hunt) Cherokee Nation Businesses' Real Estate Division operates primarily through Cherokee Nation Property Management (CNPM). CNPM offers several real estate options, including management, development, acquisitions and leasing. 
The division generates revenue and develops long-term strategies for commercial development while managing more than 3 million square feet of property. Security and Defense (Division President: Russell Claybrook) CNB's Security and Defense Division provides security and protection products and services to commercial and governmental clients. The division is also a defense contractor for the United States Department of Defense that provides aviation weapon systems life-cycle support, with locations at key military bases across the United States. Cherokee Nation Defense Solution (CNDS) provides critical site infrastructure protection and security surveillance services, and access control products Cherokee Nation Red Wing (CNRW) acquired by CNB in 2009, a defense contractor providing aviation and weapon system engineering and manufacturing Cherokee Nation Security and Defense (CNSD) provides internal security for all CNB properties and external security and surveillance services to private contracts and government agencies Environmental and Construction (Division President: Cheryl Cohenour) The Environmental and Construction Division provides clients with environmental, construction and professional services. The Division oversees project management through effective engineering, scheduling, safety and financial management controls. Cherokee CRC - CNB acquired 51% ownership of Cherokee CRC in 2005. Cherokee CRC provides consulting and engineering services, predominantly in the areas of aerospace, construction, environmental and professional services Cherokee Nation Construction Services (CNCS) provides general contracting services and construction management Cherokee Nation Environmental Solutions (CNES) provides environmental services for both commercial and governmental clients Technology (Division President: Steven Bilby) CNB's Technology Division provides a full spectrum of IT services and technology solutions. It offers management and support of programs, projects, professionals and technical staff, with a primary focus on information technology, mission support, and research and development. Cherokee Services Group (CSG) provides management information services, network infrastructure management, and software development Cherokee Nation Technologies (CNT) markets professional services to commercial enterprises Cherokee Nation Technology Solutions (CNTS) provides technical support services and project support personnel to defense and civilian agency clients in the areas of information technology, science, engineering, construction, research & development, facilities management, program management, strategic communications, and mission support. Cherokee Nation Government Solutions (CNGS) provides technical support to governmental clients in the areas of science, engineering, construction, information technology, facilities management, and mission support Cultural & Economic Development Visit Cherokee Nation (operates 6 museums) Osiyo, Voices of the Cherokee People (documentary television series) Cherokee Springs Plaza Cherokee Nation Film Office Renewable Energy In 2011, Cherokee Nation Businesses began the process of expanding its business operations into renewable energy. In December 2011, the Cherokee Nation Tribal Council authorized CNB to seek a grant from the United States Department of the Interior in order to construct a $144 million hydroelectric dam along the Arkansas River in Sequoyah County, Oklahoma. 
The dam is expected to be completed by 2015 and generate between $10 million and $15 million in annual revenues for CNB. After winning Tribal Council approval in December 2012, CNB plans to construct a wind energy farm in Kay County, Oklahoma, with financial support from a grant by the United States Department of Energy's Office of Energy Efficiency and Renewable Energy. Once fully constructed, the wind farm is expected to generate between $16 million and $19 million in annual revenues for CNB. See also Cherokee Nation References Cherokee Nation Conglomerate companies of the United States Holding companies of the United States Private equity portfolio companies Construction and civil engineering companies of the United States Hospitality companies of the United States Information technology companies of the United States Manufacturing companies based in Oklahoma Real estate companies of the United States Companies based in Tulsa, Oklahoma American companies established in 2004 Conglomerate companies established in 2004 Holding companies established in 2004 Construction and civil engineering companies established in 2004 Manufacturing companies established in 2004 Real estate companies established in 2004 Renewable resource companies established in 2004 Technology companies established in 2004 2004 establishments in Oklahoma Privately held companies based in Oklahoma Science and technology in Oklahoma
2922837
https://en.wikipedia.org/wiki/DSpace
DSpace
DSpace is an open source repository software package typically used for creating open access repositories for scholarly and/or published digital content. While DSpace shares some feature overlap with content management systems and document management systems, the DSpace repository software serves a specific need as a digital archives system, focused on the long-term storage, access and preservation of digital content. The optional DSpace registry lists almost three thousand repositories all over the world. History The first public version of DSpace was released in November 2002, as a joint effort between developers from MIT and HP Labs. Following the first user group meeting in March 2004, a group of interested institutions formed the DSpace Federation, which determined the governance of future software development by adopting the Apache Foundation's community development model as well as establishing the DSpace Committer Group. In July 2007, as the DSpace user community grew larger, HP and MIT jointly formed the DSpace Foundation, a not-for-profit organization that provided leadership and support. In May 2009, collaboration on related projects and growing synergies between the DSpace Foundation and the Fedora Commons organization led to the joining of the two organizations to pursue their common mission in a not-for-profit called DuraSpace. DuraSpace and LYRASIS merged in July 2019. Currently the DSpace software and user community receives leadership and guidance from LYRASIS. Technology DSpace is constructed with Java web applications, many programs, and an associated metadata store. The web applications provide interfaces for administration, deposit, ingest, search, and access. The asset store is maintained on a file system or similar storage system. The metadata, including access and configuration information, is stored in a relational database and supports the use of PostgreSQL and Oracle databases. DSpace holdings are made available primarily via a web interface. More recent versions of DSpace also support faceted search and browse functionality using Apache Solr. Features Some of the most important features of DSpace are as follows. Free open source software Completely customizable to fit user needs Manage and preserve all formats of digital content (PDF, Word, JPEG, MPEG, TIFF files) Apache Solr-based search for metadata and full-text contents UTF-8 Support Interface available in 22 languages Granular group-based access control, allowing setting permissions down to the level of individual files Optimized for Google Scholar indexing Integration with BASE, CORE, OpenAIRE, Unpaywall and WorldCat Operating systems DSpace software runs on Linux, Solaris, Unix, Ubuntu and Windows. It can also be installed on OS X. Linux is by far the most common OS for DSpace. Notable DSpace repositories The World Bank - Open Knowledge Repository Apollo - University of Cambridge Repository Digital Access to Scholarship at Harvard DSpace@MIT Spiral - Imperial College London Repository WHO Institutional Repository for Information Sharing A full list of institutional repositories using DSpace software as well as others is available via the Registry of Open Access Repositories (ROAR) and at the DuraSpace Registry. 
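Beyond the web interface, recent DSpace versions also expose their holdings over a REST API backed by the same Solr index. The snippet below is a minimal sketch only: the base URL and the /api/discover/search/objects endpoint reflect a DSpace 7-style installation and its HAL-formatted responses, and both the endpoint and the exact response nesting are assumptions that should be verified against the target repository.

```python
import requests

# Assumed DSpace 7-style REST base URL; adjust for the target repository.
BASE = "https://demo.dspace.org/server/api"

def search_items(query: str, size: int = 5) -> None:
    """Run a keyword search against the Solr-backed discovery endpoint."""
    resp = requests.get(
        f"{BASE}/discover/search/objects",
        params={"query": query, "size": size},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    # HAL-style response; the exact nesting may differ between DSpace versions.
    for hit in payload["_embedded"]["searchResult"]["_embedded"]["objects"]:
        obj = hit["_embedded"]["indexableObject"]
        print(obj.get("name"), obj.get("uuid"))

if __name__ == "__main__":
    search_items("open access")
```

The same metadata is typically also harvestable over OAI-PMH, which is the usual route by which aggregators such as BASE and CORE integrate with DSpace repositories.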
See also Digital library DuraCloud Institutional repository Fedora Commons SWORD DSpace Alternatives Free and Open Source Software EPrints Invenio Zenodo CKAN Samvera References External links – official site 2002 software Digital library software Free institutional repository software Free software programmed in Java (programming language) Massachusetts Institute of Technology software Open-access archives Software using the BSD license Free and open-source software
493744
https://en.wikipedia.org/wiki/DEC%20PRISM
DEC PRISM
PRISM (Parallel Reduced Instruction Set Machine) was a 32-bit RISC instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC). It was the outcome of a number of DEC research projects from the 1982–1985 time-frame, and the project was subject to continually changing requirements and planned uses that delayed its introduction. This process eventually settled on using the design for a new line of Unix workstations. The design of the arithmetic logic unit (ALU) of the microPRISM version was completed in April 1988 and samples were fabricated, but the design of other components like the floating point unit (FPU) and memory management unit (MMU) was still not complete in the summer when DEC management decided to cancel the project in favor of MIPS-based systems. An operating system codenamed MICA was developed for the PRISM architecture; it would have served as a replacement for both VAX/VMS and ULTRIX on PRISM. PRISM's cancellation had significant effects within DEC. Many of the team members left the company over the next year, notably Dave Cutler who moved to Microsoft and led the development of Windows NT. The MIPS-based workstations were moderately successful among DEC's existing Ultrix users but had little success competing against companies like Sun Microsystems. Meanwhile, DEC's cash-cow VAX line grew increasingly less performant as new RISC designs outperformed even the top-of-the-line VAX 9000. As the company explored the future of the VAX, they concluded that a PRISM-like processor with a few additional changes could address all of these markets. Picking up where PRISM left off, the DEC Alpha program began in 1989. History Background Introduced in 1977, the VAX was a runaway success for DEC, cementing its place as the world's #2 computer vendor behind IBM. The VAX was noted for its rich instruction set architecture (ISA), which was implemented in complex microcode. The VMS operating system was layered on top of this ISA, which drove it to have certain requirements for interrupt handling and the memory model used for memory paging. By the early 1980s, VAX systems had become "the computing hub of many technology-driven companies, sending spokes of RS-232 cables out to a rim of VT-100 terminals that kept the science and engineering departments rolling." This happy situation was upset by the relentless improvement of semiconductor manufacturing as encoded by Moore's Law; by the early 1980s there were a number of capable 32-bit single-chip microprocessors with performance similar to early VAX machines yet able to fit into a desktop pizza box form factor. Companies like Sun Microsystems introduced Motorola 68000 series-based Unix workstations that could replace a huge multi-user VAX machine with one that provided even more performance but was inexpensive enough to be purchased for every user that required one. While DEC's own microprocessor teams were introducing a series of VAX implementations at lower price-points, the price-performance ratio of their systems continued to be eroded. By the latter half of the 1980s, DEC found itself being locked out of the technical market. RISC During the 1970s, IBM had been carrying out studies of the performance of their computer systems and found, to their surprise, that 80% of the computer's time was spent performing only five operations. The hundreds of other instructions in their ISAs, implemented using microcode, went almost entirely unused. 
The presence of the microcode introduced a delay when the instructions were decoded, so even when one called one of those five instructions directly, it ran slower than it could if there was no microcode. This led to the IBM 801 design, the first modern RISC processor. Around the same time, in 1979, Dave Patterson was sent on a sabbatical from University of California, Berkeley to help DEC's west-coast team improve the VAX microcode. Patterson was struck by the complexity of the coding process and concluded it was untenable. He first wrote a paper on ways to improve microcoding, but later changed his mind and decided microcode itself was the problem. He soon started the Berkeley RISC project. The emergence of RISC sparked off a long-running debate within the computer industry about its merits; when Patterson first outlined his arguments for the concept in 1980, a dismissive dissenting opinion was published by DEC. By the mid-1980s practically every company with a processor design arm began exploring the RISC approach. In spite of any official disinterest, DEC was no exception. In the period from 1982 to 1985, no fewer than four attempts were made to create a RISC chip at different DEC divisions. Titan from DEC's Western Research Laboratory (WRL) in Palo Alto, California was a high-performance ECL based design that started in 1982, intended to run Unix. SAFE (Streamlined Architecture for Fast Execution) was a 64-bit design that started the same year, designed by Alan Kotok (of Spacewar! fame) and Dave Orbits and intended to run VMS. HR-32 (Hudson, RISC, 32-bit) started in 1984 by Rich Witek and Dan Dobberpuhl at the Hudson, MA fab, intended to be used as a co-processor in VAX machine. The same year Dave Cutler started the CASCADE project at DECwest in Bellevue, Washington. PRISM Eventually, Cutler was asked to define a single RISC project in 1985, selecting Rich Witek as the chief architect. In August 1985 the first draft of a high-level design was delivered, and work began on the detailed design. The PRISM specification was developed over a period of many months by a five-person team: Dave Cutler, Dave Orbits, Rich Witek, Dileep Bhandarkar, and Wayne Cardoza. Through this early period, there were constant changes in the design as debates within the company argued over whether it should be 32- or 64-bit, aimed at a commercial or technical workload, and so forth. These constant changes meant the final ISA specification was not complete until September 1986. At the time, the decision was made to produce two versions of the basic concept, DECwest worked on a "high-end" ECL implementation known as Crystal, while the Semiconductor Advanced Development team worked on microPRISM, a CMOS version. This work was 98% done 1985–86 and was heavily supported by simulations by Pete Benoit on a large VAXcluster. In the middle of 1987, the decision was made that both designs be 64-bit, although this lasted only a few weeks. In October 1987, Sun introduced the Sun-4. Powered by a 16 MHz SPARC, a commercial version of Patterson's RISC design, it ran four times as fast as their previous top-end Sun-3 using a 20 MHz Motorola 68020. With this release, DEC once again changed the target for PRISM, aiming it solely at the workstation space. This resulted in the microPRISM being respecified as a 32-bit system while the Crystal project was canceled. This introduced more delays, putting the project far behind schedule. 
By early 1988 the system was still not complete; the CPU design was nearly complete, but the FPU and MMU, both based on the contemporary Rigel chipset for the VAX, were still being designed. The team decided to stop work on those parts of the design and focus entirely on the CPU. Design was completed in March 1988 and taped out by April. Cancellation Throughout the PRISM period, DEC was involved in a major debate over the future direction of the company. As newer RISC-based workstations were introduced, the performance benefit of the VAX was constantly eroded, and the price/performance ratio completely undermined. Different groups within the company debated how to best respond. Some advocated moving the VAX into the high end, abandoning the low end to workstation vendors like Sun. This led to the VAX 9000 program, which was referred to internally as the "IBM killer". Others suggested moving into the workstation market using PRISM or a commodity processor. Still others suggested re-implementing the VAX on a RISC processor. Frustrated with the growing number of losses to cheaper, faster competing machines, a small skunkworks group in Palo Alto, outside of Central Engineering and focused on workstations and UNIX/Ultrix, independently entertained the idea of using an off-the-shelf RISC processor to build a new family of workstations. The group carried out due diligence, eventually choosing the MIPS R2000. This group acquired a development machine and prototyped a port of Ultrix to the system. Going from the initial meetings with MIPS to a prototype machine took only 90 days. Full production of a DEC version could begin as early as January 1989, whereas it would be at least another year before a PRISM-based machine would be ready. When the matter was raised at DEC headquarters, the company was split on which approach was better. Bob Supnik was asked to consider the issue for an upcoming project review. He concluded that while the PRISM system appeared to be faster, the MIPS approach would be less expensive and much earlier to market. At the acrimonious review meeting of the company's Executive Committee in July 1988, the company decided to cancel PRISM and continue with the MIPS workstations and high-end VAX products. The workstation emerged as the DECstation 3100. By this time samples of the microPRISM had been returned and were found to be mostly working. They also proved capable of running at speeds of 50 to 80 MHz, compared to the R2000's 16 to 20. This would have offered a significant performance improvement over the MIPS systems. Legacy By the time of the July 1988 meeting, the company had swung almost entirely into the position that the RISC approach was a workstation play. But PRISM's performance was similar to that of the latest VAX machines and the RISC concept had considerable room for growth. As the meeting broke up, Ken Olsen asked Supnik to investigate ways that Digital could keep the performance of VMS systems competitive with RISC-based Unix systems. A group of engineers formed a team, variously referred to as the "RISCy VAX" or "Extended VAX" (EVAX) task force, to explore this issue. By late summer, the group had explored three concepts: a subset of the VAX ISA with a RISC-like core; a translated VAX that ran native VAX code, translating it on the fly to RISC code and storing it in a cache; and the ultrapipelined VAX, a much higher-performance CISC implementation. All of these approaches had issues that meant they would not be competitive with a simple RISC machine. 
The group next considered systems that combined an existing VAX single-chip solution with a RISC chip for performance needs. These studies suggested that the system would inevitably be hamstrung by the lower-performance part and would offer no compelling advantage. It was at this point that Nancy Kronenberg pointed out that people ran VMS, not VAX, and that VMS only had a few hardware dependencies based on its modelling of interrupts and memory paging. There appeared to be no compelling reason why VMS could not be ported to a RISC chip as long as these small bits of the model were preserved. Further work on this concept suggested this was a workable approach. Supnik took the resulting report to the Strategy Task Force in February 1989. Two questions were raised: could the resulting RISC design also be a performance leader in the Unix market, and should the machine be an open standard? And with that, the decision was made to adopt the PRISM architecture with the appropriate modifications, which eventually became the Alpha, and to begin the port of VMS to the new architecture. When PRISM and MICA were cancelled, Dave Cutler left Digital for Microsoft, where he was put in charge of the development of what became known as Windows NT. Cutler's architecture for NT was heavily inspired by many aspects of MICA. Design In terms of integer operations, the PRISM architecture was similar to the MIPS designs. Of the 32 bits in the instructions, the 6 highest and 5 lowest bits were the instruction, leaving the other 21 bits of the word for encoding either a constant or register locations (a bit-field sketch of this layout appears after the reference list below). Sixty-four 32-bit registers were included, as opposed to thirty-two in the MIPS, but usage was otherwise similar. PRISM and MIPS both lacked the register windows that were a hallmark of the other major RISC design, Berkeley RISC. The PRISM design was notable for several aspects of its instruction set. In particular, PRISM included Epicode (extended processor instruction code), which defined a number of "special" instructions intended to offer the operating system a stable ABI across multiple implementations. Epicode was given its own set of 22 32-bit registers to use. A set of vector processing instructions was later added as well, supported by an additional sixteen 64-bit vector registers that could be used in a variety of ways. References Bibliography E-mail with Bob Supnik Prism documents at bitsavers.org Further reading Bhandarkar, Dileep P. (1995). Alpha Architecture and Implementations. Digital Press. Bhandarkar, D. et al. (1990). "High performance issue orientated architecture". Proceedings of Compcon Spring '90, pp. 153–160. Conrad, R. et al. (1989). "A 50 MIPS (peak) 32/64 b microprocessor". ISSCC Digest of Technical Papers, pp. 76–77. DEC microprocessors Instruction set architectures Information technology projects
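As a sketch of the instruction-word layout described in the Design section above: the operation occupies the 6 highest and 5 lowest bits of a 32-bit word, leaving 21 middle bits for a constant or register numbers. How PRISM actually subdivided those 21 bits is not stated here, so the two 6-bit register fields below (consistent with 64 registers) are an illustrative assumption only, not the documented encoding.

```python
def decode(word: int) -> dict:
    """Split a 32-bit word into the fields described above."""
    assert 0 <= word < 2**32
    op_high = (word >> 26) & 0x3F      # bits 31..26 (6 bits of the opcode)
    op_low = word & 0x1F               # bits 4..0  (5 bits of the opcode)
    middle = (word >> 5) & 0x1FFFFF    # bits 25..5 (21-bit operand field)
    return {
        "op_high": op_high,
        "op_low": op_low,
        "constant21": middle,                 # if used as a literal
        "reg_a": (middle >> 15) & 0x3F,       # illustrative 6-bit register field
        "reg_b": (middle >> 9) & 0x3F,        # illustrative 6-bit register field
    }

print(decode(0xDEADBEEF))
```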
37971003
https://en.wikipedia.org/wiki/Forever%20Dusty
Forever Dusty
Forever Dusty is a stage musical based on the life of British pop star Dusty Springfield. The musical numbers are all songs performed by Springfield during her career. The book of the musical was written by Kirsten Holly Smith, who also plays the lead role of Dusty Springfield in the originating production, and Jonathan Vankin. Forever Dusty opened on 18 November 2012 at New World Stages, an Off-Broadway venue in New York City. The UK Premiere opened on 3 May 2017 at Brookside Theatre, a London Fringe theatre as the start of a UK theatre tour. History Smith originated the project at the University of Southern California in 2006, where she performed a workshop version of the piece as a one-woman show after receiving a Spectrum Arts Grant from USC. Smith has said that her interest in Springfield's story was sparked by listening to the singer's Dusty in Memphis album. "That soulful, sultry, passionate yet vulnerable voice. I immediately identified with her and knew that I needed to explore the roots of Dusty’s voice. I felt like I knew exactly what she was feeling when she sang her songs, that maybe it was the same feeling I had when I sang," Smith said. The one-woman musical moved in 2008 to the Renberg Theatre in West Hollywood, California. The theatre is part of the Lily Tomlin/Jane Wagner Cultural Arts Center at the Los Angeles Gay and Lesbian Center. The L.A. Gay and Lesbian Center co-produced the show, then entitled Stay Forever: The Life and Music of Dusty Springfield, with actress Jorja Fox (best known for her work on the TV series, CSI: Crime Scene Investigation ), Leslie Brockett and Jon Imparato as producers. Brockett was a producer of the USC workshop version as well. The Off Broadway production of Forever Dusty is greatly expanded from the one-woman Stay Forever. The show became a full-scale book musical with the addition of co-writer Vankin. The cast now consists of five actors, each portraying multiple characters (with the exception of Smith, who plays Dusty Springfield only). The director of the production at New World Stages is Randal Myler, whose notable previous work includes the bio-musicals Love, Janis (about Janis Joplin) and Hank Williams: Lost Highway as well as the Tony and Drama Desk Award-nominated Broadway production It Ain’t Nothin' But The Blues. Fox and Brockett stayed on as producers, joined by Jane Gullong and Sandalphon Productions. Helga Olafsson and Lawrence D. Poster came on as associate producers. Eva Price of Maximum Entertainment became the production's executive producer. Michael Thomas Murray served as musical director as well as associate producer. On 13 February 2013, Forever Dusty played its 100th performance. The show marked the occasion with a "Sing Along Night," in which audience members were encouraged to sing along with the performers with lyrics being projected on the back wall of the set. The United States west coast premiere of Forever Dusty was produced by Triangle Productions in Portland, Oregon, and ran from 2 February 2017, to 25 February 2017, at The Sanctuary at Sandy Plaza. On 3 May 2017, Forever Dusty had its UK premiere at the Brookside Theatre, Romford, produced in conjunction with Strictly Theatre Entertainments Ltd. The run was the start of a UK theatre tour. The cast includes Katherine Ferguson in the role of Dusty Springfield. Show Description Forever Dusty runs approximately 90 minutes without an intermission and contains 20 songs. 
An "Author’s Note" included in the Playbill program of the New World Stages production states that the story of Springfield's life as told in Forever Dusty is true, but presented in "fictional form" for dramatic purposes, including composite characters and some alterations in "the time sequence of events." The central plot line of the musical focuses on Dusty's relationship with "Claire," an African-American music journalist who becomes the singer's love interest. Events depicted in Forever Dusty include the origins of Springfield's career with her brother's group, The Springfields; her controversial arrest in and expulsion from apartheid South Africa for her refusal to perform to segregated audiences; her sponsorship of the first Motown revue in the United Kingdom; her troubled recording sessions for Dusty in Memphis; her public "coming out" as a lesbian; her comeback with The Pet Shop Boys; and her battle against breast cancer. List of musical numbers Wishin' and Hopin' (Burt Bacharach/Hal David) Seven Little Girls Sitting In the Back Seat (Bill Hilliard/Lee Pockriss) Island of Dreams (Tom Springfield) Tell Him (Bert Berns) I Only Want to Be With You (Michael Edwin Hawker/Ivor Raymonde) The Look of Love (Burt Bacharch/Hal David) Just a Little Lovin' (Barry Mann/Cynthia Weil) Love Power (Teddy Vann) People Get Ready (Curtis Mayfield) Willie and Laura Mae Jones (Tony Joe White) You Don't Have to Say You Love Me (Pino Donaggio/Vito Pallavicini /English lyrics by Vicki Wickham/Simon Napier-Bell) Son of a Preacher Man (John Hurley/Ronnie Wilkins) Little by Little (Buddy Kaye/Bea Verdi) Crumbs Off the Table (Ronald Dunbar/Edith Wayne/Sherrie Payne) I Just Don't Know What to Do With Myself (Burt Bacharach / Hal David) What Have I Done To Deserve This? (Allee Willis/Neil Tennant/Christopher Lowe) Brand New Me (Thom Bell/Jerry Butler/Kenny Gamble) Quiet Please There's a Lady On Stage (Peter Allen/Carole Bayer Sager) I Found My Way (Gil Slavin/Mike Soles) Don't Forget About Me (Gerry Goffin/Carole King) Cast In order of appearance, for the Off Broadway production: Kirsten Holly Smith – Dusty Springfield Benim Foster – Jerry Wexler/Bob Thackeray/Assorted characters Christina Sajous – Claire/Assorted characters Coleen Sexton – Becky Brixton/Gini/Understudy for Dusty Springfield Sean Patrick Hopkins – Tom Springfield/Record Label Executive Ashley Betton – Understudy for Becky Brixton/Gini/Assorted characters Jonathan C. Kaplan – Understudy for Jerry Wexler/Bob Thackeray/Tom Springfield/Record Label Executive Original UK Cast – London Fringe production: Katherine Ferguson – Dusty Springfield Jai Sepple – Jerry Wexler/Assorted characters Tash Thomas – Claire/Assorted characters Laura Hyde – Becky Brixton/Gini/Assorted characters Ben Ian Gordon – Tom Springfield/Record Label Executive/Bob Thackeray/Assorted Characters Isabel Gamble – Swing "Baby, It’s Cold Outside" Though the song does not appear in the play itself, on 18 December 2012, Forever Dusty released a holiday video of Smith and Sajous performing the song "Baby, It's Cold Outside". The video was inspired by a 1978 television performance of the song by Springfield with Rod McKuen. 
External links Official Site Kirsten Holly Smith on Portraying ‘Soulful, Sultry’ Dusty Springfield in Forever Dusty Dusty Springfield Musical’s Kirsten Holly Smith on Adapting the Life of a Complex Legend, Huffington Post A Dusty Road to Springfield, GALO Magazine Sixties Icon Dusty Springfield Subject of New Stage Musical, Reuters UK "Baby, It’s Cold Outside" video on Playbill.com References 2012 musicals Off-Broadway musicals
47692667
https://en.wikipedia.org/wiki/274th%20Air%20Support%20Operations%20Squadron
274th Air Support Operations Squadron
The United States Air Force's 274th Air Support Operations Squadron is a combat support unit located at Hancock Field Air National Guard Base, Syracuse, New York. The 274th provides tactical command and control of air power assets to the Joint Forces Air Component Commander and Joint Forces Land Component Commander for combat operations. Mission The 274 Air Support Operations Squadron trains, equips, and deploys mission qualified Tactical Air Control Party (TACP) members consisting of Air Liaison Officers and Joint Terminal Attack Controllers (JTAC) in support of the 42d Infantry Division, 27th Infantry Brigade Combat Team, and 86th Infantry Brigade Combat Team. Unit members are tasked with providing advice, guidance, and planning considerations to United States Army ground commanders on the proper integration of USAF airpower and close air support into the ground scheme of maneuver. As JTACs, 274th members are further qualified to provide terminal guidance and attack execution in a combat environment. In a domestic operations role, the 274th is responsible for establishing communications during state emergency response and contingency operations as ordered by the Governor of New York. History World War II The 313th Signal Company, Wing was constituted and activated on 1 December 1942 at Camp Pinedale, California and assigned to the Fourth Air Force for training purposes in preparation for reassignment overseas in support of World War II. Upon completion of training, the unit was assigned to Twelfth Air Force and transferred to the Camp Kilmer, New Jersey on 12 August 1943 to prepare for transport overseas. On 4 September 1943, the 313th loaded onto the S.S. Santa Elena, a converted luxury cruise liner from the Grace Line, at the New York Port of Embarkation where it departed for North Africa, arriving on 11 September. Upon arrival in North Africa, the 313th was reassigned to the 57th Bombardment Wing which it would remain under throughout the course of the war until inactivation. The 313th departed North Africa for Italy on 26 November 1943, arriving on 29 November. In the Mediterranean Theater of Operations, the unit participated in the Allied Invasion of Italy during the Naples-Foggia Campaign from 18 August 1943 to 21 January 1944 and the Rome-Arno Campaign from 22 January 1944 till the end of the war. The 313th received battle credits and streamers for both campaigns. On 18 April 1944, the unit was transferred from Italy to Corsica. On 15 April 1945, the unit was transferred back to Italy where it would remain throughout the duration of the war. On 4 October 1945 at Camp Marcianise, Italy, the 313th was formally inactivated and control of the unit was transferred to the War Department. Cold War era On 24 May 1946, the 313th was allotted to the New York National Guard and redesignated as the 102d Communications Squadron. On 29 March 1948, the unit was consolidated at the White Plains Armory in White Plains, New York and received Federal recognition with the mission to install, maintain, and operate communications facilities for the 52nd Fighter Wing, New York Air National Guard. On 1 July 1952, the unit underwent major manpower and mission changes and was redesignated as the 274th Communications Squadron. The unit’s new mission was to install, operate, and maintain mobile communications facilities in support of the 253rd Combat Communications Group, Air Force Communications Service, and Tactical Air Command communications area in a national emergency. 
In July 1959, the 274th was assigned the primary mission to provide highly mobile communications teams in support of contingencies and relocated to Roslyn Air National Guard Station in East Hills, New York. Changing with each Command served, the basic mission of the 274th was to provide, site, install, operate, and maintain deployed tactical communications equipment in support of a Tactical Air Base, providing commanders in the field with record and voice communications back to rear / area command headquarters via long-haul radio systems and/or in-country circuits. Communications were provided via long haul HF/ISB radio and later satellite radio systems. Tactical telephone, record communications, and Command and Control HF/SSB systems were the primary services provided. Local Area Networks (LAN) for computerized supply, personnel and maintenance reporting services were added later. All communications systems were highly complex and all were secured. 1995 BRAC In 1995, in order to produce a more efficient and cost-effective basing structure, the Base Realignment and Closure Commission recommended to the Secretary of Defense the closure of Roslyn ANGS and subsequent relocation of the 274th Combat Communications Squadron to Stewart Air National Guard Base in Newburgh, New York. In 1998, rather than being moved to Stewart ANGB, the unit was to be moved to Hancock Field Air National Guard Base in Syracuse, New York and redesignated as the 274th Air Support Operations Squadron. Air support operations On 1 October 1999, the 274th Combat Communications Squadron was redesignated as the 274th Air Support Operations Squadron and relocated to Hancock Field ANGB. The 274 ASOS' new mission was tactical command and control of air assets while embedded with aligned US Army units. Despite the name change and new mission, the history of working with advanced communications equipment since World War II continued. September 11, 2001 The attacks on New York City on the morning of 11 September 2001 were the first operational test of the 274 ASOS. Within hours of the attacks, half of the unit loaded onto HC-130s from the 106th Rescue Wing for deployment to New York City as part of the initial response; the second half of the unit would follow on 12 September. The rapid notification and deployment ensured that unit members were on the ground and operational in less than 8 hours after the first attack. In New York City, the unit's mission was to serve as a communications platform in the center of the city for the various first responder, aid, and relief units. Unit members drew upon their training and experiences working in austere environments with minimal equipment to effectively and efficiently relay communications between relief workers and local, state, and federal leadership. Afghanistan and Iraq In anticipation of deployment operations as a result of the September 11th attacks, the 274th increased its training tempo and operational readiness, conducting close air support training throughout the winter in locations across the United States. The increased effort put towards JTAC training, readiness, and currency resulted in the unit being given initial operational capability (IOC) status in December 2002, paving the way for unit members to contribute to the operations in Afghanistan and to prepare for the expected operations in Iraq. 
Domestic operations As part of the domestic response role, the unit was activated several times for relief efforts, most recently for the flooding in Binghamton, New York caused by Tropical Storm Lee in September 2011 and Hurricane Sandy in New York City and the relief efforts in Buffalo, New York due to severe lake effect snowfall received in November 2014. Lineage Constituted as the 313th Signal Company, Wing Activated on 1 December 1942 Inactivated 4 October 1945 Redesignated 102d Communications Squadron, Wing and allotted to the National Guard on 24 May 1946 Organized on 15 March 1948 Extended federal recognition on 29 March 1948 Redesignated 274th Communications Squadron, Operations on 1 July 1952 Redesignated 274th Communications Squadron (Operations) on 1 July 1955 Redesignated 274th Communications Squadron, Tributary Team on 10 May 1961 Redesignated 274th Mobile Communications Squadron (Contingency) on 16 March 1968 Federalized and ordered to extended active duty on 24 March 1970 Released from extended active duty and returned to New York state control on 26 March 1970 Redesignated 274th Combat Communications Squadron (Contingency) on 1 April 1976 Redesignated 274th Combat Information Systems Squadron on 1 July 1984 Redesignated 274th Combat Communications Squadron on 1 October 1986 Redesignated 274th Air Support Operations Squadron on 1 October 1999 Assignments Western Signal Aviation Training Center, 1 December 1942 Twelfth Air Force, ~August 1943 57th Bombardment Wing, ~1944 52d Fighter Wing, 29 March 1948 New York Air National Guard, 1 November 1950 Attached to 106th Composite Wing, 1 November 1950 Attached to 152d Aircraft Control and Warning Group, 2 April 1951 Attached to 107th Fighter Wing, 9 July 1951 Attached to 152d Tactical Control Group, c. July 1952 253d Communications Group (later 253d Mobile Communications Group, 253d Combat Communications Group, 253d Combat Information Systems Group, 253d Combat Communications Group), January 1953 174th Fighter Wing, 1 October 1999 152d Air Operations Group, 20 March 2013 Stations White Plains Armory, White Plains, New York, March 1948 – November 1954 Westchester Air National Guard Base, Westchester, New York, December 1954 – June 1959 Roslyn Air National Guard Station, East Hills, New York, July 1959 – September 1999 Hancock Field Air National Guard Base, Syracuse, New York, October 1999 – Present See also List of United States Air Force Air Support Operations Squadrons United States Air Force Tactical Air Control Party References External links Squadrons of the United States Air National Guard
4334868
https://en.wikipedia.org/wiki/Distributed%20lock%20manager
Distributed lock manager
Operating systems use lock managers to organise and serialise the access to resources. A distributed lock manager (DLM) runs in every machine in a cluster, with an identical copy of a cluster-wide lock database. In this way a DLM provides software applications which are distributed across a cluster on multiple machines with a means to synchronize their accesses to shared resources. DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access. VMScluster, the first clustering system to come into widespread use, relied on the OpenVMS DLM in just this way. Resources The DLM uses a generalized concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows: Database Table Record Field A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked. Lock modes A process running within a VMScluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity being granted; it is possible to convert the lock to a higher or lower level of lock mode. When all processes have unlocked a resource, the system's information about the resource is destroyed. Null (NL). Indicates interest in the resource, but does not prevent other processes from locking it. It has the advantage that the resource and its lock value block are preserved, even when no processes are locking it. Concurrent Read (CR). Indicates a desire to read (but not update) the resource. It allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources. Concurrent Write (CW). Indicates a desire to read and update the resource. It also allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is also usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources. Protected Read (PR). This is the traditional share lock, which indicates a desire to read the resource but prevents others from updating it. Others can however also read the resource. Protected Write (PW). This is the traditional update lock, which indicates a desire to read and update the resource and prevents others from updating it. Others with Concurrent Read access can however read the resource. Exclusive (EX). This is the traditional exclusive lock which allows read and update access to the resource, and prevents others from having any access to it. 
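A minimal sketch of how an application might encode these six modes and test whether a requested lock can coexist with locks already granted follows. The compatibility matrix is derived from the mode descriptions above, matching the standard OpenVMS DLM rules; a new lock is treated as grantable only if it is compatible with every lock already held on the resource.

```python
# Lock modes, from least to most restrictive, as described above.
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]

# COMPAT[granted][requested] is True when the two modes can coexist
# on the same resource (the matrix is symmetric).
COMPAT = {
    "NL": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": True},
    "CR": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": False},
    "CW": {"NL": True, "CR": True, "CW": True, "PR": False, "PW": False, "EX": False},
    "PR": {"NL": True, "CR": True, "CW": False, "PR": True, "PW": False, "EX": False},
    "PW": {"NL": True, "CR": True, "CW": False, "PR": False, "PW": False, "EX": False},
    "EX": {"NL": True, "CR": False, "CW": False, "PR": False, "PW": False, "EX": False},
}

def can_grant(requested: str, held: list[str]) -> bool:
    """A new lock is grantable only if it is compatible with every
    lock already granted on the resource."""
    return all(COMPAT[h][requested] for h in held)

# Example: a CR lock coexists with a granted PW lock,
# but a second PW request must wait.
print(can_grant("CR", ["PW"]))   # True
print(can_grant("PW", ["PW"]))   # False
```

A real DLM must additionally handle lock conversions and queued waiters, which this sketch ignores.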
The compatibility of each lock mode with the others is as follows: NL is compatible with all modes; CR is compatible with every mode except EX; CW is compatible with NL, CR and CW; PR is compatible with NL, CR and PR; PW is compatible with NL and CR; and EX is compatible only with NL. Obtaining a lock A process can obtain a lock on a resource by enqueueing a lock request. This is similar to the QIO technique that is used to perform I/O. The enqueue lock request can either complete synchronously, in which case the process waits until the lock is granted, or asynchronously, in which case an AST occurs when the lock has been obtained. It is also possible to establish a blocking AST, which is triggered when a process has obtained a lock that is preventing access to the resource by another process. The original process can then optionally take action to allow the other access (e.g. by demoting or releasing the lock). Lock value block A lock value block is associated with each resource. This can be read by any process that has obtained a lock on the resource (other than a null lock) and can be updated by a process that has obtained a protected update or exclusive lock on it. It can be used to hold any information about the resource that the application designer chooses. A typical use is to hold a version number of the resource. Each time the associated entity (e.g. a database record) is updated, the holder of the lock increments the lock value block. When another process wishes to read the resource, it obtains the appropriate lock and compares the current lock value with the value it had last time the process locked the resource. If the value is the same, the process knows that the associated entity has not been updated since last time it read it, and therefore it is unnecessary to read it again. Hence, this technique can be used to implement various types of cache in a database or similar application. Deadlock detection When one or more processes have obtained locks on resources, it is possible to produce a situation where each is preventing another from obtaining a lock, and none of them can proceed. This is known as a deadlock (E. W. Dijkstra originally called it a deadly embrace). A simple example is when Process 1 has obtained an exclusive lock on Resource A, and Process 2 has obtained an exclusive lock on Resource B. If Process 1 then tries to lock Resource B, it will have to wait for Process 2 to release it. But if Process 2 then tries to lock Resource A, both processes will wait forever for each other. The OpenVMS DLM periodically checks for deadlock situations. In the example above, the second lock enqueue request of one of the processes would return with a deadlock status. It would then be up to this process to take action to resolve the deadlock, in this case by releasing the first lock it obtained. Linux clustering Both Red Hat and Oracle have developed clustering software for Linux. OCFS2, the Oracle Cluster File System, was added to the official Linux kernel with version 2.6.16, in January 2006. The alpha-quality code warning on OCFS2 was removed in 2.6.19. Red Hat's cluster software, including its DLM and GFS2, was officially added to the Linux kernel with version 2.6.19, in November 2006. Both systems use a DLM modeled on the venerable VMS DLM. Oracle's DLM has a simpler API (the core function, dlmlock(), has eight parameters, whereas the VMS SYS$ENQ service and Red Hat's dlm_lock both have 11). Other implementations Other DLM implementations include the following: Google has developed Chubby, a lock service for loosely coupled distributed systems. It is designed for coarse-grained locking and also provides a limited but reliable distributed file system. 
Deadlock detection
When one or more processes have obtained locks on resources, it is possible to produce a situation where each is preventing another from obtaining a lock, and none of them can proceed. This is known as a deadlock (E. W. Dijkstra originally called it a deadly embrace). A simple example is when Process 1 has obtained an exclusive lock on Resource A, and Process 2 has obtained an exclusive lock on Resource B. If Process 1 then tries to lock Resource B, it will have to wait for Process 2 to release it. But if Process 2 then tries to lock Resource A, both processes will wait forever for each other. The OpenVMS DLM periodically checks for deadlock situations. In the example above, the second lock enqueue request of one of the processes would return with a deadlock status. It would then be up to this process to take action to resolve the deadlock, in this case by releasing the first lock it obtained.

Linux clustering
Both Red Hat and Oracle have developed clustering software for Linux. OCFS2, the Oracle Cluster File System, was added to the official Linux kernel with version 2.6.16, in January 2006. The alpha-quality code warning on OCFS2 was removed in 2.6.19. Red Hat's cluster software, including its DLM and GFS2, was officially added to the Linux kernel with version 2.6.19, in November 2006. Both systems use a DLM modeled on the venerable VMS DLM. Oracle's DLM has a simpler API (its core function, dlmlock(), has eight parameters, whereas the VMS SYS$ENQ service and Red Hat's dlm_lock both have 11).

Other implementations
Other DLM implementations include the following:
Google has developed Chubby, a lock service for loosely coupled distributed systems. It is designed for coarse-grained locking and also provides a limited but reliable distributed file system. Key parts of Google's infrastructure, including Google File System, Bigtable, and MapReduce, use Chubby to synchronize access to shared resources. Though Chubby was designed as a lock service, it is now heavily used inside Google as a name server, supplanting DNS.
Apache ZooKeeper, which was created at Yahoo, is open-source software and can also be used to perform distributed locking.
Etcd is open-source software, developed at CoreOS under the Apache License, that can also be used to perform distributed locking.
Redis is an open-source, BSD-licensed, advanced key-value cache and store. It can be used to implement the Redlock algorithm for distributed lock management.
Consul, created by HashiCorp, is open-source software and can also be used to perform distributed locking.
The Taooka distributed lock manager uses "try lock" methods to avoid deadlocks. It can also specify a TTL for each lock with nanosecond precision.
A DLM is also a key component of more ambitious single system image (SSI) projects such as OpenSSI.

References
HP OpenVMS Systems Services Reference Manual – $ENQ
Officer – a simple distributed lock manager written in Ruby
FLoM – a free, open-source distributed lock manager that can be used to synchronize shell commands, scripts and custom-developed C, C++, Java, PHP and Python software
27443739
https://en.wikipedia.org/wiki/List%20of%20University%20of%20Wisconsin%E2%80%93Madison%20people%20in%20academics
List of University of Wisconsin–Madison people in academics
This University of Wisconsin–Madison people in academics consists of notable people who graduated or attended the University of Wisconsin–Madison. Julia Adams (sociologist), Professor, Yale University Robert Adair, Professor Emeritus of Physics, Yale University David Adamany, former President of Temple University Colin Adams, Professor of Mathematics, Williams College Paul C. Adams, Associate Professor of Geography, University of Texas-Austin Julius Adler Madeleine Wing Adler, former President, West Chester University Sarita Adve, Professor of Computer Science, University of Illinois, Urbana-Champaign Michael A'Hearn, astronomer Julie Ahringer, Senior Research Fellow, Gurdon Institute, Cambridge University Anastasia Ailamaki, Professor of Computer Science, École Polytechnique Fédérale de Lausanne Fay Ajzenberg-Selove, Professor Emerita of Physics, University of Pennsylvania; taught at Haverford College Robert A. Alberty, Professor Emeritus of Chemistry, MIT F. King Alexander, President of California State University, Long Beach Gar Alperovitz, author, economist, historian, and former fellow at Cambridge University Sanford Soverhill Atwood - scientist, Provost of Cornell University, President of Emory University Alice Ambrose, former Professor of Philosophy, Smith College Stephen E. Ambrose, author and historian Marc A. Anderson, environmental chemist Arthur Irving Andrews, former Professor of Diplomacy, Charles University in Prague Thomas G. Andrews, historian Nancy Armstrong, Professor of English, Duke University Marilyn Arnold, Professor Emeritus of English, Brigham Young University Richard Arratia, Professor of Mathematics, University of Southern California Michael Aschbacher, Professor of Mathematics, California Institute of Technology Peter J. Aschenbrenner, historian, Purdue University David Audretsch, Professor of Economics, Indiana University Nina Auerbach, Professor of Comparative Literature, University of Pennsylvania John D. Axtell, former Professor of Agronomy, Purdue University Oliver Edwin Baker, geographer Tania A. Baker, Professor of Biochemistry, MIT Ira Baldwin, bacteriologist Clinton Ballou, Professor Emeritus of Biochemistry, University of California, Berkeley David P. Barash, Professor of Psychology, University of Washington Thomas P.M. Barnett, military and security strategist, former professor at the Naval War College Michael Barnsley, Professor of Mathematics, Australian National University Henry H. Barschall, physicist Florence Bascom, geologist at Bryn Mawr College Carolyn Baylies, former Reader in Sociology, University of Leeds Charles L. Beach, President of the University of Connecticut Jesse Beams, former Professor of Physics, University of Virginia Carl L. Becker, former Professor of History, Cornell University David T. Beito, author and historian Richard Bellman, mathematician and inventor of dynamic programming Frank Bencriscutto, former Professor of Music, University of Minnesota Ernst Benda, former Professor of Law, University of Freiburg William H. Bennett, Professor of Agronomy, Utah State University Ira Berlin, Distinguished University Professor, University of Maryland Bruce C. Berndt, Professor of Mathematics, University of Illinois, Urbana-Champaign William T. Bielby, former Professor of Sociology, University of Pennsylvania Ray Allen Billington, former Professor of History, Oxford University and Northwestern University Thomas Binford, Professor Emeritus of Computer Science, Stanford University Robert Byron Bird, chemical engineer Kenneth O. 
Bjork, former Professor of History, Saint Olaf College David W. Blight, Professor of History, Yale University; taught at Amherst College Leonard Bloomfield, former Professor of Linguistics, Yale University Herbert Eugene Bolton, professor of history at the University of Texas at Austin and University of California, Berkeley George Boyer, Professor of Economics, Cornell University Carol Breckenridge, anthropologist Patricia Flatley Brennan, Professor of Engineering Arthur Louis Breslich, President of German Wallace College and Baldwin-Wallace College Ernest J. Briskey, Dean of Agricultural Science, Oregon State University David H. Bromwich, Professor of Geography, Ohio State University Morton Brown, Professor Emeritus of Mathematics, University of Michigan Norman O. Brown, scholar of Classics Christopher Browning, Professor of History, University of North Carolina, Chapel Hill Robert X. Browning, Associate Professor of Political Science, Purdue University Mari Jo Buhle, Professor Emerita of History, Brown University Paul Buhle, activist and lecturer, Brown University R. Carlyle Buley, former Professor of History, Indiana University Mary Bunting, former President, Radcliffe College Robert H. Burris, biochemist Frederick H. Buttel, former Professor of Sociology Lester J. Cappon, historian, documentary editor, and archivist for Colonial Williamsburg Claudia Card, Emma Goldman (WARF) Professor of Philosophy at the University of Wisconsin–Madison Margery C. Carlson (M.S. 1920, Ph.D. 1925), Professor of Botany, Northwestern University John Casida, Professor of Entomology, University of California, Berkeley Carlos Castillo-Chavez, Professor of Mathematical Biology, Arizona State University Edward Castronova, Professor of Telecommunications, Indiana University Kwang-Chu Chao, chemical engineer at Purdue University Arthur B. Chapman, geneticist Peter Charanis, former Professor of History, Rutgers University Vivek Chibber, sociologist, New York University Edith Clarke, former Professor of Electrical Engineering, University of Texas-Austin W. Wallace Cleland, biochemist John H. Coatsworth, Provost, Columbia University Alan Code, Professor of Philosophy, Stanford University Stephen P. Cohen, Senior Fellow, Brookings Institution Betsy Colquitt, Professor of Literature and Creative Writing, Texas Christian University Timothy E. Cook, former Professor of Political Science at Williams College and Louisiana State University Vincent Cooke, S.J., (Ph.D. philosophy 1971), academic administrator, President of Canisius College (1993–2010) Arthur C. Cope, former Professor of Chemistry, MIT Brian Coppola, Professor of Chemistry, University of Michigan Giovanni Costigan, former Professor of History, University of Washington May Louise Cowles, home economics instructor and lecturer Richard H. Cracroft, Professor of English, Brigham Young University Joanne V. Creighton, Interim President, Haverford College; former President, Mount Holyoke College Kimberlé Crenshaw, Professor of Law at Columbia University and UCLA Tim Cresswell, Professor of Geography, University of London William Cronon (1976), environmental historian Harold Marion Crothers, Professor of Electrical Engineering, South Dakota State University Chicita F. Culberson, Senior Research Scientist in Biology, Duke University Chris Cuomo, former Professor of Ethics, University of Cincinnati Richard N. Current, historian John T. 
Curtis, botanist Edward Cussler, Professor of Chemical Engineering, University of Minnesota Thomas Daniel, Professor of Biology, University of Washington Stephen Daniels, Professor of Cultural Geography, University of Nottingham Richard Danner, Professor of Law, Duke University Kelvin Davies, Professor of Gerontology, University of Southern California W. R. Davies, President (1941–1959), University of Wisconsin–Eau Claire James A. Davis, sociologist Kenneth S. Davis, historian Dick de Jongh, Professor Emeritus of Logic and Mathematics, University of Amsterdam Brady J. Deaton, Chancellor, University of Missouri Peter Dervan, Professor of Chemistry, California Institute of Technology Matthew Desmond, Professor of Sociology, Princeton University Frans Dieleman, former Professor of Geography, Utrecht University John Louis DiGaetani, Professor of English, Hofstra University Hasia Diner, historian Robert Disque, President, Drexel Institute of Technology Carl Djerassi, Professor of Chemistry, Stanford University John Dollard, former Professor of Psychology, Yale University J. Kevin Dorsey, Dean, Southern Illinois University School of Medicine Eliza T. Dresang (PhD, 1981), professor and researcher in literacy, library and information sciences, media and technology Lee A. DuBridge, former President, California Institute of Technology Wendell E. Dunn, President of the Middle States Association of Colleges and Schools Nancy Dye, former President, Oberlin College William G. Dyer, Dean, Marriott School of Management at Brigham Young University Anne Haas Dyson 1972 College of Education - professor and researcher in literacy Olin J. Eggen, astronomer Marc Egnal, Professor of History, York University Jean Bethke Elshtain, Professor of Divinity and Philosophy, University of Chicago Conrad Elvehjem, former President, University of Wisconsin-Madison Michael Engh, President of Santa Clara University David Estlund, Lombardo Family Professor of the Humanities, Brown University John Eyler, Professor Emeritus of History, University of Minnesota John K. Fairbank, former Professor of History, Harvard University Etta Zuber Falconer, Professor of Mathematics, Norfolk State University and Spelman College Joseph Felsenstein, Professor of Biology, University of Washington Peter Edgerly Firchow, Professor Emeritus of English, University of Minnesota Erica Flapan, Professor of Mathematics, Pomona College Robben Wright Fleming, former President, University of Michigan Neil Fligstein, Professor of Sociology, University of California, Berkeley George T. Flom, former Professor of Scandinavian Languages, University of Illinois, Urbana-Champaign Karl Folkers, biochemist Michael J. Franklin, Professor of Computer Science, University of California, Berkeley Shane Frederick, Associate Professor of Management, Yale University Daniel Z. Freedman, Professor of Physics, MIT Joseph S. Freedman, Professor of Education at Alabama State University Frank Freidel, former Professor of History at Harvard University and the University of Washington Linda P. Fried, Dean of Public Health, Columbia University Joseph G. Fucilla, former Professor of Romantic Languages, Northwestern University D.R. Fulkerson, former Professor of Mathematics, Cornell University Ellen V. Futter, former President of Barnard College William A. Gahl, geneticist, NIH John Gallagher III, astronomer Fernando García Roel, Rector, Monterrey Institute of Technology and Higher Education Lloyd Gardner, historian of U.S. 
foreign relations Johannes Gehrke, Professor of Computer Science, Cornell University Judy Genshaft, President of University of South Florida Mark Gertler, Professor of Economics, New York University Paul Gertler, Professor of Economics and Business, University of California, Berkeley Arnold Gesell, former Professor of Psychology, Yale University Reza Ghadiri, Professor of Chemistry, Scripps Research Institute Jacquelyn Gill, Assistant professor of climate science, University of Maine Donna Ginther, Professor of Economics, University of Kansas G. N. Glasoe, former Professor of Physics, Columbia University; Associate Director, Brookhaven National Laboratory George Glauberman, Professor of Mathematics, University of Chicago Helen Iglauer Glueck, Director of the Coagulation Laboratory, University of Cincinnati Harvey Goldberg, activist and historian Gerson Goldhaber, Professor Emeritus of Physics, University of California, Berkeley; researcher, Lawrence Berkeley National Laboratory Brison D. Gooch, historian Ann Dexter Gordon, historian, editor of The Elizabeth Cady Stanton and Susan B. Anthony Papers Project at Rutgers University Myron J. Gordon, Professor Emeritus of Finance, University of Toronto Richard K. Green, Professor of Business, University of Southern California William Greene, Professor of Economics, New York University Michael Gribskov, Professor of Biological Sciences, Purdue University Paul J. Griffiths, Professor of Theology, Duke University Erik Gronseth, former Professor of Sociology, University of Oslo James A. Gross, labor historian at Cornell University Jennifer Guglielmo, Associate Professor of History and Women's Studies, Smith College Ernst Guillemin, electrical engineer and computer scientist, MIT, recipient of the IEEE Medal of Honor Ramón A. Gutiérrez, Professor of History, University of Chicago Herbert Gutman, Professor of History, City University of New York Jeffrey K. Hadden, former Professor of Sociology, University of Virginia Usha Haley, former Professor of International Business, University of New Haven Joseph M. Hall, Jr., Professor of American History, Bates College Helena Hamerow, Professor of Medieval Archaeology, Oxford University Gordon Hammes, Professor Emeritus of Biochemistry, Duke University Jo Handelsman, Professor of Biology and Medicine, Yale University Pat Hanrahan, Professor of Computer Science and Electrical Engineering, Stanford University Alvin Hansen, former Professor of Economics, Harvard University; Presidential advisor John W. Harbaugh, Professor of Geological and Environmental Sciences, Stanford University Robert Harms, Professor of History, Yale University Cole Harris, geographer; professor at the University of Toronto Daniel Hartl, Professor of Biology, Harvard University Arthur D. Hasler, ecologist and zoologist Darren Hawkins, Professor of Political Science, Brigham Young University James Edwin Hawley, Professor of Mineralogy, Queen's University; namesake of Hawleyite Patrick J. Hearden, Professor of History, Purdue University Margaret Hedstrom, Professor of Information, University of Michigan D. Mark Hegsted, former Professor of Nutrition at Harvard University Walter Heller, former Professor of Economics, University of Minnesota; Chair, Council of Economic Advisors Joseph M. Hellerstein, Professor of Computer Science, University of California, Berkeley Frederick Hemke, Professor of Saxophone, Bienen School of Music at Northwestern University Ralph D. 
Hetzel, former President, Pennsylvania State University Elfrieda "Freddy" Hiebert, literacy advocate Thomas Hines, Professor Emeritus, UCLA Ralph Hirschmann, former Professor of Chemistry, University of Pennsylvania Ho Ping-sung, former historian at Peking University and Beijing Normal University Michael A. Hoffman, Professor of Earth Sciences and Resources, University of South Carolina LaVahn Hoh, Professor of Drama, University of Virginia Karen Holbrook, former President, Ohio State University Charles H. Holbrow, physicist, Charles A. Dana Professor of physics, emeritus, Colgate University Lori L. Holt, Associate Professor of Psychology, Carnegie Mellon University Olga Holtz, Associate Professor of Mathematics, University of California, Berkeley; Professor of Applied Mathematics, Technical University Berlin Renate Holub, philosopher and interdisciplinary theorist, University of California, Berkeley Robert C. Holub, current chancellor of the University of Massachusetts Amherst (2008–present) Vasant Honavar, Dorothy Foehr Huck and J. Lloyd Huck Chair in Biomedical Data Sciences and Artificial Intelligence, Professor of Informatics, Data Sciences, Computer Science, Operations Research, Neuroscience, and Public Health Sciences Pennsylvania State University, former Professor of Computer Science, Iowa State University; Earnest Hooton, former Professor of Anthropology, Harvard University Calvin B. Hoover, former Professor of Economics, Duke University William O. Hotchkiss, President of Michigan Technological University and Rensselaer Polytechnic Institute Mark Huddleston, President, University of New Hampshire Clark L. Hull, psychologist of motivation at Yale University William Hunter, statistician William Edwards Huntington, President of Boston University Charlie D. Hurt, former provost and Professor of Computer Science and Information Systems, University of Wisconsin-River Falls Lloyd Hustvedt, former Professor of Norwegian, Saint Olaf College William Jaco, Professor of Mathematics, Oklahoma State University-Stillwater Russell Jacoby, Professor of History, UCLA James Alton James, former Professor of History, Northwestern University Henry Jenkins, Professor of Communication Arts, University of Southern California Merrill Jensen, historian Carleton B. Joeckel, former librarian Peter Johnsen, Vice President for Academic Affairs, Bradley University Emory Richard Johnson, former Dean of Business, University of Pennsylvania Michael D. Johnson, Dean of Hotel Administration, Cornell University Charles O. Jones, former Professor of Political Science, University of Wisconsin-Madison and University of Virginia and former President, American Political Science Association Jacqueline Jones, Professor of History, University of Texas-Austin Kenneth Judd, Senior Fellow, Hoover Institution Gerald Kaiser, Professor Emeritus, University of Massachusetts Lowell Ellsworth Kalas, President of Asbury Theological Seminary Vytautas Kavolis, sociologist Homayoon Kazerooni, Professor of Mechanical Engineering, University of California, Berkeley Edmond Keller, Professor of Political Science, UCLA George L. Kelling, Professor of Social Welfare, Rutgers University Ben Kerkvliet, Professor of Political Science, Australian National University Corey Keyes, sociologist at Emory University Margaret Keyes, former Professor of Home Economics, University of Iowa Spencer L. 
Kimball, former Dean of Law, University of Wisconsin-Madison; former Professor of Law, University of Chicago and University of Michigan Robin Wall Kimmerer, Professor of Environmental of Forest Biology, State University of New York College of Environmental Science and Forestry Gary King, Professor of Government, Harvard University; taught at New York University and Oxford University Nicole King, Assistant Professor of Genetics, Genomics, and Development; University of California, Berkeley Ronold W. P. King, former Professor of Physics, Harvard University Willford I. King, former Professor of Economics, New York University John W. Kingdon, Professor Emeritus of Political Science, University of Michigan David Kinley, former President, University of Illinois, Urbana-Champaign Grayson L. Kirk, former President, Columbia University Charles Kittel, former Professor of Physics, University of California, Berkeley Anne C. Klein, Professor of Religious Studies, Rice University William J. Klish, Professor of Pediatrics, Gastroenterology, Hepatology, and Nutrition; Baylor College of Medicine J. Martin Klotsche, first chancellor of the University of Wisconsin–Milwaukee Clyde Kluckhohn, former Professor of Anthropology, Harvard University Anne Kelly Knowles, Professor of Geography, Middlebury College Kenneth Koedinger, Professor of Psychology, Carnegie Mellon University Henry Koffler, former Vice President, University of Minnesota Gabriel Kolko, historian Arnold Krammer, historian, retired from Texas A&M University Thomas R. Kratochwill, psychologist Konrad Bates Krauskopf, former Professor of Geology, Stanford University James E. Krier, Professor of Law, University of Michigan; taught at Harvard University, Oxford University, Stanford University, and UCLA Leo Kristjanson, President, University of Saskatchewan Lawrence Kritzman, Professor of French, Dartmouth College Anne O. Krueger, Professor of Economics, Johns Hopkins University; taught at Stanford University Harold J. Kushner, Professor Emeritus of Applied Mathematics, Brown University Philip Kutzko, Professor of Mathematics, University of Iowa Walter LaFeber, historian of U.S. foreign relations at Cornell University Max G. Lagally, engineer and professor James A. Lake, Professor of Biology and Genetics, UCLA Janja Lalich, Professor of Sociology, California State University, Chico Henry A. Lardy, biochemist Edward Larson, winner of the Pulitzer Prize in history Mark Lautens, Professor of Chemistry, University of Toronto Traugott Lawler, Professor of English, Yale University Michael Ledeen, security strategist at American Enterprise Institute and Foundation for Defense of Democracies Winfred P. Lehmann, former Professor of German, University of Texas-Austin Charles Kenneth Leith, geologist John Leonora, Professor of Physiology and Pharmacology, Loma Linda University A. Carl Leopold, Graduate Dean, University of Nebraska-Lincoln A. Starker Leopold, son of Aldo Leopold; former Professor of Zoology at the University of California, Berkeley Luna Leopold, son of Aldo Leopold; former Professor of Geology, University of California, Berkeley Herb Levi, former Professor of Biology, Harvard University Robert Lieber, Professor, Department of Government and School of Foreign Service, Georgetown University Gene Likens, ecologist Mary Ann Lila, former Professor of Nutrition, University of Illinois, Urbana-Champaign Bernard J. Liska, food scientist at Purdue University Timothy P. Lodge, Professor of Chemistry, University of Minnesota Timothy M. 
Lohman, Professor of Medicine, Washington University Roberto Sabatino Lopez, former Professor of History at Yale University Max O. Lorenz, economist and statistician Daryl B. Lund, former Dean of Agricultural and Life Sciences, Cornell University George A. Lundberg, sociologist at the University of Washington Karl Mahlburg, mathematician Tak Wah Mak, Professor of Biophysics and Immunology, University of Toronto Howard Malmstadt, Professor Emeritus of Chemistry, University of Illinois, Urbana-Champaign Daniel R. Mandelker, Professor of Law, Washington University James G. March, Professor Emeritus of Psychology, Stanford University Carolyn "Biddy" Martin, President, Amherst College Abraham Maslow (PhD 1934), groundbreaking humanist psychologist, "hierarchy of needs;" former professor at Brandeis University Max Mason, former President, University of Chicago Thomas Mathiesen, Professor Emeritus of Sociology, University of Oslo Lola J. May, mathematics educator Thomas J. McCormick, scholar of international relations Thomas K. McCraw, Professor Emeritus of Business, Harvard University Frederick Merk, former Professor of Government and History, Harvard University Alan G. Merten, President of George Mason University Gerald Meyer, Professor of Chemistry, Johns Hopkins University Joseph C. Miller, Professor of History, University of Virginia Renée J. Miller, Professor of Computer Science, University of Toronto C. Wright Mills, sociologist and professor at Columbia University Lawrence Mishel, President, Economic Policy Institute Olivia S. Mitchell, Professor of Economics, University of Pennsylvania Jason Mittell, Professor of American Studies and Film, Middlebury College Pornchai Mongkhonvanit, President of Siam University, President Emeritus of International Association of University Presidents Florence M. Montgomery, art historian Stephen S. Morse, Professor of Epidemiology, Columbia University Clark A. Murdock, Senior Adviser, Center for Strategic and International Studies John E. Murdoch, former historian and philosopher of science, Harvard University John Murray, Jr., Chancellor and Professor of Law, Duquesne University Daniel J. Myers, Professor of Sociology, University of Notre Dame Mark Myers, geologist Jeffrey Naughton, computer scientist Richard Nelson, cultural anthropologist Maurice F. Neufeld, professor emeritus, Cornell University David Newbury, Professor of African Studies, Smith College Barbara W. Newell, former President, Wellesley College Carl Niemann, former Professor of Biochemistry, California Institute of Technology David W. Noble, Professor of American Studies, University of Minnesota Mark Nordenberg, Chancellor, University of Pittsburgh Olaf M. Norlie, former Dean, Hartwick College Gerald North, climatologist Russel B. Nye, former Professor of English, Michigan State University Alton Ochsner, UW medical professor and cancer researcher; co-founded the Ochsner Clinic in New Orleans Emiko Ohnuki-Tierney, anthropologist Bertell Ollman, Professor of Politics, New York University Scott E. Page, Professor of Economics, University of Michigan Thomas Palaima, Professor of Classics, University of Texas-Austin Ann C. Palmenberg, biochemist Bernhard Palsson, Professor of Bioengineering, University of California, San Diego John Parascandola, medical historian Gil-Sung Park, Korean sociologist W. 
Robert Parks, former President, Iowa State University Michael Quinn Patton, former Professor of Sociology, University of Minnesota John Allen Paulos, Professor of Mathematics, Temple University; author of books about the consequences of mathematical illiteracy John Vernon Pavlik, Professor of Journalism, Rutgers University Donald E. Pearson, former Professor of Chemistry, Vanderbilt University Joseph A. Pechman, former Senior Fellow, Brookings Institution; former President, American Economic Association John Pemberton, Associate Professor of Anthropology, Columbia University Selig Perlman, economist and labor historian August Herman Pfund, former Professor of Physics, Johns Hopkins University Andrew C. Porter, former president, AERA; former professor, Vanderbilt University; Dean of Education, University of Pennsylvania Alejandro Portes, Professor of Sociology, Princeton University Philip S Portoghese, Professor of Medicine, University of Minnesota L. Fletcher Prouty, former Professor of Air Force Science and Tactics, Yale University Benjamin Arthur Quarles, historian Matthew Rabin, Professor of Economics, University of California, Berkeley Marian Radke-Yarrow, psychologist, National Institute of Mental Health Ronald Radosh, activist and historian Douglas W. Rae, Professor of Political Science, Yale University; author of Equalities Jim Ranchino, late Professor of Political Science, Ouachita Baptist University John Rapp, Professor of Political Science, Beloit College George Rawick, historian Robert A. Rees, former Professor of English, UCLA Thomas Reh, Professor of Biology, University of Washington J. Wayne Reitz, Professor of Agricultural Economics; fifth President of the University of Florida (1955-1967) Frank J. Remington, former Professor of Law, University of Wisconsin-Madison Justin Rhodes, Professor of Psychology, University of Illinois, Urbana-Champaign Lori Ringhand, Professor of Law, University of Georgia Walter Ristow, librarian Temario Rivera, Professor of Political Science, International Christian University Anita Roberts, former biochemist, National Cancer Institute Arthur H. Robinson, geographer Stuart Rojstaczer, former Professor of Geophysics, Duke University Gerhard Krohn Rollefson, former Professor of Chemistry, University of California, Berkeley Ediberto Roman, Professor of Law, Florida International University College of Law and Florida International University Jia Rongqing, Professor of Mathematics, University of Alberta Charles E. Rosenberg, historian of science at Harvard University Milton J. Rosenberg, Professor Emeritus of Psychology, University of Chicago Nathan Rosenberg, Professor Emeritus of Economics, Stanford University; taught at Cambridge University George C. Royal, microbiologist Lee Albert Rubel, former Professor of Mathematics, University of Illinois, Urbana-Champaign David S. Ruder, former Dean of Law, Northwestern University Mary P. Ryan, Professor of History, Johns Hopkins University and Professor Emerita of History, University of California, Berkeley Joseph F. Rychlak, Professor of Humanistic Psychology, Loyola University Chicago Herbert J. Ryser, former Professor of Mathematics, California Institute of Technology and Ohio State University Yuriko Saito, Professor of Philosophy, Rhode Island School of Design Theodore Saloutos, former Professor of History, UCLA Warren Samuels, economist Austin Sarat, Professor of Political Science, Amherst College Richard J. 
Saykally, Professor of Chemistry, University of California, Berkeley George Schaller, biologist and conservationist Richard Scheller, former Professor of Biochemistry, Stanford University Steven Schier, Professor of Political Science, Carleton College Bernadotte Everly Schmitt, former President, American Historical Association; former Professor of History, University of Chicago Mark Schorer, former Professor of English, University of California, Berkeley Joan Wallach Scott, Professor of History, Institute for Advanced Study Michael L. Scott, Professor of Computer Science, University of Rochester John Searle, Professor of Philosophy, University of California, Berkeley Robert Serber, former Professor of Physics, Columbia University; scientist on the Manhattan Project Jim G. Shaffer, Professor of Anthropology, Case Western Reserve University Cosma Shalizi, Assistant Professor of Statistics, Carnegie Mellon University Steven Shapin, historian of science, Harvard University Ira Sharkansky, Professor Emeritus of Political Science, Hebrew University of Jerusalem Lauriston Sharp, former Professor of Anthropology, Cornell University Spencer Shaw, former Professor of Library Science, University of Washington Jerome Lee Shneidman, former Professor of History at Adelphi University, specialist in psychohistory Victor Shoup, Professor of Mathematics, New York University Daniel L. Simmons, Professor of Chemistry and Director of the Cancer Research Center, Brigham Young University Brooks D. Simpson, Professor of History, Arizona State University Louis B. Slichter, former Professor of Geophysics, MIT and UCLA Sumner Slichter, former Professor of Economics, Harvard University Ronald Smelser, former Professor of History (University of Utah), Holocaust educator and author of The Myth of the Eastern Front William Cunningham Smith, literature scholar David R. Soll, Professor of Biology, University of Iowa Robert Soucy, Professor Emeritus of History, Oberlin College Roy Spencer, Principal Research Scientist, University of Alabama in Huntsville Clint Sprott, physicist Janet Staiger, Professor of Communication, University of Texas-Austin George Stambolian, former Professor of French, Wellesley College Kenneth M. Stampp, former Professor of History, University of California, Berkeley; taught at Harvard University, Oxford University, University of London, and University of Munich Leon C. Standifer, Professor of Horticulture, Louisiana State University Michael Starbird, Professor of Mathematics, University of Texas-Austin Stephen C. Stearns, Professor of Biology, Yale University Harry Steenbock, biochemist and Vitamin D researcher George Steinmetz (academic), Professor of Sociology, University of Michigan Christopher H. Sterling, Professor of Media and Public Affairs, George Washington University C. Eugene Steuerle, Institute Fellow, Urban Institute Robert Stickgold, Associate Professor of Psychiatry, Harvard University Philip Stieg, Professor and Chairman of Neurosurgery and Weill Medical College and New York-Presbyterian Medical Center Gilbert Stork, Professor Emeritus of Chemistry at Columbia University Murray A. Straus, Sociologist and professor University of New Hampshire, creator of the Conflict tactics scale Jon Strauss, former President of Harvey Mudd College Robert P. 
Strauss, Professor of Economics and Public Policy, Carnegie Mellon University Philip Taft, former Professor of Economics at Brown University Sol Tax, former Professor of Anthropology, University of Chicago Henry Charles Taylor, former Professor of Economics at Northwestern University and the University of Wisconsin-Madison Lily Ross Taylor, former Professor of Classics, University of California, Berkeley and Institute for Advanced Study Paul Schuster Taylor, former Professor of Economics, University of California, Berkeley Larry Temkin, Professor of Philosophy, Princeton University (starting in 2011) Albert M. Ten Eyck, agriculturist and agronomist Earle M. Terry, physicist Victor A. Tiedjens, agricultural scientist at Rutgers University Virginia Tilley, Chief Research Specialist, Human Sciences Research Council (South Africa) Ignacio Tinoco, Jr., Professor of Chemistry, University of California, Berkeley Steve Tittle, Associate Professor of Composition and Theory, Dalhousie University Andrew P. Torrence (M.A. 1951; Ph.D. 1954), President of Tennessee State University (1968-1974); executive vice president and provost of Tuskegee University (1974-1980). Sidney Dean Townley, former Professor of Astronomy, Stanford University Paul M. Treichel, chemist Glenn Thomas Trewartha, geographer Susan Traverso, President of Thiel College, former Provost of Elizabethtown College Jim Trier, Professor of Education, University of North Carolina, Chapel Hill Arleen Tuchman, Professor of History at Vanderbilt University Konrad Tuchscherer, Associate Professor of History and Director of Africana Studies at St. John's University David Tulloch, Associate Professor of Landscape Architecture, Rutgers University Melvin Tumin, former Professor of Sociology, Princeton University Frederick Jackson Turner (1884, MA 1888), historian and professor, Pulitzer Prize winner Joseph Tussman, former Professor of Philosophy, University of California, Berkeley Michael Uebel, professor, author Ruth Hill Useem, former Professor of Sociology, Michigan State University Edwin Vedejs, former professor of chemistry at University of Wisconsin-Madison and University of Michigan Victor Vacquier, former Professor of Geophysics, Scripps Institute of Oceanography, University of California, San Diego Bonita H. Valien, PhD, former professor of Sociology at Fisk University, author of books about school desegregation. Preston Valien, PhD, former professor of Sociology at Fisk University and Brooklyn College; cultural attache in Nigeria. Robert van de Geijn, Professor of Computer Science, University of Texas-Austin Andrew H. Van de Ven, Professor of Organizational Innovation, University of Minnesota Charles Van Hise, former President, University of Wisconsin-Madison Martha Vicinus, Professor of Women's Studies, University of Michigan Julia Grace Wales, former Professor of English, University of Wisconsin-Madison; taught at the University of Cambridge and the University of London Cody Walker, Lecturer in English, University of Michigan John Charles Walker, plant pathologist Hubert Stanley Wall, former mathematician at Northwestern University and the University of Texas-Austin Martin Walt, Professor of Electrical Engineering, Stanford University David Der-wei Wang, Professor of East Asian Languages, Harvard University David Ward, former President, American Council on Education Arthur Waskow, former Resident Fellow, Institute for Policy Studies John Watrous, Associate Professor of Computer Science, David R. 
Cheriton School of Computer Science at the University of Waterloo Oliver Patterson Watts, chemical engineer John Carrier Weaver, former Professor of Geography; former Vice President for Academic Affairs, Ohio State University; former President, University of Wisconsin System Warren Weaver, mathematician, Rockefeller Institute Lee-Jen Wei, Professor of Biostatistics, Harvard University I. Bernard Weinstein, former Professor of Medicine, Columbia University Herman B Wells, former President, Indiana University Norman Wengert, political scientist and professor Peter Wenz, Professor of Philosophy, University of Illinois at Springfield Mark Wessel, former Dean, Carnegie Mellon University Wyatt C. Whitley, Professor of Chemistry, Georgia Institute of Technology John Wilce, former Professor of Medicine, Ohio State University John D. Wiley, former Chancellor, University of Wisconsin-Madison Dallas Willard, former Professor of Philosophy, University of Southern California T. Harry Williams, historian William Appleman Williams, historian of U.S. foreign relations Greg Williamson, Lecturer in English, Johns Hopkins University Linda S. Wilson, President Emerita, Radcliffe College; former Vice President, University of Michigan Christopher Winship, Professor of Sociology, Harvard University Edward Witten, Professor of Physics, Institute for Advanced Study Lawrence S. Wittner, Professor of History, University at Albany, SUNY Julian Wolpert, Professor Emeritus of Geography, Public Affairs, and Urban Planning, Princeton University David Woodward, geographer Joseph Wong, Vice President, International at University of Toronto James Wright, 16th president of Dartmouth College Yang Guanghua, Chinese engineer Y. Lawrence Yao, Professor of Mechanical Engineering, Columbia University Stephen Yenser, Professor of English, UCLA John Milton Yinger, Professor Emeritus of Sociology at Oberlin College Allyn Abbott Young, former Professor of Economics, Harvard University and the University of London Brigitte Young, Professor Emerita of Political Science, University of Münster Hugh Edwin Young, former Chancellor, University of Wisconsin-Madison; former President, University of Wisconsin System Nicholas S. Zeppos, Chancellor of Vanderbilt University Valdis Zeps, former linguist Zheng Xiaocang, Chinese academic administrator Maung Zarni, Burmese educator, academic, and human rights activist noted for his opposition to the violence in Rakhine State and Rohingya genocide Andrew Zimbalist, Professor of Economics, Smith College Norton Zinder, Professor of Microbiology, Rockefeller University Jane Zuengler, Professor of English; linguist See also List of University of Wisconsin–Madison people References Athletics University of Wisconsin-Madison
464009
https://en.wikipedia.org/wiki/Second%20Air%20Force
Second Air Force
The Second Air Force (2 AF; 2d Air Force in 1942) is a USAF numbered air force responsible for conducting basic military and technical training for Air Force enlisted members and non-flying officers. In World War II it was the CONUS unit that defended the Northwestern United States and Upper Great Plains regions, and during the Cold War it was a Strategic Air Command unit with strategic bombers and missiles. Elements of Second Air Force engaged in combat operations during the Korean War and the Vietnam War, as well as Operation Desert Storm.

History
The Northwest Air District of the GHQ Air Force was established on 19 October 1940, activated on 18 December 1940 at McChord Field, and re-designated as 2d Air Force on 26 March 1941. The 5th Bombardment Wing was assigned to Second Air Force until 5 September 1941.

2nd Air Force
On 11 December 1941, four days after the Pearl Harbor attack, 2d Air Force was placed under Western Defense Command. However, on 5 January 1942, it was returned to the Air Force Combat Command (a redesignation of GHQ Air Force made when the United States Army Air Forces was created on 20 June 1941), and later placed directly under Headquarters AAF when Air Force Combat Command was dissolved in March 1942. From December 1941, 2d Air Force organized air defense for the northwest Pacific Ocean coastline of the United States and flew antisubmarine patrols along coastal areas until October 1942. It appears that immediately after 7 December 1941, only the 7th, 17th, 39th and 42d Bombardment Groups under II Bomber Command were available for this duty. In late January 1942, elements of the B-25 Mitchell-equipped 17th Bombardment Group at Pendleton Field, Oregon were reassigned to Columbia Army Air Base, South Carolina, ostensibly to fly antisubmarine patrols off the southeast coast of the United States, but in actuality to prepare for the Doolittle Raid against Japan. In January 1942, the 2d Air Force was withdrawn from the Western Defense Command and assigned the operational training of units, crews, and replacements for bombardment, fighter, and reconnaissance operations. It received graduates from Army Air Forces Training Command flight schools, navigator training, flexible gunnery schools and various technical schools; organized them into newly activated combat groups and squadrons; and conducted operational unit training (OTU) and replacement training (RTU) to prepare groups and replacements for deployment overseas to combat theaters. As the Second Air Force it became predominantly the training organization of B-17 Flying Fortress and B-24 Liberator heavy bombardment groups. Nearly all new heavy bomb groups organized after Pearl Harbor were organized and trained by Second Air Force OTU units, then deployed to combat commands around the world. After most of the heavy bombardment groups had completed OTU training, the Second Air Force conducted replacement training of heavy bombardment combat crews and acquired a new mission of operational and replacement training of very heavy bombardment (B-29 Superfortress) groups and crews. The command was designated the Second Air Force on 18 September 1942. Starting in mid-1943, its training of B-17 and B-24 replacement crews began to be phased out and reassigned to the First, Third and Fourth Air Forces as the command ramped up training of B-29 Superfortress very heavy bombardment groups destined for Twentieth Air Force.
Under the newly organized XX Bomber Command, B-29 aircraft were received from Boeing's manufacturing plants and new combat groups were organized and trained. XX Bomber Command and the first B-29 groups were deployed in December 1943 to airfields in India for Operation Matterhorn operations against Japan. A football team made up of Second Air Force personnel defeated Hardin–Simmons University in the 1943 Sun Bowl. XXI Bomber Command, the second B-29 combat command and control organization, was formed under Second Air Force in March 1944, with its combat groups beginning to deploy to the Mariana Islands in the Western Pacific in December 1944. A third B-29 organization, XXII Bomber Command, was formed by Second Air Force in August 1944; however, it never got beyond forming a headquarters echelon and headquarters squadron, and was inactivated before any operational groups were assigned, as XX Bomber Command units were reassigned from India to the Marianas, eliminating the need for the command. On 13 December 1944, the First, Second, Third and Fourth Air Forces were all placed under the unified command of the Continental Air Forces (CAF), with the numbered air forces becoming subordinate commands of CAF. The training of B-29 groups and replacement personnel continued until August 1945 and the end of the Pacific War. With the war's end, Second Air Force was inactivated on 30 March 1946. In what was effectively a redesignation, the headquarters staff and resources were used to create Fifteenth Air Force, which became the first numbered air force of the new Strategic Air Command ten days later.

Cold War
The command was reactivated on 6 June 1946 under Air Defense Command, at Offutt Air Force Base, and assumed responsibility for the air defense of certain portions of the continental United States. In 1947, the 73d Bomb Wing was reactivated and assigned to Second Air Force, with the 338th and 351st Bombardment Groups, both reserve B-29 Superfortress organizations, assigned to it. A third group, the 381st, was added in 1948. However, SAC was having enough difficulty keeping its front-line active-duty bomb units in the air to maintain even minimal pilot proficiency in the late 1940s, and the wing and its bomb groups were all inactivated in 1949. The Second Air Force was also assigned the reserve 96th Bombardment Wing, which was later redesignated an air division, and several C-46 Commando troop carrier groups under the 322d Troop Carrier Wing, one of which was the 440th Troop Carrier Group. Second Air Force itself was again inactivated on 1 July 1948. The Second Air Force was reactivated and assigned to Strategic Air Command on 1 November 1949 at Barksdale AFB, Louisiana. It drew personnel and equipment from the 311th Air Division, inactivated on the same base on the same day. Initial units of the Second Air Force as part of SAC included:
6th Air Division, MacDill AFB, Florida (assigned 10 February 1951)
305th Bombardment Wing (MacDill AFB) (B-29)
306th Bombardment Wing (MacDill AFB) (B-47A) (initial B-47 Stratojet operational training unit – not on operational alert)
307th Bombardment Wing (MacDill AFB) (B-29), detached for Korean War combat service with Far East Air Forces, Kadena AB, Okinawa
40th Air Division, Turner AFB, Georgia (assigned 14 March 1951)
31st Fighter Escort Wing (Turner AFB) (F-84)
108th Fighter Wing (Turner AFB) (F-47D) (federalized New Jersey Air National Guard wing)
The 37th and 38th Air Divisions joined Second Air Force on 10 October 1951.
The 37th Air Division was responsible for Lockbourne Air Force Base and Lake Charles Air Force Base, and the 38th Air Division was located at Hunter Air Force Base, Georgia. With the end of fighting in Korea, President Eisenhower, who had taken office in January 1953, called for a "new look" at national defense. The result was a greater reliance on nuclear weapons and air power to deter war. His administration chose to invest in the Air Force, especially Strategic Air Command. The nuclear arms race shifted into high gear. The Air Force retired nearly all of its propeller-driven bombers, which were replaced by new Boeing B-47 Stratojet medium jet bombers. By 1955, the Boeing B-52 Stratofortress heavy bomber was entering the inventory in substantial numbers, and as a result Second Air Force grew both in scope and in numbers. After the Korean War, the history of Second Air Force became part of Strategic Air Command's history, as B-47 Stratojet and, later, B-52 Stratofortress and KC-135 Stratotanker aircraft entered SAC's inventory. During the Cold War, Second Air Force aircraft and intercontinental ballistic missiles (ICBMs) stood nuclear alert, providing a deterrent against an attack on the United States by the Soviet Union. In 1966, an order of battle for the force showed units spread across most of the United States, from the 6th Strategic Aerospace Wing at Walker AFB, New Mexico, to the 11th Strategic Aerospace Wing at Altus AFB, Oklahoma, to the 97th Bombardment Wing at Blytheville AFB, Arkansas. During the Vietnam War, squadrons of 2d Air Force B-52 Stratofortresses (primarily B-52Ds, augmented by some B-52Gs) were deployed to bases on Guam and Okinawa and in Thailand to conduct Arc Light bombing attacks on communist forces. The 28th Bombardment Wing was among the units assigned this duty. The 2d Air Force organization was inactivated during the post-Vietnam drawdown, on 1 January 1975; its bomb wings that were not inactivated and its bases that were not closed were redistributed to Eighth Air Force and Fifteenth Air Force. With the end of the Cold War and the restructuring of Strategic Air Command, Second Air Force was reactivated and became the steward for reconnaissance and battlefield management assets, based at Beale AFB, California. This assignment lasted from 1 September 1991 until 1 July 1993, when the command was inactivated by Air Combat Command.

Air Education and Training Command
Second Air Force was reactivated and reassigned on 1 July 1993 to Keesler AFB, Mississippi. Its mission became conducting basic military and technical training for Air Force enlisted members and support officers at five major AETC training bases in the United States. The command's mission is to train mission-ready graduates to support combat readiness and to build 'the world's most respected air, space, and cyberspace force'. To carry out this mission, Second Air Force manages all operational aspects of nearly 5,000 active training courses taught to approximately 250,000 students annually in technical training, basic military training, medical and distance learning courses. Training operations across Second Air Force range from intelligence to computer operations to space and missile operations and maintenance. The first stop for all Air Force, Air National Guard and Air Force Reserve enlisted airmen is basic military training (BMT) at Lackland AFB, Texas.
After completing BMT, airmen begin technical training in their career field specialties, primarily at five installations: Goodfellow, Lackland, and Sheppard Air Force bases in Texas; Keesler AFB, Mississippi; and Vandenberg AFB, California. Each base is responsible for a specific portion of the formal technical training airmen require to accomplish the Air Force mission. Instructors conduct technical training in specialties such as enlisted aviator, aircraft maintenance, civil engineering, medical services, computer systems, security forces, air traffic control, personnel, intelligence, fire fighting, and space and missile operations. Commissioned officers attend technical training courses for similar career fields at the same locations. Wings and groups under Second Air Force are:
37th Training Wing, Lackland Air Force Base, Texas – provides Basic Military Training to Air Force recruits as well as technical training in the career enlisted aviator, logistics, and Security Forces career fields.
81st Training Wing, Keesler Air Force Base, Mississippi – provides training in aviation resource management, weather, basic electronics, communications electronic systems, communications computer systems, air traffic control, airfield management, command post, air weapons control, precision measurement, education and training, financial management and comptroller, information management, and manpower and personnel.
17th Training Wing, Goodfellow Air Force Base, Texas – provides training in intelligence and firefighting career fields. Also provides training to Army, Navy and Marine detachments.
82d Training Wing, Sheppard Air Force Base, Texas – provides specialized technical training, medical training, and field training for officers, Airmen, and civilians of all branches of the military, other DoD agencies, and foreign nationals.
381st Training Group, Vandenberg AFB, California – provides qualification training for intercontinental ballistic missile (ICBM), space surveillance, missile warning, spacelift, and satellite command and control operators. It also performs initial and advanced maintenance training on air-launched missiles (ALM) and ICBMs. It conducts training in joint space fundamentals and associated computer maintenance. The group also conducts qualification and orientation training for Air Force Space Command (AFSPC) staff and senior-level personnel, as well as instructor enhancement in support of operational units.
602d Training Group, Keesler Air Force Base, Mississippi – provides fully combat mission capable Airmen to all Combatant Commanders in direct support of the Joint Expeditionary Tasking (JET) mission.
In 2006, Second Air Force was assigned the responsibility of coordinating training for Joint Expeditionary Tasked (JET) Airmen. These Airmen are assigned to perform traditional US Army duties in Iraq, Afghanistan and the Horn of Africa. An Expeditionary Mission Support Group was formed to provide command and control of these JET Airmen as they are trained at US Army Power Projection Platforms across the US prior to deploying to their assigned Area of Responsibility (AOR). This group has been named the 602d Training Group. In 2007, Second Air Force was given responsibility to provide curricula and advice to the Iraqi Air Force as it stands up its own technical training and branch-specific basic training, among others. This mission is known as "CAFTT", for Coalition Air Forces Technical Training.
Lineage
Established as Northwest Air District on 19 October 1940
Activated on 18 December 1940
Re-designated 2d Air Force on 26 March 1941
Re-designated Second Air Force on 18 September 1942
Inactivated on 30 March 1946
Activated on 6 June 1946
Inactivated on 1 July 1948
Activated on 1 November 1949
Inactivated on 1 January 1975
Activated on 1 September 1991
Inactivated on 1 July 1993
Activated on 1 July 1993

Assignments
General Headquarters Air Force (later, Air Force Combat Command), 18 December 1940
Western Defense Command, 11 December 1941
Air Force Combat Command (later, United States Army Air Forces), 5 January 1942
Continental Air Forces, 13 December 1944
Strategic Air Command, 21 March 1946 – 30 March 1946
Air Defense Command, 6 June 1946 – 1 July 1948
Strategic Air Command, 1 November 1949 – 1 January 1975; 1 September 1991 – July 1993
Air Combat Command, 1 June 1992 – 1 July 1993
Air Education and Training Command, 1 July 1993

Stations
McChord Field, Washington, 18 December 1940
Fort George Wright, Washington, 9 January 1941
Colorado Springs AAF, Colorado, 13 June 1943 – 30 March 1946
Fort Crook, Nebraska, 6 June 1946 – 1 July 1948
Barksdale AFB, Louisiana, 1 November 1949 – 1 January 1975
Beale AFB, California, 1 September 1991 – 1 July 1993
Keesler AFB, Mississippi, 1 July 1993 –

Components
Commands
I Bomber Command: 1 May – 6 October 1943; redesignated XX Bomber Command: 20 November 1943 – 29 June 1944
2d Air Force Service (later, 2d Air Force Base) Command: 1 October 1941 – 20 May 1942
2d Air Support (later, 2d Ground Air Support; II Air Support) Command: 1 September 1941 – 25 January 1943
2d Bomber (later, II Bomber) Command: 5 September 1941 – 6 October 1943
4th Air Support (later, IV Air Support) Command: 12 August 1942 – 21 January 1943
XXI Bomber Command: 1 March 1944 – 9 November 1944
XXII Bomber Command: 14 August 1944 – 13 February 1945

Divisions
4th Air (later, 4th Strategic Aerospace): 16 June 1952 – 31 March 1970
6th Air: 10 February 1951 – 16 June 1952; 16 June 1952 – 1 January 1959
17th Air (later, 17th Strategic Aerospace): 15 July 1959 – 1 July 1963
19th Air: 1 July 1955 – 1 January 1975
21st Air (later, 21st Strategic Aerospace): 1 January 1959 – 1 September 1964
22d Air: 15 July 1959 – 9 September 1960
37th Air: 10 October 1951 – 28 May 1952
38th Air: 10 October 1951 – 16 June 1952; 16 June 1952 – 1 January 1959
40th Air: 14 March 1951 – 1 July 1952; 1 July 1952 – 1 April 1957; 1 July 1959 – 1 January 1975
42d Air (later, 42d Strategic Aerospace; 42d Air): 1 April 1955 – 1 July 1957; 15 July 1959 – 2 July 1969; 1 January 1970 – 1 January 1975
45th Air: 31 March 1970 – 1 January 1975
47th Air: 31 March 1970 – 1 July 1971
73d Air (formerly, 73d Bombardment Wing): 12 June 1947 – 1 July 1948
96th Air (formerly, 96th Bombardment Wing): 12 June 1947 – 1 July 1948
322d Air (formerly, 322d Troop Carrier Wing): 12 June 1947 – 1 July 1948
801st Air: 28 May 1952 – 1 July 1955
806th Air: 16 June 1952 – 15 June 1960
810th Strategic Aerospace: 1 July 1963 – 2 July 1966
813th Air: 15 July 1954 – 1 June 1956
816th Air (later, 816th Strategic Aerospace): 1 July 1958 – 1 July 1965
817th Air: 31 March 1970 – 30 June 1971
818th Air (later, 818th Strategic Aerospace): 1 January 1959 – 25 March 1965
819th Strategic Aerospace: 1 July 1965 – 2 July 1966
823d Air: 1 June 1956 – 1 January 1959; 31 March 1970 – 30 June 1971
825th Air (later, 825th Strategic Aerospace): 1 August 1955 – 1 January 1970
Wings
5th Bombardment Wing: 18 December 1940 – 1 September 1941
11th Pursuit Wing: 18 December 1940 – 1 October 1941
15th Bombardment Training Wing: 23 June 1942 – 6 April 1946 (ceased all activity February 1945)
20th Bombardment Wing: 18 December 1940 – 1 September 1941

Squadrons
393rd Bombardment Squadron, Very Heavy: 25 November – 17 December 1944

List of commanders

References
Maurer, Maurer (1983). Air Force Combat Units of World War II. Maxwell AFB, Alabama: Office of Air Force History.
Ravenstein, Charles A. (1984). Air Force Combat Wings Lineage and Honors Histories 1947–1977. Maxwell AFB, Alabama: Office of Air Force History.

External links
Second Air Force Factsheet
News stories:
Air Force News: http://www.aetc.af.mil/News/ArticleDisplay/tabid/5115/Article/263318/army-air-force-leaders-examine-in-lieu-of-training.aspx
Air Force News: http://www.aetc.af.mil/News/ArticleDisplay/tabid/5115/Article/263675/ilo-training-prepares-airmen-to-serve-in-combat-operations.aspx
Air Force News (change of command): https://archive.today/20121212040908/http://www.af.mil/news/story.asp?id=123166976
Presentation to the Subcommittee on Readiness, Committee on Armed Services, United States House of Representatives: http://www.airforcemag.com/testimony/Documents/2007/July%202007/073107Gibson.pdf
Air Force Historical Research Agency, index card for 2 Air Force history, 1941 02 02
149354
https://en.wikipedia.org/wiki/Information%20science
Information science
Information science (also known as information studies) is an academic field which is primarily concerned with analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study the application and the usage of knowledge in organizations, in addition to the interaction between people, organizations, and any existing information systems, with the aim of creating, replacing, improving, or understanding information systems. Historically, information science is associated with computer science, data science, psychology, technology, library science, healthcare, and intelligence agencies. However, information science also incorporates aspects of diverse fields such as archival science, cognitive science, commerce, law, linguistics, museology, management, mathematics, philosophy, public policy, and social sciences. Foundations Scope and approach Information science focuses on understanding problems from the perspective of the stakeholders involved and then applying information and other technologies as needed. In other words, it tackles systemic problems first rather than individual pieces of technology within that system. In this respect, one can see information science as a response to technological determinism, the belief that technology "develops by its own laws, that it realizes its own potential, limited only by the material resources available and the creativity of its developers. It must therefore be regarded as an autonomous system controlling and ultimately permeating all other subsystems of society." Many universities have entire colleges, departments or schools devoted to the study of information science, while numerous information-science scholars work in disciplines such as communication, healthcare, computer science, law, and sociology. Several institutions have formed an I-School Caucus (see List of I-Schools), but numerous others besides these also have comprehensive information foci. Within information science, current issues include: Human–computer interaction for science Groupware The Semantic Web Value sensitive design Iterative design processes The ways people generate, use and find information Definitions The first known usage of the term "information science" was in 1955. An early definition of information science (going back to 1968, the year when the American Documentation Institute renamed itself the American Society for Information Science, later the American Society for Information Science and Technology) states: "Information science is that discipline that investigates the properties and behavior of information, the forces governing the flow of information, and the means of processing information for optimum accessibility and usability. It is concerned with that body of knowledge relating to the origination, collection, organization, storage, retrieval, interpretation, transmission, transformation, and utilization of information. This includes the investigation of information representations in both natural and artificial systems, the use of codes for efficient message transmission, and the study of information processing devices and techniques such as computers and their programming systems. It is an interdisciplinary science derived from and related to such fields as mathematics, logic, linguistics, psychology, computer technology, operations research, the graphic arts, communications, management, and other similar fields. 
It has both a pure science component, which inquires into the subject without regard to its application, and an applied science component, which develops services and products." (Borko, 1968, p. 3). Related terms Some authors use informatics as a synonym for information science. This is especially true when related to the concept developed by A. I. Mikhailov and other Soviet authors in the mid-1960s. The Mikhailov school saw informatics as a discipline related to the study of scientific information. Informatics is difficult to define precisely because of the rapidly evolving and interdisciplinary nature of the field. Definitions reliant on the nature of the tools used for deriving meaningful information from data are emerging in Informatics academic programs. Regional differences and international terminology complicate the problem. Some people note that much of what is called "Informatics" today was once called "Information Science" – at least in fields such as Medical Informatics. For example, when library scientists began also to use the phrase "Information Science" to refer to their work, the term "informatics" emerged: in the United States, as a response by computer scientists to distinguish their work from that of library science; and in Britain, as a term for a science of information that studies natural, as well as artificial or engineered, information-processing systems. Another term discussed as a synonym for "information studies" is "information systems". Brian Campbell Vickery's Information Systems (1973) placed information systems within IS. Ellis, Allen, & Wilson (1999), on the other hand, provided a bibliometric investigation describing the relation between two different fields: "information science" and "information systems". Philosophy of information Philosophy of information studies conceptual issues arising at the intersection of psychology, computer science, information technology, and philosophy. It includes the investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, as well as the elaboration and application of information-theoretic and computational methodologies to its philosophical problems. Ontology In computer science and information science, an ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts. It can be used to reason about the entities within that domain and may be used to describe the domain. More specifically, an ontology is a model for describing the world that consists of a set of types, properties, and relationship types. Exactly what is provided around these varies, but they are the essentials of an ontology. There is also generally an expectation that there be a close resemblance between the real world and the features of the model in an ontology. In theory, an ontology is a "formal, explicit specification of a shared conceptualisation". An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations. 
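To make the idea concrete, here is a minimal, hypothetical sketch in PHP (the concept names, parent links and properties are invented for illustration and do not follow any particular ontology language such as OWL): each concept declares its parent concepts and its own properties, and a trivial reasoner derives what a concept inherits from its ancestors.

<?php
// A toy ontology: each concept lists its parent concepts and its own properties.
// (All names here are invented for the example.)
$ontology = [
    'Document' => ['parents' => [],           'properties' => ['title']],
    'Journal'  => ['parents' => ['Document'], 'properties' => ['issn']],
    'Article'  => ['parents' => ['Document'], 'properties' => ['doi']],
    'Preprint' => ['parents' => ['Article'],  'properties' => ['repository']],
];

// Collect all ancestor concepts of a concept by walking its parent links.
function ancestors(array $ontology, string $concept): array {
    $result = [];
    foreach ($ontology[$concept]['parents'] as $parent) {
        $result = array_merge($result, [$parent], ancestors($ontology, $parent));
    }
    return array_values(array_unique($result));
}

// A concept's full set of properties: its own plus those inherited from its ancestors.
function allProperties(array $ontology, string $concept): array {
    $props = $ontology[$concept]['properties'];
    foreach (ancestors($ontology, $concept) as $ancestor) {
        $props = array_merge($props, $ontology[$ancestor]['properties']);
    }
    return array_values(array_unique($props));
}

print_r(ancestors($ontology, 'Preprint'));     // Article, Document
print_r(allProperties($ontology, 'Preprint')); // repository, doi, title

Real ontologies add far richer relationship types, constraints and axioms than the single is-a hierarchy used here, but the same pattern of explicit concepts, relations and rule-based inference underlies them.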
Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. Careers Information scientist An information scientist is an individual, usually with a relevant subject degree or high level of subject knowledge, who provides focused information to scientific and technical research staff in industry or to subject faculty and students in academia. The industry information specialist/scientist and the academic information subject specialist/librarian have, in general, similar subject background training, but the academic position holder will be required to hold a second advanced degree (e.g., an MLS, MI or MA in IS) in information and library studies in addition to a subject master's. The title also applies to an individual carrying out research in information science. Systems analyst A systems analyst works on creating, designing, and improving information systems for a specific need. Often systems analysts work with one or more businesses to evaluate and implement organizational processes and techniques for accessing information in order to improve efficiency and productivity within the organization(s). Information professional An information professional is an individual who preserves, organizes, and disseminates information. Information professionals are skilled in the organization and retrieval of recorded knowledge. Traditionally, their work has been with print materials, but these skills are being increasingly used with electronic, visual, audio, and digital materials. Information professionals work in a variety of public, private, non-profit, and academic institutions, and can also be found within organisational and industrial contexts, performing roles that include system design, development, and analysis. History Early beginnings Information science, in studying the collection, classification, manipulation, storage, retrieval and dissemination of information, has origins in the common stock of human knowledge. Information analysis has been carried out by scholars at least as early as the time of the Assyrian Empire with the emergence of cultural depositories, what are today known as libraries and archives. Institutionally, information science emerged in the 19th century along with many other social science disciplines. As a science, however, it finds its institutional roots in the history of science, beginning with publication of the first issues of Philosophical Transactions, generally considered the first scientific journal, in 1665 by the Royal Society (London). The institutionalization of science occurred throughout the 18th century. In 1731, Benjamin Franklin established the Library Company of Philadelphia, the first library owned by a group of public citizens, which quickly expanded beyond the realm of books and became a center of scientific experimentation, and which hosted public exhibitions of scientific experiments. Benjamin Franklin also donated a collection of books to a town in Massachusetts, which the town voted to make available to all free of charge, forming the first public library of the United States. 
Academie de Chirurgia (Paris) published Memoires pour les Chirurgiens, generally considered to be the first medical journal, in 1736. The American Philosophical Society, patterned on the Royal Society (London), was founded in Philadelphia in 1743. As numerous other scientific journals and societies were founded, Alois Senefelder developed the concept of lithography for use in mass printing work in Germany in 1796. 19th century By the 19th century the first signs of information science emerged as separate and distinct from other sciences and social sciences but in conjunction with communication and computation. In 1801, Joseph Marie Jacquard invented a punched card system to control operations of the cloth weaving loom in France. It was the first use of a "memory storage of patterns" system. As chemistry journals emerged throughout the 1820s and 1830s, Charles Babbage developed his "difference engine," the first step towards the modern computer, in 1822 and his "analytical engine" by 1834. By 1843 Richard Hoe developed the rotary press, and in 1844 Samuel Morse sent the first public telegraph message. By 1848 William F. Poole began the Index to Periodical Literature, the first general periodical literature index in the US. In 1854 George Boole published An Investigation of the Laws of Thought..., which laid the foundations for Boolean algebra, later used in information retrieval. In 1860 a congress was held at Karlsruhe Technische Hochschule to discuss the feasibility of establishing a systematic and rational nomenclature for chemistry. The congress did not reach any conclusive results, but several key participants returned home with Stanislao Cannizzaro's outline (1858), which ultimately convinced them of the validity of his scheme for calculating atomic weights. By 1865, the Smithsonian Institution began a catalog of current scientific papers, which became the International Catalogue of Scientific Papers in 1902. The following year the Royal Society began publication of its Catalogue of Papers in London. In 1868, Christopher Sholes, Carlos Glidden, and S. W. Soule produced the first practical typewriter. By 1872 Lord Kelvin devised an analogue computer to predict the tides, and by 1875 Frank Stephen Baldwin was granted the first US patent for a practical calculating machine that performed four arithmetic functions. Alexander Graham Bell and Thomas Edison invented the telephone and phonograph in 1876 and 1877 respectively, and the American Library Association was founded in Philadelphia. In 1879 Index Medicus was first issued by the Library of the Surgeon General, U.S. Army, with John Shaw Billings as librarian, and the library later issued the Index Catalogue, which achieved an international reputation as the most complete catalog of medical literature. European documentation The discipline of documentation science, which marks the earliest theoretical foundations of modern information science, emerged in the late part of the 19th century in Europe together with several more scientific indexes whose purpose was to organize scholarly literature. Many information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science with the founding of the International Institute of Bibliography (IIB) in 1895. A second generation of European Documentalists emerged after the Second World War, most notably Suzanne Briet. However, "information science" as a term was not popularly used in academia until the latter part of the 20th century. 
Documentalists emphasized the utilitarian integration of technology and technique toward specific social goals. According to Ronald Day, "As an organized system of techniques and technologies, documentation was understood as a player in the historical development of global organization in modernity – indeed, a major player inasmuch as that organization was dependent on the organization and transmission of information." Otlet and La Fontaine (the latter won the Nobel Peace Prize in 1913) not only envisioned later technical innovations but also projected a global vision for information and information technologies that speaks directly to postwar visions of a global "information society". Otlet and La Fontaine established numerous organizations dedicated to standardization, bibliography, international associations, and consequently, international cooperation. These organizations were fundamental for ensuring international production in commerce, information, communication and modern economic development, and they later found their global form in such institutions as the League of Nations and the United Nations. Otlet designed the Universal Decimal Classification, based on Melvil Dewey's decimal classification system. Although he lived decades before computers and networks emerged, what he discussed prefigured what ultimately became the World Wide Web. His vision of a great network of knowledge focused on documents and included the notions of hyperlinks, search engines, remote access, and social networks. Otlet not only imagined that all the world's knowledge should be interlinked and made available remotely to anyone, but he also proceeded to build a structured document collection. This collection involved standardized paper sheets and cards filed in custom-designed cabinets according to a hierarchical index (which culled information worldwide from diverse sources) and a commercial information retrieval service (which answered written requests by copying relevant information from index cards). Users of this service were even warned if their query was likely to produce more than 50 results per search. By 1937 documentation had formally been institutionalized, as evidenced by the founding of the American Documentation Institute (ADI), later called the American Society for Information Science and Technology. Transition to modern information science With the 1950s came increasing awareness of the potential of automatic devices for literature searching and information storage and retrieval. As these concepts grew in magnitude and potential, so did the variety of information science interests. By the 1960s and 70s, there was a move from batch processing to online modes, from mainframe to mini and microcomputers. Additionally, traditional boundaries among disciplines began to fade and many information science scholars joined with other programs. They further made themselves multidisciplinary by incorporating disciplines in the sciences, humanities and social sciences, as well as other professional programs, such as law and medicine, in their curricula. By the 1980s, large databases, such as Grateful Med at the National Library of Medicine, and user-oriented services such as Dialog and Compuserve, were for the first time accessible by individuals from their personal computers. The 1980s also saw the emergence of numerous special interest groups to respond to the changes. 
By the end of the decade, special interest groups existed for non-print media, the social sciences, energy and the environment, and community information systems. Today, information science largely examines technical bases, social consequences, and theoretical understanding of online databases, widespread use of databases in government, industry, and education, and the development of the Internet and World Wide Web. Information dissemination in the 21st century Changing definition Dissemination has historically been interpreted as unilateral communication of information. With the advent of the internet, and the explosion in popularity of online communities, "social media has changed the information landscape in many respects, and creates both new modes of communication and new types of information", changing the interpretation of the definition of dissemination. The nature of social networks allows for faster diffusion of information than through organizational sources. The internet has changed the way we view, use, create, and store information; now it is time to re-evaluate the way we share and spread it. Impact of social media on people and industry Social media networks provide an open information environment for the mass of people who have limited time or access to traditional outlets of information diffusion; this is an "increasingly mobile and social world [that] demands...new types of information skills". Social media integration as an access point is a very useful and mutually beneficial tool for users and providers. All major news providers have visibility and an access point through networks such as Facebook and Twitter, maximizing their breadth of audience. Through social media people are directed to, or provided with, information by people they know. The ability to "share, like, and comment on...content" extends the reach farther and wider than traditional methods. People like to interact with information; they enjoy including the people they know in their circle of knowledge. Sharing through social media has become so influential that publishers must "play nice" if they desire to succeed. It is often mutually beneficial for publishers and Facebook to "share, promote and uncover new content" to improve the experiences of both user bases. The impact of popular opinion can spread in unimaginable ways. Social media allows interaction through simple-to-learn and easily accessible tools; The Wall Street Journal offers an app through Facebook, and The Washington Post goes a step further and offers an independent social app that was downloaded by 19.5 million users in 6 months, showing how interested people are in this new way of being provided with information. Social media's power to facilitate topics The connections and networks sustained through social media help information providers learn what is important to people. The connections people have throughout the world enable the exchange of information at an unprecedented rate. It is for this reason that these networks have come to be recognized for the potential they provide. "Most news media monitor Twitter for breaking news", and news anchors frequently ask the audience to tweet pictures of events. The users and viewers of the shared information have earned "opinion-making and agenda-setting power". This channel has been recognized for the usefulness of providing targeted information based on public demand. Research vectors and applications The following areas are some of those that information science investigates and develops. 
Information access Information access is an area of research at the intersection of Informatics, Information Science, Information Security, Language Technology, and Computer Science. The objectives of information access research are to automate the processing of large and unwieldy amounts of information and to simplify users' access to it. A related concern is the assignment of privileges and the restriction of access for unauthorized users; the extent of access granted should correspond to the level of clearance assigned to the information. Applicable technologies include information retrieval, text mining, text editing, machine translation, and text categorisation. In discussion, information access is often defined as concerning the assurance of free and open (public) access to information, and is brought up in discussions on copyright, patent law, and the public domain. Public libraries need resources to provide knowledge of information assurance. Information architecture Information architecture (IA) is the art and science of organizing and labelling websites, intranets, online communities and software to support usability. It is an emerging discipline and community of practice focused on bringing together principles of design and architecture to the digital landscape. Typically it involves a model or concept of information which is used and applied to activities that require explicit details of complex information systems. These activities include library systems and database development. Information management Information management (IM) is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. This sometimes involves those who have a stake in, or a right to, that information. Management means the organization of and control over the structure, processing and delivery of information. Throughout the 1970s this was largely limited to files, file maintenance, and the life cycle management of paper-based files, other media and records. With the proliferation of information technology starting in the 1970s, the job of information management took on a new light and also began to include the field of data maintenance. Information retrieval Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. Automated information retrieval systems are used to reduce what has been called "information overload". Many universities and public libraries use IR systems to provide access to books, journals and other documents. Web search engines are the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy. An object is an entity that is represented by information in a database. User queries are matched against the database information. Depending on the application the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata. 
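As a rough illustration of the retrieval step (a toy sketch only: the three-sentence "documents" are invented, and a simple count of matching query terms stands in for a real relevance model), the following PHP fragment builds an inverted index and ranks documents against a query.

<?php
// A toy document collection; in a real IR system these would typically be
// document surrogates or metadata rather than full texts.
$documents = [
    1 => 'information retrieval ranks documents by relevance',
    2 => 'an ontology models concepts and their relations',
    3 => 'web search engines are information retrieval applications',
];

// Build an inverted index: term => ids of the documents that contain it.
$index = [];
foreach ($documents as $id => $text) {
    foreach (array_unique(explode(' ', strtolower($text))) as $term) {
        $index[$term][] = $id;
    }
}

// Score each document by how many query terms it contains, then rank by score.
function search(array $index, string $query): array {
    $scores = [];
    foreach (explode(' ', strtolower($query)) as $term) {
        foreach ($index[$term] ?? [] as $id) {
            $scores[$id] = ($scores[$id] ?? 0) + 1;
        }
    }
    arsort($scores);   // highest-scoring document first
    return $scores;
}

print_r(search($index, 'information retrieval engines'));
// [3 => 3, 1 => 2]: document 3 matches all three query terms, document 1 matches two.

Production systems replace the raw term count with weighting schemes such as TF-IDF or probabilistic models and operate over far larger indexes, but the index-then-score-then-rank structure is the same.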
Most IR systems compute a numeric score for how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query. Information seeking Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Information seeking is related to, but different from, information retrieval (IR). Much library and information science (LIS) research has focused on the information-seeking practices of practitioners within various fields of professional work. Studies have been carried out into the information-seeking behaviors of librarians, academics, medical professionals, engineers and lawyers (among others). Much of this research has drawn on the work done by Leckie, Pettigrew (now Fisher) and Sylvain, who in 1996 conducted an extensive review of the LIS literature (as well as the literature of other academic fields) on professionals' information seeking. The authors proposed an analytic model of professionals' information seeking behaviour, intended to be generalizable across the professions, thus providing a platform for future research in the area. The model was intended to "prompt new insights... and give rise to more refined and applicable theories of information seeking" (1996, p. 188). The model has been adapted by Wilkinson (2001), who proposes a model of the information seeking of lawyers. Recent studies on this topic address the concept of information gathering, which "provides a broader perspective that adheres better to professionals' work-related reality and desired skills" (Solomon & Bronstein, 2021). Information society An information society is a society where the creation, distribution, diffusion, use, integration and manipulation of information is a significant economic, political, and cultural activity. The aim of an information society is to gain competitive advantage internationally, through using IT in a creative and productive way. The knowledge economy is its economic counterpart, whereby wealth is created through the economic exploitation of understanding. People who have the means to partake in this form of society are sometimes called digital citizens. Basically, an information society is the means of getting information from one place to another (Wark, 1997, p. 22). As technology has become more advanced over time, so too have the ways we have adapted to share this information with each other. Information society theory discusses the role of information and information technology in society, the question of which key concepts should be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology. The information society has become a popular subject of research and study, and many initiatives are committed to providing an opportunity for anyone to pursue a free, self-directed "re-education" in cyberspace; part of that commitment is to network the research and information that creative individuals and innovative institutions make available there. Knowledge representation and reasoning Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. 
The KR can be made to be independent of the underlying knowledge model or knowledge base system (KBS), such as a semantic network. Knowledge Representation (KR) research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics of how reasoning functions should be applied to the symbols in the KR system. Logic is also used to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. The logic is interpretation theory. These elements—symbols, operators, and interpretation theory—are what give sequences of symbols meaning within a KR. See also Computer and information science Category:Information science journals Outline of information science Outline of information technology References Further reading Borko, H. (1968). Information science: What is it? American Documentation, 19(1), 3–5. External links ASIS&T, a "professional association that bridges the gap between information science practice and research. ASIS&T members represent the fields of information science, computer science, linguistics, management, librarianship, engineering, data science, information architecture, law, medicine, chemistry, education, and related technology". Knowledge Map of Information Science Journal of Information Science Digital Library of Information Science and Technology open access archive for the Information Sciences Current Information Science Research at U.S. Geological Survey Introduction to Information Science The Nitecki Trilogy Information science at the University of California at Berkeley in the 1960s: a memoir of student days Chronology of Information Science and Technology LIBRES – Library and Information Science Research Electronic Journal Shared decision-making Information Library science
3698793
https://en.wikipedia.org/wiki/Deuce%20Lutui
Deuce Lutui
Taitusi "Deuce" Lutui (born May 4, 1983) is a Tongan-born former American football player who was a guard in the National Football League (NFL) for seven seasons. He played college football for the University of Southern California (USC), and received consensus All-American honors. He was selected by the Arizona Cardinals in the second round of the 2006 NFL Draft, and also played for the NFL's Tennessee Titans. One of several Tongans who have played in the NFL, he is a cousin of Vai Sikahema, the first Tongan ever to play in the NFL. Early life Lutui was born in Haʻapai in the Pacific island nation of Tonga. One of six siblings, he is a younger cousin of Vai Sikahema, who became the first Tongan to play in the National Football League when he joined the Arizona Cardinals as a kick returner in 1986. When Lutui was a few months old, his father, Inoke Lutui, moved the family to the United States, settling in Mesa, Arizona, a suburb of Phoenix. When he was six years old, Lutui survived a car accident that killed one of his sisters and left his father permanently disabled. As a teenager, Lutui had to work to support his family. Lutui attended Mesa High School, where he played two-way lineman for the school's football team. In 2001, he was named Super Prep All-Farwest, Prep Star All-West, All-State, all-region and all-conference as a two-way lineman for Mesa High School. College career After high school Lutui signed with the University of Southern California to play for the USC Trojans football team. However, he failed to qualify for admission to the school. Instead, he spent a year at Mesa Community College, where he played for the school's junior college football team. He then transferred to Snow College in Ephraim, Utah, where he received a number of junior college football honors. By 2004, he had improved his grades enough to enroll at USC. Lutui was a first-team All-Pac-10 selection and a consensus first-team All-American at guard for the Trojans in 2005. At 370 pounds, he was the heaviest USC Trojans player of all-time. Professional career Arizona Cardinals Lutui was selected in the second round (41st overall) of the 2006 NFL draft by the Arizona Cardinals, where he is reunited with former USC teammate Matt Leinart. As a rookie, Lutui started 9 games. In 2007, he started 15 games. In 2008, he started 16 games while the Cardinals won the NFC West with a 9-7 record and was part of an offensive line that allowed Cardinals quarterback Kurt Warner to break single season records in completions and touchdown passes. They then had a surprising playoff run getting to Super Bowl XLIII. During the Super Bowl, the Pittsburgh Steelers's offensive line featured Chris Kemoeatu, the other Tongan in the NFL. In 2009, he started 16 games as the Cardinals once again became NFC West champions with a 10-6 record. The Cincinnati Bengals signed him to a two-year deal on July 29, 2011. On July 31, 2011, Kent Somers of The Arizona Republic revealed that Lutui failed his physical with the Bengals, and had re-signed with the Arizona Cardinals for one year. Seattle Seahawks Lutui signed with the Seattle Seahawks on April 6, 2012. Lutui was released by the Seahawks on August 26, 2012 at the end of training camp. Tennessee Titans The Tennessee Titans signed Lutui to a one-year deal on September 10, 2012. He went on to start eight games at right guard. In March 2013, he was suspended four games by the NFL for violating the league's policy on performance-enhancing substances. 
Personal life Lutui is a devout member of The Church of Jesus Christ of Latter-Day Saints. He is also an Eagle Scout. In 2004, he married the former Puanani Heimuli; they have six children. On July 2, 2010, Lutui became a naturalized citizen of the United States. Lutui follows a semi-vegan diet to help control his weight. References 1983 births Living people All-American college football players American football offensive guards Arizona Cardinals players Seattle Seahawks players Tennessee Titans players Snow Badgers football players Tongan emigrants to the United States USC Trojans football players Players of American football from Arizona Latter Day Saints from Arizona Tongan Latter Day Saints Mesa High School alumni
29893463
https://en.wikipedia.org/wiki/Phaenops
Phaenops
In Greek mythology, the name Phaenops (Ancient Greek: Φαίνοπος) refers to three characters who are all associated with Troy and the Trojan War: Phaenops, father of Xanthus and Thoon, who were slain by Diomedes. He was an old man by the time the Trojan War began, and had no other sons and heirs except these two. Phaenops, father of Phorcys, from Phrygia. Phaenops, son of Asius, grandson of Dymas and brother of Adamas. He was a resident of Abydus and the best guest-friend of Hector. Apollo, at one point, assumed the form of this Phaenops to appear in front of Hector. Notes References Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library. Trojans People of the Trojan War Characters in Greek mythology
14124747
https://en.wikipedia.org/wiki/Lobotomy%20Software
Lobotomy Software
Lobotomy Software, Inc. was an American video game developer which ported Quake and Duke Nukem 3D to the Sega Saturn and developed PowerSlave (titled Exhumed in Europe). History Lobotomy Software was founded in 1993, when a group of friends working at Nintendo of America left to form their own company, becoming the creative department of Lobotomy, with the programmers coming from Manley & Associates (a developer acquired by Electronic Arts in 1996, renamed Electronic Arts Seattle, and shut down in 2002). They originally worked out of co-founder Paul Lange's apartment, but after a few months set up an office in Redmond, Washington. The team began working on various game demos, one of which later became the first-person shooter PowerSlave. Shortly after PowerSlave was released, Sega secured the rights from GT Interactive to publish Duke Nukem 3D and Quake. Sega originally handed the projects to two other developers, but were unhappy with their work. Deadlines for the two games were set just a few months apart, and as such their development considerably overlapped. The Sega Saturn ports of Quake and Duke Nukem 3D both use the SlaveDriver engine Lobotomy created for the console versions of PowerSlave. Lobotomy Software had ported Quake to the Sony PlayStation, but could not find a publisher, which exacerbated their financial troubles. In 1998, Lobotomy Software was acquired by Crave Entertainment and renamed "Lobotomy Studios." The team worked on a Caesars Palace gambling game for the Nintendo 64, but after a year of development, the game was postponed and eventually cancelled. At that point, Lobotomy Studios was closed and employees were let go or given the option to be relocated to another position at Crave Entertainment. The next title that the team would have worked on was a sequel to PowerSlave simply titled "PowerSlave 2," which was going to be a third-person shooter and use a different game engine. Games PowerSlave (September 26, 1996) Sega Saturn, MS-DOS, PlayStation. Includes hidden mini-game Death Tank. Quake (October 31, 1997) port to Saturn Duke Nukem 3D (October 31, 1997) port to Saturn. Includes hidden mini-game Death Tank Zwei. References External links Official website PowerSlave on Playmates Lobotomy Software at MobyGames Video game development companies Defunct video game companies of the United States Software companies based in Washington (state) Defunct companies based in Washington (state) Defunct companies based in Redmond, Washington Video game companies established in 1993 Video game companies disestablished in 1998
51444569
https://en.wikipedia.org/wiki/Intelligence%20Act%20%28France%29
Intelligence Act (France)
The French Intelligence Act of 24 July 2015 (French: loi relative au renseignement) is a statute passed by the French Parliament. The law creates a new chapter in the Code of Internal Security aimed at regulating the surveillance programs of French intelligence agencies, in particular those of the DGSI (domestic intelligence) and the DGSE (foreign intelligence). History The Intelligence Bill was introduced to the Parliament on 19 March 2015 by French Prime Minister Manuel Valls and presented as the government's reaction to the Charlie Hebdo shooting. Despite widespread mobilization, the Bill was adopted with 438 votes in favor, 86 against and 42 abstentions at the National Assembly and 252 for, 67 against and 26 abstentions at the Senate. It was made into law on 24 July 2015. Although framed by the government as a response to the Paris attacks of January 2015, the passage of the Intelligence Act was actually long in the making. The previous law providing a framework for the surveillance programs of French intelligence agencies was the Wiretapping Act of 1991, aimed at regulating telephone wiretaps. Many surveillance programs developed in the 2000s—especially to monitor Internet communications—were rolled out outside of any legal framework. As early as 2008, the French government's White Paper of Defense and National Security stressed that "intelligence activities do not have the benefit of a clear and sufficient legal framework", and said that "legislative adjustments" were necessary. Main provisions This section summarizes the main provisions of the Intelligence Act. Scope Through article L. 811-3, the Act extended the number of objectives that can justify extrajudicial surveillance. These include: national independence, territorial integrity and national defense; major interests in foreign policy, implementation of European and international obligations of France and prevention of all forms of foreign interference; major economic, industrial and scientific interests of France; prevention of terrorism; prevention of: a) attacks on the republican nature of institutions; b) actions towards continuation or reconstitution of groups disbanded (article L. 212-1 of criminal code); c) collective violence likely to cause serious harm to public peace; prevention of organized crime and delinquency; prevention of proliferation of weapons of mass destruction. These surveillance powers can also be used by law enforcement agencies that are not part of the official "intelligence community" and whose combined staff is well over 45,000. Oversight The existing oversight commission, the Commission nationale de contrôle des interceptions de sécurité, was replaced by a new oversight body called the "National Oversight Commission for Intelligence-Gathering Techniques" (Commission nationale de contrôle des techniques de renseignement, or CNCTR). It is composed of: four members of Parliament designated by the Presidents of both chambers of Parliament; two administrative judges and two judicial judges designated respectively by the Council of State and the Cour de Cassation; one technical expert designated by the telecom National Regulatory Authority (the addition of a commissioner with technical expertise was the main innovation). 
The CNCTR has 24 hours to issue its ex ante non-binding opinion regarding the surveillance authorizations delivered by the Prime Minister before surveillance begins, except in cases of "absolute emergency" where it is simply notified of the surveillance measure within 24 hours of its delivery (article L. 821-3). For ex post oversight, the CNCTR has "permanent, comprehensive and direct access to records, logs, collected intelligence, transcripts and extractions" of collected data. It is able to conduct both scheduled and unannounced visits to the premises where these documents are centralized (article L. 833-2-2). If an irregularity is found, it can send the Prime Minister a "recommendation" so that the latter can put an end to it. One hugely significant exception to the CNCTR's oversight powers is the data obtained through data-sharing with foreign intelligence agencies (article L. 833-2-3). Wiretaps and access to metadata Techniques of communications surveillance covered by the Act include telephone or Internet wiretaps (L. 852-1), access to identifying data and other metadata (L. 851-1), geotagging (L. 851-4) and computer network exploitation (L. 853-2), all of which are subject to authorization of a renewable duration of 4 months. Black boxes and real-time access to metadata The Act authorizes the use of scanning devices (nicknamed "black boxes") to be installed on the infrastructures of telecom operators and hosting providers. Article L. 851-3 of the Code of Internal Security provides that, "for the sole purpose of preventing terrorism, automated processing techniques may be imposed on the networks of [telecom operators and hosting providers] in order to detect, according to selectors specified in the authorisation, communications that are likely to reveal a terrorist threat". Another provision limited to anti-terrorism allows for the real-time collection of metadata (article L. 851-2, for terrorism only and for a 4-month period). Initially, the provision targeted only individuals "identified as a [terrorist] threat". After the 2016 Nice Attack, it was extended by a bill prolonging the state of emergency to cover individuals "likely to be related to a threat" or who simply belong to "the entourage" of individuals "likely related to a threat". According to La Quadrature du Net, this means that the provision can now potentially cover "hundreds or even thousands of persons (...) rather than just the 11 700 individuals" reported to be on the French terrorism watchlist. Computer Network Exploitation The Act authorizes computer network exploitation, or computer hacking, as a method for intelligence gathering. Article L. 853-2 allows for: access, collection, retention and transmission of computer data stored in a computer system; access, collection, retention and transmission of computer data as it is displayed on a user's computer screen, as it is entered by keystrokes, or as received and transmitted by audiovisual peripheral devices. Considering the intrusiveness of computer hacking, the law provides that these techniques are authorized for a duration of 30 days, and only "when intelligence cannot be collected by any other legally authorized means". The Act also grants blanket immunity to intelligence officers who carry out computer intrusions into computer systems located abroad (article 323-8 of the Criminal Code). International surveillance The Act also provides a chapter on the "surveillance of international communications", particularly relevant for the DGSE's surveillance programs. 
International communications are defined as "communications emitted from or received abroad" (article L. 854-1 and following). They can be intercepted and exploited in bulk on the French territory with reduced oversight. Safeguards in terms of bulk exploitation and data retention are lessened for foreigners (though the definition of foreigners is technical, i.e. people using non-French "technical identifiers"). Data retention periods For national surveillance measures, once communications data are collected by intelligence agencies, retention periods are the following: Content (correspondances): 1 month after collection (for encrypted content, the period starts after decryption, within the limit of 6 years after initial collection); Metadata: 4 years (compared to a 3-year period before the adoption of the Act). For international surveillance, retention periods depend on whether one end of the communication uses a "technical identifier traceable to the national territory" or not, in which case the "national" retention periods are applicable, but they start after the first exploitation and no later than six months after collection (article L. 854-8). If both ends of the communication are foreign, the following periods apply: Content: 1 year after first exploitation, within the limit of 4 years after collection (for encrypted content, the period starts after decryption, within the limit of 8 years after collection); Metadata: 6 years The law fails to provide any framework to limit staff access to collected intelligence once it is stored by intelligence and law enforcement agencies. Redress mechanism The Act reorganizes redress procedures against secret surveillance, establishing the possibility to introduce a legal challenge before the Council of State after having unsuccessfully sought redress before the CNCTR. Criticism and legal challenges During the parliamentary debate, the bill faced severe criticism from several organizations, including the National Commission on Informatics and Liberty (CNIL), the National Digital Council, Mediapart, and La Quadrature du Net. The implementation decrees of the French Intelligence Act are currently undergoing a series of legal challenges by French civil society organizations La Quadrature du Net, French Data Network and Fédération FFDN before the Council of State. See also Investigatory Powers Act 2016 (British law) Gesetz zur Beschränkung des Brief-, Post- und Fernmeldegeheimnisses (German law) References 2015 in France Events relating to freedom of expression Mass surveillance Global surveillance
6329156
https://en.wikipedia.org/wiki/Government%20Engineering%20College%2C%20Idukki
Government Engineering College, Idukki
The Government Engineering College, Idukki (GECI) is located in the town of Painavu, in Idukki district of the Indian state of Kerala. It is affiliated to the APJ Abdul Kalam Technological University, and is approved by the All India Council for Technical Education (AICTE), New Delhi. History The college was established in 2000 under the Directorate of Technical Education of the government of Kerala. It was located at Painavu in government buildings renovated for the purpose. Twenty-five acres of land at Kuyilimala near Painavu was transferred to the college for the construction of a permanent campus in August 2000. The first batch of students was admitted in November 2000. Owing to the delay in obtaining AICTE approval and as per the orders of the High Court of Kerala, three batches of students admitted to the college were transferred to Rajiv Gandhi Institute of Technology, Kottayam, in their final year. The college was inspected by AICTE in April 2003, and approval was granted for admissions in 2003–2004 and later. GECI has 870 undergraduate students in four batches. The batch of students admitted to the college in 2003 was the first to graduate, in October 2007. Campus The scenic campus is located at Kuyilimala, Painavu – the headquarters of the Idukki district. The campus is a two-hour road journey from Thodupuzha or Kothamangalam (co-ordinates: 9°50'52"N 76°56'32"E). The campus is situated near state highway 33 from Thodupuzha to Kattapana. Painavu is 120 km from Cochin and 130 km from Kottayam and is connected by bus to all parts of the state. The cool and picturesque environs of the Idukki wildlife sanctuary located close to the campus are ideal for the pursuit of higher education. The Idukki Dam is 7 km from the campus. Academics The college is affiliated to APJ Abdul Kalam Technological University, and offers Bachelor of Technology programmes in five branches of engineering and technology: Electrical and Electronics Engineering, Electronics and Communication Engineering, Computer Science and Engineering, Mechanical Engineering, and Information Technology. The programmes have an intake of 66 regular students and six lateral entry students. The courses span eight semesters. In addition to the BTech programmes, GECI offers three MTech programmes: Electrical and Electronics Engineering – Power Electronics and Control (since 2011); Computer Science and Engineering – Computer Science and System Engineering; Information Technology – Network Engineering. Admissions Admissions to the BTech degree programmes are carried out on the basis of rank in the common entrance examination conducted by the Government of Kerala. Admissions to the MTech degree programmes are carried out on the basis of rank in the GATE conducted by the IITs. Departments The college is structured into seven departments: Electrical and Electronics Engineering Electronics and Communication Engineering Computer Science and Engineering Information Technology Mechanical Engineering General Department Basic Sciences Faculty The faculty are selected by the Public Service Commission, Kerala, on a merit basis. Staff advisory system Immediately after admission to the college, each student is assigned a staff adviser. The staff adviser guides the student in curricular and extracurricular activities during the period of study in the college. Campus discipline Any act of ragging is dealt with as per the provisions of the Kerala Prohibition of Ragging Act, 1998. Students are not allowed to use mobile phones on the campus. 
While on campus, students carry their college identity cards and comply with the dress code: Boys: grey pants, cream shirt and suitable covered footwear, Girls: grey pants, cream shirt with grey overcoat and suitable covered footwear. College library The college library is in the process of expansion. It has around 12,000 books in 2,500 titles. The library subscribes to 20 national and 12 international journals, five newspapers and some periodicals. Library operations are computerized using SOUL software. The library has the following sections: Book lending section, Reference section, Current periodicals section, Book bank for SC/ST and poor students, Reading room, Reprographic section. Digital library in engineering The college has access to international journals and technical papers through the Indian National Library in Engineering (INDEST)-AICTE Consortium. Central computing facility The Central Computing Facility (CCF) has been set up to supplement the departmental software labs and for sharing of computing resources within the college. CCF has networked PCs, printers and scanners for student use. Career guidance and placement cell The college has a Career Guidance and Placement Cell (CGPC); it has contacts with software and industrial houses for campus recruitment drives. The CGPC organizes soft skill development programmes and workshops to motivate students to perform better. Women's cell The women's cell in the college was established in 2006. The cell, in close co-ordination with the Idukki District Women's Council, arranges counseling sessions, which can be utilized by both male and female students. Technical associations The technical associations provide students with a venue to improve their technical and managerial skills. Activities of the associations include industrial visits, technical talks, and participation in technical events in other institutions. A chapter of the energy conservation society is functioning in the college. IPR cell The Intellectual Property Rights (IPR) Cell started functioning at the college with its formal inauguration on 4 April 2011. It works as the nodal center of PIC Kerala. The cell creates awareness among students and faculty about intellectual property rights in order to increase the intellectual property output, and provides the assistance needed for filing a patent. Parent Teachers' Association The Parent Teachers' Association (PTA) provides a forum for interaction among parents and teachers. The executive committee, chaired by the Principal, coordinates the activities of the PTA. Its efforts are oriented towards improving facilities in the college. Staff club The staff club facilitates improvement of the social, cultural and educational activities among the staff. Centre for Continuing Education The Centre for Continuing Education (CCE) conducts courses for students and the public that stay apart from the regular curriculum, thereby generating internal funds for the development of the departments and the institute. Visiting faculty programme The programme extends the services of experienced teachers and experts from industry to GECI. Experts from organizations like the IITs and IISc, senior faculty from reputed institutions, and professionals from industry and consultancy services deliver lectures to the students. Internet and campus connectivity GECI has a 2 Mbit/s broadband internet connection, along with 10 more 512 Mbit/s connections under the National Mission on Education through Information and Communication Technologies (NMEICT) programme. 
The students can use the facilities through the Campus Area Network (CAN) and the campus wireless network. Centre for Engineering Research and Development (CERD) CERD was established by the government of Kerala to act as a platform for the faculty and students of engineering colleges in the state to pursue their interest in basic and applied research in engineering and technology. GECI is a nodal centre of CERD and provides centralized computational and other common research facilities and access to information sources, and conducts training, seminars, symposia and lecture series in emerging areas of research and research initiation. EDUSAT Interactive Terminal The Satellite Interactive Terminal established in the college by ISRO enables distance education via EDUSAT. The virtual classroom environment is made alive by multimedia equipment, and students are able to interact in real time with the subject experts. Festivals Kavettam Kavettam is the arts festival of Government Engineering College Idukki. ADVAYA Over the last nine years, Advaya has provided a platform for promoting technical and scientific thinking, innovation, and raw ideas, creating a spectacle of science and technology year after year. It is the national-level multi-fest of Government Engineering College Idukki: in its endeavour to create a technical symposium where the creative minds of the nation can converge and exchange talent, Advaya strives to break new boundaries and reach new levels year after year. Amenities College canteen The canteen was set up with the assistance of the College Development Fund. Co-operative society The co-operative society store caters to the needs of the students and employees. Stationery, textbooks, etc. are made available in the store. Hostel facilities The college has three main hostels – the Men's Hostel, the Ladies' Hostel and a Staff Hostel. They are headed by the Warden, assisted by the Resident Tutors. The seat availability at the hostels is as follows: Men's hostel – I – 45 seats – for 2nd year and higher semester male students Men's hostel – II – 45 seats – for 1st year male students PH – 30 seats – male students C7 (Men's hostel – III) – 15 seats Ladies hostel – 45 seats – for 2nd year and higher semester female students and female staff members Staff hostel – for male staff members Holy Family Ladies Hostel, Thannikkandam, Painavu – hostel run by Catholic Sisters Girirani Working Women's Hostel, Kuyilimala (managed by the Idukki District Women's Council), which is situated at a walkable distance from the college, provides accommodation for female students. Private accommodation for male and female students is available at Cheruthony (6 km from Painavu). Wisdom Homes boys' hostel, run by the SSF state committee, is located at Cheruthoni. College bus The institute operates five college buses for the use of staff and students. Student activities The college has a students' union, which organises technical, cultural, and entertainment events. Alumni association The alumni association consists of student members as well as staff members of the college. The association has organized meet-up programmes at Trivandrum, Ernakulam and Dubai. English club GECI students conduct an English club under the organization of TEQIP, in collaboration with the language lab, to improve soft skills among students. Notable alumni Ankit Ashokan – Civil Services Examination 2016 rank holder and Indian Police Service officer, Kerala cadre. 
Neena Viswanath - Civil Services Examination 2020 rank holder and Indian Revenue Service officer. 2013 batch CSE. See also Engineering education References External links Official website Campus blog and e-magazine Photo blog Idukki District homepage Kerala Entrance Examinations official web portal Department of Technical Education Idukki District map Engineering colleges in Kerala Universities and colleges in Idukki district Colleges affiliated to Mahatma Gandhi University, Kerala Educational institutions established in 2000 2000 establishments in Kerala
2324396
https://en.wikipedia.org/wiki/Directory%20traversal%20attack
Directory traversal attack
A directory traversal (or path traversal) attack exploits insufficient security validation or sanitization of user-supplied file names, such that characters representing "traverse to parent directory" are passed through to the operating system's file system API. An affected application can be exploited to gain unauthorized access to the file system. Directory traversal is also known as the ../ (dot dot slash) attack, directory climbing, and backtracking. Some forms of this attack are also canonicalization attacks. Example A typical example of a vulnerable application in PHP code is: <?php $template = 'red.php'; if (isset($_COOKIE['TEMPLATE'])) { $template = $_COOKIE['TEMPLATE']; } include "/home/users/phpguru/templates/" . $template; An attack against this system could be to send the following HTTP request: GET /vulnerable.php HTTP/1.0 Cookie: TEMPLATE=../../../../../../../../../etc/passwd The server would then generate a response such as: HTTP/1.0 200 OK Content-Type: text/html Server: Apache root:fi3sED95ibqR6:0:1:System Operator:/:/bin/ksh daemon:*:1:1::/tmp: phpguru:f8fk3j1OIf31.:182:100:Developer:/home/users/phpguru/:/bin/csh The repeated ../ characters after /home/users/phpguru/templates/ have caused include() to traverse to the root directory, and then include the Unix password file /etc/passwd. Unix /etc/passwd is a common file used to demonstrate directory traversal, as it is often used by crackers to try cracking the passwords. However, in more recent Unix systems, the /etc/passwd file does not contain the hashed passwords, and they are instead located in the /etc/shadow file, which cannot be read by unprivileged users on the machine. Even in that case, though, reading /etc/passwd does still show a list of user accounts. Variations Directory traversal in its simplest form uses the ../ pattern. Some common variations are listed below: Microsoft Windows Microsoft Windows and DOS directory traversal uses the ..\ or ../ patterns. Each partition has a separate root directory (labeled C:\ where C could be any partition), and there is no common root directory above that. This means that for most directory vulnerabilities on Windows, attacks are limited to a single partition. Directory traversal has been the cause of numerous Microsoft vulnerabilities. Percent encoding in URIs Some web applications attempt to prevent directory traversal by scanning the path of a request URI for patterns such as ../. This check is sometimes mistakenly performed before percent-decoding, causing URIs containing patterns like %2e%2e/ to be accepted despite being decoded into ../ before actual use. Double encoding Percent decoding may accidentally be performed multiple times; once before validation, but again afterwards, making the application vulnerable to recursively percent-encoded input such as %252e%252e/ (a single percent-decoding pass turns %25 into a literal %-sign). This kind of vulnerability notably affected versions 5.0 and earlier of Microsoft's IIS web server software. UTF-8 A badly implemented UTF-8 decoder may accept characters encoded using more bytes than necessary, leading to alternative character representations, such as %2e and %c0%ae both representing .. This is specifically forbidden by the UTF-8 standard, but has still led to directory traversal vulnerabilities in software such as the IIS web server. Archives Some archive formats like zip allow for directory traversal attacks: files in the archive can be written such that they overwrite files on the filesystem by backtracking. 
Code that extracts archive files can be written to check that the paths of the files in the archive do not engage in path traversal. Prevention A possible algorithm for preventing directory traversal would be to: Process URI requests that do not result in a file request, e.g., executing a hook into user code, before continuing below. When a URI request for a file/directory is to be made, build a full path to the file/directory if it exists, and normalize all characters (e.g., %20 converted to spaces). It is assumed that a 'Document Root' fully qualified, normalized, path is known, and this string has a length N. Assume that no files outside this directory can be served. Ensure that the first N characters of the fully qualified path to the requested file are exactly the same as the 'Document Root'. If so, allow the file to be returned. If not, return an error, since the request is clearly out of bounds from what the web server should be allowed to serve. Using a hard-coded predefined file extension to suffix the path does not necessarily limit the scope of the attack to files of that file extension. <?php include($_GET['file'] . '.html'); An attacker can embed a NULL character in the supplied file name, which in older versions of PHP terminated the string at the C level and so cut off the appended '.html' suffix. (This behaviour is PHP-specific.) See also Chroot jails may be subject to directory traversal if incorrectly created. Possible directory traversal attack vectors are open file descriptors to directories outside the jail. The working directory is another possible attack vector. Insecure direct object reference References Resources Open Web Application Security Project The WASC Threat Classification – Path Traversal Path Traversal Vulnerability Exploitation and Remediation CWE Common Weakness Enumeration – Path Traversal External links DotDotPwn – The Directory Traversal Fuzzer – Conviction for using directory traversal. Bugtraq: IIS %c1%1c remote command execution Cryptogram Newsletter July 2001. Web security exploits
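The prevention algorithm described above (normalize the requested path, then compare its first N characters against the document root) can be sketched in a few lines. The following Python snippet is only an illustrative sketch, not the code of any particular web server; the document root path and the function name are assumptions made for the example.

import os

DOCUMENT_ROOT = "/srv/www/htdocs"  # assumed, fully qualified document root

def resolve_request_path(requested):
    """Return the absolute path to serve, or None if the request escapes the root."""
    # Percent-decoding and other character normalization is assumed to have
    # happened already; realpath() collapses any ../ components and symlinks.
    root = os.path.realpath(DOCUMENT_ROOT)
    candidate = os.path.realpath(os.path.join(root, requested))
    # The first N characters of the resolved path must match the document root,
    # and the next character must be a separator (so "/srv/www/htdocs-old" is rejected).
    if candidate == root or candidate.startswith(root + os.sep):
        return candidate
    return None

print(resolve_request_path("css/site.css"))         # allowed
print(resolve_request_path("../../../etc/passwd"))  # None: rejected

Resolving the path with realpath() also defuses symbolic links, which a plain string comparison on the unresolved path would not.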
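The same prefix test covers the archive case mentioned above: before extracting, every member path is resolved against the destination directory and rejected if it escapes it. This is a minimal sketch using Python's standard zipfile module; the function and directory names are invented for the example, and recent Python versions already strip '..' components during extraction, so the explicit check is shown purely to illustrate the idea.

import os
import zipfile

def safe_extract(zip_path, dest_dir):
    """Extract an archive, refusing any member whose resolved path escapes dest_dir."""
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as archive:
        for member in archive.namelist():
            target = os.path.realpath(os.path.join(dest, member))
            if target != dest and not target.startswith(dest + os.sep):
                raise ValueError("blocked path traversal in archive member: " + member)
        archive.extractall(dest)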
10869098
https://en.wikipedia.org/wiki/Link%20%28Unix%29
Link (Unix)
The link utility is a Unix command line program that creates a hard link: a new directory entry that refers to the same file as an existing directory entry. It does no more than call the link() system function. It does not perform error checking before attempting to create the link. It returns an exit status that indicates whether the link was created (0 if successful, >0 if an error occurred). Creating a link to a directory entry that is itself a directory requires elevated privileges. The ln command is more commonly used as it provides more features: it can create both hard links and symbolic links, and has error checking. Synopsis link source target source The pathname of an existing file or directory. target The name of the link to be created. Note that source must specify an existing file or directory, and target must specify a non-existent entry in an existing directory. Standards The link command is part of the Single UNIX Specification (SUS), specified in the Shell and Utilities volume of the IEEE 1003.1-2001 standard. See also List of Unix commands Unlink (Unix) External links IEEE Std 1003.1-2004 Shell & Utilities volume—list of SUS utilities. GNU Coreutils link documentation. Unix SUS2008 utilities
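As an illustration of the behaviour described above, the sketch below uses Python's os.link, a thin wrapper over the same link() system function that the utility calls; the file names are made up for the example, and the equivalent shell commands are shown in comments.

import os

# Create a file and then a second directory entry (hard link) for it.
with open("notes.txt", "w") as f:
    f.write("example\n")

os.link("notes.txt", "notes-hardlink.txt")  # roughly what "link notes.txt notes-hardlink.txt" does

# Both names now refer to the same inode; st_nlink counts the directory entries.
print(os.stat("notes.txt").st_nlink)                        # 2
print(os.path.samefile("notes.txt", "notes-hardlink.txt"))  # True

# Shell equivalents:
#   link notes.txt notes-hardlink.txt
#   ln notes.txt notes-hardlink.txt     (ln also supports symbolic links via ln -s)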
25602295
https://en.wikipedia.org/wiki/History%20of%20Microsoft%20Word
History of Microsoft Word
The first version of Microsoft Word was developed by Charles Simonyi and Richard Brodie, former Xerox programmers hired by Bill Gates and Paul Allen in 1981. Both programmers worked on Xerox Bravo, the first WYSIWYG (What You See Is What You Get) word processor. The first Word version, Word 1.0, was released in October 1983 for Xenix and MS-DOS; it was followed by four very similar versions that were not very successful. The first Windows version was released in 1989, with a slightly improved interface. When Windows 3.0 was released in 1990, Word became a huge commercial success. Word for Windows 1.0 was followed by Word 2.0 in 1991 and Word 6.0 in 1993. It was then renamed Word 95, Word 97, Word 2000 and Word for Office XP (to follow Windows commercial names). With the release of Word 2003, the numbering was again year-based. Since then, Windows versions include Word 2007, Word 2010, Word 2013, Word 2016, and most recently, Word for Office 365. In 1986, an agreement between Atari and Microsoft brought Word to the Atari ST. The Atari ST version was a translation of Word 1.05 for the Apple Macintosh; however, it was released under the name Microsoft Write (the name of the word processor included with Windows during the 1980s and early 1990s). Unlike other versions of Word, the Atari version was a one-time release with no future updates or revisions. The release of Microsoft Write was one of two major PC applications that were released for the Atari ST (the other application being WordPerfect). Microsoft Write was released for the Atari ST in 1988. In 2014 the source code for Word for Windows version 1.1a was made available to the Computer History Museum and the public for educational purposes. Word for DOS The first Microsoft Word was released in 1983. It featured a graphics video mode and mouse support in a WYSIWYG interface. It could run in text mode or graphics mode, but the visual difference between the two was minor. In graphics mode, the document and interface were rendered in a fixed-size monospace character grid, with italic, bold and underline features that were not available in text mode. It had support for style sheets in separate files (.STY). The first version of Word was a 16-bit PC DOS/MS-DOS application. A Macintosh 68000 version named Word 1.0 was released in 1985 and a Microsoft Windows version was released in 1989. The three products shared the Microsoft Word name and the same version numbers, but were very different products built on different code bases. Three product lines co-existed: Word 1.0 to Word 5.1a for Macintosh, Word 1.0 to Word 2.0 for Windows and Word 1.0 to Word 5.5 for DOS. Word 1.1 for DOS was released in 1984 and added Print Merge support, equivalent to the Mail Merge feature in newer Word systems. Word 2.0 for DOS was released in 1985 and featured Extended Graphics Adapter (EGA) support. Word 3.0 for DOS was released in 1986. Word 4.0 for DOS was released in 1987 and added support for revision marks (equivalent to the Track Changes feature in more recent Word versions), search/replace by style and macros stored as keystroke sequences. Word 5.0 for DOS, released in 1989, added support for bookmarks, cross-references and conditions and loops in macros, remaining backwards compatible with Word 3.0 macros. The macro language differed from the WinWord 1.0 WordBasic macro language. Word 5.5 for DOS, released in 1990, significantly changed the user interface, with pop-up menus and dialog boxes.
Even in graphics mode, these Graphical User Interface (GUI) elements got the monospace ASCII art look and feel found in text mode programs like Microsoft QuickBasic. Word 6.0 for DOS, the last Word for DOS version, was released in 1993, at the same time as Word 6.0 for Windows (16 bits) and Word 6.0 for Macintosh. Although Macintosh and Windows versions shared the same code base, the Word for DOS was different. The Word 6.0 for DOS macro language was compatible with the Word 3.x-5.x macro language while Word 6.0 for Windows and Word 6.0 for Macintosh inherited WordBasic from the Word 1.0/2.0 for Windows code base. The DOS and Windows versions of Word 6.0 had different file formats. Word for Windows 1989 to 1995 The first version of Word for Windows was released in November 1989 at a price of USD $498, but was not very popular as Windows users still comprised a minority of the market. The next year, Windows 3.0 debuted, followed shortly afterwards by WinWord 1.1 which was updated for the new OS. The failure of WordPerfect to produce a Windows version proved a fatal mistake. The following year, in 1991, WinWord 2.0 was released which had further improvements and finally solidified Word's marketplace dominance. WinWord 6.0 came out in 1993 and was designed for the newly released Windows 3.1. The early versions of Word also included copy protection mechanisms that tried to detect debuggers, and if one was found, it produced the message "The tree of evil bears bitter fruit. Only the Shadow knows. Now trashing program disk." and performed a zero seek on the floppy disk (but did not delete its contents). After MacWrite, Word for Macintosh never had any serious rivals, although programs such as Nisus Writer provided features such as non-continuous selection, which were not added until Word 2002 in Office XP. Word 5.1 for the Macintosh, released in 1992, was a very popular word processor, owing to its elegance, relative ease of use and feature set. However, version 6.0 for the Macintosh, released in 1994, was widely derided, unlike the Windows version. It was the first version of Word based on a common code base between the Windows and Mac versions; many accused the Mac version of being slow, clumsy and memory intensive. With the release of Word 6.0 in 1993 Microsoft again attempted to synchronize the version numbers and coordinate product naming across platforms; this time across the three versions for DOS, Macintosh, and Windows (where the previous version was Word for Windows 2.0). There may have also been thought given to matching the current version 6.0 of WordPerfect for DOS and Windows, Word's major competitor. However, this wound up being the last version of Word for DOS. In addition, subsequent versions of Word were no longer referred to by version number, and were instead named after the year of their release (e.g. Word 95 for Windows, synchronizing its name with Windows 95, and Word 98 for Macintosh), once again breaking the synchronization. When Microsoft became aware of the Year 2000 problem, it released the entire DOS port of Microsoft Word 5.5 instead of getting people to pay for the update. As of November 2019, it is still available for download from Microsoft's web site. Word 6.0 was the second attempt to develop a common code base version of Word. The first, code-named Pyramid, had been an attempt to completely rewrite the existing product. 
It was abandoned when Chris Peters replaced Jeff Raikes as the lead developer of the Word project and determined that it would take the development team too long to rewrite and then catch up with all the new capabilities that could have been added in the same time without a rewrite. Therefore, Word 6.0 for Windows and Macintosh were both derived from the Word 2.0 for Windows code base. The Word 3.0 to 5.0 for Windows version numbers were skipped (outside of DBCS locales) in order to keep the version numbers consistent between Macintosh and Windows versions. Supporters of Pyramid claimed that it would have been faster, smaller, and more stable than the product that was eventually released for Macintosh, which was compiled using a beta version of Visual C++ 2.0 targeting the Macintosh, so many optimizations had to be turned off (version 4.2.1 of Office was compiled using the final version), and which sometimes used the included Windows API simulation library. Pyramid would have been truly cross-platform, with machine-independent application code and a small mediation layer between the application and the operating system. More recent versions of Word for Macintosh are no longer ported versions of Word for Windows. Later versions of Word have more capabilities than merely word processing. The drawing tool allows simple desktop publishing operations, such as adding graphics to documents. Microsoft Office Word 95 Word 95 was released as part of Office 95 and was numbered 7.0, consistently with all Office components. It ran exclusively on the Win32 platform, but otherwise had few new features. The file format did not change. Word 97 Word 97 had the same general operating performance as later versions such as Word 2000. This was the first version of Word featuring the Office Assistant, "Clippit", an animated helper used in all Office programs. The concept was carried over from the earlier Microsoft Bob. Word 97 introduced the macro programming language Visual Basic for Applications (VBA), which remains in use in Word 2016. Word 98 Word 98 for the Macintosh gained many features of Word 97, and was bundled with the Macintosh Office 98 package. Document compatibility reached parity with Office 97 and Word on the Mac became a viable business alternative to its Windows counterpart. Unfortunately, Word on the Mac in this and later releases also became vulnerable to future macro viruses that could compromise Word (and Excel) documents, leading to the only situation where viruses could be cross-platform. A Windows version of Word 98 was bundled only with the Japanese/Korean Microsoft Office 97 Powered By Word 98, released in the same period, and could not be purchased separately. Word 2000 Word 2001/Word X Word 2001 was bundled with the Macintosh Office for that platform, acquiring most, if not all, of the feature set of Word 2000. Released in October 2000, Word 2001 was also sold as an individual product. The Macintosh version, Word X, released in 2001, was the first version to run natively on (and required) Mac OS X. Word 2002/XP Word 2002 was bundled with Office XP and was released in 2001. It had many of the same features as Word 2000, but had a major new feature called 'Task Panes', which gave quicker access to information and controls for many features that had previously been available only in modal dialog boxes. One of the key advertising strategies for the software was the removal of the Office Assistant in favor of a new help system, although it was simply disabled by default.
Word 2003 Microsoft Office 2003 is an office suite developed and distributed by Microsoft for its Windows operating system. Office 2003 was released to manufacturing on August 19, 2003, and was later released to retail on October 21, 2003. It was the successor to Office XP and the predecessor to Office 2007. Word 2004 A new Macintosh version of Office was released in May 2004. Substantial cleanup of the various applications (Word, Excel, PowerPoint) and feature parity with Office 2003 (for Microsoft Windows) created a very usable release. Microsoft released patches through the years to eliminate most known macro vulnerabilities from this version. While Apple released Pages and the open source community created NeoOffice, Word remains the most widely used word processor on the Macintosh. Office 2004 for Mac is a version of Microsoft Office developed for Mac OS X. It is equivalent to Office 2003 for Windows. The software was originally written for PowerPC Macs, so Macs with Intel CPUs must run the program under Mac OS X's Rosetta emulation layer. Word 2007 The release includes numerous changes, including a new XML-based file format, a redesigned interface, an integrated equation editor and bibliographic management. Additionally, an XML data bag was introduced, accessible via the object model and file format, called Custom XML – this can be used in conjunction with a new feature called Content Controls to implement structured documents. It also has contextual tabs, which expose functionality specific to the object that has focus, and many other features like Live Preview (which enables users to preview formatting changes without making them permanent), the Mini Toolbar, Super-tooltips, the Quick Access Toolbar, SmartArt, etc. Word 2007 uses a new file format called docx. Word 2000–2003 users on Windows systems can install a free add-on called the "Microsoft Office Compatibility Pack" to be able to open, edit, and save the new Word 2007 files. Alternatively, Word 2007 can save to the old doc format of Word 97–2003. Word 2008 Word 2008 was released on January 15, 2008. It includes some new features from Word 2007, such as a ribbon-like feature that can be used to select page layouts and insert custom diagrams and images. Word 2008 also features native support for the new Office Open XML format, although the old doc format can be set as a default. Microsoft Office 2008 for Mac is a version of the Microsoft Office productivity suite for Mac OS X. It supersedes Office 2004 for Mac and is the Mac OS X equivalent of Office 2007. Office 2008 was developed by Microsoft's Macintosh Business Unit and released on January 15, 2008. Word 2010 Microsoft Office 2010 is a version of the Microsoft Office productivity suite for Microsoft Windows. Office 2010 was released to manufacturing on April 15, 2010, and was later made available for retail and online purchase on June 15, 2010. It is the successor to Office 2007 and the predecessor to Office 2013. Word 2011 Word 2013 Word 2013 brought a cleaner look, and this version focuses further on cloud computing, with documents saved automatically to OneDrive (previously SkyDrive). If enabled, documents and settings roam with the user. Other notable features are a new read mode which allows for horizontal scrolling of pages in columns, a bookmark to find where the user left off reading their document, and the ability to open PDF documents in Word just like Word content.
The version released for the Windows 8 operating system is modified for use with a touchscreen and on tablets. It is the first version of Word to not run on Windows XP or Windows Vista. Word 2016 On July 9, 2015, Microsoft Word 2016 was released. Features include the tell me, share and faster shape formatting options. Other useful features include realtime collaboration, which allows users to store documents on Share Point or OneDrive, as well as an improved version history and a smart lookup tool. As usual, several editions of the program were released, including one for home and one for business. Word 2019 Word 2019 added support for Scalable Vector Graphics, Microsoft Translator, and LaTeX, as well as expanded drawing functionality. Word included with Office 365 Microsoft Office 365 is a free/paid subscription plan for the classic Office applications. References Further reading Tsang, Cheryl. Microsoft: First Generation. New York: John Wiley & Sons, Inc. . Liebowitz, Stan J. & Margolis, Stephen E. WINNERS, LOSERS & MICROSOFT: Competition and Antitrust in High Technology Oakland: Independent Institute. . External links Microsoft Word home page The Word Object Model Ms Word Files Generation using .net framework Changing the Normal.dot file in Microsoft templates Microsoft Word 1.0 for Macintosh screenshots Word History of Microsoft Microsoft Word
4544913
https://en.wikipedia.org/wiki/Metrics%20%28networking%29
Metrics (networking)
Router metrics are metrics used by a router to make routing decisions. A metric is typically one of many fields in a routing table. Router metrics help the router choose the best route among multiple feasible routes to a destination. The route will go in the direction of the gateway with the lowest metric. A router metric is typically based on information such as path length, bandwidth, load, hop count, path cost, delay, maximum transmission unit (MTU), reliability and communications cost. Examples A metric can include: measuring link utilization (using SNMP) number of hops (hop count) speed of the path packet loss (router congestion/conditions) network delay path reliability path bandwidth throughput (queried from routers via SNMP) load maximum transmission unit (MTU) administrator-configured value In EIGRP, the metric is represented by an integer from 0 to 4,294,967,295 (the maximum value of a 32-bit unsigned integer). In Microsoft Windows XP routing it ranges from 1 to 9999. A metric can be considered as: additive - the total cost of a path is the sum of the costs of the individual links along the path, concave - the total cost of a path is the minimum of the costs of the individual links along the path, multiplicative - the total cost of a path is the product of the costs of the individual links along the path. Service level metrics Router metrics are metrics used by a router to make routing decisions. A metric is typically one of many fields in a routing table. Router metrics can contain any number of values that help the router determine the best route among multiple routes to a destination. A router metric is typically based on information like path length, bandwidth, load, hop count, path cost, delay, MTU, reliability and communications cost. See also Administrative distance, which indicates the source of a routing table entry and is used in preference to metrics for routing decisions References External links Survey of routing metrics Computer network analysis Network performance Metrics
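To make the route-selection rule above concrete (the gateway with the lowest metric wins, and per-link costs can be combined additively or concavely), here is a small Python sketch; the addresses and link costs are invented for the example.

# Candidate routes to the same destination: gateway -> per-link costs
routes = {
    "192.0.2.1":    [10, 10, 10],  # three links, e.g. per-hop delay
    "198.51.100.1": [25],          # a single, slower link
}

def additive_metric(link_costs):
    # Additive composition (hop count, delay): sum of the individual link costs.
    return sum(link_costs)

def concave_metric(link_costs):
    # Concave composition (bandwidth): the path is limited by its weakest link.
    # Protocols that use bandwidth typically invert it so lower still means better.
    return min(link_costs)

best_gateway = min(routes, key=lambda gw: additive_metric(routes[gw]))
print(best_gateway)  # 198.51.100.1, because 25 < 10 + 10 + 10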
44685794
https://en.wikipedia.org/wiki/EMIS%20Health
EMIS Health
EMIS Health, formerly known as Egton Medical Information Systems, supplies electronic patient record systems and software used in primary care, acute care and community pharmacy in the United Kingdom. The company is based in Leeds. It claims that more than half of GP practices across the UK use EMIS Health software and holds number one or two market positions in its main markets. Competitors The other approved GP systems are SystmOne, Microtest Health and Vision. In England EMIS and SystmOne have a duopoly. The pair were paid £77 million for primary care software in 2018. EMIS Group EMIS Group, which includes Egton Medical Information Systems Limited, also comprises: Ascribe, rebranded to EMIS Health, a provider of software and IT services to secondary care, sold to EMIS Group by ECI Partners in 2013; Rx Systems, rebranded to EMIS Health, whose software is used by 37% of community pharmacies in the UK; Egton, who provides IT infrastructure, engineering and support. Patient UK, used by 10 million people globally each month for health information and described as "The top health website you can't live without" by The Times. Dovetail Lab, purchased in 2018, a health technology company developing blockchain software to facilitate the integration of healthcare data. Pinnacle Health Partnership LLP / Pinnacle Systems Management Ltd, owners and operators of the widely-used PharmOutcomes platform The company bought Pinbellcom Group which supplies administration and compliance software to both the primary and the secondary care markets in July 2015, now a part of Egton. Healthcare record systems EMIS is one of the suppliers approved by the GP Systems of Choice and so funded by the NHS. Through its Patient Access service, EMIS was the first clinical system providers to enable patients to book GP appointments online and order repeat prescriptions. Patient Access also enables patients to access their own records online. EMIS Web had been rolled out to 3,750 practices in September 2014. EMIS Web allows primary, secondary and community healthcare practitioners to view and contribute to a patient's electronic healthcare record. Failures to link up medical records held by hospitals and those kept by their family doctors put patient's lives at risk, according to Prof Steve Field of the Care Quality Commission. He says this could be tackled by giving patients access to their own records – a system pioneered using EMIS software, in an attempt to restore patient confidence, by Dr Amir Hannan when he took over Harold Shipman's practice. "He's got examples of patients being admitted to hospital where they have had to show the consultants their record which may have saved their lives. It's policy to try and make it happen. But it's not moving quickly enough." EMIS was the first provider of GP record systems to permit Patient record access. EMIS said that the numbers of practices providing patients with online access to their records 'shot up' after it allowed GPs to tailor the parts of the record that patients can see. Dr Shaun O'Hanlon, EMIS's Chief Medical Officer says that the legal framework around data sharing is the main problem in integrating patient data because the Data Protection Act 1998 puts responsibilities on GPs to protect the confidentiality of patient data, but at the same time they have a "duty to share" when it is in the best interests of the patient. He says the quickest, easiest route to large scale record sharing is to put patients in the driving seat using smartphone technology. 
He quotes a YouGov poll which found that 85% of the population wanted any medical professional directly responsible for their treatment to have secure electronic access to key data from their GP record, such as long term conditions, medication history or allergies. EMIS Web supports Summary Care Records. Royal Free London NHS Foundation Trust has access to patients' GP records in the Urgent Care Centre run by Haverstock Healthcare in its A&E department using the EMIS Web integrated clinical IT system. This enables the majority of patients to be sent home with written information on self care or referred to a pharmacy. In March 2015 the company made an agreement to share patient data with SystmOne the second biggest supplier of GP software after IMS MAXIMS released an open source version of its software, which acute trusts can use and alter the code to tailor the system to their needs. The companies say they hope to deliver functionality to support cross-organisational working such as shared tasks and shared appointment booking. This agreement is independent of the medical interoperability gateway. Patient information is integrated with the record systems so patients can manage their own care with an information library, health apps, online and mobile services such as GP appointment booking and repeat prescription ordering. Ascribe is a supplier of clinical record systems for hospital pharmacy, A&E, mental health and patient administration (PAS)/electronic patient record systems (EPR). Seventy per cent of NHS secondary care organisations use an Ascribe solution. Tracey Grainger, Head of Digital Primary Care Development at NHS England, who manages the Prime Minister's Challenge Fund in July 2015 asked the company for assistance in obtaining "extracts of de-identified patient level data from systems that either record appointments or record consultations or in some cases both" on a monthly basis back to April 2013. This included the postcode sector of the patient, the date, time and duration of appointments as well as the reason for the consultation. The company is working with Central London Community Healthcare NHS Trust and Waltham Forest Clinical Commissioning Group on two pilots that will allow users of its software to see patient records on TPP's SystmOne and vice versa without any external software. Pharmacy RX Systems, now EMIS Health, is a supplier of software to pharmacists. The ProScript and ProScript Connect software systems are community pharmacy dispensary management systems, managing the dispensary process, labelling and endorsing patient records, ordering and stock control. In February 2018 the company piloted EMIS Web for Pharmacy, which enabled community pharmacists to read and write to the GP patient record and to see a full history of medication and diagnostic results. References External links EMIS Health GP Systems Five minutes with the chief medical officer of Emis Group Electronic health record software companies Private providers of NHS services Companies based in Leeds
11154407
https://en.wikipedia.org/wiki/Vendor%20management%20system
Vendor management system
A vendor management system (VMS) is an Internet-enabled, often Web-based application that acts as a mechanism for business to manage and procure staffing services – temporary, and, in some cases, permanent placement services – as well as outside contract or contingent labor. Typical features of a VMS application include order distribution, consolidated billing and significant enhancements in reporting capability that outperforms manual systems and processes. In the financial industry due to recent regulations (see FRB SR13-19; OCC 2013-29 and CFPB 2012-03), vendor management implies consistent risk classification and due diligence to manage third-party risk. A number of institutions have re-classified or renamed their programs to Third Party Risk Management (TPRM) to align with the verbiage used by the regulatory agencies. Definitions The contingent workforce is a provisional group of workers who work for an organization on a non-permanent basis, also known as freelancers, independent professionals, temporary contract workers, independent contractors or consultants. VMS is a type of contingent workforce management. There are several other terms associated with VMS which are all relevant to the contingent workforce, or staffing industry. A vendor is a person or organization that vends or sells contingent labor. Specifically a vendor can be an independent consultant, a consulting company, or staffing company (who can also be called a supplier – because they supply the labor or expertise rather than selling it directly). A VOP, or Vendor On Premises, is a vendor that sets up shop on the client's premises. They are concerned with filling the labor needs and requirements of the client. The VOP does this either by sourcing labor directly from themselves or from other suppliers with whom they may be competing. Also, the VOP manages and coordinates this labor for the client. A MSP, or Managed Service Provider, manages vendors and measure their effectiveness in recruiting according to the client's standards and requirements. MSPs generally do not recruit directly, but try to find the best suppliers of vendors according to the client's requirements. This, in essence, makes the MSP more neutral than a VOP in finding talent because they themselves do not provide the labor., VMS is a tool, specifically a computer program, that distributes job requirements to staffing companies, recruiters, consulting companies, and other vendors (i.e. Independent consultants). It facilitates the interview and hire process, as well as labor time collection approval and payment. A CSM, or Contracted Service Management System, is a tool which interfaces with the Access Control Systems of large refineries, plants, and manufacturing facilities and the ERP system in order to capture the real-time hours/data between contractors and client. This type of system will typically involve a collaborative effort between the contractor and facility owner to simplify the timekeeping process and improve project cost visibility. An EOR, or Employer of Record, is designed to facilitate all components of independent contractor management, including classification, auditing, and compliance reviews. Employer of Records help drive down the risk of co-employment and allow enterprises to engage and manage independent contractors without the stress of government audits or tax liabilities. History and Evolution of VMS VMS (Vendor Management System) is a fairly recent advancement in managing contingent labor spend. 
VMS is an evolution of the Master Service Provider (MSP) / Vendor-On-Premises (VOP) concept, which became more prevalent in the late-1980s to the mid-1990s when larger enterprises began looking for ways to reduce outsourcing costs. An MSP or VOP was essentially a master vendor who is responsible for on-site management of their customer’s temporary help / contract worker needs. In keeping with the BPO (Business Process Outsourcing) concept, the master vendor enters into subcontractor agreements with approved staffing agencies. It is noteworthy to mention that VMS really started to evolve around the time Michael Hammer and James Champy's Reengineering the Corporation became a bestseller. Large enterprises were looking for ways to compete in the global economy. The main advantage for U.S. businesses during this time period was that their purchasing departments were able to channel new contract personnel requisitions to one source – the VOP – and, in turn, reduce procurement costs by simplifying their payment process. In effect, they only had to write a check to one vendor vis-à-vis hundreds of suppliers. With the Internet came new ways of doing business, which included electronic payment. According to Staffing Industry Analysts, Inc. the emergence of eBusiness, B2B, E-Procurement et al. was the catalyst that began the VMS industry. As businesses began to integrate this e-business concept, online auctions began to appear. The value proposition was, they claimed, that they could reduce spend for purchasing office suppliers, industrial suppliers and other commodities by putting these purchase requests out for bid via an online auction. The Pioneers In 1993, one such company recognized the contingent labor spend management niche as an immense opportunity – Geometric Results Inc. (GRI). At its origin, GRI was a wholly owned Ford Motor Company subsidiary and it was GRI who developed one of the first significant VMS applications in the industry, PeopleNet. Originally starting out as a manual process, some system automation was introduced in 1995. A year later, PeopleNet became an automated VMS system. Overall, GRI managed nearly $200 million in spend at Ford. In 1997, MSX International purchased GRI and continued its growth in the marketplace offering a vendor neutral VMS for automotive industry. MSXI later launched a new proprietary Internet software - b2bBuyer, and the program continued to grow with the expansion of MSXI's European operations. Their success is achieved through best in class processes and technology supported by a vendor neutral model. MSXI also created a 51/49 minority-owned subsidiary and repackaged its web-based application as “TechCentral” to service former GM parts supplier, Delphi Corporation. Today, The Bartech Group—a minority-owned staffing supplier and new MSP—assumed the Delphi VMS in 2006 and currently runs the program using the Fieldglass VMS platform. During the same time ProcureStaff Technologies also launched a vendor neutral VMS solution for human capital management in 1996. ProcureStaff Technologies spun off as a subsidiary of its parent company, Volt Information Sciences to address the glaring need for vendor neutrality in the procurement of this commodity. ProcureStaff Technologies implemented a vendor-neutral model for its first client, a global telecommunications company, because it promoted competition by opening requisitions up to a larger number of pre-qualified staffing suppliers without bias or favoritism. 
The benefits realized to the customer included reduced cycle times and lower overall contingent labor spend. It was not long after this time that other companies, eager to capitalize on the expanding marketplace, entered the fray. Although Chimes was a wholly owned subsidiary of Computer Horizons Corp., the key differentiator between it and other VMS providers that were emerging was that it positioned itself as a “vendor-neutral” provider of Business Process Outsourcing (BPO) services instead of just a technology company that licensed its VMS software. Chimes value proposition was it would create and staff a Program Office (PO) that integrated with the customer’s business Purchasing, HR, and Accounting processes. That is, Chimes realized that simply licensing its software to its customers was a strategy that could not guarantee a successful implementation and realization of the benefits of the VMS concept. In February 2007, Axium International purchased Chimes, Inc. from its parent company (CHC) and merged it with Ensemble Workforce Solutions. The companies together form ECG (Ensemble Chimes Global), the largest VMS provider in the world. Fiscal improprieties led to the unexpected implosion of Chimes (ECG) and its parent company Axium in early 2008. In January 2008, Axium International Inc., the parent of the Ensemble Chimes Global, filed for Chapter 7 bankruptcy in Los Angeles and both Axium International and Ensemble Chimes Global ceased operations. On January 24, 2008, Beeline, the workforce solutions business unit of MPS Group, Inc., announced that it was the successful bidder for the assets of Chimes. The Aberdeen Group, an independent research organization, found that less than 17% of companies who have implemented a program to manage their contingent labor workforce have seen an improvement in spend and source-to-cycle performance metrics. This supports Chimes contention that the best implementations are those that include an emphasis on improving business processes versus just selling a tool to a customer. Benefits to U.S. Businesses By 2002, there were over 50 VMS solution providers. The software was now web-based, so stakeholders – customer hiring managers, VMS program office staff, and suppliers – could access the system from the internet. Typical benefits included: Streamlined requisition approval workflow Reduced time-to-fill cycle times Simplified vendor management Bill rate standardization / management Optimization of supplier base Consolidated invoicing Improved security and asset management Availability of vendor performance metrics Visibility and cost control over maverick spend 10-20% reduction in contingent labor spend VMS Trends Rapid Growth and Maturity According to Aberdeen research, 72% of US companies have a single program for managing contract labor and professional services sourcing and procurement. This is amazing proliferation since VMS software has only been around for about ten years. This proves, like everything else in a broadband world, the Industry (Maturity) Life Cycle for the VMS market is on an accelerated curve. 
Although the industry is still in the latter phase of the growth stage, vendors should be aware of the symptoms that indicate the arrival of industry decline, such as when: A) competitive pressures force MSP/VMS margins to weaken; B) there is a rash of competitor consolidation via merger, acquisition or abandonment; C) sales expansion within the existing customer base is dramatically reduced; and D) sales volume to new customers in the US declines. Once customers have realized the initial benefits of gaining control of and managing their contingent labor workforce, there will be efforts towards continuous improvement, including cost reductions as well as analysis of which other indirect spend categories the program can be expanded to. Opportunities for VMS providers include project-based spend, independent contractors, and professional services, among others. Increased Scrutiny of Vendors In the United States, agencies such as the Consumer Financial Protection Bureau (CFPB), the Federal Financial Institutions Examination Council (FFIEC) and the Federal Reserve Board (FRB) are becoming more involved in assessing the regulatory compliance of financial institutions. The increased focus means that organizations are re-examining how they select their vendors. Emphasis on Measuring Cyber Risk A quick analysis of data breaches and exposures since 2014 indicates that breaches have become more frequent and larger in scale than ever. With much more data generated and stored in the cloud, malicious actors are keen to exploit systems that are wrongly configured. Hackers gain access to data by exploiting vendors' security loopholes. One challenge in measuring a vendor's cyber risk is that the conventional way of assessing a vendor via questionnaire is dated and not comprehensive enough. See also Contingent workforce Professional employer organization Human resources Human resource management Contingent labor Contractor management IT cost transparency References Business software Human resource management Human resource management software
2155539
https://en.wikipedia.org/wiki/Ateneo%20de%20Zamboanga%20University
Ateneo de Zamboanga University
The Ateneo de Zamboanga University (Filipino: Pamantasang Ateneo de Zamboanga), also referred to by its acronym AdZU is a private Catholic Coeducational basic and higher education institution run by the Philippines Province of the Society of Jesus in Zamboanga City, Zamboanga del Sur, Philippines. AdZU began in 1912 as Escuela Catolica, a parochial school run by Spanish Jesuits. It is the second oldest Jesuit school in the Philippines and the second Jesuit school to be named Ateneo. It initially offered primary and secondary education for boys. It became a college in 1952 and a university in August 2001. It operates in three campuses. The main campus located in La Purisima Street, Zamboanga City is where the tertiary and senior high school departments are located and the annex campus in Barangay Tumaga, outside Zamboanga City proper, is where the junior high school and grade school units are located. The third and newest campus is the Lantaka Campus located in N. S. Valderroza St. Zamboanga City. This campus was a former resort hotel which was repurposed by the AdZU. It is now being used for educational, spiritual, religious, and social development purposes. History Pre-war Ateneo The Ateneo de Zamboanga University began in 1912 as Escuela Catolica, a parochial school run by Spanish Jesuits at the old site of the Immaculate Conception Church across from the Sunken Garden. Manuel M. Sauras was the first director and served in that capacity up to 1926. Escuela Catolica served as the parochial school of the Immaculada Concepcion Parish headed by Miguel Saderra Mata. Classes were held on the ground floor of the rectory of the parish, which was adjacent to Plaza Pershing. While the curriculum was similar to that of the public elementary school, the Spanish Jesuits emphasized religious teaching alongside quality education. Catholic education later became a factor in the decision of the Jesuits to open a school that was empowered to issue the titulo oficial upon completion of studies. In 1916, the Escuela Catolica expanded and became the Ateneo Elementary School. Its grade school opened that year with seven grades. The school name was changed to Ateneo de Zamboanga when its high school opened in 1928. High school classes were held on the top floor of the three-story Ateneo building along I. Magno corners P. Reyes and Urdaneta streets. The building was the Mindanao Theater, now the site of the City Theater. Five lay teachers and the Jesuit director made up the faculty. The elementary school occupied the lower floors. The first high school students graduated from Ateneo in 1932. The ten young male graduates belonged to Zamboanga City's crème de la crème, one of whom was Roseller T. Lim who would become the first Zamboangueño senator of the Philippines. The American Jesuits took over from the Spanish Jesuits in 1930, with Thomas J. Murray as the first American director. In 1932 the government gave official recognition to the high school. In 1938, AdZU opened night classes in commerce and pre-law, thus pioneering its expansion to college, later interrupted by World War II. Pre-war Ateneo expanded with an enrollment of 230 in the grade school and 376 in the high school under Eusebio G. Salvador. A Zamboangueño and a product of Escuela Catolica, Salvador was the first Filipino director of AdZU. In 1938 a library was built on the first floor of the Knights of Columbus building. A façade, an auditorium, and an annex were also built. John Shinn was appointed headmaster of the grade school and Francis X. 
Clark became the principal and dean of discipline of the high school. The school was closed during World War II when the Japanese used it as a public school until it was shelled and bombed by American forces on March 8 and 9 in 1945, before the liberation of the city. Post-war Ateneo The high school and intermediate classes (grades 5 and 6) reopened in 1946, in a nipa-sawali building on a new site outside of the poblacion called the Jardin de Chino near Camino Nuevo. It was providential that shortly before the outbreak of the war, Eusebio G. Salvador had bought 18 adjoining lots in Jardin de Chino on Bailen Street (now La Purisima Street). In 1947 he bought an additional 1.5 hectares along Camino Nuevo St. adjoining the Bailen St. property to bring to a total of 4.3 hectares the La Purisima campus. Together with Frs. Kyran B. Egan and Cesar E. Maravilla, he reopened high school classes. In 1949, Ateneo became independent of the Jesuit mission in Zamboanga, separating itself from the Immaculate Conception Cathedral. It was officially recognized as a Jesuit school separate from the parish. The postliberation years were a period of rapid physical, curricular, and enrollment expansion of the school. By 1949, AdZU underwent a major make-over. A solid structure of fine wood replaced the nipa-and-sawali classrooms. The gymnasium-auditorium (now Brebeuf Gymnasium) was constructed in 1950, making it the oldest structure on campus today. A college was established in 1952 and a graduate school in 1976. In the years between 1946 and 1952, a total of 2,766 students graduated from high school. The college opened in 1952 with a two-year collegiate program, which offered pre-law courses and a degree in Associate in Arts. The college gradually expanded to include four-year bachelor's programs in the arts, commerce, education, and nursing. In 1956, the college and high school became separate departments. Expansion AdZ experienced a “building boom” beginning with the completion of the Jesuit Residence in 1959, Sacred Heart Chapel in 1961, Gonzaga Hall in 1964, and Canisius Hall in 1967. In 1972, the two one-storey grade school buildings, Berchmans and Kostka halls, were built. By the mid-70s, basic education was well established with an enrolment reaching 381 students in the grade school. In 1976, higher education expanded to include the graduate school which opened an MBA course, the first graduate program in business administration in Region IX. Soon after, other master's degrees followed: public administration, nursing, guidance and counselling, and education. These programs were offered under the guidance of Ernesto A. Carretero, the first president elected by the Board of Trustees of AdZ. In the years that followed, more changes and developments took place. Campion Hall and Bellarmine Hall were built in 1979 and 1980. The Fr. Jose Ma. Rosauro, S.J. Center was finished in 1986. The Learning Resource Center was inaugurated in 1987 to accommodate the library, book center, audio-visual unit, and various offices. Fr. Carretero obtained the PAASCU accreditation of the high school in 1975, the liberal arts and commerce in 1981, and education, nursing, and the grade school in 1982. AdZ attained Level III in the accreditation of these colleges, the highest rank given to tertiary schools in the Philippines at that time. In 1984, girls were accepted for the first time in the grade school. Twenty-two girls were part of the grade six graduating class in 1992. 
They became the first batch of girls to study at the Ateneo High School in 1992. Presidency of Fr. William H. Kreutz (1989-2006) Fr. Carretero, S.J., and after 1989 Fr. William H. Kreutz, S.J., Ateneo presidents, sought university status for the school. By this time and into the mid-90s the school had added undergraduate programs including accountancy, arts and sciences, business management, and computer science. In 1994, a group of Zamboanga-based doctors entered into a consortium with the Ateneo de Zamboanga under Fr. Kreutz's presidency to establish a medical school named Zamboanga Medical School Foundation (ZMSF). It was located on the La Purisima campus. In 2004, ZMSF was turned over to Ateneo. The medical school became the Ateneo de Zamboanga University School of Medicine. Partnering with the universities of Calgary, Laos, Nepal, and Cambodia, the School of Medicine uses problem-based learning and community-based approach to medical practice with emphasis on serving communities in Western Mindanao. University status On August 20, 2001, AdZ was officially declared a university by the Commission on Higher Education (CHED). In the same year it was granted autonomous status by CHED, making Ateneo de Zamboanga University (AdZU) one of only thirty higher education institutions in the Philippines, and the only one in Western Mindanao, to be granted full deregulation and autonomy. The new College building and the Multi-Purpose Covered Courts were inaugurated in 2001. New campus in Tumaga On July 31, 2005, groundbreaking was held for the new high school building in Savanah, Tumaga, Zamboanga City. Fr. Kreutz's 18-year presidency came to a close together with the completion of the new campus in Tumaga, which was later named Fr. William H. Kreutz, S.J., campus. The high school was transferred to this new site in 2006 for school year 2006–2007. Presidency of Fr. Antonio F. Moreno (2006-2013) On October 11, 2006, Fr. Antonio F. Moreno, S.J., was elected as the second president of the University, replacing Fr. William H. Kreutz, S.J. He assumed office on May 21, 2007, and was officially installed on September 22, 2007. Meanwhile, AdZU and Xavier University – Ateneo de Cagayan launched the Xavier University College of Law – Zamboanga on 25 June 2011. The College of Law is in a new four-storey building named in honor of Fr. Manuel Ma. Sauras, S.J. Sauras Hall is also home to the new university cafeteria. The school's Sacred Heart chapel, built in 1961, was replaced after 50 years by a new Spanish-colonial University Church of the Sacred Heart in time for the university's centennial in 2012. Presidency of Fr. Karel S. San Juan (2013-present) When in 2013 Fr. Moreno became Provincial Superior of the Philippine Province of the Society of Jesus, the University Board of Trustees elected Fr. Karel S. San Juan, S.J., as its third president, on February 23, 2013. As Fr. San Juan was in his final year of Jesuit spiritual formation, he did not assume the presidency until October 8, 2013. Groundbreaking for a new grade school building at the Tumaga campus was held on July 30, 2013; the new building was opened on June 13, 2015. Plans for this campus include construction of an auditorium, an amphitheater, sports facilities, and a chapel which will replace the chapel in the high school building. Senior High School On June 13, 2016, the Senior High School unit was officially inaugurated at the Fr Salvador Eusebio Campus, La Purisima St, Zamboanga City. The unit originally occupied the Kostka and Xavier Halls. 
On April 3, 2017, the Ateneo de Zamboanga University held the groundbreaking and contract-signing ceremonies for the construction of the new Senior High School (SHS) building. This five-story building, later named the Francisco Wee Saavedra (FWS) building, houses classrooms, offices, laboratories, a chapel, a prayer room, a cafeteria and commercial spaces. The FWS building was inaugurated on December 8, 2018, coinciding with the last day of the Ateneo Fiesta that year. Brebeuf Gym Fire and RISE Capital Campaign On July 7, 2017, the historic 67-year-old Brebeuf Gymnasium was razed to the ground by fire; the Sauras, Kostka, and Gonzaga Halls were also affected. The Zamboanga City government estimated the damages to amount to 5 million. In response, the University, led by the University Office for Advancement, initiated the RISE Capital Campaign and adjusted its campus expansion plans. The campaign aims to raise PHP 1 billion, allocating PHP 700 million to infrastructure, PHP 200 million to scholarships, and PHP 100 million to research. The Gonzaga Hall was rebuilt, while the Sauras Hall was retrofitted with carbon fiber materials. On the former site of the Brebeuf Gymnasium, a second Multi-Purpose Covered Court was built. Lantaka Campus The former Lantaka Hotel was donated by the Walstrom family to AdZU. The hotel was converted into a satellite campus of AdZU. The inauguration of the campus was held on March 19, 2019. The new AdZU campus is being used as a place for students' and staff's retreats, recollections, meetings, seminars and conferences. Academics Programs AdZU offers academic programs at the kindergarten, elementary, junior high, senior high, undergraduate, and graduate levels. As is the norm in the Philippine education system, instruction is primarily delivered in the English language with the exception of Philippine and foreign language subjects. Given that Jesuit educational institutions in the Philippines have a shared liberal arts tradition, AdZU's academic programs always have core subjects that educate students across all levels not only in the government-required areas of history, social studies, and literature, but also in Jesuit philosophy and Catholic theology. However, students of any religious background are allowed to study at AdZU. Graduate and professional schools The AdZU Graduate School offers master's-level and Doctor of Philosophy (PhD) programs across the disciplines covered by the five undergraduate schools. In addition, AdZU has career-specific professional schools, including the School of Medicine and the College of Law. Undergraduate schools The AdZU undergraduate schools comprise the following: School of Education (SEd) - offers degrees in education as well as a Certificate in Professional Education (CPE) program for graduates of other undergraduate programs. SEd is considered by the Commission on Higher Education as a Center of Excellence for Teacher Education. School of Liberal Arts (SLA) - offers programs in the humanities and social sciences. School of Management and Accountancy (SMA) - facilitates programs in the realms of business and management. College of Nursing (CON) - offers a single degree program, the Bachelor of Science in Nursing. College of Science and Information Technology (CSIT) - confers four-year Bachelor of Science degrees in natural sciences, mathematics, information technology, and engineering, including the Philippines's first fully fledged (non-minor) Biomedical Engineering program, as well as a two-year associate degree program.
CSIT is considered by the Commission on Higher Education as a Center of Development for Information Technology. Basic Education AdZU offers education in the Grade School (GS), Junior High School (JHS), and Senior High School (SHS) units. The Grade School unit offers primary education in the 1st to 6th grade as well as two years of kindergarten (preparatory) education. The Junior High School unit covers the 7th to 10th grade of secondary education, and offers accelerated mathematics curricula to qualified students. Both units achieved the top level of PAASCU accreditation for their education levels. The Senior High School offers 11th and 12th grade education. The unit was launched following the K-12 curriculum overhaul project of the Department of Education. It offers the Science, Technology, Engineering, and Mathematics (STEM); Accountancy and Business Management (ABM); Humanities and Social Sciences (HUMSS); and Information and Communications Technology - 2D Animation (ICT-2DA) strands. Admissions AdZU operates with a selective admissions policy. All of the units of the University require, among other things, passing an entrance examination (which in the case of the Grade School is instead termed an 'assessment test'); past education records are also required for examination. The School of Medicine requires results in the National Medical Admission Test while the College of Law requires results in the PhilSAT; all other units administer their own entrance examinations. Accreditation After receiving a five-year Level III Accreditation for the Arts and Sciences, Education, Business and Accountancy, and Nursing programs in May 2014, AdZU requested Institutional Accreditation, an evaluation process that looks at the overall quality of the school. In September 2014, the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU) granted Institutional Accreditation to Ateneo de Zamboanga. The University has also been declared by the Commission on Higher Education as a Center of Excellence (CoE) in Teacher Education and a Center of Development (CoD) in Information Technology. In May 2017, the Junior High School was conferred a Level III accreditation status from PAASCU accreditors. Campuses AdZU operates three campuses. As part of the RISE ADZU capital campaign, the University is currently undergoing a campus revitalization and expansion program. Fr. Eusebio G. Salvador, S.J. (Main) Campus The main campus (4.3 hectares) on La Purisima Street, named for Fr. Eusebio Salvador, S.J., houses the undergraduate and graduate schools and the Senior High School unit. Buildings used as learning spaces include the Bellarmine-Campion building (home to the Bellarmine and Campion halls), the Canisius and Gonzaga halls (collectively colloquially referred to as the C building), the Xavier Hall, the Kostka Hall, and the Francisco Wee Saavedra (FWS) Building. The FWS Building permanently hosts the Senior High School unit, while other buildings are shared by the undergraduate, graduate, and senior high schools. The Learning Resource Center on-campus houses the Fr. Jose Bacatan, S.J. library, the University's main library, as well as several important offices and the administration of the Graduate School. The campus also contains two Multi-Purpose Covered Courts, the Backfield, the Bellarmine-Campion Quadrangle, the Jesuit Residences, the Rosauro (JMR) Hall, and the Sauras Hall. Fr. William H. Kreutz, S.J. 
Campus The second campus (8.3 hectares), named for Fr. William Kreutz, S.J., is located in Barangay Tumaga, outside Zamboanga City proper, and is home to the Junior High School and Grade School units. The campus initially opened as the new home of the Ateneo de Zamboanga University High School, which transferred from the main campus on La Purisima St. as it needed more space for expansion due to its growing student population. Such expansion was no longer possible on the crowded Main Campus. Eventually, the Grade School was also moved to this campus in 2015, and the campus became the basic education center of the University. Aside from the junior high school and grade school buildings, the campus also includes a soccer field, a multi-purpose covered court, and a playground. Lantaka Campus The Ateneo de Zamboanga University Lantaka Campus is situated along the developed historic coastline of Zamboanga City. The campus was previously a resort hotel. The University repurposed the site for educational, spiritual, religious, and social development purposes. The new Ateneo campus will be used as a place for students' and staff's retreats, recollections, meetings, seminars and conferences. Research The University Research Office (URO), formerly known as the Ateneo Research Center (ARC), is the office of AdZU that oversees the research activities of the University. The URO is supervised by the University Research and Publication Council (URPC), a policy-making body concerned with setting the research agenda and providing mentoring to AdZU academics. The Ateneo de Zamboanga University Press is the publisher of the Asia Mindanaw: Dialogue on Peace and Development journal, which covers research papers and projects that examine Western Mindanao and the Sulu archipelago, as well as the dynamics between Mindanao itself and Asia as a whole. School motto The motto of the school is Pro Deo et Patria (In the Service of God and Country). School mascot The "Azul Aguila" (Blue Eagle) is the mascot of the Ateneo de Zamboanga. It combines the majesty of the eagle with the color of the school's patroness, the Blessed Virgin Mary, under the title of Our Lady of the Immaculate Conception. University church As a Catholic school, AdZU has a church on campus, the University Church of the Sacred Heart of Jesus. Student life The units of AdZU together have dozens of student organizations that cater to a wide variety of preferences, ranging from advocacy and humanitarian groups such as institutional chapters of Rotary International's Interact to niche interests such as debate and worship music. In addition, the different Schools (including undergraduate, professional, and basic education schools) have Academic Organizations (AOs) that manage the specific needs of students in a given School. These AOs are instrumental in rallying students from these schools together for matters of interest and importance, as well as organizing teams for sports festivals and other intra-University activities. Athletics The college varsity teams compete in the Private Schools Athletic Association (PRISAA), and the grade school and high school varsity teams participate in the Private Schools of Zamboanga City Athletic Association (PSZCAA). AdZU holds a school-wide annual sportsfest called "Ateneo Fiesta," a week-long event beginning in the last week of November. Most of the high school and college competitors in the Fiesta are varsity athletes preparing for competition in sports events on the PRISAA agenda.
The Fiesta is considered by the students as the most important event of the academic year. Student councils The Sanggunian ng mga Mag-aaral ng Ateneo de Zamboanga University (SMADZU) was the College unit's student government before 2009. Currently in its place is El Consejo Atenista, which was created with the objective of being a more representative student government. The basic education units also have their own student councils; the junior high school student government is called the Council of Leaders, while the senior high school government is called the Ateneo Student Executive Council (ASEC). The grade school's council is simply named Student Government. Student publications Official student publications are The Beacon Magazine for the College unit, The Oculus Publications and Vista de Aguila for the Senior High School, Blue Eagle Publications and La Liga Atenista for the Junior High School, and The Quill for the Grade School. The student publications of the University regularly compete in local, regional, and national journalism competitions. See also List of Jesuit educational institutions in the Philippines List of Jesuit educational institutions Zamboanga City List of Jesuit sites References External links Official website Jesuit universities and colleges in the Philippines Educational institutions established in 1912 Universities and colleges in Zamboanga City 1912 establishments in the Philippines Schools in Zamboanga City
40115446
https://en.wikipedia.org/wiki/Adam%20Wierman
Adam Wierman
Adam Wierman is Professor of Computer Science in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He is known for his work on scheduling, heavy tails, green computing, queueing theory, and algorithmic game theory. Academic biography Wierman studied at Carnegie Mellon University, where he completed his BS in Computer Science and Mathematics in 2001 and his MS and PhD degrees in Computer Science in 2004 and 2007, respectively. His PhD work was supervised by Mor Harchol-Balter. His dissertation received the Carnegie Mellon School of Computer Science Distinguished Dissertation Award. He has been on the faculty of the California Institute of Technology since 2007. Research Wierman's research has centred on resource allocation and scheduling decisions in computer systems and services. More specifically, his work has focused both on developing analytic techniques in stochastic modelling, queueing theory, scheduling theory, and game theory, and on applying these techniques to application domains such as energy-efficient computing, data centres, social networks, and electricity markets. Awards and honors Wierman was the recipient of an NSF CAREER award in 2009 and the ACM SIGMETRICS Rising Star award in 2011. His work has received "Best Paper" awards at the ACM SIGMETRICS, IEEE INFOCOM, and IFIP Performance conferences, among others. An extension of his work was used in HP's Net-zero Data Center Architecture, which was named a 2013 Computerworld Honors Laureate. His work received the 2014 IEEE William R. Bennett Prize. References California Institute of Technology faculty Living people Theoretical computer scientists 1979 births
27317345
https://en.wikipedia.org/wiki/YEd
YEd
yEd is a general-purpose diagramming program with a multi-document interface. It is a cross-platform application written in Java that runs on Windows, Linux, Mac OS, and other platforms that support the Java Virtual Machine. It is released under a proprietary software license that allows a single copy to be used gratis. An online version of yEd, yEd Live, also exists, and there is a Confluence version of yEd, Graphity for Confluence. yEd can be used to draw many different types of diagrams, including flowcharts, network diagrams, UML diagrams, BPMN diagrams, mind maps, organization charts, and entity-relationship diagrams. yEd also allows the use of custom vector and raster graphics as diagram elements. yEd loads and saves diagrams from/to GraphML, an XML-based format. It can also print very large diagrams that span multiple pages. Features Automatic layout yEd can automatically arrange diagram elements using a variety of graph layout algorithms, including force-based layout, hierarchical layout (for flowcharts), orthogonal layout (for UML class diagrams), and tree layout (for organization charts). Data exchange yEd can import data in various formats to generate diagrams from it. Import formats include the Microsoft Excel .xls format for spreadsheet data, the Gedcom format for genealogical data, and also arbitrary XML-based file formats, which are transformed by means of XSLT stylesheets. Predefined XSLT stylesheets provided by yEd can process the Apache Ant build script format used to define dependency information in software build processes and the OWL file format for the description of ontologies. Other XML-based data is processed in a generic way. yEd can export diagrams to various raster and vector formats, including GIF, JPEG, PNG, EMF, BMP, PDF, EPS, and SVG. It can also export to the SWF (Shockwave Flash) file format and HTML image maps. The structural information of a diagram can be exported as GML (Graph Modeling Language) and TGF (Trivial Graph Format). Development yEd is a product of yWorks GmbH, a German software company. See also List of UML tools References External links Diagramming software Graph drawing software UML tools Unix software Classic Mac OS software Windows text-related software MacOS text-related software Cross-platform software Java (programming language) software
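Since yEd's native format is plain GraphML, diagrams can also be generated programmatically and then opened in the editor for automatic layout and styling. A minimal sketch in Python, assuming the third-party networkx package is installed (the script only writes the file; yEd itself is not scripted here):

```python
# Minimal sketch: write a small directed graph to GraphML, a file yEd can open.
# Assumes the third-party networkx package is installed (pip install networkx);
# node geometry and colours would be applied afterwards inside yEd.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Start", "Validate input")
g.add_edge("Validate input", "Process")
g.add_edge("Validate input", "Reject")
g.add_edge("Process", "Done")

nx.write_graphml(g, "flow.graphml")  # open flow.graphml in yEd, then run a layout
```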
29229
https://en.wikipedia.org/wiki/Slot%20machine
Slot machine
A slot machine (American English), known variously as a fruit machine (British English), puggy (Scottish English), the slots (Canadian English and American English), poker machine/pokies (Australian English and New Zealand English), fruities (British English) or slots (American English), is a gambling machine that creates a game of chance for its customers. Slot machines are also known pejoratively as one-armed bandits because of the large mechanical levers affixed to the sides of early mechanical machines and the games' ability to empty players' pockets and wallets as thieves would. A slot machine's standard layout features a screen displaying three or more reels that "spin" when the game is activated. Some modern slot machines still include a lever as a skeuomorphic design trait to trigger play. However, the mechanics of early machines have been superseded by random number generators, and most are now operated using buttons and touchscreens. Slot machines include one or more currency detectors that validate the form of payment, whether coin, cash, voucher, or token. The machine pays out according to the pattern of symbols displayed when the reels stop "spinning". Slot machines are the most popular gambling method in casinos and constitute about 70% of the average U.S. casino's income. Digital technology has resulted in variations on the original slot machine concept. As the player is essentially playing a video game, manufacturers are able to offer more interactive elements, such as advanced bonus rounds and more varied video graphics. Etymology The "slot machine" term derives from the slots on the machine for inserting and retrieving coins. "Fruit machine" comes from the traditional fruit images on the spinning reels such as lemons and cherries. History Sittman and Pitt of Brooklyn, New York developed a gambling machine in 1891 that was a precursor to the modern slot machine. It contained five drums holding a total of 50 card faces and was based on poker. The machine proved extremely popular, and soon many bars in the city had one or more of them. Players would insert a nickel and pull a lever, which would spin the drums and the cards that they held, the player hoping for a good poker hand. There was no direct payout mechanism, so a pair of kings might get the player a free beer, whereas a royal flush could pay out cigars or drinks; the prizes were wholly dependent upon what the establishment would offer. To improve the odds for the house, two cards were typically removed from the deck, the ten of spades and the jack of hearts, doubling the odds against winning a royal flush. The drums could also be rearranged to further reduce a player's chance of winning. Because of the vast number of possible wins in the original poker-based game, it proved practically impossible to make a machine capable of awarding an automatic payout for all possible winning combinations. At some time between 1887 and 1895, Charles Fey of San Francisco, California devised a much simpler automatic mechanism with three spinning reels containing a total of five symbols: horseshoes, diamonds, spades, hearts and a Liberty Bell; the bell gave the machine its name. By replacing ten cards with five symbols and using three reels instead of five drums, the complexity of reading a win was considerably reduced, allowing Fey to design an effective automatic payout mechanism. Three bells in a row produced the biggest payoff, ten nickels (50¢). 
Liberty Bell was a huge success and spawned a thriving mechanical gaming device industry. After a few years, the devices were banned in California, but Fey still could not keep up with the demand for them from elsewhere. The Liberty Bell machine was so popular that it was copied by many slot-machine manufacturers. The first of these, also called the "Liberty Bell", was produced by the manufacturer Herbert Mills in 1907. By 1908, many "bell" machines had been installed in most cigar stores, saloons, bowling alleys, brothels and barber shops. Early machines, including an 1899 Liberty Bell, are now part of the Nevada State Museum's Fey Collection. The first Liberty Bell machines produced by Mills used the same symbols on the reels as did Charles Fey's original. Soon afterward, another version was produced with patriotic symbols, such as flags and wreaths, on the wheels. Later, a similar machine called the Operator's Bell was produced that included the option of adding a gum-vending attachment. As the gum offered was fruit-flavored, fruit symbols were placed on the reels: lemons, cherries, oranges and plums. A bell was retained, and a picture of a stick of Bell-Fruit Gum, the origin of the bar symbol, was also present. This set of symbols proved highly popular and was used by other companies that began to make their own slot machines: Caille, Watling, Jennings and Pace. A commonly used technique to avoid gambling laws in a number of states was to award food prizes. For this reason, a number of gumball and other vending machines were regarded with mistrust by the courts. The two Iowa cases of State v. Ellis and State v. Striggles are both used in criminal law classes to illustrate the concept of reliance upon authority as it relates to the axiomatic ignorantia juris non excusat ("ignorance of the law is no excuse"). In these cases, a mint vending machine was declared to be a gambling device because the machine would, by internally manufactured chance, occasionally give the next user a number of tokens exchangeable for more candy. Despite the display of the result of the next use on the machine, the courts ruled that "[t]he machine appealed to the player's propensity to gamble, and that is [a] vice." In 1963, Bally developed the first fully electromechanical slot machine called Money Honey (although earlier machines such as Bally's High Hand draw-poker machine had exhibited the basics of electromechanical construction as early as 1940). Its electromechanical workings made Money Honey the first slot machine with a bottomless hopper and automatic payout of up to 500 coins without the help of an attendant. The popularity of this machine led to the increasing predominance of electronic games, with the side lever soon becoming vestigial. The first video slot machine was developed in 1976 in Kearny Mesa, California by the Las Vegas–based Fortune Coin Co. This machine used a modified Sony Trinitron color receiver for the display and logic boards for all slot-machine functions. The prototype was mounted in a full-size, show-ready slot-machine cabinet. The first production units went on trial at the Las Vegas Hilton Hotel. After some modifications to defeat cheating attempts, the video slot machine was approved by the Nevada State Gaming Commission and eventually found popularity on the Las Vegas Strip and in downtown casinos. Fortune Coin Co. and its video slot-machine technology were purchased by IGT (International Gaming Technology) in 1978. 
The first American video slot machine to offer a "second screen" bonus round was Reel ’Em In, developed by WMS Industries in 1996. This type of machine had appeared in Australia from at least 1994 with the Three Bags Full game. With this type of machine, the display changes to provide a different game in which an additional payout may be awarded. Operation Depending on the machine, the player can insert cash or, in "ticket-in, ticket-out" machines, a paper ticket with a barcode, into a designated slot on the machine. The machine is then activated by means of a lever or button (either physical or on a touchscreen), which activates reels that spin and stop to rearrange the symbols. If a player matches a winning combination of symbols, the player earns credits based on the paytable. Symbols vary depending on the theme of the machine. Classic symbols include objects such as fruits, bells, and stylized lucky sevens. Most slot games have a theme, such as a specific aesthetic, location, or character. Symbols and other bonus features of the game are typically aligned with the theme. Some themes are licensed from popular media franchises, including films, television series (including game shows such as Wheel of Fortune), entertainers, and musicians. Multi-line slot machines have become more popular since the 1990s. These machines have more than one payline, meaning that visible symbols that are not aligned on the main horizontal may be considered as winning combinations. Traditional three-reel slot machines commonly have one, three, or five paylines while video slot machines may have 9, 15, 25, or as many as 1024 different paylines. Most accept variable numbers of credits to play, with 1 to 15 credits per line being typical. The higher the amount bet, the higher the payout will be if the player wins. One of the main differences between video slot machines and reel machines is in the way payouts are calculated. With reel machines, the only way to win the maximum jackpot is to play the maximum number of coins (usually three, sometimes four or even five coins per spin). With video machines, the fixed payout values are multiplied by the number of coins per line that is being bet. In other words: on a reel machine, the odds are more favorable if the gambler plays with the maximum number of coins available. However, depending on the structure of the game and its bonus features, some video slots may still include features that improve chances at payouts by making increased wagers. "Multi-way" games eschew fixed paylines in favor of allowing symbols to pay anywhere, as long as there is at least one in at least three consecutive reels from left to right. Multi-way games may be configured to allow players to bet by-reel: for example, on a game with a 3x5 pattern (often referred to as a 243-way game), playing one reel allows all three symbols in the first reel to potentially pay, but only the center row pays on the remaining reels (often designated by darkening the unused portions of the reels). Other multi-way games use a 4x5 or 5x5 pattern, where there are up to five symbols in each reel, allowing for up to 1,024 and 3,125 ways to win respectively. The Australian manufacturer Aristocrat Leisure brands games featuring this system as "Reel Power", "Xtra Reel Power" and "Super Reel Power" respectively. A variation involves patterns where symbols pay adjacent to one another. Most of these games have a hexagonal reel formation, and much like multi-way games, any patterns not played are darkened out of use. 
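The way counts quoted above follow from multiplying the number of visible symbol positions contributed by each reel in play. A short illustrative sketch of that arithmetic (the by-reel betting rule is the simplified one described above, not any particular manufacturer's implementation):

```python
# Illustrative arithmetic for multi-way games: each reel in play contributes its
# visible rows as independent choices; reels not bought count only the centre row.
# Figures match the 243 / 1,024 / 3,125-way examples in the text.
def total_ways(rows_per_reel, reels):
    return rows_per_reel ** reels

def ways_when_betting_reels(reels_bet, rows_per_reel=3, total_reels=5):
    # Bought reels contribute every visible row; the remaining reels
    # contribute only the centre row (one position each).
    return rows_per_reel ** reels_bet * 1 ** (total_reels - reels_bet)

print(total_ways(3, 5), total_ways(4, 5), total_ways(5, 5))    # 243 1024 3125
print(ways_when_betting_reels(1), ways_when_betting_reels(5))  # 3 243
```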
Denominations can range from 1 cent ("penny slots") all the way up to $100.00 or more per credit. The latter are typically known as "high limit" machines, and machines configured to allow for such wagers are often located in dedicated areas (which may have a separate team of attendants to cater to the needs of those who play there). The machine automatically calculates the number of credits the player receives in exchange for the cash inserted. Newer machines often allow players to choose from a selection of denominations on a splash screen or menu. Terminology A bonus is a special feature of the particular game theme, which is activated when certain symbols appear in a winning combination. Bonuses and the number of bonus features vary depending upon the game. Some bonus rounds are a special session of free spins (the number of which is often based on the winning combination that triggers the bonus), often with a different or modified set of winning combinations as the main game and/or other multipliers or increased frequencies of symbols, or a "hold and re-spin" mechanic in which specific symbols (usually marked with values of credits or other prizes) are collected and locked in place over a finite number of spins. In other bonus rounds, the player is presented with several items on a screen from which to choose. As the player chooses items, a number of credits is revealed and awarded. Some bonuses use a mechanical device, such as a spinning wheel, that works in conjunction with the bonus to display the amount won. A candle is a light on top of the slot machine. It flashes to alert the operator that change is needed, hand pay is requested or a potential problem with the machine. It can be lit by the player by pressing the "service" or "help" button. Carousel refers to a grouping of slot machines, usually in a circle or oval formation. A coin hopper is a container where the coins that are immediately available for payouts are held. The hopper is a mechanical device that rotates coins into the coin tray when a player collects credits/coins (by pressing a "Cash Out" button). When a certain preset coin capacity is reached, a coin diverter automatically redirects, or "drops", excess coins into a "drop bucket" or "drop box". (Unused coin hoppers can still be found even on games that exclusively employ Ticket-In, Ticket-Out technology, as a vestige.) The credit meter is a display of the amount of money or number of credits on the machine. On mechanical slot machines, this is usually a seven-segment display, but video slot machines typically use stylized text that suits the game's theme and user interface. The drop bucket or drop box is a container located in a slot machine's base where excess coins are diverted from the hopper. Typically, a drop bucket is used for low-denomination slot machines and a drop box is used for high-denomination slot machines. A drop box contains a hinged lid with one or more locks whereas a drop bucket does not contain a lid. The contents of drop buckets and drop boxes are collected and counted by the casino on a scheduled basis. EGM is short for "Electronic Gaming Machine". Free spins are a common form of bonus, where a series of spins are automatically played at no charge at the player's current wager. Free spins are usually triggered via a scatter of at least three designated symbols (with the number of spins dependent on the number of symbols that land). 
Some games allow the free spins bonus to "retrigger", which adds additional spins on top of those already awarded. There is no theoretical limit to the number of free spins obtainable. Some games may have other features that can also trigger over the course of free spins. A hand pay refers to a payout made by an attendant or at an exchange point ("cage"), rather than by the slot machine itself. A hand pay occurs when the amount of the payout exceeds the maximum amount that was preset by the slot machine's operator. Usually, the maximum amount is set at the level where the operator must begin to deduct taxes. A hand pay could also be necessary as a result of a short pay. Hopper fill slip is a document used to record the replenishment of the coin in the coin hopper after it becomes depleted as a result of making payouts to players. The slip indicates the amount of coin placed into the hoppers, as well as the signatures of the employees involved in the transaction, the slot machine number and the location and the date. MEAL book (Machine entry authorization log) is a log of the employee's entries into the machine. Low-level or slant-top slot machines include a stool so the player may sit down. Stand-up or upright slot machines are played while standing. Optimal play is a payback percentage based on a gambler using the optimal strategy in a skill-based slot machine game. Payline is a line that crosses through one symbol on each reel, along which a winning combination is evaluated. Classic spinning reel machines usually have up to nine paylines, while video slot machines may have as many as one hundred. Paylines can be of various shapes (horizontal, vertical, oblique, triangular, zigzag, etc.). Persistent state refers to passive features on some slot machines, some of which are able to trigger bonus payouts or other special features if certain conditions are met over time by players on that machine. Roll-up is the process of dramatizing a win by playing sounds while the meters count up to the amount that has been won. Short pay refers to a partial payout made by a slot machine, which is less than the amount due to the player. This occurs if the coin hopper has been depleted as a result of making earlier payouts to players. The remaining amount due to the player is either paid as a hand pay or an attendant will come and refill the machine. A scatter is a pay combination based on occurrences of a designated symbol landing anywhere on the reels, rather than falling in sequence on the same payline. A scatter pay usually requires a minimum of three symbols to land, and the machine may offer increased prizes or jackpots depending on the number that land. Scatters are frequently used to trigger bonus games, such as free spins (with the number of spins multiplying based on the number of scatter symbols that land). The scatter symbol usually cannot be matched using wilds, and some games may require the scatter symbols to appear on consecutive reels in order to pay. On some multiway games, scatter symbols still pay in unused areas. Taste is a reference to the small amount often paid out to keep a player seated and continuously betting. Only rarely will machines fail to pay even the minimum out over the course of several pulls. Tilt is a term derived from electromechanical slot machines' "tilt switches", which would make or break a circuit when they were tilted or otherwise tampered with, triggering an alarm.
While modern machines no longer have tilt switches, any kind of technical fault (door switch in the wrong state, reel motor failure, out of paper, etc.) is still called a "tilt". A theoretical hold worksheet is a document provided by the manufacturer for every slot machine that indicates the theoretical percentage the machine should hold based on the amount paid in. The worksheet also indicates the reel strip settings, number of coins that may be played, the payout schedule, the number of reels and other information descriptive of the particular type of slot machine. Volatility or variance refers to the measure of risk associated with playing a slot machine. A low-volatility slot machine has regular but smaller wins, while a high-variance slot machine has fewer but bigger wins. Weight count is an American term referring to the total value of coins or tokens removed from a slot machine's drop bucket or drop box for counting by the casino's hard count team through the use of a weigh scale. Wild symbols substitute for most other symbols in the game (similarly to a joker card), usually excluding scatter and jackpot symbols (or offering a lower prize on non-natural combinations that include wilds). How jokers behave is dependent on the specific game and whether the player is in a bonus or free games mode. Sometimes wild symbols may only appear on certain reels, or have a chance to "stack" across the entire reel. Pay table Each machine has a table that lists the number of credits the player will receive if the symbols listed on the pay table line up on the pay line of the machine. Some symbols are wild and can represent many, or all, of the other symbols to complete a winning line. Especially on older machines, the pay table is listed on the face of the machine, usually above and below the area containing the wheels. On video slot machines, they are usually contained within a help menu, along with information on other features. Technology Reels Historically, all slot machines used revolving mechanical reels to display and determine results. Although the original slot machine used five reels, simpler and therefore more reliable three-reel machines quickly became the standard. A problem with three-reel machines is that the number of combinations is only cubic – the original slot machine with three physical reels and 10 symbols on each reel had only 10³ = 1,000 possible combinations. This limited the manufacturer's ability to offer large jackpots since even the rarest event had a likelihood of 0.1%. The maximum theoretical payout, assuming 100% return to player, would be 1000 times the bet, but that would leave no room for other pays, making the machine very high risk, and also quite boring. Although the number of symbols eventually increased to about 22, allowing 10,648 combinations, this still limited jackpot sizes as well as the number of possible outcomes. In the 1980s, however, slot machine manufacturers incorporated electronics into their products and programmed them to weight particular symbols. Thus the odds of losing symbols appearing on the payline became disproportionate to their actual frequency on the physical reel. A symbol would only appear once on the reel displayed to the player, but could, in fact, occupy several stops on the multiple reel.
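A hypothetical sketch of that weighting: the jackpot symbol appears once on the visible reel, but the controller maps many internal stops onto the strip and gives the jackpot symbol only one of them. All of the counts below are invented for illustration.

```python
import random

# Hypothetical weighted reel map: 64 internal stops per reel are mapped onto a
# handful of visible symbols, so the jackpot symbol is hit far less often than
# its single appearance on the physical reel suggests. Counts are invented.
STOPS = ["jackpot"] * 1 + ["seven"] * 5 + ["bar"] * 12 + ["cherry"] * 18 + ["blank"] * 28
assert len(STOPS) == 64

def spin(reels=3):
    # Each reel independently lands on one of its 64 internal stops.
    return [random.choice(STOPS) for _ in range(reels)]

p_jackpot = (STOPS.count("jackpot") / len(STOPS)) ** 3
print(f"three-jackpot probability: 1 in {round(1 / p_jackpot):,}")  # 1 in 262,144
```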
In 1984, Inge Telnaes received a patent for a device titled "Electronic Gaming Device Utilizing a Random Number Generator for Selecting the Reel Stop Positions" (US Patent 4448419), which states: "It is important to make a machine that is perceived to present greater chances of payoff than it actually has within the legal limitations that games of chance must operate." The patent was later bought by International Game Technology and has since expired. A machine with 256 virtual stops per reel would allow up to 256³ = 16,777,216 final positions. The manufacturer could choose to offer a $1 million jackpot on a $1 bet, confident that it will only happen, over the long term, once every 16.8 million plays. Computerization With microprocessors now ubiquitous, the computers inside modern slot machines allow manufacturers to assign a different probability to every symbol on every reel. To the player, it might appear that a winning symbol was "so close", whereas in fact the probability is much lower. In the 1980s in the U.K., machines embodying microprocessors became common. These used a number of features to ensure the payout was controlled within the limits of the gambling legislation. As a coin was inserted into the machine, it could go either directly into the cashbox for the benefit of the owner or into a channel that formed the payout reservoir, with the microprocessor monitoring the number of coins in this channel. The drums themselves were driven by stepper motors, controlled by the processor and with proximity sensors monitoring the position of the drums. A "look-up table" within the software allows the processor to know what symbols were being displayed on the drums to the gambler. This allowed the system to control the level of payout by stopping the drums at positions it had determined. If the payout channel had filled up, the payout became more generous; if nearly empty, the payout became less so (thus giving good control of the odds). Video slot machines Video slot machines do not use mechanical reels, but use graphical reels on a computerized display. As there are no mechanical constraints on the design of video slot machines, games often use at least five reels, and may also use non-standard layouts. This greatly expands the number of possibilities: a machine can have 50 or more symbols on a reel, giving odds as high as 300 million to 1 against – enough for even the largest jackpot. As there are so many combinations possible with five reels, manufacturers do not need to weight the payout symbols (although some may still do so). Instead, higher paying symbols will typically appear only once or twice on each reel, while more common symbols earning a more frequent payout will appear many times. Video slot machines usually make more extensive use of multimedia, and can feature more elaborate minigames as bonuses. Modern cabinets typically use flat-panel displays, but cabinets using larger curved screens (which can provide a more immersive experience for the player) are not uncommon. Video slot machines typically encourage the player to play multiple "lines": rather than simply taking the middle of the three symbols displayed on each reel, a line could go from top left to the bottom right or any other pattern specified by the manufacturer. As each symbol is equally likely, there is no difficulty for the manufacturer in allowing the player to take as many of the possible lines on offer as desired – the long-term return to the player will be the same.
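A payline is simply a fixed path across the visible symbol grid, so evaluating one amounts to reading the symbol at the chosen row on each reel. An illustrative sketch with an invented 3x5 grid, invented line shapes and a toy "three matching symbols from the left" rule:

```python
# Illustrative payline check on a 3-row x 5-reel video slot. The grid, the line
# patterns, and the left-to-right matching rule are invented for demonstration,
# not any specific game's rules.
GRID = [
    ["bar",    "seven", "cherry", "bell",  "bar"],     # row 0 (top)
    ["seven",  "seven", "seven",  "bar",   "cherry"],  # row 1 (middle)
    ["cherry", "bell",  "bar",    "seven", "bell"],    # row 2 (bottom)
]

# Each payline lists the row used on each of the five reels.
PAYLINES = {
    "middle":  [1, 1, 1, 1, 1],
    "top":     [0, 0, 0, 0, 0],
    "v-shape": [0, 1, 2, 1, 0],
    "sloping": [0, 1, 2, 2, 2],
}

def line_wins(grid, rows):
    symbols = [grid[row][reel] for reel, row in enumerate(rows)]
    # Toy rule: pay if at least the first three reels match, left to right.
    return symbols[0] == symbols[1] == symbols[2]

for name, rows in PAYLINES.items():
    print(name, "wins" if line_wins(GRID, rows) else "loses")
# With this grid only the middle line wins (seven, seven, seven, ...).
```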
The difference for the player is that the more lines they play, the more likely they are to get paid on a given spin (because they are betting more). To avoid seeming as if the player's money is simply ebbing away (whereas a payout of 100 credits on a single-line machine would be 100 bets and the player would feel they had made a substantial win, on a 20-line machine, it would only be five bets and not seem as significant), manufacturers commonly offer bonus games, which can return many times their bet. The player is encouraged to keep playing to reach the bonus: even if they are losing, the bonus game could allow them to win back their losses. Random number generators All modern machines are designed using pseudorandom number generators ("PRNGs"), which are constantly generating a sequence of simulated random numbers, at a rate of hundreds or perhaps thousands per second. As soon as the "Play" button is pressed, the most recent random number is used to determine the result. This means that the result varies depending on exactly when the game is played. A fraction of a second earlier or later and the result would be different. It is important that the machine contains a high-quality RNG implementation. Because all PRNGs must eventually repeat their number sequence and, if the period is short or the PRNG is otherwise flawed, an advanced player may be able to "predict" the next result. Having access to the PRNG code and seed values, Ronald Dale Harris, a former slot machine programmer, discovered equations for specific gambling games like Keno that allowed him to predict what the next set of selected numbers would be based on the previous games played. Most machines are designed to defeat this by generating numbers even when the machine is not being played so the player cannot tell where in the sequence they are, even if they know how the machine was programmed. Payout percentage Slot machines are typically programmed to pay out as winnings 0% to 99% of the money that is wagered by players. This is known as the "theoretical payout percentage" or RTP, "return to player". The minimum theoretical payout percentage varies among jurisdictions and is typically established by law or regulation. For example, the minimum payout in Nevada is 75%, in New Jersey 83%, and in Mississippi 80%. The winning patterns on slot machines – the amounts they pay and the frequencies of those payouts – are carefully selected to yield a certain fraction of the money paid to the "house" (the operator of the slot machine) while returning the rest to the players during play. Suppose that a certain slot machine costs $1 per spin and has a return to player (RTP) of 95%. It can be calculated that, over a sufficiently long period such as 1,000,000 spins, the machine will return an average of $950,000 to its players, who have inserted $1,000,000 during that time. In this (simplified) example, the slot machine is said to pay out 95%. The operator keeps the remaining $50,000. Within some EGM development organizations this concept is referred to simply as "par". "Par" also manifests itself to gamblers as promotional techniques: "Our 'Loose Slots' have a 93% payback! Play now!" A slot machine's theoretical payout percentage is set at the factory when the software is written. 
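The two ideas above, an outcome drawn from a running random number generator on each spin and a long-run return fixed by the game's probabilities, can be connected in a few lines. The three-entry pay table below is invented purely for illustration; real machines use certified generators and far larger pay tables.

```python
import random

# Invented 3-outcome pay table: (probability, payout as a multiple of the wager).
PAYTABLE = [(0.01, 50.0), (0.19, 2.0), (0.80, 0.0)]
THEORETICAL_RTP = sum(p * m for p, m in PAYTABLE)           # 0.88, i.e. 88%

def spin(rng):
    # Draw one number and map it onto the pay table's cumulative probabilities.
    r, cumulative = rng.random(), 0.0
    for probability, multiple in PAYTABLE:
        cumulative += probability
        if r < cumulative:
            return multiple
    return 0.0

rng = random.Random(1)                                      # seeded toy generator
spins = 1_000_000
returned = sum(spin(rng) for _ in range(spins))             # $1 wagered per spin
print(f"theoretical RTP {THEORETICAL_RTP:.1%}, simulated {returned / spins:.1%}")
```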
Changing the payout percentage after a slot machine has been placed on the gaming floor requires a physical swap of the software or firmware, which is usually stored on an EPROM but may be loaded onto non-volatile random access memory (NVRAM) or even stored on CD-ROM or DVD, depending on the capabilities of the machine and the applicable regulations. In certain jurisdictions, such as New Jersey, the EPROM has a tamper-evident seal and can only be changed in the presence of Gaming Control Board officials. Other jurisdictions, including Nevada, randomly audit slot machines to ensure that they contain only approved software. Historically, many casinos, both online and offline, have been unwilling to publish individual game RTP figures, making it impossible for the player to know whether they are playing a "loose" or a "tight" game. Since the turn of the century, some information regarding these figures has started to come into the public domain either through various casinos releasing them—primarily this applies to online casinos—or through studies by independent gambling authorities. The return to player is not the only statistic that is of interest. The probabilities of every payout on the pay table are also critical. For example, consider a hypothetical slot machine with a dozen different values on the pay table. However, the probabilities of getting all the payouts are zero except for the largest one. If the payout is 4,000 times the input amount, and it happens every 4,000 times on average, the return to player is exactly 100%, but the game would be dull to play. Also, most people would not win anything, and having entries on the paytable that have a return of zero would be deceptive. As these individual probabilities are closely guarded secrets, it is possible that the advertised machines with high return to player simply increase the probabilities of these jackpots. The casino could legally place machines of a similar style payout and advertise that some machines have 100% return to player. The added advantage is that these large jackpots increase the excitement of the other players. The table of probabilities for a specific machine is called the Probability and Accounting Report or PAR sheet, also called PARS, commonly understood as Paytable and Reel Strips. Mathematician Michael Shackleford revealed the PARS for one commercial slot machine, an original International Gaming Technology Red White and Blue machine. This game, in its original form, is obsolete, so these specific probabilities do not apply. He only published the odds after a fan of his sent him some information that was posted on a slot machine in the Netherlands. The psychology of the machine design is quickly revealed. There are 13 possible payouts ranging from 1:1 to 2,400:1. The 1:1 payout comes every 8 plays. The 5:1 payout comes every 33 plays, whereas the 2:1 payout comes every 600 plays. Most players assume the likelihood of each payout increases in proportion to the payout. The one mid-size payout that is designed to give the player a thrill is the 80:1 payout. It is programmed to occur an average of once every 219 plays. The 80:1 payout is high enough to create excitement, but not high enough that it makes it likely that the player will take their winnings and abandon the game. More than likely the player began the game with at least 80 times his bet (for instance there are 80 quarters in $20). In contrast, the 150:1 payout occurs on average only once every 6,241 plays.
The highest payout of 2,400:1 occurs on average only once every 64³ = 262,144 plays since the machine has 64 virtual stops. The player who continues to feed the machine is likely to have several mid-size payouts, but unlikely to have a large payout. He quits after he is bored or has exhausted his bankroll. Despite their confidentiality, occasionally a PAR sheet is posted on a website. They have limited value to the player, because usually a machine will have 8 to 12 different possible programs with varying payouts. In addition, slight variations of each machine (e.g., with double jackpots or five times play) are always being developed. The casino operator can choose which EPROM chip to install in any particular machine to select the payout desired. The result is that there is not really such a thing as a high payback type of machine, since every machine potentially has multiple settings. From October 2001 to February 2002, columnist Michael Shackleford obtained PAR sheets for five different nickel machines: four IGT games (Austin Powers, Fortune Cookie, Leopard Spots and Wheel of Fortune) and one game manufactured by WMS, Reel 'em In. Without revealing the proprietary information, he developed a program that would allow him to determine, with usually less than a dozen plays on each machine, which EPROM chip was installed. Then he did a survey of over 400 machines in 70 different casinos in Las Vegas. He averaged the data, and assigned an average payback percentage to the machines in each casino. The resultant list was widely publicized for marketing purposes (especially by the Palms casino which had the top ranking). One reason that the slot machine is so profitable to a casino is that the player must play the high house edge and high payout wagers along with the low house edge and low payout wagers. In a more traditional wagering game like craps, the player knows that certain wagers have almost a 50/50 chance of winning or losing, but they only pay a limited multiple of the original bet (usually no higher than three times). Other bets have a higher house edge, but the player is rewarded with a bigger win (up to thirty times in craps). The player can choose what kind of wager he wants to make. A slot machine does not afford such an opportunity. Theoretically, the operator could make these probabilities available, or allow the player to choose which one so that the player is free to make a choice. However, no operator has ever enacted this strategy. Different machines have different maximum payouts, but without knowing the odds of getting the jackpot, there is no rational way to differentiate. In many markets where central monitoring and control systems are used to link machines for auditing and security purposes, usually in wide area networks of multiple venues and thousands of machines, player return must usually be changed from a central computer rather than at each machine. A range of percentages is set in the game software and selected remotely. In 2006, the Nevada Gaming Commission began working with Las Vegas casinos on technology that would allow the casino's management to change the game, the odds, and the payouts remotely. The change cannot be done instantaneously, but only after the selected machine has been idle for at least four minutes. After the change is made, the machine must be locked to new players for four minutes and display an on-screen message informing potential players that a change is being made.
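The per-payout hit frequencies quoted above translate into a return figure through a simple expected-value sum. The sketch below includes only the six frequencies mentioned in the text, so it understates the machine's full return (the game has 13 pays in all); it is illustrative, not the complete PAR data.

```python
# (payout multiple, average plays per hit) for only the pays quoted in the text,
# so the sum is a partial, illustrative return figure, not the full PAR result.
QUOTED_PAYS = [
    (1, 8),
    (2, 600),
    (5, 33),
    (80, 219),
    (150, 6_241),
    (2_400, 262_144),
]

# Expected return = sum over pays of (payout multiple) * (probability of that pay).
partial_rtp = sum(multiple / plays for multiple, plays in QUOTED_PAYS)
print(f"return from these pays alone: {partial_rtp:.1%}")   # roughly 68%
```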
Linked machines Some varieties of slot machines can be linked together in a setup sometimes known as a "community" game. The most basic form of this setup involves progressive jackpots that are shared between the bank of machines, but may include multiplayer bonuses and other features. In some cases multiple machines are linked across multiple casinos. In these cases, the machines may be owned by the manufacturer, who is responsible for paying the jackpot. The casinos lease the machines rather than owning them outright. Casinos in New Jersey, Nevada, and South Dakota now offer multi-state progressive jackpots, which offer bigger jackpot pools. Fraud Mechanical slot machines and their coin acceptors were sometimes susceptible to cheating devices and other scams. One historical example involved spinning a coin with a short length of plastic wire. The weight and size of the coin would be accepted by the machine and credits would be granted. However, the spin created by the plastic wire would cause the coin to exit through the reject chute into the payout tray. This particular scam has become obsolete due to improvements in newer slot machines. Another obsolete method of defeating slot machines was to use a light source to confuse the optical sensor used to count coins during payout. Modern slot machines are controlled by EPROM computer chips and, in large casinos, coin acceptors have become obsolete in favor of bill acceptors. These machines and their bill acceptors are designed with advanced anti-cheating and anti-counterfeiting measures and are difficult to defraud. Early computerized slot machines were sometimes defrauded through the use of cheating devices, such as the "slider", "monkey paw", "lightwand" and "the tongue". Many of these old cheating devices were made by the late Tommy Glenn Carmichael, a slot machine fraudster who reportedly stole over $5 million. In the modern day, computerized slot machines are fully deterministic and thus outcomes can sometimes be successfully predicted. Skill stops Skill stop buttons predated the Bally electromechanical slot machines of the 1960s and 1970s. They appeared on mechanical slot machines manufactured by Mills Novelty Co. as early as the mid-1920s. These machines had modified reel-stop arms, which allowed them to be released from the timing bar, earlier than in a normal play, simply by pressing the buttons on the front of the machine, located between each reel. "Skill stop" buttons were added to some slot machines by Zacharias Anthony in the early 1970s. These enabled the player to stop each reel, allowing a degree of "skill" so as to satisfy the New Jersey gaming laws of the day, which required that players were able to control the game in some way. The original conversion was applied to approximately 50 late-model Bally slot machines. Because the typical machine stopped the reels automatically in less than 10 seconds, weights were added to the mechanical timers to prolong the automatic stopping of the reels. By the time the New Jersey Alcoholic Beverages Commission (ABC) had approved the conversion for use in New Jersey arcades, the word was out and every other distributor began adding skill stops. The machines were a huge hit on the Jersey Shore and the remaining unconverted Bally machines were destroyed as they had become instantly obsolete. Legislation United States In the United States, the public and private availability of slot machines is highly regulated by state governments.
Many states have established gaming control boards to regulate the possession and use of slot machines and other forms of gaming. Nevada is the only state that has no significant restrictions against slot machines for both public and private use. In New Jersey, slot machines are only allowed in hotel casinos operated in Atlantic City. Several states (Indiana, Louisiana and Missouri) allow slot machines (as well as any casino-style gambling) only on licensed riverboats or permanently anchored barges. Since Hurricane Katrina, Mississippi has removed the requirement that casinos on the Gulf Coast operate on barges and now allows them on land along the shoreline. Delaware allows slot machines at three horse tracks; they are regulated by the state lottery commission. In Wisconsin, bars and taverns are allowed to have up to five machines. These machines usually allow a player to either take a payout, or gamble it on a double-or-nothing "side game". The territory of Puerto Rico places significant restrictions on slot machine ownership, but the law is widely flouted and slot machines are common in bars and coffeeshops. With regard to tribal casinos located on Native American reservations, slot machines played against the house and operating independently from a centralized computer system are classified as "Class III" gaming by the Indian Gaming Regulatory Act (IGRA), and sometimes promoted as "Vegas-style" slot machines. In order to offer Class III gaming, tribes must enter into a compact (agreement) with the state that is approved by the Department of the Interior, which may contain restrictions on the types and quantity of such games. As a workaround, some casinos may operate slot machines as "Class II" games—a category that includes games where players play exclusively against at least one other opponent and not the house, such as bingo or any related games (such as pull-tabs). In these cases, the reels are an entertainment display with a pre-determined outcome based on a centralized game played against other players. Under the IGRA, Class II games are regulated by individual tribes and the National Indian Gaming Commission, and do not require any additional approval if the state already permits tribal gaming. Some historical race wagering terminals operate in a similar manner, with the machines using slots as an entertainment display for outcomes paid using the parimutuel betting system, based on results of randomly-selected, previously-held horse races (with the player able to view selected details about the race and adjust their picks before playing the credit, or otherwise use an auto-bet system). Private ownership Alaska, Arizona, Arkansas, Kentucky, Maine, Minnesota, Nevada, Ohio, Rhode Island, Texas, Utah, Virginia, and West Virginia place no restrictions on private ownership of slot machines. Conversely, in Connecticut, Hawaii, Nebraska, South Carolina, and Tennessee, private ownership of any slot machine is completely prohibited. The remaining states allow slot machines of a certain age (typically 25–30 years) or slot machines manufactured before a specific date. Canada The Government of Canada has minimal involvement in gambling beyond the Canadian Criminal Code. In essence, the term "lottery scheme" used in the code means slot machines, bingo and table games normally associated with a casino.
These fall under the jurisdiction of the province or territory without reference to the federal government; in practice, all Canadian provinces operate gaming boards that oversee lotteries, casinos and video lottery terminals under their jurisdiction. OLG piloted a classification system for slot machines at the Grand River Raceway developed by University of Waterloo professor Kevin Harrigan, as part of its PlaySmart initiative for responsible gambling. Inspired by nutrition labels on foods, they displayed metrics such as volatility and frequency of payouts. OLG has also deployed electronic gaming machines with pre-determined outcomes based on a bingo or pull-tab game, initially branded as "TapTix", which visually resemble slot machines. Australia In Australia "Poker Machines" or "pokies" are officially termed "gaming machines". In Australia, gaming machines are a matter for state governments, so laws vary between states. Gaming machines are found in casinos (approximately one in each major city), pubs and clubs in some states (usually sports, social, or RSL clubs). The first Australian state to legalize this style of gambling was New South Wales, when in 1956 they were made legal in all registered clubs in the state. There are suggestions that the proliferation of poker machines has led to increased levels of problem gambling; however, the precise nature of this link is still open to research. In 1999 the Australian Productivity Commission reported that nearly half Australia's gaming machines were in New South Wales. At the time, 21% of all the gambling machines in the world were operating in Australia and, on a per capita basis, Australia had roughly five times as many gaming machines as the United States. Australia ranks 8th in total number of gaming machines after Japan, U.S.A., Italy, U.K., Spain and Germany. This primarily is because gaming machines have been legal in the state of New South Wales since 1956; over time, the number of machines has grown to 97,103 (at December 2010, including the Australian Capital Territory). By way of comparison, the U.S. State of Nevada, which legalised gaming including slots several decades before N.S.W., had 190,135 slots operating. Revenue from gaming machines in pubs and clubs accounts for more than half of the $4 billion in gambling revenue collected by state governments in fiscal year 2002–03. In Queensland, gaming machines in pubs and clubs must provide a return rate of 85%, while machines located in casinos must provide a return rate of 90%. Most other states have similar provisions. In Victoria, gaming machines must provide a minimum return rate of 87% (including jackpot contribution), including machines in Crown Casino. As of December 1, 2007, Victoria banned gaming machines that accepted $100 notes; all gaming machines made since 2003 comply with this rule. This new law also banned machines with an automatic play option. One exception exists in Crown Casino for any player with a VIP loyalty card: they can still insert $100 notes and use an autoplay feature (whereby the machine will automatically play until credit is exhausted or the player intervenes). All gaming machines in Victoria have an information screen accessible to the user by pressing the "i key" button, showing the game rules, paytable, return to player percentage, and the top and bottom five combinations with their odds. 
These combinations are stated to be played on a minimum bet (usually 1 credit per line, with 1 line or reel played, although some newer machines do not have an option to play 1 line; some machines may only allow maximum lines to be played), excluding feature wins. Western Australia has the most restrictive regulations on electronic gaming machines in general, with the Crown Perth casino resort being the only venue allowed to operate them, and banning slot machines with spinning reels entirely. This policy had an extensive political history, reaffirmed by the 1974 Royal Commission into Gambling. While Western Australian gaming machines are similar to the other states', they do not have spinning reels. Therefore, different animations are used in place of the spinning reels in order to display each game result. Nick Xenophon was elected to the South Australian Legislative Council on an independent No Pokies ticket at the 1997 South Australian state election on 2.9 percent, re-elected at the 2006 election on 20.5 percent, and elected to the Australian Senate at the 2007 federal election on 14.8 percent. Independent candidate Andrew Wilkie, an anti-pokies campaigner, was elected to the Australian House of Representatives seat of Denison at the 2010 federal election. Wilkie was one of four crossbenchers who supported the Gillard Labor government following the hung parliament result. Wilkie began forging ties with Xenophon as soon as it was apparent that he had been elected. In exchange for Wilkie's support, the Labor government attempted to implement precommitment technology for high-bet/high-intensity poker machines, against opposition from the Tony Abbott Coalition and Clubs Australia. During the COVID-19 pandemic of 2020, every establishment in the country that facilitated poker machines was shut down in an attempt to curb the spread of the virus, bringing Australia's usage of poker machines effectively to zero. Russia In Russia, "slot clubs" appeared quite late, only in 1992. Before 1992, slot machines were only in casinos and small shops, but later slot clubs began appearing all over the country. The most popular and numerous were "Vulcan 777" and "Taj Mahal". Since 2009, when gambling establishments were banned, almost all slot clubs have disappeared, and slot machines are now found only in specially authorized gambling zones. United Kingdom Slot machines are covered by the Gambling Act 2005, which superseded the Gaming Act 1968. Slot machines in the U.K. are categorised by definitions produced by the Gambling Commission as part of the Gambling Act of 2005. Casinos built under the provisions of the 1968 Act are allowed to house either up to twenty machines of categories B–D or any number of C–D machines. As defined by the 2005 Act, large casinos can have a maximum of one hundred and fifty machines in any combination of categories B–D (subject to a machine-to-table ratio of 5:1); small casinos can have a maximum of eighty machines in any combination of categories B–D (subject to a machine-to-table ratio of 2:1). Category A Category A games were defined in preparation for the planned "Super Casinos". Despite a lengthy bidding process, with Manchester being chosen as the single planned location, the development was cancelled soon after Gordon Brown became Prime Minister of the United Kingdom. As a result, there are no lawful Category A games in the U.K. Category B Category B games are divided into subcategories. 
The differences between B1, B3 and B4 games are mainly the stake and prizes as defined in the above table. Category B2 games – Fixed odds betting terminals (FOBTs) – have quite different stake and prize rules: FOBTs are mainly found in licensed betting shops, or bookmakers, usually in the form of electronic roulette. The games are based on a random number generator; thus each game's probability of getting the jackpot is independent of any other game: probabilities are all equal. If a pseudorandom number generator is used instead of a truly random one, probabilities are not independent, since each number is determined at least in part by the one generated before it (a short illustrative sketch of this distinction appears below). Category C Category C games are often referred to as fruit machines, one-armed bandits and AWP (amusement with prize). Fruit machines are commonly found in pubs, clubs, and arcades. Machines commonly have three reels, but can be found with four or five, each with 16–24 symbols printed around them. The reels are spun each play, and the appearance of particular combinations of symbols results in payment of their associated winnings by the machine (or alternatively the initiation of a subgame). These games often have many extra features, trails and subgames with opportunities to win money; usually more than can be won from just the payouts on the reel combinations. Fruit machines in the U.K. almost universally have the following features, generally selected at random using a pseudorandom number generator: A player (known in the industry as a punter) may be given the opportunity to hold one or more reels before spinning, meaning they will not be spun but instead retain their displayed symbols yet otherwise count normally for that play. This can sometimes increase the chance of winning, especially if two or more reels are held. A player may also be given a number of nudges following a spin (or, in some machines, as a result of a subgame). A nudge is a step rotation of a reel chosen by the player (the machine may not allow all reels to be nudged for a particular play). Cheats can also be made available on the internet or through emailed newsletters to subscribers. These cheats give the player the impression of an advantage, whereas in reality the payout percentage remains exactly the same. The most widely used cheat is known as hold after a nudge and increases the chance that the player will win following an unsuccessful nudge. Machines from the early 1990s did not advertise the concept of hold after a nudge when this feature was first introduced; it has since become so well known amongst players, and so widespread amongst new machine releases, that it is now well-advertised on the machine during play. This is characterized by messages on the display such as DON'T HOLD ANY or LET 'EM SPIN and is a designed feature of the machine, not a cheat at all. Holding the same pair three times on three consecutive spins also gives a guaranteed win on most machines that offer holds. It is known for machines to pay out multiple jackpots, one after the other (this is known as a "repeat"), but each jackpot requires a new game to be played so as not to violate the law about the maximum payout on a single play. Typically this involves the player only pressing the Start button at the "repeat" prompt, for which a single credit is taken, regardless of whether this causes the reels to spin or not. Machines are also known to intentionally set aside money, which is later awarded in a series of wins, known as a "streak". 
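To illustrate the distinction drawn above for Category B2 terminals between a truly random source and a pseudorandom one, the following is a minimal sketch in Python; it is purely illustrative and is not taken from any actual gaming terminal. A linear congruential generator produces each value deterministically from the previous state, whereas an operating-system entropy source exposes no such dependency.

    import secrets

    # Minimal linear congruential generator (LCG): every output is a
    # deterministic function of the previous state, so one value (plus the
    # constants) fixes the entire sequence that follows.
    class LCG:
        def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
            self.state, self.a, self.c, self.m = seed, a, c, m

        def next(self):
            self.state = (self.a * self.state + self.c) % self.m
            return self.state

    prng = LCG(seed=12345)
    print([prng.next() % 100 for _ in range(5)])      # reproducible, state-dependent sequence

    # By contrast, values drawn from the OS entropy pool carry no usable
    # dependence on earlier outputs.
    print([secrets.randbelow(100) for _ in range(5)])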
The minimum payout percentage is 70%, with pubs often setting the payout at around 78%. Japan Japanese slot machines, known as pachisuro or pachislot from the words "pachinko" and "slot machine", are a descendant of the traditional Japanese pachinko game. Slot machines are a fairly new phenomenon and they can be found mostly in pachinko parlors and the adult sections of amusement arcades, known as game centers. The machines are regulated with integrated circuits, and have six different levels changing the odds of a 777. The levels provide a rough outcome of between 90% and 160% (200% for skilled players). Japanese slot machines are "beatable". Parlor operators naturally set most machines to simply collect money, but intentionally place a few paying machines on the floor so that there will be at least someone winning, encouraging players on the losing machines to keep gambling, using the psychology of the gambler's fallacy. Despite the many varieties of pachislot machines, there are certain rules and regulations put forward by an affiliate of the National Police Agency. For example, there must be three reels. All reels must be accompanied by buttons which allow players to manually stop them, reels may not spin faster than 80 RPM, and reels must stop within 0.19 seconds of a button press. In practice, this means that machines cannot let reels slip more than 4 symbols (a rough arithmetic check of this limit appears below). Other rules include a 15 coin payout cap, a 50 credit cap on machines, a 3 coin maximum bet, and other such regulations. Although a 15 coin payout may seem quite low, regulations allow "Big Bonus" (c. 400–711 coins) and "Regular Bonus" modes (c. 110 coins) where these 15 coin payouts occur nearly continuously until the bonus mode is finished. While the machine is in bonus mode, the player is entertained with special winning scenes on the LCD display, and energizing music is heard, payout after payout. Three other unique features of pachisuro machines are "stock", "renchan", and tenjō. On many machines, when enough money to afford a bonus is taken in, the bonus is not immediately awarded. Typically the game merely stops making the reels slip off the bonus symbols for a few games. If the player fails to hit the bonus during these "standby games", it is added to the "stock" for later collection. Many current games, after finishing a bonus round, set the probability to release additional stock (gained from earlier players failing to get a bonus the last time the machine stopped making the reels slip) very high for the first few games. As a result, a lucky player may get to play several bonus rounds in a row (a "renchan"), making payouts of 5,000 or even 10,000 coins possible. The lure of "stock" waiting in the machine, and the possibility of "renchan", tease the gambler to keep feeding the machine. To tease them further, there is a tenjō (ceiling), a maximum limit on the number of games between "stock" release. For example, if the tenjō is 1,500, and the number of games played since the last bonus is 1,490, the player is guaranteed to release a bonus within just 10 games. Because of the "stock", "renchan", and tenjō systems, it is possible to make money by simply playing machines on which someone has just lost a huge amount of money. This is called being a "hyena". They are easy to recognize, roaming the aisles and waiting for a "kamo" ("sucker" in English) to leave his machine. 
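As a rough consistency check of the reel-speed and stopping rules mentioned above, the short calculation below can be run in Python. It is an illustrative back-of-the-envelope sketch, not part of the regulation, and the figure of 21 symbols per reel is an assumption that varies by machine.

    # Hypothetical check of the 80 RPM / 0.19 s stopping rule.
    rpm = 80                       # maximum permitted reel speed
    stop_window_s = 0.19           # reel must stop within this time of a button press
    symbols_per_reel = 21          # assumed symbol count; varies by machine

    revolutions = rpm / 60 * stop_window_s            # about 0.25 of a revolution
    symbols_passed = revolutions * symbols_per_reel   # about 5.3 symbol positions
    print(round(symbols_passed, 1))
    # A software "slip" budget of at most 4 symbols therefore fits
    # comfortably inside this stopping window.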
In short, the regulations allowing "stock", "renchan", and tenjō transformed pachisuro from a low-stakes form of entertainment into hardcore gambling in just a few years. Many people may be gambling more than they can afford, and the big payouts also lure unsavory "hyena" types into the gambling halls. To address these social issues, a new regulation (Version 5.0) was adopted in 2006 which caps the maximum amount of "stock" a machine can hold at around 2,000–3,000 coins' worth of bonus games. Moreover, all pachisuro machines must be re-evaluated for regulation compliance every three years. Version 4.0 came out in 2004, meaning that all machines with payouts of up to 10,000 coins were to be removed from service by 2007. Jackpot disputes Electronic slot machines can malfunction. When the displayed amount is smaller than it is supposed to be, the error usually goes unnoticed. When it happens the other way, disputes are likely. Below are some notable disputes in which the owners of the machines claimed that the displayed amounts were far larger than what patrons were actually owed. United States of America Two such cases occurred in casinos in Colorado in 2010, where software errors led to indicated jackpots of $11 million and $42 million. Analysis of machine records by the state Gaming Commission revealed faults, with the true jackpot being substantially smaller. State gaming laws did not require a casino to honour payouts in that case. Vietnam On October 25, 2009, while a Vietnamese American man, Ly Sam, was playing a slot machine in the Palazzo Club at the Sheraton Saigon Hotel in Ho Chi Minh City, Vietnam, it displayed that he had hit a jackpot of US$55,542,296.73. The casino refused to pay, saying it was a machine error, and Mr Ly sued the casino. On January 7, 2013, the District 1 People's Court in Ho Chi Minh City decided that the casino had to pay the amount Mr Ly claimed in full, not trusting the error report from an inspection company hired by the casino. Both sides appealed thereafter, and Mr Ly asked for interest while the casino refused to pay him. In January 2014, the news reported that the case had been settled out of court, and Mr Ly had received an undisclosed sum. Problem gambling and slot machines Natasha Dow Schüll, associate professor in New York University's Department of Media, Culture and Communication, uses the term "machine zone" to describe the state of immersion that users of slot machines experience when gambling, where they lose a sense of time, space, bodily awareness, and monetary value. Mike Dixon, PhD, professor of psychology at the University of Waterloo, studies the relationship between slot players and machines. In one of Dixon's studies, players were observed experiencing heightened arousal from the sensory stimuli coming from the machines. They "sought to show that these 'losses disguised as wins' (LDWs) would be as arousing as wins, and more arousing than regular losses." Psychologists Robert Breen and Marc Zimmerman found that players of video slot machines reach a debilitating level of involvement with gambling three times as rapidly as those who play traditional casino games, even if they have engaged in other forms of gambling without problems. Eye-tracking research in local bookmakers' offices in the UK suggested that, in slots games, the reels dominated players' visual attention, and that problem gamblers looked more frequently at amount-won messages than did those without gambling problems. 
The 2011 60 Minutes report "Slot Machines: The Big Gamble" focused on the link between slot machines and gambling addiction. See also Casino European Gaming & Amusement Federation List of probability topics Multi-armed bandit Pachinko Problem gambling Progressive jackpot Quiz machine United States state slot machine ownership regulations Video bingo Video lottery terminal (VLT) Video poker References Bibliography Brisman, Andrew. The American Mensa Guide to Casino Gambling: Winning Ways (Stirling, 1999) Grochowski, John. The Slot Machine Answer Book: How They Work, How They've Changed, and How to Overcome the House Advantage (Bonus Books, 2005) Legato, Frank. How to Win Millions Playing Slot Machines! ...Or Lose Trying (Bonus Books, 2004) External links American inventions Arcade games Gaming devices Commercial machines Gambling games
3561105
https://en.wikipedia.org/wiki/Pando%20%28application%29
Pando (application)
Pando was an application which was mainly aimed at sending (and receiving) files which would normally be too large to send via more "conventional" means. It used both peer-to-peer (BitTorrent protocol) and client-server architectures and was released for Windows and Mac OS X operating systems. Pando shut down its servers and ceased business on August 31, 2013. As of February 24, 2014, the Pando Media Booster had been hijacked: unsuspecting persons who installed a prompted update had their internet browsers hijacked and the "Sweet Page" browser virus installed on their machines. Details Pando functioned as a normal BitTorrent client and used the BitTorrent protocol to transfer files. Using Pando was very similar to using any BitTorrent client. A Pando upload began with meta-data stored within a file with a .pando filename extension. Also like BitTorrent, this file could be sent via e-mail or published on a website or exchanged with the recipient in some other way (such as via IM). And, like BitTorrent, the downloader had to first install the Pando software. Pando used a 256-bit end-to-end encryption method to secure communication among peers. The primary difference from traditional BitTorrent file transfer operations was that a copy of the shared file was uploaded to Pando's servers and remained there for a limited time, seeding it. In this way, the file remained available even after the original sender went offline. Features A non-subscription version was ad-supported; i.e. it offered to install the SmartShopper malware on the computers of its users. A subscription version extended the capabilities. Beyond the features listed below, there were additional service offerings for high-volume publishers and subscription content delivery. Its common features were: Server-assisted delivery provided increased file availability and delivery speeds. (Subscribers received faster server-assisted speeds.) Seven days, or thirty days, of file retention on Pando's servers, depending on how users shared. (Subscriptions doubled these retention times to fourteen days or sixty days.) Statistics about the number and time of downloads were provided to the sender. It was fully supported client software for both the Windows and the Mac OS X operating systems. An assortment of software could be added on to popular web browsers, instant messaging, or e-mail clients. Users were allowed an unlimited number of uploads. The maximum file size that could be so transferred by non-subscribers was 1 GB; this was increased to 3 GB with subscriptions. Pando Media Booster The Pando Media Booster (PMB) was an application by Pando Networks that publishers of games and software could employ to ensure safe, complete and speedy downloads of large files. PMB was primarily used to download MMORPGs. Users of PMB participated in a secure, closed peer-to-peer network where users received pieces of the download package from a Content Delivery Network (CDN) as well as "peers", or other active users. PMB was usually packaged and automatically installed on a PC without the knowledge of the user. Many times, users would experience drastically slower downloads of these MMORPGs while PMB was being installed. Unlike Pando, PMB could not be used to send files from the user's computer. PMB was only activated to deliver Pando-enabled downloads from commercial sources such as TV networks, software publishers and gaming companies. Conflicts The Pando Media Booster (PMB.exe) listened on TCP ports 443 and 563. 
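Because PMB bound these well-known ports, one simple way to see whether something is already listening on them before starting a local web server is a socket test such as the following. This is an illustrative Python sketch only and was not part of Pando's software.

    import socket

    def port_in_use(port, host="127.0.0.1"):
        # Return True if something is already listening on the given TCP port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            return s.connect_ex((host, port)) == 0   # 0 means the connection attempt succeeded

    for port in (443, 563):
        print(port, "in use" if port_in_use(port) else "free")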
People who were having trouble getting web servers such as Apache, IIS, or others to work were advised to consider removing the Pando Media Booster. The Pando Media Booster ran at system startup and took priority over other downloads. Slower-than-normal downloading speeds or general network performance issues might be related to the product. These conflicts ceased to be reported in 2013, when Pando shut down its servers and ceased business. But after 2014, when the Pando Media Booster was hijacked, unsuspecting persons who installed a prompted update instead had their internet browsers hijacked and the "Sweet Page" browser virus installed. See also Cloud storage Comparison of file hosting services DropSend SendThisFile WeTransfer References External links With notice of termination of services Email attachment replacements File sharing software BitTorrent clients
18888009
https://en.wikipedia.org/wiki/TinyLinux
TinyLinux
TinyLinux is a project started by Matt Mackall in 2003 to reduce the size of the Linux kernel, in both memory usage and binary file size. The purpose was to make a compact Linux system for embedded devices. The development was sponsored by the CE Linux Forum. It is also known as the -tiny tree. By 2006 the project was mostly abandoned. It received new attention in 2007, again with sponsorship from CELF, but has seen minimal activity since 2007. TinyLinux consists of a set of patches against the Linux kernel which make certain features optional, or add system monitoring and measurement so that further optimization can take place. They are made to be mergeable with the mainline kernel, and many patches have been merged to date. Features include the ability to disable ELF core dumps, a reduced number of swap files, use of the SLOB memory allocator, and the ability to disable BUG(). Measurement and accounting features include the ability to monitor kmalloc/kfree allocations through /proc/kmalloc, and measurement of inline usage during kernel compilation. TinyLinux requires an Intel 80386 or better to run. See also Embeddable Linux Kernel Subset (ELKS) References External links Linux kernel
31586697
https://en.wikipedia.org/wiki/Fedora%20Linux%20release%20history
Fedora Linux release history
Fedora Linux is a Linux distribution developed by the Fedora Project. Fedora attempts to maintain a six-month release schedule, offering new versions in May and November. Release history Fedora Linux 35 Fedora Linux 35 was released on November 2, 2021. Fedora Linux 34 Fedora Linux 34 was released April 27, 2021. Its change set includes GNOME 40, filesystem compression by default, exclusive use of Pipewire, and defaulting KDE Plasma to Wayland. Fedora Linux 33 Fedora Linux 33 was released on October 27, 2020. Fedora 33 Workstation Edition was the first version of the operating system to default to Btrfs as its file system and to replace the swap partition with zram. It featured version 3.38 of the GNOME desktop environment, and Linux kernel 5.8.15. For the first time since version 7, Fedora defaulted to a slideshow background (four png images of the Earth, from space) that changes hue according to the time of day. GNU nano became the default text editor for the command-line interface in place of vi. Fedora IoT, while previously available as a "Fedora Spin", was promoted to an official edition of the operating system. Fedora Linux 32 Fedora Linux 32 was released April 28, 2020. Fedora Linux 31 Fedora Linux 31 was released October 29, 2019. Fedora Linux 30 Fedora Linux 30 was released on April 30, 2019. Fedora Linux 29 Fedora Linux 29 was released on October 30, 2018. Notable new features: Fedora Modularity across all variants, a new optional package repository called Modular (also referred to as the "Application Stream" or AppStream), GNOME 3.30, ZRAM for ARM images, and Fedora Scientific Vagrant images. Fedora Linux 28 Fedora Linux 28 was released on May 1, 2018. Red Hat Enterprise Linux 8 and other derivatives are based on Fedora 28. Notable new features: a modular software repository, curated third-party software repositories. Fedora Linux 27 Fedora Linux 27 was released on November 14, 2017. The Workstation edition of Fedora 27 features GNOME 3.26. Both the Display and Network configuration panels have been updated, along with an overall improvement to the appearance of the Settings panel. The system search now shows more results at once, including the system actions. This release also features LibreOffice 5.4. Fedora Linux 26 Fedora Linux 26 was released on July 11, 2017. Fedora Linux 25 Fedora Linux 25 was released on November 22, 2016. Some notable changes are the use of the Wayland display system, Unicode 9, PHP 7.0, Node.js 6 and IBus Emoji typing. Fedora Linux 24 Fedora Linux 24 was released on June 21, 2016. Some notable system-wide changes are the use of GNOME 3.20, GCC 6, and Python 3.5. Fedora Linux 23 Fedora Linux 23 was released on November 3, 2015. It offers GNOME 3.18. It comes with LibreOffice 5. The Fedora release updater, fedup, was integrated into DNF. It uses Python 3 (specifically 3.4.3) as the operating system's default Python implementation. Fedora Linux 22 Fedora Linux 22 was released on May 26, 2015. Major features include: GNOME 3.16 with a completely redesigned notification system and automatically hiding scrollbars; DNF replacing yum as the default package manager; and the default display server for the GNOME Display Manager being Wayland instead of Xorg. Fedora Linux 21 Fedora Linux 21, the first version without a codename, was released on December 9, 2014. 
GNOME 3.14. Fedora now has three flavors, providing different specialized sets of preinstalled packages depending on the intended use: Workstation, Server and Cloud. Fedora Linux 20 Fedora Linux 20, codenamed "Heisenbug", was released on December 17, 2013. Some of the features of Fedora 20 include: GNOME 3.10 ARM as primary architecture in addition to x86 and x86_64 Replacement of the gnome-packagekit frontends with a new application installer, tentatively named gnome-software Fedora Linux 19 Fedora Linux 19, codenamed "Schrödinger's Cat", was released on July 2, 2013. Red Hat Enterprise Linux 7 and other derivatives are based on Fedora 19. Some of the features of Fedora 19 include: Further improvements to the new Anaconda installer A new initial setup application Support for application checkpointing through CRIU Default desktop upgraded to GNOME 3.8 Updated to KDE Plasma 4.10 and MATE 1.6 MariaDB has replaced MySQL GCC has been updated to version 4.8 RPM Package Manager has been updated to version 4.11 Includes the new Developers Assistant tool Numerous upstream improvements to firewall and systemd Improved cloud support, including better compatibility with Amazon EC2 Fedora Linux 18 Fedora Linux 18, codenamed "Spherical Cow", was released on January 15, 2013. Some of the features of Fedora 18 include: Linux kernel 3.6.10 Support for UEFI Secure Boot A rewrite of the Anaconda installer A new system upgrade utility called FedUp Default desktop upgraded to GNOME 3.6.3 Updated to KDE Plasma 4.9 and Xfce 4.10 Inclusion of MATE and Cinnamon desktops Better Active Directory support through FreeIPA v3 Support for NetworkManager hotspots Support for 256 color terminals by default Offline system updates utilizing systemd and PackageKit Better cloud computing support with the inclusion of Eucalyptus, Heat, and OpenStack Folsom firewalld replaces system-config-firewall as default Fedora Linux 17 Fedora Linux 17, codenamed "Beefy Miracle", was released on May 29, 2012. Some of the features of Fedora 17 include: Linux kernel 3.3.4 Integrated UEFI support. Inclusion of GNOME 3.4 desktop, offering software rendering support for GNOME Shell Updated to latest KDE Software Compilation 4.8.3 A new filesystem structure moving more things to /usr Removable disks are now mounted under /run/media due to a change in udisks systemd-logind replaces ConsoleKit, offering multiseat improvements Inclusion of the libvirt sandbox; virt-manager now supports USB pass-through Services now use private temp directories to improve security Fedora Linux 16 Fedora Linux 16, codenamed "Verne", was released on November 8, 2011. Fedora 16 was also dedicated to the memory of Dennis Ritchie, who died about a month before the release. Some of the features of Fedora 16 included: Linux kernel 3.1.0 Inclusion of GNOME 3.2.1 desktop Updated to latest KDE Software Compilation 4.7.2 GRUB2 became the default boot loader Ext4 driver used for Ext3 and Ext2 file systems HAL daemon removed in favour of udisks, upower, and libudev Unification of the user interfaces for all problem reporting programs and mechanisms Virtualization improvements including OpenStack and Aeolus Conductor Fedora uses UID/GIDs up through 999 for system accounts Enhanced cloud support including Condor Cloud, HekaFS, and pacemaker-cloud Fedora Linux 15 Fedora Linux 15, codenamed Lovelock, was released on May 24, 2011. 
Features of Fedora 15 include: Inclusion of GNOME 3 desktop LibreOffice replaced OpenOffice.org Inclusion of GNU Compiler Collection 4.6 Responsibility for booting is taken up by Systemd LLVMpipe replacing Mesa software rasterizer Inclusion of BoxGrinder software Support for dynamic firewalls with firewalld Inclusion of PowerTOP 2.x Adoption of Consistent Network Device Naming Better support for encrypted Home directories Fedora Linux 14 Fedora Linux 14, codenamed Laughlin, was released on November 2, 2010. It was the last to use the GNOME 2 desktop environment (now forked as MATE). GNOME 2 had been the desktop environment of the operating system since its inception in 2003. Features of Fedora 14 include: Updated Boost to the upstream 1.44 release Addition of the D compiler (LDC) and D standard runtime library (Tango) Concurrent release of Fedora 14 on the Amazon EC2 cloud Updated Fedora's Eclipse stack to Helios releases Updated Erlang to the upstream R14 release Replacement of libjpeg with libjpeg-turbo Inclusion of virt-v2v tool Inclusion of Spice framework for VDI deployment Updates to Rakudo Star implementation of Perl 6 NetBeans IDE updated to the 6.9 release Inclusion of ipmiutil system management tool Inclusion of a tech preview of the GNOME Shell environment Python 2.7 Fedora Linux 13 Fedora Linux 13, codenamed "Goddard", was released on May 25, 2010. During early development, Fedora project leader Paul Frields anticipated "looking at the fit and finish issues. We have tended to build a really tight ship with Fedora, but now we want to make the décor in the cabins a little more sumptuous and to polish the deck chairs and railings." Features of Fedora 13 include: Automatic printer-driver installation Automatic language pack installation Redesigned user-account tool Color management to calibrate monitors and scanners Experimental 3D support for NVIDIA video cards A new way to install Fedora over the Internet SSSD authentication for users Updates to NFS Inclusion of Zarafa Open Source edition System rollback for the Btrfs file system Better SystemTap probes Support for the entire Java EE 6 spec in Netbeans 6.8 KDE Plasma PulseAudio Integration New command-line interface for NetworkManager Fedora Linux 12 Fedora Linux 12, codenamed Constantine, was released on November 17, 2009. Red Hat Enterprise Linux 6 and other derivatives are based on Fedora 12. Some of the features in Fedora 12 are: Optimized performance. All software packages on 32-bit (x86_32) architecture have been compiled for i686 systems Improved Webcam support (Cheese) Better video codec with a newer version of Ogg Theora Audio improvements Automatic bug reporting tool (abrt) Bluetooth on demand Enhanced NetworkManager to manage broadband Many virtualization enhancements (KVM, libvirt, libguestfs) ext4 used even for the boot partition Moblin interface Yum-presto plugin providing Delta RPMs for updates by default New compression algorithm (XZ, the new LZMA format) in RPM packages for smaller and faster updates Experimental 3D support for ATI R600/R700 cards GCC 4.4 SystemTap 1.0 with Eclipse integration GNOME 2.28 GNOME Shell preview KDE Plasma 4.3, Plasma 4.4 was pushed to updates repository on February 27, 2010 (KDE Spin) 2.6.31 Linux kernel, Kernel 2.6.32 was pushed to updates repository on February 27, 2010 X server 1.7 with Multi-Pointer X (MPX) support NetBeans 6.7 PHP 5.3 Rakudo Perl 6 compiler Fedora Linux 11 Fedora Linux 11, codenamed Leonidas, was released on June 9, 2009. 
This was the first release whose artwork is determined by the name instead of by users voting on themes. Some of the features in Fedora 11 are: ext4 as the default file system experimental Btrfs activated by IcantbelieveitsnotBTR command line option at bootup faster bootup aimed at 20 seconds. GCC 4.4 GNOME 2.26 KDE Plasma 4.2 (KDE Spin) 2.6.29 Linux kernel Eclipse 3.4.2 Netbeans 6.5 nVidia kernel modesetting through the open source nouveau (graphics) driver. OpenOffice 3.1 Python 2.6 Xfce to 4.6 (Xfce Spin) X server 1.6 fprint – support for systems with fingerprint readers Fedora Linux 10 Fedora Linux 10, codenamed Cambridge, was released on November 25, 2008. It flaunts the new Solar artwork. Its features include: Faster startup using Plymouth (instead of Red Hat Graphical Boot used in previous versions) Support for ext4 filesystem Sugar Desktop Environment LXDE Desktop Environment (LXDE Spin) GNOME 2.24 KDE Plasma 4.1 (KDE Spin) OpenOffice.org 3.0 Fedora Linux 9 Fedora Linux 9, codenamed Sulphur, was released on May 13, 2008. Some of the new features of Fedora 9 included: GNOME 2.22. KDE Plasma 4.0, which is the default interface as part of the KDE spin. OpenJDK 6 has replaced IcedTea. PackageKit is included as a front-end to yum, and as the default package manager. One Second X allows the X Window System to perform a cold start from the command line in nearly one second; similarly, shutdown of X should be as quick. Upstart introduced Many improvements to the Anaconda installer; among these features, it now supports resizing ext2, ext3 and NTFS file systems, and can create and install Fedora to encrypted file systems. Firefox 3.0 beta 5 is included in this release, and the 3.0 package was released as an update the same day as the general release. Perl 5.10, which features a smaller memory footprint and other improvements. Data Persistence in USB images. Fedora 9 featured a new artwork entitled Waves which, like Infinity in Fedora 8, changes the wallpaper to reflect the time of day. Fedora Linux 8 Fedora Linux 8, codenamed Werewolf, was released on November 8, 2007. Some of the new features and updates in Fedora 8 included: PulseAudio – a sound daemon that allows different applications to control the audio. Fedora was the first distribution to enable it by default. system-config-firewall – a new firewall configuration tool that replaces system-config-security level from previous releases. Codeina – a tool that guides users using content under proprietary or patent-encumbered formats to purchase codecs from fluendo; it is an optional component that may be uninstalled in favor of GStreamer codec plug-ins which are free of charge. IcedTea – a project that attempts to bring OpenJDK to Fedora by replacing encumbered code. NetworkManager – faster, more reliable connections; better security (through the use of the keyring); clearer display of wireless networks; better D-Bus integration. Better laptop support – enhancements to the kernel to reduce battery load, disabling of background cron jobs when running on the battery, and additional wireless drivers. Fedora 8 also included a new desktop artwork entitled Infinity, and a new desktop theme called Nodoka. A unique feature of Infinity is that the wallpaper can change during the day to reflect the time of day. In February 2008, a new Xfce Live CD "spin" was announced for the x86 and x86-64 architectures. 
This Live CD version uses the Xfce desktop environment, which aims to be fast and lightweight, while still being visually appealing and easy to use. Like the GNOME and KDE spins, the Xfce spin can be installed to the hard disk. Fedora Linux 7 Fedora Linux 7, codenamed Moonshine, was released on May 31, 2007. The biggest difference between Fedora Core 6 and Fedora 7 was the merging of the Red Hat "Core" and Community "Extras" repositories, dropping "Core" from the name "Fedora Core," and the new build system put in place to manage those packages. This release used entirely new build and compose tools that enabled the user to create fully customized Fedora distributions that could also include packages from any third-party provider. There were three official spins available for Fedora 7: Live – two Live CDs (one for GNOME and one for KDE); Fedora – a DVD that includes all the major packages available at shipping; Everything – simply an installation tree for use by yum and Internet installations. Fedora 7 features GNOME 2.18 and KDE 3.5, a new theme entitled Flying High, OpenOffice.org 2.2 and Firefox 2.0. Fast user switching was fully integrated and enabled by default. Also, there were a number of updates to SELinux, including a new setroubleshoot tool for debugging SELinux security notifications, and a new, comprehensive system-config-selinux tool for fine-tuning the SELinux setup. Fedora Core 6 Fedora Core 6 was released on October 24, 2006, codenamed Zod. This release introduced the Fedora DNA artwork, replacing the Fedora Bubbles artwork used in Fedora Core 5. The codename is derived from the infamous villain, General Zod, from the Superman DC Comic Books. This version introduced support for the Compiz compositing window manager and AIGLX (a technology that enables GL-accelerated effects on a standard desktop). It shipped with Firefox 1.5 as the default web browser, and Smolt, a tool that allows users to inform developers about the hardware they use. Fedora Core 5 This Core release introduced specific artwork that defined it. This is a trend that has continued in later Fedora versions. Fedora Core 5 was released on March 20, 2006, with the codename Bordeaux, and introduced the Fedora Bubbles artwork. It was the first Fedora release to include Mono and tools built with it such as Beagle, F-Spot and Tomboy. It also introduced new package management tools such as pup and pirut (see Yellowdog Updater, Modified). It also was the first Fedora release not to include the long deprecated (but kept for compatibility) LinuxThreads, replaced by the Native POSIX Thread Library. Fedora Core 4 Fedora Core 4 was released on June 13, 2005, with the codename Stentz. It shipped with Linux 2.6.11, KDE 3.4 and GNOME 2.10. This version introduced the new Clearlooks theme, which was inspired by the Red Hat Bluecurve theme. It also shipped with the OpenOffice.org 2.0 office suite, as well as Xen, a high performance and secure open source virtualization framework. It also introduced support for the PowerPC CPU architecture, and over 80 new policies for Security-Enhanced Linux (SELinux). Fedora Core 3 Fedora Core 3 was released on November 8, 2004, codenamed Heidelberg. This was the first release of Fedora Core to include the Mozilla Firefox web browser, as well as support for the Indic scripts. This release also saw the LILO boot loader deprecated in favour of GNU GRUB. 
Security-Enhanced Linux (SELinux) was also enabled by default, but with a new targeted policy, which was less strict than the policy used in Fedora Core 2. Fedora Core 3 shipped with GNOME 2.8 and KDE 3.3. It was the first release to include the new Fedora Extras repository. Fedora Core 2 Fedora Core 2 was released on May 18, 2004, codenamed Tettnang. It shipped with Linux 2.6, GNOME 2.6, KDE 3.2, and Security-Enhanced Linux (SELinux) (SELinux was disabled by default due to concerns that it radically altered the way that Fedora Core ran). XFree86 was replaced by the newer X.org, a merger of the previous official X11R6 release, which additionally included a number of updates to Xrender, Xft, Xcursor, fontconfig libraries, and other significant improvements. Fedora Core 1 Fedora Core 1 is the first version of Fedora and was released on November 6, 2003. It was codenamed Yarrow. Fedora Core 1 was based on Red Hat Linux 9 and shipped with version 2.4.19 of the Linux kernel, version 2.4 of the GNOME desktop environment, and K Desktop Environment 3.1. References External links Fedora Project Lists of operating systems Software version histories
30174309
https://en.wikipedia.org/wiki/OpenNebula
OpenNebula
OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of Infrastructure as a Service. The two primary uses of the OpenNebula platform are data center virtualization and cloud deployments based on the KVM hypervisor, LXD system containers, and AWS Firecracker microVMs. The platform is also capable of offering the cloud infrastructure necessary to operate a cloud on top of existing VMware infrastructure. In early June 2020, OpenNebula announced the release of a new Enterprise Edition for corporate users, along with a Community Edition. OpenNebula CE is free and open-source software, released under the Apache License version 2. OpenNebula CE comes with free access to maintenance releases but with upgrades to new minor/major versions only available for users with non-commercial deployments or with significant contributions to the OpenNebula Community. OpenNebula EE is distributed under a closed-source license and requires a commercial Subscription. History The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero. The first public release of the software occurred in 2008. The goals of the research were to create efficient solutions for managing virtual machines on distributed infrastructures. It was also important that these solutions had the ability to scale at high levels. Open-source development and an active community of developers have since helped mature the project. As the project matured it began to become more and more adopted and in March 2010 the primary writers of the project founded C12G Labs, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or utilizing OpenNebula. Description OpenNebula orchestrates storage, network, virtualization, monitoring, and security technologies to deploy multi-tier services (e.g. compute clusters) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula ...". The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (VMware vCenter, KVM, LXD and AWS Firecracker), and can accommodate multiple hardware and software combinations in a data center. OpenNebula is sponsored by OpenNebula Systems (formerly C12G). OpenNebula is widely used by a variety of industries, including cloud providers, telecommunication, information technology services, government, banking, gaming, media, hosting, supercomputing, research laboratories, and international research projects. The OpenNebula Project is also used by some other cloud solutions as a cloud engine. OpenNebula has grown significantly since going public and now has many notable users from a variety of industries. Notable users from the telecommunications and internet industry include Akamai, Blackberry, Fuze, Telefónica, and INdigital. 
Users in the information technology industry include CA Technologies, Hewlett Packard Enterprise, Hitachi Vantara, Informatica, CentOS, Netways, Ippon Technologies, Terradue 2.0, Unisys, MAV Technologies, Liberologico, Etnetera, EDS Systems, Inovex, Bosstek, Datera, Saldab, Hash Include, Blackpoint, Deloitte, Sharx dc, Server Storage Solutions, and NTS. Government solutions utilizing the OpenNebula Project include the National Central Library of Florence, bDigital, Deutsch E-Post, RedIRIS, GRNET, Instituto Geografico Nacional, CSIC, Gobex, ASAC Communications, KNAW, Junta De Andalucia, Flanders Environmental Agency, red.es, CENATIC, Milieuinfo, SIGMA, and Computaex. Notable users in the financial sector include TransUnion, Produpan, Axcess Financial, Farm Credit Services of America, and Nasdaq Dubai. Media and gaming users include BBC, Unity, R.U.R., Crytek, iSpot.tv, and Nordeus. Hosting providers include ON VPS, NBSP, Orion VM, CITEC, LibreIT, Quobis, Virtion, OnGrid, Altus, DMEx, LMD, HostColor, Handy Networks, BIT, Good Hosting, Avalon, noosvps, Opulent Cloud, PTisp, Ungleich.ch, TAS France, TeleData, CipherSpace, Nuxit, Cyon, Tentacle Networks, Virtiso BV, METANET, e-tugra, lunacloud, todoencloud, Echelon, Knight Point Systems, 2 Twelve Solutions, and flexyz. SaaS and enterprise users include Scytl, LeadMesh, OptimalPath, RJMetrics, Carismatel, Sigma, GLOBALRAP, Runtastic, MOZ, Rentalia, Vibes, Yuterra, Best Buy, Roke, Intuit, Securitas Direct, trivago, and Booking.com. Science and academia implementations include FAS Research Computing at Harvard University, FermiLab, NIKHEF, LAL CNRS, DESY, INFN, IPB Halle, CSIRO, fccn, AIST, KISTI, KIT, ASTI, Fatec Lins, MIMOS, SZTAKI, Ciemat, SurfSARA, ESA, NASA, ScanEX, NCHC, CESGA, CRS4, PDC, CSUC, Tokyo Institute of Technology, CSC, HPCI, Cerit-SC, LRZ, PIC, Telecom SUD Paris, Universidade Federal de Ceara, Instituto Superiore Mario Barella, Academia Sinica, UNACHI, UCM, Universite Catholique de Louvain, Universite de Strasbourg, ECMWF, EWE Tel, INAFTNG, TeideHPC, Cujae, and Kent State University. Cloud products using OpenNebula include ClassCat, HexaGrid, NodeWeaver, Impetus, and ZeroNines. Development OpenNebula follows a rapid release cycle to improve user satisfaction by rapidly delivering features and innovations based on user requirements and feedback. In other words, giving customers what they want more quickly, in smaller increments, while additionally increasing technical quality. Major upgrades generally occur every 3-5 years and each upgrade generally has 3-5 updates. The OpenNebula project is mainly open-source and possible thanks to the active community of developers and translators supporting the project. Since version 5.12 the upgrade scripts are under a closed source license, which makes upgrading between versions impossible without a subscription unless you can prove you are operating a non-profit cloud or made a significant contribution to the project. Release History Version TP and TP2, technology previews, offered host and VM management features, based on Xen hypervisor. Version 1.0 was the first stable release, introduced KVM and EC2 drivers, enabling hybrid clouds. Version 1.2 added new structure for the documentation and more hybrid functionality. Version 1.4 added public cloud APIs on top of oned to build public cloud and virtual network management. Version 2.0 added mysql backend, LDAP authentication, management of images and virtual networks. 
Version 2.2 added integration guides, Ganglia monitoring and OCCI (converted to add-ons in later releases), Java bindings for the API, and the Sunstone GUI. Version 3.0 added a migration path from previous versions, VLAN, ebtables and OVS integration for virtual networks, ACLs and an accounting subsystem, a VMware driver, Virtual Data Centers, and federation across data centers. Version 3.2 added firewalling for VMs (deprecated later on by security groups). Version 3.4 introduced the iSCSI datastore, clusters as a first-class citizen, and quotas. Version 3.6 added Virtual Routers, LVM datastores and the public OpenNebula marketplace integration. Version 3.8 added the OneFlow components for service management and OneGate for application insight. Version 4.0 added support for Ceph and Files datastores and the onedb tool. Version 4.2 added a new self-service portal (Cloud View) and the VMFS datastore. Version 4.4, released in 2014, brought a number of innovations in Open Cloud, improved cloud bursting, and implemented the use of multiple system datastores for storage load policies. Version 4.6 allowed users to have different instances of OpenNebula in geographically dispersed and different data centers; this was known as the Federation of OpenNebula. A new cloud portal for cloud consumers was also introduced, and support was provided in the app market for importing OVAs. Version 4.8 began offering support for Microsoft Azure and IBM. It also continued evolving and improving the platform by incorporating support for OneFlow in Cloud View. This meant end users could now define virtual machine applications and services elastically. Version 4.10 integrated the support portal with the Sunstone GUI. A login token was also developed, and support was provided for VMs and vCenter. Version 4.12 offered new functionality to implement security groups and improve vCenter integration. A showback model was also deployed to track and analyze cloud usage across different departments. Version 4.14 introduced a newly redesigned and modularized graphical interface code, Sunstone. This was intended to improve code readability and ease the task of adding new components. Version 5.0 'Wizard' introduced marketplaces as a means to share images across different OpenNebula instances, as well as management of Virtual Routers with a network topology visual tool in Sunstone. Version 5.2 'Excession' added an IPAM subsystem to aid in network integrations, and also added LDAP group dynamic mapping. Version 5.4 'Medusa' introduced full storage and network management for vCenter, support for VM Groups to define affinity between VMs and hypervisors, and its own implementation of Raft for HA of the controller. Version 5.6 'Blue Flash' focused on scalability improvements, as well as UX improvements. Version 5.8 'Edge' added support for LXD for infrastructure containers, automatic NIC selection and Distributed Datacenters (DDC), which is the ability to use bare metal providers to build remote clusters in edge and hybrid cloud environments. Version 5.10 'Boomerang' added NUMA and CPU pinning, NSX integration, a revamped hook subsystem based on 0MQ, DPDK support and 2FA authentication for Sunstone. Version 5.12 'Firework' saw the removal of the upgrade scripts, added support for AWS Firecracker micro-VMs, a new integration with Docker Hub, Security Group integration (NSX), several improvements to Sunstone, a revamped OneFlow component, and an improved monitoring subsystem. 
Version 6.0 'Mutara' introduced a new multi-cloud architecture based on "Edge Clusters", enhanced Docker and Kubernetes support, a new FireEdge webUI, a revamped OneFlow, and new backup capabilities. Version 6.2 'Red Square' brought improvements to the LXC driver, new support for workload portability, and a beta preview of the new Sunstone GUI. Internal architecture Basic components Host: Physical machine running a supported hypervisor. Cluster: Pool of hosts that share datastores and virtual networks. Template: Virtual Machine definition. Image: Virtual Machine disk image. Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single Template. Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. It allows the creation of Virtual Networks by mapping over the physical ones. They will be available to the VMs through the corresponding bridges on hosts. A virtual network is defined by three different parts: the underlying physical network infrastructure; the logical address space available (IPv4, IPv6, dual stack); and context attributes (e.g. netmask, DNS, gateway). OpenNebula also comes with a Virtual Router appliance to provide networking services like DHCP, DNS etc. Components and Deployment Model The OpenNebula Project's deployment model resembles a classic cluster architecture which utilizes a front-end (master node), hypervisor-enabled hosts (worker nodes), datastores, and a physical network. Front-end machine The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include the management daemon (oned), the scheduler (sched), the web interface server (Sunstone server), and other advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which gathers information such as host status, performance, and capacity use. The system is highly scalable and is only limited by the performance of the actual server. Hypervisor enabled-hosts The worker nodes, or hypervisor enabled-hosts, provide the actual computing resources needed for processing all jobs submitted by the master node. OpenNebula hypervisor enabled-hosts use a virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula. A Virtualization Subsystem interacts with these hosts to take the actions needed by the master node. Storage The datastores simply hold the base images of the Virtual Machines. The datastores must be accessible to the front-end; this can be accomplished by using one of a variety of available technologies such as NAS, SAN, or direct attached storage. Three different datastore classes are included with OpenNebula, including system datastores, image datastores, and file datastores. System datastores hold the images used for running the virtual machines. The images can be complete copies of an original image, deltas, or symbolic links depending on the storage technology used. 
The image datastores are used to store the disk image repository. Images from the image datastores are moved to or from the system datastore when virtual machines are deployed or manipulated. The file datastore is used for regular files and is often used for kernels, ram disks, or context files. Physical networks Physical networks are required to support the interconnection of storage servers and virtual machines in remote locations. It is also essential that the front-end machine can connect to all the worker nodes or hosts. At the very least two physical networks are required, as OpenNebula requires a service network and an instance network. The front-end uses the service network to reach the hosts in order to manage and monitor the hypervisors and to move image files. The instance network allows the virtual machines to connect across different hosts. The network subsystem of OpenNebula is easily customizable to allow adaptation to existing data centers. See also OpenStack CloudStack Cloud computing Cloud computing comparison Ganeti openQRM oVirt References External links OpenNebula Website Cloud infrastructure Free software programmed in Java (programming language) Free software programmed in Ruby Free software programmed in C Free software programmed in C++ Free software for cloud computing Virtualization-related software for Linux
51579506
https://en.wikipedia.org/wiki/Gili%20Raanan
Gili Raanan
Gili Raanan (born 1969) is an Israeli venture capitalist and one of the inventors of CAPTCHA (US patent application with a 1997 priority date), the WAF (web application firewall) and many other inventions in the fields of application security and discovery. Raanan started Sanctum in 1997, and invented the first Web application firewall, AppShield, and the first Web application penetration testing software, AppScan. He later started NLayers, which pioneered the science of application discovery and understanding and was acquired by EMC Corporation. He is an investor and a General Partner at Sequoia Capital, the Founder of Cyberstarts, and was a board member at Adallom, Armis Security, Onavo, Moovit, Innovid (NYSE:CTV) and Snaptu. Biography Gili Raanan was born in Kfar Saba, Israel. He earned a Bachelor of Computer Science from Tel Aviv University and, in 2002, received a Master of Business Administration degree from the Recanati School of Tel Aviv University. Business career Raanan started Sanctum in 1997, and invented the first Web application firewall, AppShield, and the first Web application penetration testing software, AppScan. As part of the research on application security, Raanan co-invented CAPTCHA, as described in the patent application: "The invention is based on applying human advantage in applying sensory and cognitive skills to solving simple problems that prove to be extremely hard for computer software. Such skills include, but are not limited to processing of sensory information such as identification of objects and letters within a noisy graphical environment". Raanan later started NLayers in 2003, which pioneered the science of application discovery and understanding and was acquired by EMC Corporation. Venture capitalist In 2009 Raanan joined Sequoia Capital in Israel as a General Partner. Raanan was a board member at Adallom, Onavo and Snaptu. In 2018 Raanan founded Cyberstarts, an early-stage VC focused on cybersecurity. Philanthropy In 2010 Raanan was one of the early contributors to SpaceIL's Beresheet, Israel's privately funded, engineered and launched mission to the Moon's surface. References Living people 1969 births Israeli inventors
53597507
https://en.wikipedia.org/wiki/Marcus%20Holloway
Marcus Holloway
Marcus "Retr0" Holloway is a character from Ubisoft's Watch Dogs video game franchise. Marcus first appears as the player character of the 2016 title, Watch Dogs 2: he is presented as a young hacker based in the San Francisco Bay Area who is wrongfully flagged with a criminal profile by ctOS 2.0, the electronic mechanism employed to manage the region's infrastructure and surveillance network. Marcus succeeds in infiltrating a facility to wipe his profile from the system, and joins the hacktivist collective DedSec to raise social awareness about the risks posed by ctOS 2.0 and expose the corruption of its creators, the Blume Corporation. An older Marcus makes a cameo appearance in the 2021 downloadable content (DLC) expansion for Watch Dogs Legion, Bloodlines. Marcus is portrayed by American actor Ruffin Prentiss. Marcus was first revealed along with Watch Dogs 2 in June 2016 by Ubisoft. He was intentionally designed to be very different in terms of personality and gameplay utility when compared to Aiden Pearce, the player character of the first Watch Dogs. The developers of Watch Dogs 2 encouraged collaboration with African American professionals within the entertainment industry to avoid turning characters like Marcus into caricatures. In keeping with the general shift in tone of Watch Dogs and expansion of its protagonist's abilities, the developers suggested that Marcus' intended playstyle mostly involve stunning enemies using nonlethal weaponry or using hacking skills to create distractions. Marcus has been the subject of generally positive reception following the release of Watch Dogs 2, with many critics recognizing the character's importance as an unusual representation of African Americans in popular media as well as black people in the video game medium as a whole. Concept and design Marcus Holloway was first revealed during the Watch Dogs 2 world premiere video uploaded on Ubisoft's official YouTube channel in June 2016. He is designed to be an athletic player character. His approach to navigate the game world of Watch Dogs 2 emphasizes fast-paced parkour maneuvers similar to other video games like Mirror's Edge. Marcus' gadgets, which help extend his ability to hack electronics, include a tiny remote control car which can conduct physical hacks and divert enemies with high-pitched insults, and a drone which scout from an elevated position and help plan his moves. Marcus could also using his hacking skills to create proximity triggers on electronics and place traps for his enemies. Although Marcus is presented as an antihero hacker like the protagonist of the first Watch Dogs, Aiden Pearce, he is designed to be more expressive and charismatic by contrast. According to Watch Dogs 2 producer Dominic Guay, Marcus is "an optimistic man. He needs people. He sees good things in people, he’s young, he’s funny, he’s charming". Marcus goes by the handle Retr0 due to his appreciation for the nostalgic aspects of popular culture, such as old-school hacker culture and "classic" songs from the R&B, hip hop, and electronic music genres. Marcus' background is conceptualized as an individual who is raised in Oakland, California. This became crucial to the story as the character is likely to have experienced social injustice early in his life as a member of a marginalized community, which informs the character's motivations throughout the story. 
Unlike the first Watch Dogs, it is possible to play the entirety of Watch Dogs 2 as Marcus without killing any enemies, which Guay explained is informed by the game's setting and lighthearted tone. Most of Marcus' abilities result in rendering his enemies unconscious on the floor with little "z"s floating from their bodies for a time in the aftermath. Marcus takes down enemies within melee range in a brutal but non-lethal manner using the "Thunder Ball", a makeshift weapon made from a billiard ball strung on a chain. The developers' desire to balance player agency with the coherence of the narrative meant that there are certain decisions the character would and would not make, and that there are no cinematics or predetermined moments in the game where Marcus shoots or kills a non-player character as it would be inconsistent with his personality. Although players may equip Marcus with conventional firearms, electroshock weaponry could be acquired through 3D printers in DedSec hideouts or "hackerspaces". However, the game only featured two non-lethal weapons at launch until a major content update released in April 2017 offered more nonlethal options for players. Portrayal Marcus Holloway is portrayed by Ruffin Prentiss through performance capture in Watch Dogs 2. The developers noted that they had a responsibility to ensure Marcus resonates with players as an authentic character as opposed to being a caricature. To accomplish this, Ubisoft actively sought out African-American script consultants and actors for their involvement with finding the characters' voices and encouraged improvised dialogue to make them feel real. Watch Dogs 2 Creative Director Jonathan Morin said the team encouraged the actors to embrace the natural synergy that spontaneously happens during their interactions with each other, noting in particular that the actors who played Marcus and another African American character named Horatio Carlin had a natural way of saying things to each other due to their similar cultural backgrounds. Prentiss said his work experience on the game was positive: in an interview with IGN in October 2020, he praised the game's developers for the opportunity to "play a character in a video game that was brave enough to tackle such tough topics" as well as their willingness to not only explore issues of racism, but also give him creative freedom to interpret certain scenes from his perspective as a black man. Prentiss also acknowledged the positive fan reception to the close friendship between Marcus and a DedSec team member named The Wrench, which had been likened to a "bromance". According to Prentiss, the writing team for Watch Dogs 2 had considered the idea of implementing crossover storylines featuring characters from other titles like Marcus in an Avengers-like coalition for future work in the Watch Dogs series. Prentiss did not hear from Ubisoft before or shortly after the announcement of the then-upcoming Bloodlines even though Wrench was confirmed to appear, although he did express an interest in reprising the role as it is one of his favorite roles. Prentiss was eventually contacted by Ubisoft late in the development of the Bloodlines DLC to provide several voiced lines, a request he was happy to oblige. 
Appearances Watch Dogs 2 The story of Watch Dogs 2 begins with Marcus infiltrating a ctOS 2.0 server facility owned by Blume, a security company that operates a surveillance network spying on the populace of the San Francisco Bay Area and stores their personal information on a cluster of servers, as members of the hacktivist collective DedSec observe his actions remotely. Marcus hacks into the relevant server and removes incriminating information which is wrongfully placed on his data profile, then flees the facility with the help of DedSec members, who invite him to join the collective. Together with his newfound team members, Marcus celebrates his successful infiltration of a highly secure installation near the facility, where they cross paths with Blume chief technology officer (CTO) Dušan Nemec by chance. As part of his efforts to neutralize DedSec as a threat to Blume, Dušan deploys automated social media accounts, or bots, to artificially inflate the popularity of DedSec's presence on social media, and exploits paranoia over the perceived security threat posed by DedSec as leverage to promote the widespread adoption of ctOS 2.0 to deter hacking attacks. Marcus is soon lured to the Blume CEO's office, where he is confronted by Dušan and narrowly escapes an ambush set up by him in collaboration with the police. Marcus regroups with the core group members of DedSec, gains an ally in the notorious hacker Raymond "T-Bone" Kenney, and soon rises to a leadership position within the collective as they attempt to fight the undue influence of Blume and its allies throughout the Bay Area. Dušan eventually succeeds in having Marcus placed on the FBI's most wanted list, but DedSec retaliates by publicly exposing Dušan's illegal dealings retrieved from the ctOS 2.0 network, leading to the latter's arrest by the authorities. Watch Dogs Legion Marcus plays a prominent role in Wrench's story arc in Bloodlines, a 2021 downloadable content expansion to Watch Dogs Legion. He makes several cameo voice appearances throughout the expansion. Reception Marcus has received a generally positive reception from video game critics. In a preview article about Watch Dogs 2 written for PCGamesN, Kirk McKeand liked that Marcus has more imaginative and varied tastes in music and fashion, exhibits a wider range of emotions, and has better gadgets compared to Aiden Pearce. Both IGN's Dan Stapleton and Joe Skrebels liked Marcus as a video game protagonist; Skrebels in particular thought Marcus "felt notably like a person, not just a collection of voice lines designed to string missions together". Polygon ranked Marcus Holloway among the best video game characters of the 2010s, with Jeff Ramos praising Marcus as a "more engaging and relatable protagonist" whose leadership role serves as a "rallying point who inspires and enables others to fight back" against systematic oppression. George Foster claimed that he is the Watch Dogs franchise’s best character, and expressed displeasure in an editorial published by TheGamer that Marcus was seemingly left out of the then-upcoming Bloodlines DLC for Legion. Foster further criticized the procedurally generated player character approach adopted by Legion in lieu of a clearly defined character like Marcus, arguing that it ended up being an inferior game as a result. Several critics have positively assessed Marcus within the context of how black representation is handled by developers in the history of the video game medium. 
In an article published by Paste Magazine, Jeremy Winslow was pleased that Marcus is portrayed as a compelling character with a multifaceted, rounded personality. Winslow argued that Marcus' overall depiction in Watch Dogs 2, particularly with his disadvantaged background, disrupts the typical norms and stereotypes surrounding the vast majority of black video game characters, notably through his atypical visual design and reliance on intellect as opposed to physical brawn. Skrebels agreed that Marcus' story is groundbreaking in how it presents a narrative about a black man’s experiences without relying on cliches. Writing for Polygon, Tanya D said she was impressed by a scene where Marcus interacts with Horatio Carlin, another core DedSec group member who is also African American, while on a visit to the latter's workplace, a prestigious company in the information technology industry located within Silicon Valley. She believed that their conversations are an authentic depiction of code-switching between standard American English and African-American Vernacular English among members of the African American community. Citing Tanya D's opinion on the scene, David J. Leonard agreed that the scene's boldness in challenging existing stereotypes of African Americans and vocalizing the daily realities of racism is not only disruptive but a source of pleasure. Leonard further argued that its importance extends beyond its edifying and truth-telling conversation about racism and the lack of diversity within workplaces in Silicon Valley, and that it also connects the racial injustice perpetuated by Silicon Valley throughout society to the hegemony of white masculinity. Not all reception towards Marcus' depiction in Watch Dogs 2 has been positive. Vice Waypoint staff commended the game's early attempts to engage with questions of identity and marginalization through Marcus and Horatio's interactions, but were left disappointed when Horatio is found murdered by a local street gang midway through the game's narrative. They felt that Marcus' drive for revenge in response to Horatio's murder compromised the character's integrity, and that Horatio is treated as an expendable afterthought since the game made no further attempts to commemorate his significance to Marcus and the rest of the team once the specific questline has concluded. Some reviewers also questioned the ludonarrative dissonance surrounding the potential in-game use of lethal firearms by Marcus. Wesley Yin-Poole from Eurogamer said it felt "off" to him that he could make the "likeable Marcus Holloway shoot to kill", a sentiment also shared by Stapleton who described the "weird disconnect" as feeling different from roleplaying as a violent criminal like the player characters of Grand Theft Auto V. Stapleton observed that Marcus' personality is the only motivating factor that pushes players toward a non-lethal playstyle of stealth and silent takedowns, as it may become impractical during intense combat situations. Phillip Kollar from Polygon criticized the ease of access to firearms in the first place as "a complete failure of imagination" on the part of the game's developers. 
References Black characters in video games Male characters in video games Fictional African-American people Fictional American people in video games Fictional melee weapons practitioners Fictional programmers Hackers in video games Science fiction video game characters Ubisoft characters Video game characters introduced in 2016 Video game protagonists Vigilante characters in video games
28431998
https://en.wikipedia.org/wiki/John%20Paul%20Vergara
John Paul Vergara
John Paul C. Vergara is a Professor at the Department of Information Systems and Computer Science, School of Science and Engineering, Ateneo de Manila University. He is currently the Vice President for the Loyola Schools of the University, succeeding Ma. Assunta Caoile-Cuyegkeng, Ph.D. Education Vergara graduated from Philippine Science High School in 1982. He then attended the Ateneo de Manila University as a National Science and Technology Agency scholar, graduating with a Bachelor of Science degree with majors in Computer Science and in Mathematics in 1986. In 1990, he completed his Master of Science studies in computer science and applications at Virginia Polytechnic Institute and State University (Virginia Tech). In 1997, he completed his doctorate in the same field at the same institution, where he was recognized for Scholarly Performance in Graduate Study. As a scientist and educator Vergara became a member of the Ateneo de Manila University faculty in 1986. He became professor and chair of the Department of Information Systems and Computer Science, and also became head of the Information Technology department of the Ateneo Graduate School of Business as well as Assistant Director of the Ateneo Information Technology Institute. In 2008, he was a visiting adjunct professor at Virginia Tech's Department of Computer Science. Vergara has also held numerous consulting positions, is a member of numerous scientific organizations, and has refereed for various academic journals. As Ateneo administrator Vergara was appointed Vice President for Administration and Planning by the Ateneo's Board of Trustees, a position he held from April 2009 to March 2010. Subsequently, he was chosen as the next Vice President for the Loyola Schools. Awards and recognition Vergara's awards include recognition for Scholarly Performance in Graduate Study by Virginia Tech in 1997. He was also awarded the DuPont Miracles of Science Award by DuPont Far East in July 2001. Vergara was likewise named one of the Outstanding Young Scientists by the Philippines' National Academy of Science and Technology. References Living people Filipino educators Filipino chemists Ateneo de Manila University alumni Ateneo de Manila University faculty Year of birth missing (living people)
18782488
https://en.wikipedia.org/wiki/Institute%20of%20IT%20Professionals
Institute of IT Professionals
The Institute of IT Professionals (IITP) is a non-profit incorporated society in New Zealand. As New Zealand's ICT professional body, the IITP exists to promote education and ensure a high level of professional practice amongst ICT professionals. Before July 2012, IITP was known as the New Zealand Computer Society Inc (NZCS). Objects The objects of the Institute of IT Professionals, as provided in the Institute's constitution, are to: develop the discipline of Information Technology in New Zealand. foster and promote the education, training and qualification of persons practising or intending to practice within the discipline in New Zealand. promote education by granting qualifications and grades of membership to members of the public in recognition of their proficiency within the discipline of Information Technology. promote proper conduct and set ethical standards for the discipline. develop or provide educational lectures, meetings, conferences and publications and to promote research within the discipline of Information Technology. take a public position on matters of concern to the Information Technology discipline and make submissions or advise government as appropriate. advance the education of the public of New Zealand in relation to Information Technology. promote any other related activities that are, in the opinion of the Institute, in the interests of the public or discipline. Codes of Ethics All IITP members must formally agree to a Code of Ethics. The IITP Code of Ethics is mostly concerned with non-discrimination, zeal, community, skills, competence, continuous development, consequences, and conflicts of interest, and contains the following 8 tenets: Good Faith – Members shall treat people with dignity, good faith and equity; without discrimination; and have consideration for the values and cultural sensitivities of all groups within the community affected by their work; Integrity – Members shall act in the execution of their profession with integrity, dignity and honour to merit the trust of the community and the profession, and apply honesty, skill, judgement and initiative to contribute positively to the well-being of society; Community-focus – Members’ responsibility for the welfare and rights of the community shall come before their responsibility to their profession, sectional or private interests or to other members; Skills – Members shall apply their skills and knowledge in the interests of their clients or employers for whom they will act without compromising any other of these Tenets; Continuous Development – Members shall develop their knowledge, skills and expertise continuously through their careers, contribute to the collective wisdom of the profession, and actively encourage their associates to do likewise; Informed Consent – Members shall take reasonable steps to inform themselves, their clients or employers of the economic, social, environmental or legal consequences which may arise from their actions; Managed Conflicts of Interest – Members shall inform their clients or employers of any interest which may be, or may be perceived as being, in conflict with the interests of their clients or employers, or which may affect the quality of service or impartial judgement; Competence – Members shall follow recognised professional practice, and provide services and advice carefully and diligently only within their areas of competence Membership The IITP has an estimated membership of approximately 3,500 individual members, plus around 120 Corporate Partners 
(businesses who have joined on behalf of their staff) resulting in an estimated representation of over 10,000 ICT professionals. IITP provides for multiple membership levels depending on a member's stage of career and requirements. Full membership Professional membership is for those in the ICT profession who meet certain requirements in terms of experience and qualifications. Member (MIITP) is the full membership level Fellow (FIITP) is the very senior membership level Associate membership Associate Member is a membership open to anyone who abides by the Institute's Code of Ethics Honorary Fellowship Honorary Fellow (HFIITP) is a title conferred on a small number of individuals who have had a major impact on the sector, and is regarded as the highest honour in the ICT profession Organisational membership Corporate Partner is for organisations wishing to align with and support the work of the IITP (includes significant benefits for staff) Educational Partner is for educational institutions wishing to align with and support the work of the IITP (includes significant benefits for staff) Structure The Institute of IT Professionals is a single nationwide non-profit incorporated society. Within the Institute are five branches based on geographic location, being Auckland, Wellington, Canterbury, Waikato/Bay of Plenty, and Otago/Southland. The IITP also encompasses a number of Specialist Groups in topics such as Software Testing and Computer Security. IITP branches and specialist groups are staffed by volunteers. The Institute is governed by a National Council made up of the IITP President, Deputy President, and five Councillors, with each councillor being appointed by one of the branches of the Institute. The Institute maintains a fully staffed operational head office in Wellington and is managed by a Chief Executive who also sits on Council in a non-voting capacity. Advocacy IITP is regarded as the voice of the ICT profession in New Zealand and undertakes significant advocacy on behalf of the profession and wider sector. IITP is represented on most ICT-related advisory groups, panels and public ICT-related boards in New Zealand, and was a founding member of the Digital Development Council, a body set up by the New Zealand Government to help achieve New Zealand's digital potential. The Institute is engaged with government (both ministerial and official level), industry and academia and works as a catalyst and conduit for these three important sub-sectors to work together in the interests of the overall ICT Sector, both in the area of ethics and professional practice as well as to solve issues such as the current ICT skills shortage and drop in tertiary ICT enrolments. IITP also takes an active interest in educational issues and in 2008 completed a detailed analysis of ICT-related NCEA Achievement Standards in secondary schools and outlined a number of significant and serious problems with these standards. The Institute also promotes digital literacy. Certification In 2009 the Institute released an internationally aligned ICT professional certification in New Zealand, the Information Technology Certified Professional (ITCP) qualification. Events The IITP runs numerous events throughout New Zealand, but predominantly in Auckland, Wellington, Christchurch, Hamilton and Dunedin. 
As well as around 20 local events a month, the Institute began a monthly nationwide Innovators of ICT event series in August 2008, taking notable and successful entrepreneurs such as Rod Drury and Don Christie on a speaking trip to the five cities above to promote innovation and "thinking outside the square" to New Zealand's development and ICT community. History The Institute was founded as the New Zealand Data Processing and Computer Society Inc in October 1960 in Wellington, New Zealand and changed its name to New Zealand Computer Society Inc in 1967. Honorary Fellowships The IITP occasionally confers the title of Honorary Fellow of the IITP (HFIITP) on an individual who has made a significant contribution to the ICT sector in New Zealand over a period of time, or the Institute itself over many years. HFIITP recipients include former Minister of ICT Hon David Cunliffe and ICT entrepreneur Rod Drury. There are currently 25 Honorary Fellows. International Relationships The IITP is a full member of the International Federation for Information Processing (IFIP), an international umbrella organisation originally set up by UNESCO, and South East Asia Regional Computer Confederation (SEARCC). The Institute also works with other professional bodies around the world, such as the Australian Computer Society and the British Computer Society. See also Australian Computer Society (ACS) British Computer Society (BCS) Canadian Information Processing Society (CIPS) Computer Society of Southern Africa (CSSA) Association for Computing Machinery (ACM) IEEE Computer Society (IEEE CS) Institution of Analysts and Programmers (IAP) International Federation for Information Processing (IFIP) Information technology industry in New Zealand References External links IITP official website ITCP certification official website Organizations established in 1960 Engineering societies based in New Zealand Information technology in New Zealand Information technology organizations based in Oceania 1960 establishments in New Zealand
4055998
https://en.wikipedia.org/wiki/Robot%20software
Robot software
Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier. Some robot software aims at developing intelligent mechanical devices. Common tasks include feedback loops, control, pathfinding, data filtering, locating and sharing data. Introduction While it is a specific type of software, it is still quite diverse. Each manufacturer has their own robot software. While the vast majority of software is about manipulation of data and seeing the result on-screen, robot software is for the manipulation of objects or tools in the real world. Industrial robot software Software for industrial robots consists of data objects and lists of instructions, known as program flow. For example, Go to Jig1 is an instruction to the robot to go to the positional data named Jig1. Of course, programs can also contain implicit data, for example Tell axis 1 move 30 degrees. Data and program usually reside in separate sections of the robot controller memory. One can change the data without changing the program and vice versa. For example, one can write a different program using the same Jig1 or one can adjust the position of Jig1 without changing the programs that use it. Examples of programming languages for industrial robots Due to the highly proprietary nature of robot software, most manufacturers of robot hardware also provide their own software. While this is not unusual in other automated control systems, the lack of standardization of programming methods for robots does pose certain challenges. For example, there are over 30 different manufacturers of industrial robots, so there are also 30 different robot programming languages required. There are enough similarities between the different robots that it is possible to gain a broad-based understanding of robot programming without having to learn each manufacturer's proprietary language. One method of controlling robots from multiple manufacturers is to use a post processor and off-line programming software. With this method, it is possible to handle brand-specific robot programming languages from a universal programming language, such as Python. However, compiling and uploading fixed off-line code to a robot controller does not allow the robotic system to be state aware, so it cannot adapt its motion and recover as the environment changes. Unified real-time adaptive control for any robot is currently possible with a few different third-party tools. Some examples of published robot programming languages are shown below. Task in plain English: Move to P1 (a general safe position) Move to P2 (an approach to P3) Move to P3 (a position to pick the object) Close gripper Move to P4 (an approach to P5) Move to P5 (a position to place the object) Open gripper Move to P1 and finish VAL was one of the first robot ‘languages’ and was used in Unimate robots. Variants of VAL have been used by other manufacturers including Adept Technology. Stäubli currently use VAL3. Example program: PROGRAM PICKPLACE 1. MOVE P1 2. MOVE P2 3. MOVE P3 4. CLOSEI 0.00 5. MOVE P4 6. MOVE P5 7. OPENI 0.00 8. 
MOVE P1 .END Example of Stäubli VAL3 program: begin movej(p1,tGripper,mNomSpeed) movej(appro(p3,trAppro),tGripper,mNomSpeed) movel(p3,tGripper,mNomSpeed) close(tGripper) movej(appro(p5,trAppro),tGripper,mNomSpeed) movel(p5,tGripper,mNomSpeed) open(tGripper) movej(p1,tGripper,mNomSpeed) end trAppro is a Cartesian transformation variable. If we use it with the appro command, we do not need to teach the P2 and P4 points; instead, an approach to the pick and place positions is transformed dynamically for trajectory generation. Epson RC+ (example for a vacuum pickup) Function PickPlace Jump P1 Jump P2 Jump P3 On vacuum Wait .1 Jump P4 Jump P5 Off vacuum Wait .1 Jump P1 Fend ROBOFORTH (a language based on FORTH). : PICKPLACE P1 P3 GRIP WITHDRAW P5 UNGRIP WITHDRAW P1 ; (With Roboforth you can specify approach positions for places so you do not need P2 and P4.) Clearly, the robot should not continue the next move until the gripper is completely closed. Confirmation or allowed time is implicit in the above examples of CLOSEI and GRIP whereas the On vacuum command requires a time delay to ensure satisfactory suction. Other robot programming languages Visual programming language The LEGO Mindstorms EV3 programming language is a simple language for its users to interact with. It is a graphical user interface (GUI) written with LabVIEW. The approach is to start with the program rather than the data. The program is constructed by dragging icons into the program area and adding or inserting into the sequence. For each icon, you then specify the parameters (data). For example, for the motor drive icon you specify which motors and by how much they move. When the program is written it is downloaded into the Lego NXT 'brick' (microcontroller) for testing. Scripting languages A scripting language is a high-level programming language that is used to control the software application, and is interpreted in real-time, or "translated on the fly", instead of being compiled in advance. A scripting language may be a general-purpose programming language or it may be limited to specific functions used to augment the running of an application or system program. Some scripting languages, such as RoboLogix, have data objects residing in registers, and the program flow represents the list of instructions, or instruction set, that is used to program the robot. Programming languages are generally designed for building data structures and algorithms from scratch, while scripting languages are intended more for connecting, or “gluing”, components and instructions together. Consequently, the scripting language instruction set is usually a streamlined list of program commands that are used to simplify the programming process and provide rapid application development. Parallel languages Another interesting approach is worthy of mention. All robotic applications need parallelism and event-based programming. Parallelism is where the robot does two or more things at the same time. This requires appropriate hardware and software. Most programming languages rely on threads or complex abstraction classes to handle parallelism and the complexity that comes with it, like concurrent access to shared resources. URBI provides a higher level of abstraction by integrating parallelism and events in the core of the language semantics. 
whenever(face.visible) { headPan.val += camera.xfov * face.x & headTilt.val += camera.yfov * face.y } The above code will move the headPan and headTilt motors in parallel to make the robot head follow the human face visible on the video taken by its camera whenever a face is seen by the robot. Robot application software Regardless of which language is used, the end result of robot software is to create robotic applications that help or entertain people. Applications include command-and-control and tasking software. Command-and-control software includes robot control GUIs for tele-operated robots, point-n-click command software for autonomous robots, and scheduling software for mobile robots in factories. Tasking software includes simple drag-n-drop interfaces for setting up delivery routes, security patrols and visitor tours; it also includes custom programs written to deploy specific applications. General purpose robot application software is deployed on widely distributed robotic platforms. Safety considerations Programming errors represent a serious safety consideration, particularly in large industrial robots. The power and size of industrial robots mean they are capable of inflicting severe injury if programmed incorrectly or used in an unsafe manner. Due to the mass and high speeds of industrial robots, it is always unsafe for a human to remain in the work area of the robot during automatic operation. The system can begin motion at unexpected times and a human will be unable to react quickly enough in many situations, even if prepared to do so. Thus, even if the software is free of programming errors, great care must be taken to make an industrial robot safe for human workers or human interaction, such as loading or unloading parts, clearing a part jam, or performing maintenance. The ANSI/RIA R15.06-1999 American National Standard for Industrial Robots and Robot Systems - Safety Requirements (revision of ANSI/RIA R15.06-1992) book from the Robotic Industries Association is the accepted standard on robot safety. This includes guidelines for both the design of industrial robots, and the implementation or integration and use of industrial robots on the factory floor. Numerous safety concepts such as safety controllers, maximum speed during a teach mode, and use of physical barriers are covered. See also Behavior-based robotics and Subsumption architecture Developmental robotics Epigenetic robotics Evolutionary robotics Industrial robot Cognitive robotics Robot control RoboLogix Automated planning and scheduling Cybernetics Artificial intelligence Robotics suite Telerobotics / Telepresence Robotic automation software Swarm robotics platforms References External links Linux Devices. ANSI/RIA R15.06-1999 American National Standard for Industrial Robots and Robot Systems - Safety Requirements (revision of ANSI/RIA R15.06-1992)
3414121
https://en.wikipedia.org/wiki/Hecuba%20%28play%29
Hecuba (play)
Hecuba (, Hekabē) is a tragedy by Euripides written . It takes place after the Trojan War, but before the Greeks have departed Troy (roughly the same time as The Trojan Women, another play by Euripides). The central figure is Hecuba, wife of King Priam, formerly Queen of the now-fallen city. It depicts Hecuba's grief over the death of her daughter Polyxena, and the revenge she takes for the murder of her youngest son Polydorus. Plot In the play's opening, the ghost of Polydorus tells how when the war threatened Troy, he was sent to King Polymestor of Thrace for safekeeping, with gifts of gold and jewelry. But when Troy lost the war, Polymestor treacherously murdered Polydorus, and seized the treasure. Polydorus has foreknowledge of many of the play's events and haunted his mother's dreams the night before. The events take place on the coast of Thrace, as the Greek navy returns home from Troy. The Trojan queen Hecuba, now enslaved by the Greeks, mourns her great losses and worries about the portents of her nightmare. The Chorus of young slave women enters, bearing fateful news. One of Hecuba's last remaining daughters, Polyxena, is to be killed on the tomb of Achilles as a blood sacrifice to his honor (reflecting the sacrifice of Iphigenia at the start of the war). Greek commander Odysseus enters, to escort Polyxena to an altar where Neoptolemus will shed her blood. Odysseus ignores Hecuba's impassioned pleas to spare Polyxena, and Polyxena herself says she would rather die than live as a slave. In the first Choral interlude, the Chorus lament their own doomed fate, cursing the sea breeze that will carry them on ships to the foreign lands where they will live in slavery. The Greek messenger Talthybius arrives, tells a stirring account of Polyxena's strikingly heroic death, and delivers a message from Agamemnon, chief of the Greek army, to bury Polyxena. Hecuba sends a slave girl to fetch water from the sea to bathe her daughter's corpse. After a second Choral interlude, the body of Polydorus is brought on stage, having washed up on shore. Upon recognizing her son whom she thought safe, Hecuba reaches new heights of despair. Hecuba rages inconsolably against the brutality of such an action, and resolves to take revenge. Agamemnon enters, and Hecuba, tentatively at first and then boldly requests that Agamemnon help her avenge her son's murder. Hecuba's daughter Cassandra is a concubine of Agamemnon so the two have some relationship to protect and Agamemnon listens. Agamemnon reluctantly agrees, as the Greeks await a favorable wind to sail home. The Greek army considers Polymestor an ally and Agamemnon does not wish to be observed helping Hecuba against him. Polymestor arrives with his sons. He inquires about Hecuba's welfare, with a pretense of friendliness. Hecuba reciprocates, concealing her knowledge of the murder of Polydorus. Hecuba tells Polymestor she knows where the remaining treasures of Troy are hidden, and offers to tell him the secrets, to be passed on to Polydorus. Polymestor listens intently. Hecuba convinces him and his sons to enter an offstage tent where she claims to have more personal treasures. Enlisting help from other slaves, Hecuba kills Polymestor's sons and stabs Polymestor's eyes. He re-enters blinded and savage, hunting as if a beast for the women who ruined him. Agamemnon re-enters angry with the uproar and witnesses Hecuba's revenge. 
Polymestor argues that Hecuba's revenge was a vile act, whereas his murder of Polydorus was intended to preserve the Greek victory and dispatch a young Trojan, a potential enemy of the Greeks. The arguments take the form of a trial, and Hecuba delivers a rebuttal exposing Polymestor's speech as sophistry. Agamemnon decides justice has been served by Hecuba's revenge. Polymestor, again in a rage, foretells the deaths of Hecuba by drowning and Agamemnon by his wife Clytemnestra, who also kills Cassandra. Soon after, the wind finally rises again, the Greeks will sail, and the Chorus goes to an unknown, dark fate. The plot falls into two clearly distinguished parts: the Greeks' sacrifice of Hecuba's daughter, Polyxena, to the shade of Achilles, and the vengeance of Hecuba on Polymestor, the Thracian king. In popular culture A performance of Hecuba is a focus of the 2018 two-part comedy film A Bread Factory. Translations Edward P. Coleridge, 1891 – prose: full text Arthur S. Way, 1912 – verse: full text J. T. Sheppard, 1927 – verse Hugh O. Meredith, 1937 – verse William Arrowsmith, 1958 – verse: available for digital loan Philip Vellacott, 1963 – verse Robert Emmet Meagher, 1995 – verse (published as Hekabe) Timberlake Wertenbaker, 1995 – verse David Kovacs, 1995 – prose Frank McGuinness, 2004 – verse Anne Carson, 2006 – prose George Theodoridis 2007 – prose: full text Stephen Esposito, 2010 – verse Jay Kardan and Laura-Gray Street, 2011 – verse: full text References Further reading Zeitlin, Froma (1996). "The Body's Revenge: Dionysos and Tragic Action in Euripides' Hekabe", in Froma Zeitlin, Playing the Other: Gender and Society in Classical Greek Literature. Chicago: University of Chicago Press. pp. 172–216. Segal, C. (1990). "Violence and the Other: Greek, Female, and Barbarian in Euripides’ Hecuba". Transactions of the American Philological Association 120: 109–131. https://doi.org/10.2307/283981 Planinc, Z. (2018). "Expel the Barbarian from Your Heart': Intimations of the Cyclops in Euripides's Hecuba". Philosophy and Literature 42 (2): 403–415. https://muse.jhu.edu/article/708995 External links Plays about slavery Plays by Euripides Trojan War literature Plays set in ancient Greece Agamemnon
33096454
https://en.wikipedia.org/wiki/MDOS
MDOS
MDOS may refer to: Micropolis MDOS, an operating system for Z80 S-100 bus machines in the 1970s Motorola Disk Operating System for the M6800-based EXORciser development system in the 1970s Motorola Disk Operating System, also the underlying basis of the QDOS operating system of the Fairlight CMI digital sampling synthesizer series MIDAS (operating system) (originally named MDOS, and also known as M-DOS or My DOS), an 8-bit operating system for 8080/Z80, developed by Microsoft's Marc McDonald in 1979 Myarc Disk Operating System (aka MDOS), an operating system emulating the TI-99/4A for the Geneve 9640 in 1987 MS-DOS 4.0 (multitasking), a multitasking operating system Multitasking DOS sub-system in IBM OS/2, e.g. C:\OS2\MDOS\ Multiuser DOS (aka DR MDOS), a DOS- and CP/M-compatible 32-bit protected mode operating system for 386 machines developed by Digital Research / Novell in the 1990s Multiuser DOS Federation, an industry alliance in the 1990s See also DOS (disambiguation) MOS (disambiguation) Wordmark Systems MyDOS, an operating system for 8-bit Atari home computers by Wordmark Systems in the 1980s
10541907
https://en.wikipedia.org/wiki/Windows%20Vista%20networking%20technologies
Windows Vista networking technologies
In computing, Microsoft's Windows Vista and Windows Server 2008 introduced in 2007/2008 a new networking stack named Next Generation TCP/IP stack, to improve on the previous stack in several ways. The stack includes native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host model and features an infrastructure to enable more modular components that one can dynamically insert and remove. Architecture The Next Generation TCP/IP stack connects to NICs via a Network Driver Interface Specification (NDIS) driver. The network stack, implemented in tcpip.sys, implements the Transport, Network and Data link layers of the TCP/IP model. The Transport layer includes implementations for TCP, UDP and unformatted RAW protocols. At the Network layer, IPv4 and IPv6 protocols are implemented in a dual-stack architecture. The Data link layer (also called the Framing layer) implements 802.3, 802.1, PPP, Loopback and tunnelling protocols. Each layer can accommodate Windows Filtering Platform (WFP) shims, which allow packets at that layer to be introspected and also host the WFP Callout API. The networking API is exposed via three components: Winsock A user mode API for abstracting network communication using sockets and ports. Datagram sockets are used for UDP, whereas Stream sockets are for TCP. While Winsock is a user mode library, it uses a kernel mode driver, called Ancillary Function Driver (AFD), to implement certain functionality. Winsock Kernel (WSK) A kernel-mode API providing the same socket-and-port abstraction as Winsock, while exposing other features such as Asynchronous I/O using I/O request packets. Transport Driver Interface (TDI) A kernel-mode API which can be used for legacy protocols like NetBIOS. It includes a component, known as TDX, to map the TDI functionality to the network stack. User interface The user interface for configuring, troubleshooting and working with network connections has changed significantly from prior versions of Windows as well. Users can make use of the new "Network and Sharing Center" to see the status of their network connections, and to access every aspect of configuration. A single icon in the notification area (system tray) represents connectivity through all network adapters, whether wired or wireless. The network can be browsed using Network Explorer, which replaces Windows XP's "My Network Places". Network Explorer items can be a shared device such as a scanner, or a file share. The Network Location Awareness (NLA) service uniquely identifies each network and exposes the network's attributes and connectivity type so that applications can determine the optimal network configuration. However, applications have to use the NLA APIs explicitly to be aware of the network connectivity changes, and adapt accordingly. Windows Vista uses the Link Layer Topology Discovery (LLTD) protocol to graphically present how different devices are connected over a network, as a Network Map. In addition, the Network Map uses LLTD to determine connectivity information and media type (wired or wireless), so that the map is topologically accurate. The ability to know network topology is important for diagnosing and solving networking problems, and for streaming content over a network connection. 
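Of the three components listed above, Winsock is the one most user-mode applications call directly. The following is only a rough illustration of that surface, not code from any Microsoft sample; the host name example.net and port 7 are placeholder values, and error handling is abbreviated.
/* Minimal Winsock TCP client sketch (user mode); host and port are placeholders. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")
int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;    /* initialise Winsock 2.2 */
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family = AF_UNSPEC;        /* let the dual stack choose IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;    /* stream socket, i.e. TCP */
    if (getaddrinfo("example.net", "7", &hints, &res) != 0) { WSACleanup(); return 1; }
    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s != INVALID_SOCKET && connect(s, res->ai_addr, (int)res->ai_addrlen) == 0) {
        const char msg[] = "hello";
        send(s, msg, (int)(sizeof msg - 1), 0);     /* send application data */
        char buf[128];
        int n = recv(s, buf, (int)sizeof buf, 0);   /* blocking read of the reply */
        if (n > 0) printf("received %d bytes\n", n);
    }
    if (s != INVALID_SOCKET) closesocket(s);
    freeaddrinfo(res);
    WSACleanup();
    return 0;
}
The same socket-and-port abstraction is available to kernel-mode code through Winsock Kernel, as described above.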
Any device can implement LLTD to appear on the Network Map with an icon representing the device, allowing users one-click access to the device's user interface. When LLTD is invoked, it provides metadata about the device that contains static or state information, such as the MAC address, IPv4/IPv6 address, signal strength etc. Network classification by location Windows Vista classifies the networks it connects to as either Public, Private or Domain and uses Network Location Awareness to switch between network types. Different network types have different firewall policies. An open network such as a public wireless network is classified as Public and is the most restrictive of all network settings. In this mode other computers on the network are not trusted and external access to the computer, including sharing of files and printers, is disabled. A home network is classified as Private, and it enables file sharing between computers. If the computer is joined to a domain, the network is classified as a Domain network; in such a network the policies are set by the domain controller. When a network is first connected to, Windows Vista prompts to choose the correct network type. On subsequent connections to the network, the service is used to gain information on which network is connected to and automatically switch to the network configuration for the connected network. Windows Vista introduces a concept of network profiles. For each network, the system stores the IP address, DNS server, Proxy server and other network features specific to the network in that network's profile. So when that network is subsequently connected to, the settings need not be reconfigured, the ones saved in its profile are used. In the case of mobile machines, the network profiles are chosen automatically based on what networks are available. Each profile is part of either a Public, Private or Domain network. Internet Protocol v6 The Windows Vista networking stack supports the dual Internet Protocol (IP) layer architecture in which the IPv4 and IPv6 implementations share common Transport and Framing layers. Windows Vista provides a GUI for configuration of both IPv4 and IPv6 properties. IPv6 is now supported by all networking components and services. The Windows Vista DNS client can use IPv6 transport. Internet Explorer in Windows Vista and other applications that use WinINet (Windows Mail, file sharing) support literal IPv6 addresses (). Windows Firewall and the IPsec Policies snap-in support IPv6 addresses as permissible character strings. In IPv6 mode, Windows Vista can use the Link Local Multicast Name Resolution (LLMNR) protocol, as described in , to resolve names of local hosts on a network which does not have a DNS server running. This service is useful for networks without a central managing server, and for ad hoc wireless networks. IPv6 can also be used over PPP-based dial-up and PPPoE connections. Windows Vista can also act as a client/server for file sharing or DCOM over IPv6. Support for DHCPv6, which can be used with IPv6, is also included. IPv6 can even be used when full native IPv6 connectivity is not available, using Teredo tunneling; this can even traverse most IPv4 symmetric Network Address Translations (NATs) as well. Full support for multicast is also included, via the MLDv2 and SSM protocols. The IPv6 interface ID is randomly generated for permanent autoconfigured IPv6 addresses to prevent determining the MAC address based on known company IDs of NIC manufacturers. 
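Because the stack is dual-IP-layer, a server written against the IPv6 sockets API can also accept IPv4 clients by clearing the IPV6_V6ONLY socket option, which Windows Vista leaves enabled by default. The fragment below is a hedged sketch of that pattern; the function name and the port number are illustrative only, and WSAStartup is assumed to have been called already.
/* Sketch: one IPv6 listening socket that also serves IPv4 clients. */
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")
SOCKET open_dual_stack_listener(void)
{
    SOCKET s = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) return INVALID_SOCKET;
    DWORD v6only = 0;   /* 0 = also accept IPv4 peers as IPv4-mapped IPv6 addresses */
    setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, (const char *)&v6only, (int)sizeof v6only);
    struct sockaddr_in6 addr = {0};   /* the all-zero address :: means "any interface" */
    addr.sin6_family = AF_INET6;
    addr.sin6_port = htons(8080);     /* arbitrary example port */
    if (bind(s, (struct sockaddr *)&addr, (int)sizeof addr) != 0 || listen(s, SOMAXCONN) != 0) {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;   /* the caller accepts connections and eventually calls closesocket() */
}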
Wireless networks Support for wireless networks is built into the network stack itself as a new set of APIs called Native Wifi, and does not emulate wired connections, as was the case with previous versions of Windows. This allows implementation of wireless-specific features such as larger frame sizes and optimized error recovery procedures. Native Wifi is exposed by the Auto Configuration Module (ACM), which replaces Windows XP's Wireless Zero Configuration. The ACM is extensible, so developers can incorporate additional wireless functionality (such as automatic wireless roaming) and override the automatic configuration and connection logic without affecting the built-in framework. It is easier to find wireless networks in range and tell which networks are open and which are closed. Hidden wireless networks, which do not advertise their name (SSID), are better supported. Security for wireless networks is improved with better support for newer wireless standards like 802.11i. EAP-TLS is the default authentication mode. Connections are made at the most secure connection level supported by the wireless access point. WPA2 can be used even in ad-hoc mode. Windows Vista also provides a Fast Roaming service that will allow users to move from one access point to another without loss of connectivity. Preauthentication with the new wireless access point can be used to retain connectivity. Wireless networks are managed from either the Connect to a network dialog box within the GUI or the netsh wlan command from the shell. Settings for wireless networks can also be configured using Group policy. Windows Vista enhances security when joining a domain over a wireless network. It can use Single Sign On to use the same credentials to join a wireless network as well as the domain housed within the network. In this case, the same RADIUS server is used for both PEAP authentication for joining the network and MS-CHAP v2 authentication to log into the domain. A bootstrap wireless profile can also be created on the wireless client, which first authenticates the computer to the wireless network and joins the network. At this stage, the machine still does not have any access to the domain resources. The machine will run a script, stored either on the system or on a USB thumb drive, which authenticates it to the domain. Authentication can be done either by using a username and password combination or security certificates from a Public key infrastructure (PKI) vendor such as VeriSign. Wireless setup and configuration Windows Vista features Windows Connect Now, which supports setting up a wireless network using several methods supported in the Wi-Fi Protected Setup standard. It implements a native code API, Web Services for Devices (WSDAPI), to support Devices Profile for Web Services (DPWS) and also a managed code implementation in WCF. DPWS enables simpler device discoverability like UPnP and describes available services to those clients. Function Discovery is a new technology that serves as an abstraction layer between applications and devices, allowing applications to discover devices by referencing the device's function, rather than by its bus type or the nature of its connection. Plug and Play Extensions (PnP-X) allow network-connected devices to appear inside Windows as if they were locally connected devices. UPnP support has also been enhanced to include integration with PnP-X and Function Discovery. 
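The Native Wifi functionality described above is reached from user mode through the wlanapi functions. The sketch below only enumerates the wireless interfaces managed by the Auto Configuration Module and prints their descriptions; it is illustrative rather than a complete wireless management tool.
/* Sketch: list the wireless interfaces known to the Auto Configuration Module. */
#include <windows.h>
#include <wlanapi.h>
#include <stdio.h>
#pragma comment(lib, "wlanapi.lib")
int main(void)
{
    HANDLE client = NULL;
    DWORD negotiated = 0;
    /* Client version 2 requests the Windows Vista behaviour of the Native Wifi API. */
    if (WlanOpenHandle(2, NULL, &negotiated, &client) != ERROR_SUCCESS) return 1;
    PWLAN_INTERFACE_INFO_LIST list = NULL;
    if (WlanEnumInterfaces(client, NULL, &list) == ERROR_SUCCESS) {
        DWORD i;
        for (i = 0; i < list->dwNumberOfItems; i++) {
            PWLAN_INTERFACE_INFO info = &list->InterfaceInfo[i];
            wprintf(L"interface %lu: %s (state %d)\n",
                    i, info->strInterfaceDescription, (int)info->isState);
        }
        WlanFreeMemory(list);   /* buffers allocated by the API are freed through it */
    }
    WlanCloseHandle(client, NULL);
    return 0;
}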
Network performance Windows Vista's networking stack also uses several performance optimizations, which allow higher throughput by enabling faster recovery from packet losses in high packet loss environments such as wireless networks. Windows Vista uses the NewReno () algorithm which allows a sender to send more data while retrying in case it receives a partial acknowledgement, which is an acknowledgement from the receiver for only a part of the data that has been received. It also uses Selective Acknowledgements (SACK) to reduce the amount of data to be retransmitted in case a portion of the data sent was not received correctly, and Forward RTO-Recovery (F-RTO) to prevent unnecessary retransmission of TCP segments when round trip time increases. It also includes Neighbour Unreachability Detection capability in both IPv4 and IPv6, which tracks the accessibility of neighboring nodes. This allows faster error recovery in case a neighboring node fails. NDIS 6.0, introduced in Windows Vista, supports offloading IPv6 traffic and checksum calculations for IPv6, improved manageability, scalability and performance with reduced complexity for NDIS miniports, and simpler models for writing Lightweight Filter Drivers (LWF). LWF drivers are a combination of NDIS intermediate drivers and a miniport driver that eliminate the need to write a separate protocol and miniport and have a bypass mode to examine only selected control and data paths. The TCP/IP stack also provides fail-back support for default gateway changes by periodically attempting to send TCP traffic through a previously detected unavailable gateway. This can provide faster throughput by sending traffic through the primary default gateway on the subnet. Another significant change that aims to improve network throughput is the automatic resizing of TCP Receive window. The receive window (RWIN) specifies how much data a host is prepared to receive, and is limited by, among other things, the available buffer space. In other words, it is a measure of how much data the remote transmitter can send before requiring an acknowledgement for the outstanding data. When the receive window is too small, the remote transmitter will frequently find that it has hit the limit of how much outstanding data it can transmit, even though there is enough bandwidth available to transmit more data. This leads to incomplete link utilization. So using a larger RWIN size boosts throughput in such situations; an auto-adjusting RWIN tries to keep the throughput rate as high as is permissible by the bandwidth of the link. Receive window auto tuning functionality continually monitors the bandwidth and the latency of TCP connections individually and optimizes the receive window for each connection. The window size is increased in high-bandwidth (~5 Mbit/s+) or high-latency (>10ms) situations. Traditional TCP implementations use the TCP Slow Start algorithm to detect how fast they can transmit without choking the receiver (or intermediate nodes). In a nutshell, it specifies that transmission should start at a slow rate, by transmitting a few packets. This number is controlled by the Congestion window – which specifies the number of outstanding packets that have been transmitted but for which an acknowledgement of receipt from the receiver has not yet been received. As acknowledgements are received, the congestion window is expanded, one TCP segment at a time until an acknowledgement fails to arrive. 
Then the sender assumes that with the congestion window size of that instant, the network gets congested. However, a high bandwidth network can sustain a quite large congestion window without choking up. The slow start algorithm can take quite some time to reach that threshold – leaving the network under-utilized for a significant time. The new TCP/IP stack also supports Explicit Congestion Notification (ECN) to keep throughput hit due to network congestion as low as possible. Without ECN, a TCP message segment is dropped by some router when its buffer is full. Hosts get no notice of building congestion until packets start being dropped. The sender detects the segment did not reach the destination; but due to lack of feedback from the congested router, it has no information on the extent of reduction in transmission rate it needs to make. Standard TCP implementations detect this drop when they time out waiting for acknowledgement from the receiver. The sender then reduces the size of its congestion window, which is the limit on the amount of data in flight at any time. Multiple packet drops can even result in a reset of the congestion window, to TCP's Maximum Segment Size, and a TCP Slow Start. Exponential backoff and only additive increase produce stable network behaviour, letting routers recover from congestion. However, the dropping of packets has noticeable impacts on time-sensitive streams like streaming media, because it takes time for the drop to be noticed and retransmitted. With ECN support enabled, the router sets two bits in the data packets that indicate to the receiver it is experiencing congestion (but not yet fully choked). The receiver in turn lets the sender know that a router is facing congestion and then the sender lowers its transmission rate by some amount. If the router is still congested, it will set the bits again, and eventually the sender will slow down even more. The advantage of this approach is that the router does not get full enough to drop packets, and thus the sender does not have to lower the transmission rate significantly to cause serious delays in time-sensitive streams; nor does it risk severe under-utilization of bandwidth. Without ECN, the only way routers can tell hosts anything is by dropping packets. ECN is like Random Early Drop, except that the packets are marked instead of dropped. The only caveat is that both sender and receiver, as well as all intermediate routers, have to be ECN-friendly. Any router along the way can prevent the use of ECN if it considers ECN-marked packets invalid and drops them (or more typically the whole connection setup fails because of a piece of network equipment that drops connection setup packets with ECN flags set). Routers that don't know about ECN can still drop packets normally, but there is some ECN-hostile network equipment on the Internet. For this reason, ECN is disabled by default. It can be enabled via the netsh interface tcp set global ecncapability=enabled command. In previous versions of Windows, all processing needed to receive or transfer data over one network interface was done by a single processor, even in a multi processor system. With supported network interface adapters, Windows Vista can distribute the job of traffic processing in network communication among multiple processors. This feature is called Receive Side Scaling. Windows Vista also supports network cards with TCP Offload Engine, that have certain hardware-accelerated TCP/IP-related functionality. 
Windows Vista uses its TCP Chimney Offload system to offload to such cards the framing, routing, error-correction, acknowledgement and retransmission jobs required in TCP. However, for application compatibility, only TCP data transfer functionality is offloaded to the NIC, not TCP connection setup. This will remove some load from the CPU. Traffic processing in both IPv4 and IPv6 can be offloaded. Windows Vista also supports NetDMA, which uses the DMA engine to allow processors to be freed from the hassles of moving data between network card data buffers and application buffers. It requires specific hardware DMA architectures, such as Intel I/O Acceleration, to be enabled. Compound TCP Compound TCP is a modified TCP congestion avoidance algorithm, meant to improve networking performance in all applications. It is not enabled by default in the pre-Service Pack 1 version of Windows Vista, but enabled in SP1 and Windows Server 2008. It uses a different algorithm to modify the congestion window – borrowing from TCP Vegas and TCP New Reno. For every acknowledgement received, it increases the congestion window more aggressively, thus reaching the peak throughput much faster, increasing overall throughput. Quality of service Windows Vista's networking stack includes integrated policy-based quality of service (QoS) functionality to prioritize network traffic. Quality of service can be used to manage network usage by specific applications or users, by throttling the bandwidth available to them, or it can be used to limit bandwidth usage by other applications when high priority applications, such as real time conferencing applications, are being run, to ensure they get the bandwidth they need. Traffic throttling can also be used to prevent large data transfer operations from using up all the available bandwidth. QoS policies can be confined by application executable name, folder path, source and destination IPv4 or IPv6 addresses, source and destination TCP or UDP ports or a range of ports. In Windows Vista, QoS policies can be applied to any application at the Network Layer, thus eliminating the need to rewrite applications using QoS APIs to be QoS-aware. QoS policies can either be set on a per-machine basis or set by Active Directory Group policy objects, which ensure that all Windows Vista clients connected to the Active Directory container (a domain, a site or an organizational unit) will enforce the policy settings. Windows Vista supports the Wireless Multimedia (WMM) profile classes for QoS in wireless networks as certified by the Wi-Fi Alliance: BG (for background data), BE (for Best Effort non real time data), VI (for real time videos) and VO (for real time voice data). When both the wireless access point and the wireless NIC support the WMM profiles, Windows Vista can provide preferential treatment to the data sent. qWave Windows Vista includes a specialized QoS API called qWave (Quality Windows Audio/Video Experience), which is a pre-configured quality of service module for time dependent multimedia data, such as audio or video streams. qWave uses different packet priority schemes for real-time flows (such as multimedia packets) and best-effort flows (such as file downloads or e-mails) to ensure that real-time data gets as little delay as possible, while providing a high quality channel for other data packets. qWave is intended to ensure real-time transport of multimedia within a wireless network. qWave supports multiple simultaneous multimedia as well as data streams. 
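The qWave functionality is reached through the QOS2 functions declared in qwave.h. The fragment below is a simplified, hedged sketch of tagging an already connected socket as an audio/video flow; the helper name, its parameters, and the choice of flags are illustrative rather than a prescribed pattern.
/* Sketch: register a connected socket with qWave as an audio/video flow.
   add_av_flow and its parameters are illustrative names, not part of the API. */
#include <winsock2.h>
#include <qos2.h>
#pragma comment(lib, "qwave.lib")
BOOL add_av_flow(SOCKET sock, struct sockaddr *dest, HANDLE *qos_out, QOS_FLOWID *flow_id)
{
    QOS_VERSION version = {1, 0};          /* request version 1.0 of the QOS2 interface */
    if (!QOSCreateHandle(&version, qos_out))
        return FALSE;
    *flow_id = 0;                          /* the flow id must start out as zero */
    /* QOSTrafficTypeAudioVideo asks for the audio/video priority class;
       passing 0 for the flags leaves the default flow behaviour in place. */
    if (!QOSAddSocketToFlow(*qos_out, sock, dest,
                            QOSTrafficTypeAudioVideo, 0, flow_id)) {
        QOSCloseHandle(*qos_out);
        return FALSE;
    }
    return TRUE;   /* the caller keeps the handle and closes it with QOSCloseHandle()
                      when the stream ends */
}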
qWave does not depend solely on bandwidth reservation schemes such as RSVP to provide QoS guarantees, because the bandwidth of a wireless network fluctuates constantly. As a result, it also uses continuous bandwidth monitoring to implement service guarantees. Applications have to explicitly use the qWave APIs to use the service. When a multimedia application asks qWave to initiate a new media stream, qWave tries to reserve bandwidth using RSVP. At the same time, it uses QoS probes to make sure the network has enough bandwidth to support the stream. If the conditions are met, the stream is allowed and prioritized so that other applications do not encroach on its share of bandwidth. However, environmental factors can affect the reception of the wireless signal and reduce the available bandwidth, even if no other stream is allowed to access the reserved bandwidth. Because of this, qWave continuously monitors the available bandwidth, and if it decreases, the application is informed, creating a feedback loop that lets it adapt the stream to fit the lower bandwidth. If more bandwidth becomes available, qWave automatically reserves it and informs the application of the improvement. To probe the quality of the network, probe packets are sent to the source, statistics of their path (such as round-trip time, loss and jitter) are analyzed, and the results are cached. The probe is repeated at specific intervals to update the cache, and whenever a stream is requested, the cache is consulted. qWave also serializes the creation of multiple simultaneous streams, even across devices, so that probes sent for one stream are not interfered with by probes for others. qWave uses client-side buffers to keep the transmission rate within the capacity of the slowest link in the network, so that access point buffers are not overwhelmed, thereby reducing packet loss. qWave works best if both the source and the sink (client) of the multimedia stream are qWave-aware. The wireless access point (AP) also needs to be QoS-enabled and support bandwidth reservation. qWave can work without QoS-aware APs; however, since it cannot reserve bandwidth in this case, it has to depend on the application to adapt the stream to the available bandwidth, which is affected not only by network conditions but also by other traffic on the network. qWave is also available for other devices as part of the Windows Rally technologies. Network security In order to provide better security when transferring data over a network, Windows Vista provides enhancements to the cryptographic algorithms used to encrypt data. Support for 256-bit, 384-bit and 521-bit Elliptic-curve Diffie–Hellman (ECDH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES), is included in the network stack itself. Direct support for SSL connections in the new Winsock API allows socket applications to directly control the security of their traffic over a network (such as providing security policy and requirements for traffic, and querying security settings) rather than having to add extra code to support a secure connection. Computers running Windows Vista can be part of logically isolated networks within an Active Directory domain. Only the computers in the same logical network partition are able to access the resources in the domain. Even though other systems may be physically on the same network, unless they are in the same logical partition, they cannot access partitioned resources.
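The building blocks named above, elliptic-curve Diffie–Hellman for key agreement and AES for bulk encryption, can be illustrated with the third-party Python cryptography package. This is only an illustration of the algorithms themselves, assuming a recent version of that package is installed; it does not use Windows' CNG implementation or the Winsock SSL support described here.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates an ephemeral EC key pair (P-384 here) and exchanges
# public keys; both sides then derive the same shared secret.
alice = ec.generate_private_key(ec.SECP384R1())
bob = ec.generate_private_key(ec.SECP384R1())
shared_alice = alice.exchange(ec.ECDH(), bob.public_key())
shared_bob = bob.exchange(ec.ECDH(), alice.public_key())
assert shared_alice == shared_bob

# Derive a 256-bit AES key from the ECDH secret and encrypt with AES-GCM.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo handshake").derive(shared_alice)
aesgcm = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"hello over the network", None)
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'hello over the network'
```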
A system may be part of multiple such logical network partitions. Windows Vista also includes an Extensible Authentication Protocol Host (EAPHost) framework that provides extensibility for authentication methods for commonly used protected network access technologies such as 802.1X and PPP. It allows networking vendors to develop and easily install new authentication methods, known as EAP methods. A planned feature of the new TCP/IP suite known as "Routing Compartments" would have used a per-user routing table, compartmentalizing the network according to the user's needs so that data from one compartment would not cross into another. This feature, however, was removed before the release of Windows Vista and may be included in a future release of Windows. Network Access Protection Windows Vista also introduces Network Access Protection (NAP), which makes sure that computers connecting to a network conform to a required level of system health as set by the administrator of the network. With NAP enabled on a network, when a Windows Vista computer attempts to join the network, it is verified that the computer is up to date with security updates, virus signatures and other factors specified by the network administrator, including the configuration of IPsec and 802.1X authentication settings. The computer is granted full access to the network only when the criteria are met; failing that, it may either be denied access to the network or be granted limited access to certain resources only. It may optionally be granted access to servers that will provide it with the latest updates; once the updates are installed, the computer is granted access to the network. However, Windows Vista can only be a NAP client, i.e., a client computer that connects to a NAP-enabled network; health policy and verification servers have to be running Windows Server 2008. IPsec and Windows Firewall IPsec configuration is now fully integrated into the Windows Firewall with Advanced Security snap-in and the netsh advfirewall command-line tool, to prevent contradictory rules and to offer simplified configuration along with an authenticating firewall. Advanced firewall filtering rules (exceptions) and IPsec policies can be set up based on criteria such as the domain, public and private profiles; source and destination IP addresses or address ranges; source and destination TCP and UDP ports (all or multiple ports); specific types of interfaces; ICMP and ICMPv6 traffic by Type and Code; services; edge traversal; IPsec protection state; and specified users and computers based on Active Directory accounts. Prior to Windows Vista, setting up and maintaining an IPsec policy configuration in many scenarios required one set of rules for protection and another set of rules for traffic exemptions. IPsec nodes in Windows Vista begin communicating while simultaneously negotiating protected communication, and if a response is received and the negotiation completes, subsequent communication is protected. This eliminates the need to set up IPsec exemption filters for hosts that do not or cannot support IPsec, and allows protection to be required for incoming communication while remaining optional for outgoing communication. IPsec also allows traffic between domain controllers and member computers to be secured, while still allowing clear text for domain joins and other communication types. IPsec-protected domain joins are allowed if NTLMv2 is used and if the domain controllers and member computers are running Windows Server 2008 and Windows Vista, respectively.
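Returning to Network Access Protection, the sketch below shows the general shape of the health evaluation a policy server performs before admitting a client. The statement fields, thresholds and access levels are invented for illustration and do not reflect the actual NAP statement-of-health protocol or schema.

```python
from dataclasses import dataclass

@dataclass
class HealthStatement:
    os_updates_current: bool
    antivirus_signature_age_days: int
    firewall_enabled: bool

def evaluate(statement: HealthStatement) -> str:
    """Return 'full', 'remediation' or 'denied' based on simple health rules."""
    if (statement.os_updates_current
            and statement.antivirus_signature_age_days <= 7
            and statement.firewall_enabled):
        return "full"            # healthy: full network access
    if statement.firewall_enabled:
        return "remediation"     # partially healthy: access to update servers only
    return "denied"

print(evaluate(HealthStatement(True, 3, True)))    # -> full
print(evaluate(HealthStatement(False, 30, True)))  # -> remediation
```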
IPsec fully supports IPv6, AuthIP (which allows for a second authentication), integration with NAP for authenticating with a health certificate, Network Diagnostics Framework support for failed IPsec negotiation, new IPsec performance counters, improved detection of cluster node failure and faster renegotiation of security associations. There is support for stronger algorithms for main mode negotiation (stronger DH algorithms and Suite B) and for data integrity and encryption (AES with CBC, AES-GMAC, SHA-256, AES-GCM). Network Diagnostics Framework (NDF) The ability to assist the user in diagnosing a network problem is a major new networking feature. There is extensive support for runtime diagnostics for both wired and wireless networks, including support for the TCP Management Information Base (MIB)-II and better system event logging and tracing. The Vista TCP/IP stack also supports ESTATS, which defines extended performance statistics for TCP and can help in determining the cause of network performance bottlenecks. Windows Vista can inform the user of most causes of network transmission failure, such as an incorrect IP address, incorrect DNS and default gateway settings, gateway failure, a port in use or blocked, the receiver not being ready, the DHCP service not running, NetBIOS over TCP/IP name resolution failure, etc. Transmission errors are also exhaustively logged, and the logs can be analyzed to determine the cause of an error. Windows Vista has a greater awareness of the network topology the host computer is in, using technologies such as Universal Plug and Play. With this network awareness technology, Windows Vista can help the user fix network issues or simply provide a graphical view of the perceived network configuration. Windows Filtering Platform The Windows Vista network stack includes the Windows Filtering Platform (WFP), which allows external applications to access and hook into the packet-processing pipeline of the networking subsystem. WFP allows incoming and outgoing packets to be filtered, analyzed or modified at several layers of the TCP/IP protocol stack. Because WFP has a built-in filtering engine, applications need not write a custom engine; they just need to provide the custom logic for the engine to use. WFP includes a Base Filtering Engine, which implements the filter requests. Packets are then processed by the Generic Filtering Engine, which also includes a Callout Module where applications providing custom processing logic can be hooked up. WFP can be put to uses such as inspecting packets for malware, selective packet restriction (as in firewalls), or providing custom encryption systems, among others. Upon its initial release, WFP was plagued with bugs, including memory leaks and race conditions. The Windows Firewall in Windows Vista is implemented through WFP. Peer-to-peer communication Windows Vista includes significant peer-to-peer support with the introduction of new APIs and protocols. A new version of the Peer Name Resolution Protocol (PNRP v2) is introduced, along with a set of Peer Distributed Routing Table, Peer Graphing, Peer Grouping, Peer Naming, and Peer Identity Management APIs. Contacts can be created and administered with the new peer-to-peer subsystem; serverless presence allows users to manage real-time presence information and track the presence of other registered users across a subnet or the Internet.
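To make the callout idea behind the Windows Filtering Platform more concrete, the sketch below shows a generic filtering engine that walks application-registered callouts at a layer and asks each for a verdict. The layer names, verdict values and registration function are invented for this illustration; they are not the actual WFP interfaces.

```python
from enum import Enum
from typing import Callable, Dict, List

class Verdict(Enum):
    PERMIT = 1
    BLOCK = 2
    CONTINUE = 3   # no opinion; ask the next callout

Packet = dict  # e.g. {"dst_port": 25, "payload": b"..."}
Callout = Callable[[Packet], Verdict]

_callouts: Dict[str, List[Callout]] = {"inbound_transport": [], "outbound_transport": []}

def register_callout(layer: str, callout: Callout) -> None:
    """Hook application-provided logic into the processing pipeline at a layer."""
    _callouts[layer].append(callout)

def classify(layer: str, packet: Packet) -> Verdict:
    """Run every callout registered at the layer; the first definite verdict wins."""
    for callout in _callouts[layer]:
        verdict = callout(packet)
        if verdict is not Verdict.CONTINUE:
            return verdict
    return Verdict.PERMIT   # default policy

# Example callout: a toy "firewall" that blocks outbound SMTP.
register_callout("outbound_transport",
                 lambda pkt: Verdict.BLOCK if pkt.get("dst_port") == 25 else Verdict.CONTINUE)

print(classify("outbound_transport", {"dst_port": 25}))   # -> Verdict.BLOCK
print(classify("outbound_transport", {"dst_port": 443}))  # -> Verdict.PERMIT
```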
A new People Near Me service allows for the discovery and management of contacts on the same subnet and uses Windows Contacts to manage and store contact information; the new capabilities allow peers to send application invitations to other peers (ad hoc collaboration is also supported) without a centralized server. Windows Meeting Space is an example of such an application. PNRP also allows creating an overlay network called a Graph. Each peer in the overlay network corresponds to a node in the graph. All the nodes in a graph share book-keeping information responsible for the functioning of the network as a whole. For example, in a distributed resource-management network, the information about which node holds which resource needs to be shared. Such information is shared as Records, which are flooded to all the peers in a graph, and each peer stores the Records in a local database. A Record consists of a header and a body. The body contains data specific to the application that is using the API; the header contains metadata describing the data in the body as name-value pairs serialized using XML, in addition to author and version information. It can also contain an index of the body data for fast searching. A node can also connect to other nodes directly, for communication that need not be shared with the entire Graph. The API also allows the creation of a secure overlay network called a Group, consisting of all or a subset of the nodes in a Graph. Unlike a Graph, a Group can be shared by multiple applications. All peers in a Group must be identifiable by a unique name, registered using PNRP, and must have a digital signature certificate termed a Group Member Certificate (GMC). All Records exchanged are digitally signed. Peers must be invited into a Group; the invitation contains the GMC that enables the invited peer to join the group. A new Windows Internet Computer Names (WICN) peer networking feature allows an IPv6-connected machine to obtain a custom or unique domain name. If the computer is connected to the Internet, users can specify a secured or unsecured host name for their computer from a console command, without having to register a domain name or configure dynamic DNS. WICN can be used in any application that accepts an IP address or DNS name; PNRP performs all the domain name resolution at the peer-to-peer level. Another planned feature in Windows Vista would have provided a new domain-like networking setup known as a Castle, but this did not make it into the release. Castle would have made it possible to have an identification service, providing user authentication for all members on the network, without a centralized server. It would have allowed user credentials to propagate across the peer-to-peer network, making the arrangement more suitable for a home network. People Near Me People Near Me (formerly People Nearby) is a peer-to-peer service designed to simplify communication and collaboration among users connected to the same subnet. People Near Me is used by Windows Meeting Space for collaboration and contact discovery. People Near Me was listed as part of Microsoft's mobile platform strategy as revealed during the Windows Hardware Engineering Conference of 2004. People Near Me uses Windows Contacts to manage contact information; by default, a user may receive invitations from all users connected to the same subnet, but a user can designate another user as a trusted contact to enable collaboration across the Internet, to increase security, and to determine the presence of these contacts.
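The Record-flooding behaviour of Graphs described above can be sketched briefly: each record carries header metadata plus an application-defined body and is pushed to every peer, each of which keeps it in a local database. The class and field names are illustrative only, not the actual Peer Graphing API, and the toy graph is assumed to be fully connected so that flooding to direct neighbours reaches everyone.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Record:
    record_id: str
    author: str
    version: int
    body: bytes                      # application-specific payload

@dataclass
class Peer:
    name: str
    database: Dict[str, Record] = field(default_factory=dict)
    neighbours: List["Peer"] = field(default_factory=list)

    def publish(self, record: Record) -> None:
        """Store the record locally, then flood it to the neighbours
        (sufficient here because the toy graph is fully connected)."""
        self.store(record)
        for peer in self.neighbours:
            peer.store(record)

    def store(self, record: Record) -> None:
        existing = self.database.get(record.record_id)
        if existing is None or record.version > existing.version:
            self.database[record.record_id] = record

# Three peers in a fully connected toy graph.
a, b, c = Peer("A"), Peer("B"), Peer("C")
a.neighbours, b.neighbours, c.neighbours = [b, c], [a, c], [a, b]

a.publish(Record("printer-share", author="A", version=1, body=b"\\\\A\\laser"))
print(sorted(p.name for p in (a, b, c) if "printer-share" in p.database))  # ['A', 'B', 'C']
```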
Background Intelligent Transfer Service The new Background Intelligent Transfer Service (BITS) 3.0 has a new feature called Neighbor Casting, which supports peer-to-peer file transfers within a domain. This facilitates peer caching and allows users to download and serve content (such as WSUS updates) from peers on the same subnet, receive notifications when a file has been downloaded, access the temporary file while the download is in progress, and control HTTP redirects. This saves bandwidth on the network and reduces the load on the server. BITS 3.0 also uses Internet Gateway Device Protocol counters to calculate the available bandwidth more accurately. Core networking driver and API improvements The HTTP kernel-mode driver in Windows Vista, Http.sys, has been enhanced to support server-side authentication, logging, IDN hostnames, Event Tracing and better manageability through netsh http and new performance counters. WinINet, the protocol handler for HTTP and FTP, handles IPv6 literal addresses and includes support for gzip and deflate decompression to improve content-encoding performance, internationalized domain names and Event Tracing. WinHTTP, the client API for server-based applications and services, supports IPv6, AutoProxy, HTTP/1.1 chunked transfer encoding, larger data uploads, SSL and client certificates, server and proxy authentication, automatic handling of redirects, keep-alive (persistent) connections, the HTTP/1.0 protocol and session cookies. Winsock has been updated with new APIs and support for Event Tracing. Winsock Layered Service Provider support has been enhanced with logged installations and removals, a new API for reliably installing LSPs, a command to reliably remove LSPs, facilities to categorize LSPs and to remove most LSPs from the processing path for system-critical services, and support for the Network Diagnostics Framework. Winsock Kernel Winsock Kernel (WSK) is a new transport-independent kernel-mode Network Programming Interface (NPI) that provides TDI client developers with a sockets-like programming model similar to the one supported in user-mode Winsock. While most of the same sockets programming concepts exist as in user-mode Winsock, such as socket creation, bind, connect, accept, send and receive, Winsock Kernel is a completely new programming interface with unique characteristics such as asynchronous I/O that uses IRPs and event callbacks to enhance performance. TDI is still supported in Windows Vista for backward compatibility. Server Message Block 2.0 A new version of the Server Message Block (SMB) protocol was introduced with Windows Vista. It has a number of changes to improve performance and add capabilities. Windows Vista and later operating systems use SMB 2.0 when communicating with other machines running Windows Vista or later; SMB 1.0 continues to be used for connections to any previous version of Windows, or to Samba. Samba 3.6 also includes support for SMB 2.0. Remote Differential Compression Remote Differential Compression (RDC) is a client-server synchronization protocol that allows data to be synchronized with a remote source using compression techniques to minimize the amount of data sent across the network. It synchronizes files by calculating and transferring only the differences between them on the fly.
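A much-simplified sketch of that differences-only approach: both sides split a file into chunks, compare chunk hashes, and transfer only the chunks whose hashes differ. Fixed-size chunks and SHA-256 hashes are assumptions of this toy version; actual RDC uses recursive, content-defined chunking and its own signature format, not the scheme shown here.

```python
import hashlib

CHUNK = 4096  # illustrative fixed chunk size; RDC actually uses content-defined cut points

def chunk_hashes(data: bytes) -> list:
    """Split data into fixed-size chunks and hash each one."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def sync(old: bytes, new: bytes) -> bytes:
    """Rebuild 'new' on the client, transferring only chunks that changed."""
    old_chunks = [old[i:i + CHUNK] for i in range(0, len(old), CHUNK)]
    old_hashes = chunk_hashes(old)
    transferred = 0
    rebuilt = bytearray()
    for i, h in enumerate(chunk_hashes(new)):
        if i < len(old_hashes) and old_hashes[i] == h:
            rebuilt += old_chunks[i]                   # reuse the local copy
        else:
            chunk = new[i * CHUNK:(i + 1) * CHUNK]     # "send" only this chunk
            transferred += len(chunk)
            rebuilt += chunk
    print(f"transferred {transferred} of {len(new)} bytes")
    return bytes(rebuilt)

old = b"A" * 20000
new = b"A" * 8000 + b"B" * 4096 + b"A" * 7904
assert sync(old, new) == new
```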
RDC is therefore suitable for efficient synchronization of files that have been updated independently, when network bandwidth is limited, or in scenarios where the files are large but the differences between them are small. Bluetooth support The Windows Vista Bluetooth stack is improved with support for more hardware IDs, EDR performance improvements, adaptive frequency hopping for Wi-Fi coexistence, and Synchronous Connection Oriented (SCO) protocol support, which is needed for audio profiles. The Windows Vista Bluetooth stack supports a kernel-mode device driver interface in addition to the user-mode programming interface, which enables third parties to add support for additional Bluetooth profiles such as SCO, SDP, and L2CAP. This was lacking in the Windows XP Service Pack 2 built-in Bluetooth stack, which had to be entirely replaced by a third-party stack for additional profile support. It also provides RFCOMM support using sockets in addition to virtual COM ports. KB942567, called the Windows Vista Feature Pack for Wireless, adds Bluetooth 2.1+EDR support and remote wake from S3 or S4 for self-powered Bluetooth modules. This feature pack, while initially only available to OEMs, was eventually included in Windows Vista Service Pack 2. Virtual Private Networking (VPN) Windows Vista and later support the use of PEAP with PPTP. The authentication mechanisms supported are PEAPv0/EAP-MSCHAPv2 (passwords) and PEAP-TLS (smartcards and certificates). Secure Socket Tunneling Protocol (SSTP), introduced in Windows Vista Service Pack 1, is a form of VPN tunnel that provides a mechanism to transport PPP or L2TP traffic through an SSL 3.0 channel. SSL provides transport-level security with key negotiation, encryption and traffic integrity checking. References External links Enterprise Networking with Windows Vista Connecting to Wireless Networks with Windows Vista Policy-based QoS Architecture in Windows Server 2008 and Windows Vista Windows Core Networking Networking technologies Computer networking Windows Server
30270444
https://en.wikipedia.org/wiki/Comparison%20of%20Android%20e-reader%20software
Comparison of Android e-reader software
The following tables detail e-book reader software for the Android operating system. Each section corresponds to a major area of functionality in e-book reader software. The comparisons are based on the latest released version. Software reading systems File formats supported See Comparison of e-book formats for details on the file formats. Navigation features Display features Edit-tool features Book source management features Other software e-book readers for Android Other e-book readers for Android devices include: BookShout!, Nook e-Reader applications for third-party devices and OverDrive Media Console. Additionally, Palmbookreader reads some formats (such as PDB and TXT) on Palm OS and Android devices. The Readmill app, introduced in February 2011, read numerous formats on Android and iOS devices but was shut down on July 1, 2014. Another popular app, Bluefire Reader, was removed from the Google Play Store in 2019. See also Comparison of e-book formats - includes both device and software formats Comparison of e-book readers - includes hardware e-book readers Comparison of iOS e-book reader software E-book software References External links E-reader comparison Google lists Multimedia software comparisons
57914571
https://en.wikipedia.org/wiki/PFCP
PFCP
Packet Forwarding Control Protocol (PFCP) is a 3GPP protocol used on the Sx/N4 interface between the control plane and the user plane function, specified in TS 29.244. It is one of the main protocols introduced in the 5G Next Generation Mobile Core Network (also known as 5GC), but it is also used in the 4G/LTE EPC to implement Control and User Plane Separation (CUPS). PFCP and the associated interfaces seek to formalize the interactions between the different types of functional elements used in the mobile core networks deployed by most operators providing 4G, as well as 5G, services to mobile subscribers. These two types of components are: the Control Plane (CP) functional elements, handling mostly signaling procedures (e.g. network attachment procedures, management of user-data plane paths, and even delivery of some lightweight services such as SMS); and the User-data Plane (UP) functional elements, handling mostly packet forwarding based on rules set by the CP elements (e.g. packet forwarding for IPv4, IPv6, or possibly even Ethernet with future 5G deployments, between the various supported wireless RANs and the PDN representing the Internet or an enterprise network). PFCP's scope is similar to that of OpenFlow; however, it was engineered to serve the particular use case of mobile core networks. PFCP is also used on the interface between the control plane and user plane functions of a disaggregated BNG, as defined by the Broadband Forum in TR-459. Overview Albeit similar to GTP in concepts and implementation, PFCP is complementary to it. It provides the control means for a signaling component of the Control Plane to manage the packet processing and forwarding performed by a User Plane component. Typical EPC or 5G packet gateways are split by the protocol into two functional parts, allowing for more natural evolution and scalability. The PFCP protocol is used on the following 3GPP mobile core interfaces: Sxa - between SGW-C and SGW-U; Sxb - between PGW-C and PGW-U; Sxc - between TDF-C and TDF-U (Traffic Detection Function); N4 - between SMF and UPF. Note: Sxa and Sxb can be combined if a merged SGW/PGW is implemented. Functionality The Control-Plane functional element (e.g. PGW-C, SMF) controls the packet processing and forwarding in the User-Plane functional elements (e.g. PGW-U, UPF) by establishing, modifying or deleting PFCP sessions. User plane packets shall be forwarded between the CP and UP functions by encapsulating the user plane packets using GTP-U encapsulation (see 3GPP TS 29.281 [3]). For forwarding data from the UP function to the CP function, the CP function shall provision PDR(s) per PFCP session context, with the PDI identifying the user plane traffic to forward to the CP function and with a FAR set with the Destination Interface "CP function side" and set to perform GTP-U encapsulation and to forward the packets to a GTP-U F-TEID uniquely assigned in the CP function per PFCP session and PDR. The CP function shall then identify the PDN connection and the bearer to which the forwarded data belongs by the F-TEID in the header of the encapsulating GTP-U packet. For forwarding data from the CP function to the UP function, the CP function shall provision one or more PDR(s) per PFCP session context, with the PDI set with the Source Interface "CP function side" and identifying the GTP-U F-TEID uniquely assigned in the UP function per PDR, and with a FAR set to perform GTP-U decapsulation and to forward the packets to the intended destination. URRs and QERs may also be configured.
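A toy sketch of the session model being described, ahead of the fuller list of rule types in the next paragraph: a PFCP session groups Packet Detection Rules, and a matching packet is handled according to the Forwarding Action Rule the PDR points to. The class and field names are simplified stand-ins, not the actual TS 29.244 information elements, and matching on a single IP address is an assumption made to keep the example short.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FAR:                       # Forwarding Action Rule (simplified)
    far_id: int
    action: str                  # e.g. "FORWARD", "DROP", "BUFFER"
    dest_interface: str          # e.g. "access", "core", "CP function side"

@dataclass
class PDR:                       # Packet Detection Rule (simplified)
    pdr_id: int
    precedence: int              # lower value matched first (simplification)
    ue_ip: Optional[str]         # stand-in for the PDI match criteria
    far: FAR

@dataclass
class PfcpSession:
    seid: int                    # Session Endpoint Identifier
    pdrs: List[PDR] = field(default_factory=list)

    def lookup(self, packet_dst_ip: str) -> Optional[FAR]:
        """Return the FAR of the best-matching PDR for a downlink packet."""
        matches = [p for p in self.pdrs if p.ue_ip in (None, packet_dst_ip)]
        return min(matches, key=lambda p: p.precedence).far if matches else None

# The CP function "provisions" a session in the UP function, then the UP
# function applies it to traffic.
session = PfcpSession(seid=0x1, pdrs=[
    PDR(pdr_id=1, precedence=10, ue_ip="10.0.0.7",
        far=FAR(far_id=1, action="FORWARD", dest_interface="access")),
])
print(session.lookup("10.0.0.7").action)   # -> FORWARD
```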
Per session multiple PDRs, FARs, QERs, URR and/or BARs are sent. Here are the main concepts used, organized in their logical association model: PDRs - Packet Detection Rules - contain information for matching data packets to certain processing rules. Both outer encapsulation and inner user-plane headers can be matched. The following rules can be applied on positive matching: FARs - Forwarding Action Rules - whether and how the packets matching PDRs should be dropped, forwarded, buffered or duplicated, including a trigger for first packet notification; it includes packet encapsulation or header enrichment rules. In case of buffering, the following rules can be applied: BARs - Buffering Action Rules - how much data to buffer and how to notify the Control-Plane. QERs - QoS Enforcement Rules - rules for providing Gating and QoS Control, flow and service level marking. URRs - Usage Reporting Rules - contain rules for counting and reporting traffic handled by the User-Plane function, generating reports to enable charging functionality in the Control-Plane functions. Messages IEs are defined either as having a proprietary encoding, or as grouped. Grouped IEs are simply a list of other IEs, encoded one after the other like in the PFCP Message Payload. IE Types 0..32767 are 3GPP specific and do not have an Enterprise-ID set. IE Types 32768..65535 can be used by custom implementation and the Enterprise-ID must be set to the IANA SMI Network Management Private Enterprise Codes of the issuing party. Messages Transport Very similar to GTP-C, PFCP uses UDP. Port 8805 is reserved. For reliability, a similar re-transmission strategy as for GTP-C is employed, lost messages being sent N1-times at T1-intervals. Transactions are identified by the 3-byte long Sequence Number, the IP address and port of the communication peer. The protocol includes an own Heart-beat Request/Response model, which allows monitoring the availability of communication peers and detecting restarts (by use of a Recovery-Timestamp Information Element). For User-Plane packet exchanges between the Control and User Plane functional elements, GTP-U for the Sx-u interface, or alternatively a simpler UDP or Ethernet encapsulation for the N4-u interface (to be confirmed, as standards are still incomplete). See also GTP-C RANAP Evolved Packet Core 5G Software-defined mobile network Software-defined networking Notes Network protocols Mobile telecommunications standards 3GPP standards Internet Protocol Mobile technology LTE (telecommunication) 5G (telecommunication) Telecommunications infrastructure
23723266
https://en.wikipedia.org/wiki/Subliminal%20channel
Subliminal channel
In cryptography, subliminal channels are covert channels that can be used to communicate secretly in normal-looking communication over an insecure channel. Subliminal channels in digital signature cryptosystems were found in 1984 by Gustavus Simmons. Simmons describes how the "Prisoners' Problem" can be solved through parameter substitution in digital signature algorithms. (Note that Simmons' Prisoners' Problem is not the same as the Prisoner's Dilemma.) Signature algorithms like ElGamal and DSA have parameters which must be set with random information. He shows how one can make use of these parameters to send a message subliminally. Because the algorithm's signature creation procedure is unchanged, the signature remains verifiable and indistinguishable from a normal signature. Therefore, it is hard to detect whether the subliminal channel is being used. Subliminal channels can be classified into broadband and narrowband channel types. Broadband and narrowband channels can exist in the same datastream. The broadband channel uses almost all of the bits that are available for use; this is commonly understood to mean {≥50% but ≤90%} channel utilization. Every channel which uses fewer bits is called a narrowband channel. The remaining bits are needed for further protection, e.g., against impersonation. The broadband and the narrowband channels can use different algorithm parameters. A narrowband channel cannot transport maximal information, but it can be used to send the authentication key or a datastream. Research is ongoing: further developments can enhance the subliminal channel, e.g., allow for establishing a broadband channel without the need to agree on an authentication key in advance. Other developments try to avoid the subliminal channel entirely. Examples An easy example of a narrowband subliminal channel for normal human-language text would be to define that an even word count in a sentence is associated with the bit "0" and an odd word count with the bit "1". The question "Hello, how do you do?" would therefore send the subliminal message "1". The Digital Signature Algorithm has one broadband and three narrowband subliminal channels. At signing, the parameter k has to be set with a random value; for the broadband channel, this parameter is instead set to the subliminal message m'. Key generation: choose a prime q; choose a prime p such that q divides p − 1; calculate a generator g of order q; choose the authentication key x (the signing key, shared with the subliminal receiver) and send it securely to the receiver; calculate the public key y = g^x mod p. Signing: choose a message m (the hash function is here substituted with a modulo reduction by 107, i.e. H(m) = m mod 107); calculate the message hash value H(m); instead of a random value k, the subliminal message m' is chosen; calculate the inverse of the subliminal message, m'^(-1) mod q; calculate the signature value r = (g^(m') mod p) mod q; calculate the signature value s = m'^(-1) · (H(m) + x·r) mod q; send the message with the signature triple (m; r, s). Verifying: the receiver gets the message triple (m; r, s); calculate the message hash H(m); calculate the inverse w = s^(-1) mod q; calculate u1 = H(m)·w mod q; calculate u2 = r·w mod q; calculate the check value v = (g^(u1) · y^(u2) mod p) mod q; since v = r, the signature is valid. Message extraction on the receiver side, e.g. from a triple such as (1337; 12, 3): extract the message m' = s^(-1) · (H(m) + x·r) mod q. The formula for message extraction is derived by transposing the signature value calculation formula s = m'^(-1) · (H(m) + x·r) mod q. Example: Using a Modulus n = pqr In this example, an RSA modulus purporting to be of the form n = pq is actually of the form n = pqr, for primes p, q, and r. Calculation shows that exactly one extra bit can be hidden in the digitally signed message.
The cure for this was found by cryptologists at the Centrum Wiskunde & Informatica in Amsterdam, who developed a Zero-knowledge proof that n is of the form n = pq. This example was motivated in part by The Empty Silo Proposal. Example - RSA Case study Here is a (real, working) PGP public key (using the RSA algorithm), which was generated to include two subliminal channels - the first is the "key ID", which should normally be random hex, but below is "covertly" modified to read "C0DED00D". The second is the base64 representation of the public key - again, supposed to be all random gibberish, but the English-readable message "//This+is+Christopher+Drakes+PGP+public+key//Who/What+is+watcHIng+you//" has been inserted. Adding both these subliminal messages was accomplished by tampering with the random number generation during the RSA key generation phase. PGP Key. RSA 2020/C0DED00D Fprint: 250A 7E38 9A1F 8A86 0811 C704 AF21 222C -----BEGIN PGP PUBLIC KEY BLOCK----- Version: Private mQESAgAAAAAAAAEH5Ar//This+is+Christopher+Drakes+PGP+public+key// Who/What+is+watcHIng+you//Di0nAraP+Ebz+iq83gCa06rGL4+hc9Gdsq667x 8FrpohTQzOlMF1Mj6aHeH2iy7+OcN7lL0tCJuvVGZ5lQxVAjhX8Lc98XjLm3vr1w ZBa9slDAvv98rJ8+8YGQQPJsQKq3L3rN9kabusMs0ZMuJQdOX3eBRdmurtGlQ6AQ AfjzUm8z5/2w0sYLc2g+aIlRkedDJWAFeJwAVENaY0LfkD3qpPFIhALN5MEWzdHt Apc0WrnjJDby5oPz1DXxg6jaHD/WD8De0A0ARRAAAAAAAAAAAbQvQ2hyaXN0b3Bo ZXIgRHJha2UgPENocmlzdG9waGVyLkRyYWtlQFBvQm94LmNvbT60SE5ldFNhZmUg c2VjdXJpdHkgc29mdHdhcmUgZGlyZWN0b3IgQ2hyaXN0b3BoZXIgRHJha2UgPE5l dFNhZmVAUG9Cb3guY29tPokBEgMFEDPXgvkcP9YPwN7QDQEB25oH4wWEhg9cBshB i6l17fJRqIJpXKAz4Zt0CfAfXphRGXC7wC9bCYzpHZSerOi1pd3TpHWyGX3HjGEP 6hyPfMldN/sm5MzOqgFc2pO5Ke5ukfgxI05NI0+OKrfc5NQnDOBHcm47EkK9TsnM c3Gz7HlWcHL6llRFwk75TWwSTVbfURbXKx4sC+nNExW7oJRKqpuN0JZxQxZaELdg 9wtdArqW/SY7jXQn//YJV/kftKvFrA24UYLxvGOXfZXpP7Gl2CGkDI6fzism75ya xSAgn9B7BqQ4BLY5Vn+viS++6Rdavykyd8j9sDAK+oPz/qRtYJrMvTqBErN4C5uA IV88P1U= =/BRt -----END PGP PUBLIC KEY BLOCK----- Improvements A modification to the Brickell and DeLaurentis signature scheme provides a broadband channel without the necessity to share the authentication key. The Newton channel is not a subliminal channel, but it can be viewed as an enhancement. Countermeasures With the help of the zero-knowledge proof and the commitment scheme it is possible to prevent the usage of the subliminal channel. It should be mentioned that this countermeasure has a 1-bit subliminal channel. The reason for that is the problem that a proof can succeed or purposely fail. Another countermeasure can detect, but not prevent, the subliminal usage of the randomness. References Bruce Schneier. Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C, 2. Ed. Wiley Computer Publishing, John Wiley & Sons, Inc., 1995. External links Seminar 'Covert Channels and Embedded Forensics' Cryptography
40608268
https://en.wikipedia.org/wiki/Qmmp
Qmmp
qmmp (for Qt-based MultiMedia Player) is a free and open-source cross-platform audio player that is similar to Winamp. It is written in C++ using the Qt widget toolkit for the user interface. It officially supports the operating systems Linux, FreeBSD and Microsoft Windows. In most popular Linux distributions, it is available through the standard package repositories. It is the only Qt-based audio player that does not feature a music-library database. Features qmmp is known for its small, themeable user interface and low use of system resources. The user interface and behaviour are very similar to those of Winamp, which was very popular in its time. Because it supports Winamp (Classic) skin files, it can easily be configured to look exactly like Winamp 2.x. It also caters to more discerning or audiophile listeners with support for cue sheets and volume normalization according to the ReplayGain standard. Album cover art is supported, either as separate sidecar files or embedded in ID3v2 tags, and can be fetched automatically if missing. A simple, intuitive user interface Ogg Vorbis, FLAC and MP3 music playback support Support for multiple artist and performer tags per song A notification area icon Plugin support Translations into many languages Equalizer See also Comparison of free software for audio#Players References External links Free audio software Linux media players Free media players Free software programmed in C++ Audio software that uses Qt
1204926
https://en.wikipedia.org/wiki/MiKTeX
MiKTeX
MiKTeX is a free and open-source distribution of the TeX/LaTeX typesetting system for Microsoft Windows (and for Mac and certain Linux distributions such as Ubuntu, Debian and Fedora). It also contains a set of related programs. MiKTeX provides the tools necessary to prepare documents using the TeX/LaTeX markup language, as well as a simple TeX editor: TeXworks. The name comes from Christian Schenk's login: MiK for Micro-Kid. MiKTeX can update itself by downloading new versions of previously installed components and packages, and has an easy installation process. Additionally, it can ask users whether they wish to download any packages that have not yet been installed but are requested by the current document. The current version of MiKTeX is 21.12 and is available at the MiKTeX homepage. In June 2020, Schenk decided to change the numbering convention; the new one is based on the release date. Thus 20.6 was released in June 2020. Since version 2.7, MiKTeX has support for XeTeX, MetaPost and pdfTeX and compatibility with Windows 7. A portable version of MikTeX, as well as a command-line installer of it, are also available. See also LyX – An open-source cross-platform word processor MacTeX TeX Live Texmaker – An open-source cross-platform editor and shell TeXnicCenter – An open-source Windows editor and shell MeWa – An open-source Windows editor based on TeXnicCenter WinShell – A Windows freeware, closed-source multilingual integrated development environment (IDE) TeXstudio – An open-source cross-platform LaTeX editor. References External links MiKTeX project homepage MiKTeX on GitHub Free software programmed in C Free software programmed in C++ Free software programmed in Pascal Free TeX software Linux TeX software TeX software for Windows TeX SourceForge projects
67405247
https://en.wikipedia.org/wiki/Chris%20Horn%20%28computer%20scientist%29
Chris Horn (computer scientist)
Christopher J. Horn is an Irish academic and businessperson, co-founder and CEO of Ireland's first NASDAQ-listed company, IONA Technologies, once one of the world's top ten software-only companies by revenue. He also led fundraising for, and became founding chairperson of, Dublin's Science Gallery, and later its international spinoff projects. Horn, an electronics engineer and holder of a PhD in computer science, has also written extensively on technology and business innovation, and on privacy, including for The Irish Times. A former president of Engineers Ireland, and later made a Fellow of that body, he was awarded an honorary doctorate by Trinity College Dublin, and a Gold Medal of the Royal Dublin Society. He has been chairperson or member of multiple commercial and voluntary boards, including those of Trinity College Dublin and Science Foundation Ireland. Early life and education Christopher J. Horn was born in the UK and his family moved to Bray, County Wicklow when he was very young. He grew up in Blackrock, Dublin, attending the local Newpark Comprehensive School. His first job was as an attendant at the Butlin's Mosney holiday camp north of Dublin. He took his first degrees in Trinity College Dublin (TCD), graduating with BA and BAI (Engineering) in 1978, with a specialism in electronic engineering. He continued study at Trinity, completing a PhD in Computing and Control Science and Technology in 1983, the thesis for which, entitled Dada - the language and its implementation, was published in 1984. Career Academic career Horn was hired as a junior lecturer at TCD in 1979, working on a new BA moderatorship in Computer Science. After completion of his PhD, he worked for a year as a consultant for Chaco, which later became part of Baltimore Technologies, as a contracted civil servant ("functionary") at the European Commission principal offices in Brussels, dealing with the ESPRIT programme. He then continued as a lecturer in the Department of Computer Science at TCD, where he worked full-time until 1991. IONA In 1981 Horn had visited Stanford University, where he met Andy Bechtolsheim, inventor of the Stanford University Network (SUN) workstation, and Bill Joy, and when they later went on to co-found Sun Microsystems, he began to talk to fellow academics about starting their own venture. Eventually, in 1991, Horn, Sean Baker and Annrai O’Toole, all then academics in the Department of Computer Science at TCD, put in each to found IONA Technologies. The company aimed to produce object-oriented software, specifically seeing a market demand for middleware. IONA received limited support from Trinity College, including an office in a TCD innovation centre on Westland Row. Horn took up the role of CEO, and was also the lead architect for at least one major product. The agreement with Trinity College did allow for Horn and one of his colleagues to work part-time for 2–3 years after launching IONA. The firm's main object-oriented middleware software product, Orbix, was successful. The company, which did not raise angel or venture capital, but did have some IDA Ireland support, grew, and, after securing a 25% investment from Sun Microsystems in 1993, was able to float on the NASDAQ, achieving the fifth largest debut on that exchange to date. At peak the company reached a market valuation of . Horn sold a substantial tranche of shares in 1998. 
Horn stepped down from the CEO role in 2000, but remained as a non-executive director; he returned to the CEO role from 2003 to 2005, after the "dotcom crash". Having cashed in shares previously, Horn, who was vice-chairperson from 2005 onwards, received a further payout of around when the company was finally sold in 2008. He remained a shareholder, selling more shares in 2011, but still holding 10% of IONA, worth , after that; IONA was dissolved in 2017. After IONA Horn invested in a search and advertising technology provider, Sophia (sold to Boxfish), Nomos Software and a data storage enterprise, Gridstore (later Hypergrid), among others. He also worked with private equity outfit Atlantic Bridge, eventually joining as a partner and advisor. He served as a non-executive director on the boards two billing software companies, Sepro Telecom and LeCayla, and on a cloud-based dev-ops outfit, Cloudsmith, which he earlier co-founded. He writes regularly for The Irish Times. Voluntary and public service roles Horn was elected as president of Engineers Ireland in 2008, and devised a detailed plan for his one-year term, reporting on progress against this during the year, attending or hosting 88 events. He was also a member of the board of Irish State agency Science Foundation Ireland. Trinity College and Science Gallery Horn has been a member of the Board of Trinity College Dublin (TCD), and of the board of TCD's Trinity Foundation. He led the fundraising committee for the proposed Science Gallery, hosted by Trinity College. He later chaired its first governing board from the gallery's launch in 2008. Subsequently he led the board of Science Gallery International - which promoted similar facilities, attached to third-level institutions, in a range of countries - until 2019. He commented about his shock and great disappointment at an abrupt announcement by Trinity College in late October 2021 that Science Gallery Dublin would close in early 2022. Other roles Horn chaired the Irish Management Institute (IMI), and was the founding chairperson of the Ireland China Business Association as well as sitting as chairperson of UNICEF Ireland for several years. He has also spoken, with Karlin Lillington, for the Front Line Defenders human rights charity, returning to the topic of technology-based risks to human rights defenders in 2021. In January 2013 Horn took on the chairmanship of Northern Ireland Science Park Connect, a program which aimed to support early-stage and "wantrepreneur" businesses, a role he held until 2016. He has been a judge for the Irish Times Innovation Awards since 2013, a role he still holds as of 2021. He is a director of Ambisense, an environmental analytics company, among several ventures, and is and has been a member of multiple other boards and award committees. Recognition Horn received an honorary doctorate from Trinity College Dublin in 2001, and was elected as a Fellow of Engineers Ireland, as well as being awarded the Gold Medal for Industry of the Royal Dublin Society. Horn was also awarded an Innovation Award from TCD in 2006, and a Whitaker Award from the Irish Academy of Management in 2019. Publication Aside from his columns for The Irish Times, and blog, Horn edited a book, Professor John Byrne: Reminiscences: Father of Computing in Ireland about a pioneering TCD professor and researcher in computer science. Personal life Horn is married to Susie Horn, and they have four adult children, two boys and two girls. 
He has lived in Shankill, a coastal southern suburb of Dublin, for many years. He was noted for his modest lifestyle, still living in a 3-bedroom semi-detached suburban house when his wealth exceeded , his only indulgence being a mid-range new car. In 1998, he bought an historic Georgian house, Askefield, the former rectory of the Church of Ireland in southern Shankill, then the home of journalist and politician Shane Ross, on 6 acres, for over , and he and his wife moved their four young children there in 1999. He is a member of the Church of Ireland and the Horns hosted an Alpha course book club. Jointly with technology journalist Karlin Lillington, he has been a senior sponsor of the Irish National Opera since its launch year. References External links Official blog of Chris J. Horn, cross-linked with Twitter, and carrying many of his Irish Times articles Official Twitter site, cross-linked with official blog English emigrants to Ireland People from Blackrock, Dublin 20th-century Irish engineers Alumni of Trinity College Dublin Irish computer scientists Academics of Trinity College Dublin People from Dún Laoghaire–Rathdown Irish company founders Irish columnists The Irish Times people Irish non-fiction writers Irish male non-fiction writers 21st-century Irish engineers Living people Year of birth missing (living people)
326155
https://en.wikipedia.org/wiki/Diebold%20Nixdorf
Diebold Nixdorf
Diebold Nixdorf is an American multinational financial and retail technology company that specializes in the sale, manufacture, installation and service of self-service transaction systems (such as ATMs and currency processing systems), point-of-sale terminals, physical security products, and software and related services for global financial, retail, and commercial markets. Currently Diebold Nixdorf is headquartered in the Akron-Canton area with a presence in around 130 countries, and the company employs approximately 23,000 people. Founded in 1859 in Cincinnati, Ohio as the Diebold Bahmann Safe Company, the company eventually changed its name to Diebold Safe & Lock Company. In 1921, Diebold Safe & Lock Company sold the world's largest commercial bank vault to Detroit National Bank. Diebold has since branched into diverse markets, and is currently the largest provider of ATMs in the United States. Diebold Nixdorf was founded when Diebold Inc. acquired Germany's Wincor Nixdorf in 2016. It is estimated that Wincor Nixdorf controls about 35 percent of the global ATM market. Diebold history Diebold Safe & Lock Company to Diebold, Incorporated (1859-1960s) Diebold was founded in 1859 in Cincinnati, Ohio as the Diebold Bahmann Safe Company. Under the leadership of founder Charles Diebold, a German immigrant, the company's 250 initial employees began manufacturing safes and bank vaults out of a factory in Canton, Ohio. Diebold states that 878 of its safes protected some of the only undamaged property in the Great Chicago Fire of 1871, and the following year Diebold moved its operations and headquarters to Canton to meet increased demand. In 1874, Diebold was contracted to build the world's largest safe, to be installed in the San Francisco branch of Wells Fargo. In 1876, after becoming incorporated in Ohio, the company changed its name to Diebold Safe & Lock Company. Diebold secured its first international sale in 1881, when it built a safe for the President of Mexico. Diebold debuted manganese steel doors marketed as TNT-proof in 1890, and in 1921, Diebold sold the world's largest commercial bank vault to Detroit National Bank. Diebold became a publicly traded company in the 1930s. Also around that time, Diebold introduced a "robbery-deterrent system for banks that flooded the bank lobby with tear gas" to help deal with robbers such as the infamous John Dillinger. In 1936, Diebold expanded its product lines by acquiring companies specializing in products such as paper-based filing systems, and it began developing armor plate for military tanks that year. Between 1939 and 1945, Diebold devoted 98 percent of its activities to the war effort. Among other projects, during World War II Diebold employed around 2,900 workers and "sold $65 million in armor plate for more than 36,000 U.S. Army scout cars," particularly the M2 Scout car model. In 1943, Diebold Safe & Lock Company changed its name to Diebold, Incorporated, in an effort to reflect the company's increasing diversification of products. The prohibition agent Eliot Ness was on the Diebold board from 1944 until 1951, and in 1952 Raymond Koontz was named Diebold's president, after first joining Diebold as an assistant to the president in 1947. Diebold earned a net income of $1.7 million in 1959. Computer security and ATMs (1960s-1990s) On April 27, 1964, Diebold went public on the New York Stock Exchange with the ticker symbol . 
In 1965 Diebold began offering pneumatic tube delivery systems to diverse institutions including banks and post offices. Still involved in safes and vaults, in 1968 the First National Bank of Chicago purchased the world's largest double vault doors from Diebold. Diebold subsequently began offering computer-controlled security and surveillance systems in 1970. Between the early 1950s and the late 1970s, Diebold's annual revenue increased from US$229 million to $451 million. These results were in no small measure the consequence of the successful strategies by Diebold's president Raymond Koontz. In the early 1970s, Koontz began pushing the company into the then emerging market for automated teller machines. This drive was evident as early as 1966, when Richard Glyer demonstrated a Diebold cash machine prototype at the annual meeting of the American Bankers Association in San Francisco, CA. Then in July 1970, Daniel Maggin, chairman of the board, accompanied Koontz to England with the specific purpose of meeting (without prior notice) with Chubb’s Managing Director, William E. Randall. Diebold wanted exclusivity to distribute Chubb’s cash machines throughout the USA. The Chubb units, however, were found somewhat disappointing by the US market. After repeated failures and a limited availability of spare parts and service engineers, Diebold's staff and customers thought the Chubb devices did not meet their service expectations. Not surprisingly Diebold finally stopped distributing Chubb devices in 1973 and at the same time, decided to develop and eventually launch its own Total Automatic Banking System (TABS) 500. This device was developed by Robert W. Clark, Phillip C. Dolsen and Donald E. Kinker, and first installed in 1974. Diebold's Event (alarm) Monitoring Center opened in 1985, allowing Diebold to monitor its "ATMs, kiosks, facilities and operations" full-time from a singular facility. Robert Mahoney was appointed company CEO in 1985. Koontz retired as chairman in 1988, although he continued to serve on the board. In 1989, Diebold shipped 12 percent of the world's ATMs sold worldwide. Diebold partnered with IBM on InterBold in 1990, a joint venture chiefly formed to provide self-service products for the financial industry. Under the terms of the joint venture, Diebold marketed their combined ATM lines in the US, while IBM marketed them abroad. By September 1995, Diebold was making over half of the ATMs used in the United States. In 1996, Diebold generated US$1 billion in revenue as a company for the first time in a single year. The InterBold partnership was dissolved on January 19, 1998, when Diebold purchased IBM's share of the partnership for $16.1 million. International growth (1998-2001) In the 1990s the company significantly diversified its products, and by 1998 was offering "automated teller machines, electronic and physical security equipment, automated medication dispensing systems, software, supplies and integrated systems solutions." Under Diebold chairman and CEO Robert Mahoney, Diebold debuted an ATM in 1999 that identified customers using iris recognition, which was the first of its kind. Also that year, Diebold introduced the first talking ATM in the United States. In October 1999, Diebold acquired all the stock of Procomp Amazonia Industria Electronica, S.A, a manufacturer of retail and banking automation equipment such as ATMs based in Sao Paulo, Brazil. The U.S. National Archives in Washington, D.C. 
hired Diebold in 2001 to secure documents such as the Charters of Freedom, The Constitution, The Bill of Rights, and the Declaration of Independence. In February 2002, Diebold announced it would acquire the financial self-service assets of the European companies Getronics NV and Groupe Bull for approximately US$160 million. The agreement put Diebold near "$2 billion in revenue on an annualized basis." By the end of 2002, Diebold had 13,000 associates and serviced 88 countries. The company also continued to secure historical items such as the Hope Diamond at the Smithsonian Institution. Seeking to expand in India, at the end of 2002, Diebold announced a new production unit in Goa manufacturing ATMs in collaboration with Tata Infotech, and soon after announced a new corporate office in Mumbai. Revenue in 2003 was $2.1 billion for Diebold overall, with stock up 36% for the year. Diebold Election Systems and UTC (2002-2009) In 2002, Diebold entered the United States elections industry through the acquisition of Global Election Systems, a producer of touch-screen voting technology based in McKinney, Texas. Branded Diebold Election Systems (DES), the acquisition was their smallest business segment, and in late 2002, 3.7 million voters in Georgia used DES touch-screen stations. DES was soon the subject of controversy amid allegations surrounding the security and reliability of some of its products, as well as the political fundraising activities of Diebold's then-CEO Walden O'Dell in 2003. Critics argued O'Dell had a political conflict of interest which could compromise the security of Diebold's ballots, which O'Dell denied. Shortly afterwards, Diebold forbade its top executives from making political donations. Citing personal reasons, O'Dell resigned in December 2005 after several consecutive quarters of poor performance, with his role taken by Tom Swidarski. In August 2007, DES rebranded itself as Premier Election Solutions, and two years later the division was sold to a competitor, Election Systems & Software. Wired Magazine reported in 2007 that an editor using a Diebold IP address had removed negative information from the Diebold Wikipedia page, with the information later moved to a more appropriate location. Diebold was increasingly focusing on technology related to mobile banking as of 2008, incorporating mobile banking into many of its products. That year Diebold was selected to be the sole ATM provider at certain Beijing Olympics venues. In March 2008, United Technologies Corporation (UTC), a large engineering and defense conglomerate, announced it had made a $2.63 billion bid to buy Diebold, which was later rejected as too low. In October 2008, UTC announced it was breaking off acquisition talks after Diebold rejected the offer. The company had 17,000 workers worldwide by April 2009. In 2009 Bank Technology News ranked Diebold as No. 1 on its FINTECH 100 list of ATM providers. New facilities and acquisitions (2010-2013) After a lawsuit brought by the SEC alleging deceptive accounting between 2002 and 2007, several Diebold executives paid settlements in June 2010 to have the charges dropped, without admitting any liability. Other executives refused to settle. By 2011, Diebold was the largest manufacturer of ATMs in the United States. The company debuted a prototype of the first virtualized ATM that year, which was created jointly with VMware and used cloud technology. In 2011, Diebold was hired to implement "advanced security solutions" at the World Trade Center Transportation Hub. 
Also that year, SDM Magazine named Diebold its 2011 Systems Integrator of the Year. In 2012, Diebold debuted what it claims is the "world's first 4G LTE-enabled ATM concept," as well as "two-way concierge video services" to its ATMs. After acquiring around 4,400 ATMs from Toronto-Dominion Bank in 2012, in September 2012, Diebold acquired the Brazilian online banking company Gas Tecnologia, which protects around 70% of the internet banking transactions in Brazil. On October 25, 2012, the company announced it was suspending plans to build a new world headquarters in Green, Ohio, saying it was no longer economically feasible. CEO and President Thomas Swidarski resigned in January 2013 after pressure from the board over poor financial performance. Henry D.G. Wallace, a former CFO for Ford Motor Company, assumed oversight of Diebold until a new CEO could be selected. Andy W. Mattes, a former Hewlett-Packard and Siemens executive, was appointed Diebold's new president and CEO in June 2013. Diebold debuted new ATM models in 2013, and also "increased its cash dividend for the 60th consecutive year." In 2013, Diebold was charged with violating the Foreign Corrupt Practices Act, after international division leaders and Diebold agents were alleged to have provided "improper gifts" to officials overseas. The Justice Department agreed to drop the charges if Diebold complied with various terms, including 18 months of compliance monitoring and a $48 million settlement. Recent years and Wincor Nixdorf acquisition (2014-2017) Diebold announced that it was buying the Danish PIN pad maker Cryptera in June 2014. Under the agreement, Cryptera remained a separate business operating under Diebold, and also remained an "original equipment manufacturer of EPP devices for Diebold and other existing customers." In July 2014, Diebold introduced its ActivEdge card reader, which it claims "prevents all known forms of skimming [ATM crime]." Diebold's revenue in 2014 equaled US$3.05 billion, an increase from the year before. Operating income equaled $117.0 million, net income equaled $114.4 million, and assets totaled $2.34 billion. As of 2014, Diebold held the record for consecutive dividend increases in its stock value. In March 2015, Diebold acquired the Canadian ATM software company Phoenix Interactive Design. Based in London, Ontario, Phoenix was known for working with clients such as TD Canada Trust and Fifth Third Bank. Diebold sold the North American aspects of its electronic security business to Securitas in October 2015. Based in Stockholm, Securitas purchased the assets for US$350 million. On October 25, 2015, Diebold publicly debuted two new ATM concepts. The first model, Irving, allows customers to withdraw money with an iris scan instead of a card, while the second concept, titled Janus, was described by Fortune as "a dual-sided, self-service ATM that can serve two customers at the same time." In June 2015, Diebold was reportedly in talks to acquire its German rival Wincor Nixdorf. with the new company to be named Diebold Nixdorf. On November 23, 2015, Diebold Incorporated and Wincor Nixdorf AG entered into a business combination agreement, with Diebold offering $1.8 billion in cash and shares to finance the acquisition. Combined, it was estimated that the two companies would control about 35 percent of the global ATM market. The combined company would have registered offices in North Canton, Ohio, and be operated from headquarters in North Canton and Wincor Nixdorf's facilities in Paderborn, Germany. 
Software development for the new company would take place in North America, with Diebold citing its Phoenix Interactive Design subdivision based in Ontario, Canada. Diebold announced it had satisfied the share tender condition to acquire Wincor Nixdorf on March 24, 2016. On August 15, 2016, it was announced that the acquisition had been completed, with the combined company beginning operations under the name Diebold Nixdorf on August 16. Nixdorf history Nixdorf Computer was founded by Heinz Nixdorf in 1952. In 1990 the company was purchased by Siemens AG and renamed Siemens Nixdorf Informationssysteme. The company was re-focused exclusively on its current product set in 1998 and renamed Siemens Nixdorf Retail and Banking Systems GmbH. Following a buyout by Kohlberg Kravis Roberts and Goldman Sachs Capital Partners on October 1, 1999, the company was renamed Wincor Nixdorf. The company was taken public on May 19, 2004 with a successful IPO. On November 8, 2006, Chief Executive Officer Karl-Heinz Stiller announced his resignation from the board. Eckard Heidloff was elected as his replacement. Markets and services Diebold Nixdorf markets its products and services in diverse industries, including the financial, commercial, and retail spheres. The company is split into three regional divisions: the Americas (covering North America and Latin America), Asia Pacific, and a combined segment covering Europe, the Middle East, and Africa. Beyond designing and producing its own physical product lines, according to Bloomberg Diebold provides services involving "installation and ongoing maintenance of products, remote services, availability management, branch automation, and distribution channel consulting; and outsourced and managed services, such as remote monitoring, troubleshooting, transaction processing, currency management, maintenance services, and online communication services." The company also engages in project analysis for clients, as well as systems integration and architectural engineering. Products Diebold Nixdorf is known for designing, manufacturing, and servicing numerous product lines related to automated service. By 1998, the company offered "automated teller machines, electronic and physical security equipment, automated medication dispensing systems, software, supplies and integrated systems solutions," among other products and services. Safes and metal work Diebold was founded in 1859 as a manufacturer of safes and bank vaults, and bank safes and vaults would prove a staple of the company for many decades. Automated dispensers Over the years Diebold has developed a number of products involved with automated dispensation, for example automated teller machines, movie vending machines, airline ticket vending machines, and credit-card activated gas pumps. In 1965 Diebold began "offering pneumatic tube delivery systems to banks, hospitals, post offices, libraries, office buildings" and many other industrial facilities. In the mid-1990s Diebold created its MedSelect Systems division, which introduced an automated drug dispensing system in 1995. Security measures Diebold has developed a number of physical and electronic security products, and in recent years has been contracted to protect the World Trade Center Transportation Hub, the Hope Diamond at the Smithsonian Institution, and the United States Constitution, among other notable artifacts and landmarks. 
The company no longer engages in specialized physical security projects, and has since sold its North America-based electronic security business in October 2015. For ATM security, Diebold introduced its ActivEdge card reader in 2014, which it describes as "the industry's first complete anti-skimming card reader prevents all known forms of skimming – the most prevalent type of ATM crime – as well as other forms of ATM fraud." Automated teller machines Diebold branched into the emerging market for automated teller machines (ATMs) in the early 1970s, and has since debuted numerous ATM product lines. Diebold's Total Automatic Banking System 500 (TABS 500) product was revealed in 1972. Another early ATM created by Diebold was the Diebold 10xx, introduced in 1985 as part of the 10xx series. InterBold, the ATM sales and marketing arm of Diebold, introduced a number of ATMs in the early 1990s. In 1999, Diebold debuted an ATM that identified customers using iris recognition, which was the first of its kind. Diebold also introduced the first talking ATM in the United States that year, which was installed on October 1, 1999 in San Francisco's City Hall. In July 2002 Diebold introduced its 3030 Bulk Cash Recycler Model (BCRM), and in 2003, Diebold launched its Opteva line of ATMs. On December 8, 2014, Diebold debuted the 3500 and 3700 ATM series, both of which handle cash recycling among other functions. On October 25, 2015, Diebold publicly debuted two new ATM concepts at the Las Vegas Money20/20 show. The first model Irving, which was undergoing testing by Citigroup at the time, allows customers to withdraw money with an iris scan, removing the need for a card. The second ATM concept, titled Janus, was described by Fortune as a "dual-sided, self-service ATM that can serve two customers at the same time," with videoconferencing also available for help with complex transactions. Diebold Foundation The philanthropic arm of Diebold, Inc., The Diebold Foundation, has supported a number of non-profits, including local branches of Meals on Wheels, as well as the Group Plan Commission to support the redevelopment of Cleveland's Public Square. See also Membership of ATM Industry Association (ATMIA) Companies listed on the New York Stock Exchange (D) List of companies of the United States by state List of S&P 400 companies Premier Election Solutions (formerly Diebold Election Systems) Economy of Ohio References External links Dieboldnixdorf.com American brands Companies listed on the New York Stock Exchange Manufacturing companies based in Ohio Manufacturing companies established in 1859 Financial technology companies 1859 establishments in Ohio Summit County, Ohio
4459250
https://en.wikipedia.org/wiki/Electronics%20Technicians%20Association
Electronics Technicians Association
The Electronics Technicians Association, International® (doing business as ETA® International) is a US-based not-for-profit 501(c)(6) trade association founded in 1978. The association provides certifications in industries such as basic electronics, fiber optics and data cabling, renewable energy, information technology, photonics and precision optics, customer service, biomedical, avionics, wireless communications, radar, and smart home. ETA is also one of the 12 COLEMs (Commercial Operator License Examination Manager) for U.S. Federal Communications Commission (FCC) testing. ETA works with technicians, educators, and military personnel. ETA also partners with companies such as Motorola Solutions to provide certification to their employees. History In 1965 the U.S. Labor Department, Bureau of Apprenticeship & Training (BAT) instigated a jobs program in cooperation with NEA (National Electronics Association). Local school systems, local TV association members and USDL worked together on an 8,000 hour apprenticeship program aimed at solving the labor shortage problem while finding new vocations for those put out of work by modern technology. This new program would reward trainees, but would not cover experienced technicians. Because of this, the Certified Electronics Technician (CET) program was created. In 1970 a group of technicians decided to form an organization to promote the CET program and the electronics industry as a whole. This organization would be called the International Society of Certified Electronics Technicians (ISCET). It became a subdivision of NEA. In the mid-1970s NEA and NATESA merged to form the National Electronic Service Dealers Association (NESDA) with ISCET remaining as a subdivision. Due to a power struggle within the organization, ETA was formed in 1978 by a group of former NESDA members and officers. Among those were Richard "Dick" Glass and Ron Crow, two of the original founders of the CET program and the only administrators at that time. This made it easy to continue the CET program with the new organization. In 1993 ETA became a COLEM for the FCC Commercial Radio License program and offers professionals the chance to sit for seven different FCC commercial licenses at ETA test sites including the General Radio Operator License (GROL). In 2004, ETA helped create the Certified Service Center (CSC) program whose mission is to encourage professionalism with the service industry. The Certified Service Center designation is presented to those service facilities that show they have a percentage of technicians and service managers certified, utilize a code of conduct, provide a service warranty and insurance coverage, adhere to zoning laws, use industry-approved equipment, and provide a clean and accessible facility. From the 1980s to the present, ETA has continued to expand their certification offerings to fill knowledge and skills assessment gaps in technology. ETA works with many different educators, businesses, and trainers to create vendor-neutral accredited certifications. ETA certifications are used by many different sectors including secondary and post-secondary schools, training businesses, corporations, government agencies, and the U.S. military. Certifications ETA offers certifications in various knowledge areas, but does not offer courses or training in these areas. ETA does, however, offer endorsements of courses offered through educational institutions through their Course Approval program. 
Maintenance or renewal of certifications is required to keep in line with the ISO-17024 standards. Most certifications are good for four years. Basic Electronics Certifications Associate Certified Electronics Technician (CETa) (designated as CESa in Canada) Basic Systems Technician (BST) Electronics Modules (EM1-5) Student Electronics Technician (SET) Biomedical Biomedical Electronics Technician (BET) Biomedical Imaging Equipment Technician (BIET) Communications 5G Technician (5GT) Broadband-Voice over Internet Protocol (B-VoIP) Certified Satellite Installer (CSI) Distributed Antenna Systems (DAS) General Communications Technician - Level 1 (GCT1) General Communications Technician - Level 2 (GCT2) Line and Antenna Sweep (LAS) Microwave Radio Technician (MRT) Mobile Communications and Electronics Installer (MCEI) Passive Intermodulation Testing (PIM) Practical Antenna Basics (PAB) RF Interference Mitigation (RFIM) Advanced RF Interference Mitigation (AIM) Radar (RAD) Telecommunications (TCM) Wireless Communications (WCM) Fiber Optics and Data Cabling Data Cabling Installer (DCI) Fiber Optics Installer (FOI) Fiber Optics Technician (FOT) Fiber Optics Technician-Inside Plant (FOT-ISP) Fiber Optics Technician-Outside Plant (FOT-OSP) Fiber To Any Antenna (FTAA) Fiber Optics Designer (FOD) Termination and Testing Technician (TTT) ETA Aerospace-based Fiber Optics Certifications ARINC Fiber Optics Fundamentals Professional (AFOF) ARINC Fiber Optics Installer (AFI) ARINC Fiber Optics Technician (AFT) Fiber Optics Evaluation & Endface Cleaning (FEEC) SAE Fiber Optics Fabricator (SFF) and SAE-ARINC Fiber Optics Fabricator (SAFF) Information Technology Computer Service Technician (CST) Information Technology Security (ITS) Network Computer Technician (NCT) Network Systems Technician (NST) Wireless Networking Technician (WNT) Photonics and Precision Optics Photonics Technician Operator (PTO) Photonics Technician Specialist (PTS) Specialist in Precision Optics (SPO) Technician in Precision Optics (TPO) Renewable Energy Photovoltaic Installer - Level 1 (PVI1) Photovoltaic Installer/Designer (PV2) Small Wind Installer (SWI) Electric Vehicle Technician (EVT) Smart Home Certified Alarm-Security Technician (CAST) Electronic Security Networking Technician (ESNT) Smart Technology Systems (STS) Audio-Video (AV) endorsement Computer Networking (CN) endorsement Environmental Controls (EC) endorsement Security-Surveillance (SS) endorsement Master Smart Technology Systems (STSma) Workforce Readiness Certified Service Manager (CSM) Customer Service Specialist (CSS) Additional Certifications Audio-Video Forensic Analyst (AVFA) Avionics (AVN) Commercial Audio Technician (CAT) Digital Video Editor (DVE) Gaming & Vending Technician (GVT) Industrial Electronics (IND) Radio Frequency Identification Technical Specialist (RFID) Levels of certification Associate Electronics Technician (CETa) (designated as CESa in Canada) The Associate Electronics Technician exam is a certification of entry-level electronics professional knowledge to include not only electronics but also safety, record keeping and professionalism. The CETa is good for four years by itself and can be renewed without a journeyman certification. The CETa was changed in November 2013 to allow renewal on a four year basis. Journeyman Certified Electronics Technician (CET) (designated as CES in Canada) To attain the CET, ETA requires the candidate to pass the CETa exam and a qualifying Journeyman Certification Option. 
The CET is good for four years and can be renewed by retesting or demonstrating 40 hours of upgrade electronics training. Senior Certified Electronics Technician (CETsr) (designated as CESsr in Canada) The Senior Certified Electronics Technician is an upgrade to the Journeyman CET. It requires six years of work experience and an 85% passing score on the CET exam. Certified Electronics Technician Master Specialty (CETms) (designated as CESms in Canada) The ETA Certified Electronics Technician Master Specialty (CETms) certification is designed for any professional with four or more certifications in areas such as fiber optics, information technology, RF communications, and telecommunications. Master Certified Electronics Technician (CETma) (designated as CESma in Canada) A technician with six or more years of combined work and electronics training may be eligible for the ETA Master Certified Electronics Technician (CETma) certification. The Master certification was created to showcase those technicians who are able to demonstrate proficiency in the many fields of electronics. Accreditation All technical certifications are accredited by the International Certification Accreditation Council (ICAC) and align with the ISO-17024 standard. Independent audits are conducted on a regular basis to ensure compliance. Membership Membership is open to anyone who is involved in one of the industries ETA serves. Membership allows voting rights for such things as biannual officer elections and service awards as well as by-law changes and other association business. ETA offers six types of membership for educators, professionals, technicians, and students. Each membership includes a subscription to the High Tech News, ETA's bi-monthly membership magazine. See also General radiotelephone operator license References External links Electronics Technicians Association International International Certification Accreditation Council United States Department of Labor CareerOneStop United States Army COOL (Credentialing Opportunities On-Line) United States Navy COOL (Credentialing Opportunities On-Line) United States Marine Corps COOL (Credentialing Opportunities On-Line) Motorola Solutions Electronics industry Engineering societies based in the United States Professional associations based in the United States Organizations established in 1978 1978 establishments in the United States
28847003
https://en.wikipedia.org/wiki/Goatse%20Security
Goatse Security
Goatse Security (GoatSec) was a loose-knit, nine-person grey hat hacker group that specialized in uncovering security flaws. It was a division of the anti-blogging Internet trolling organization known as the Gay Nigger Association of America (GNAA). The group derives its name from the Goatse.cx shock site, and it chose "Gaping Holes Exposed" as its slogan. The website has been abandoned without an update since May 2014. In June 2010, Goatse Security obtained the email addresses of approximately 114,000 Apple iPad users. This led to an FBI investigation and the filing of criminal charges against two of the group's members. Founding The GNAA had several security researchers within its membership. According to Goatse Security spokesperson Leon Kaiser, the GNAA could not fully utilize their talents since the group believed that there would not be anyone who would take security data published by the GNAA seriously. In order to create a medium through which GNAA members can publish their security findings, the GNAA created Goatse Security in December 2009. Discovery of browser vulnerabilities In order to protect its web browser from inter-protocol exploitation, Mozilla blocked several ports that HTML forms would not normally have access to. In January 2010, the GNAA discovered that Mozilla's blocks did not cover port 6667, which left Mozilla browsers vulnerable to cross-protocol scripts. The GNAA crafted a JavaScript-based exploit in order to flood IRC channels. Although EFnet and OFTC were able to block the attacks, Freenode struggled to counteract the attacks. Goatse Security exposed the vulnerability, and one of its members, Andrew Auernheimer, aka "weev," posted information about the exploit on Encyclopedia Dramatica. In March 2010, Goatse Security discovered an integer overflow vulnerability within Apple's web browser, Safari, and posted an exploit on Encyclopedia Dramatica. They found out that a person could access a blocked port by adding 65,536 to the port number. This vulnerability was also found in Arora, iCab, OmniWeb, and Stainless. Although Apple fixed the glitch for desktop versions of Safari in March, the company left the glitch unfixed in mobile versions of the browser. Goatse Security claimed that a hacker could exploit the mobile Safari flaw in order to gain access and cause harm to the Apple iPad. AT&T/iPad email address leak In June 2010, Goatse Security uncovered a vulnerability within the AT&T website. AT&T was the only provider of 3G service for Apple's iPad in the United States at the time. When signing up for AT&T's 3G service from an iPad, AT&T retrieves the ICC-ID from the iPad's SIM card and associates it with the email address provided during sign-up. In order to ease the log-in process from the iPad, the AT&T website receives the SIM card's ICC-ID and pre-populates the email address field with the address provided during sign-up. Goatse Security realized that by sending a HTTP request with a valid ICC-ID embedded inside it to the AT&T website, the website would reveal the email address associated with that ICC-ID. On June 5, 2010, Daniel Spitler, aka "JacksonBrown", began discussing this vulnerability and possible ways to exploit it, including phishing, on an IRC channel. Goatse Security constructed a PHP-based brute force script that would send HTTP requests with random ICC-IDs to the AT&T website until a legitimate ICC-ID is entered, which would return the email address corresponding to the ICC-ID. This script was dubbed the "iPad 3G Account Slurper." 
Goatse Security then attempted to find an appropriate news source to disclose the leaked information, with Auernheimer attempting to contact News Corporation and Thomson Reuters executives, including Arthur Siskind, about AT&T's security problems. On June 6, 2010, Auernheimer sent emails with some of the ICC-IDs recovered in order to verify his claims. Chat logs from this period also reveal that attention and publicity may have been incentives for the group. Contrary to what it first claimed, the group initially revealed the security flaw to Gawker Media before notifying AT&T and also exposed the data of 114,000 iPad users, including those of celebrities, the government and the military. These tactics re-provoked significant debate on the proper disclosure of IT security flaws. Auernheimer has maintained that Goatse Security used common industry standard practices and has said that, "We tried to be the good guys". Jennifer Granick of the Electronic Frontier Foundation has also defended the tactics used by Goatse Security. On June 14, 2010, Michael Arrington of TechCrunch awarded the group a Crunchie award for public service. This was the first time a Crunchie was awarded outside the annual Crunchies award ceremony. The FBI then opened an investigation into the incident, leading to a criminal complaint in January 2011 and a raid on Auernheimer's house. The search was related to the AT&T investigation and Auernheimer was subsequently detained and released on bail on state drug charges, later dropped. After his release on bail, he broke a gag order to protest and to dispute the legality of the search of his house and denial of access to a public defender. He also asked for donations via PayPal, to defray legal costs. In 2011 the Department of Justice announced that he will be charged with one count of conspiracy to access a computer without authorization and one count of fraud. A co-defendant, Daniel Spitler, was released on bail. On November 20, 2012, Auernheimer was found guilty of one count of identity fraud and one count of conspiracy to access a computer without authorization, and tweeted that he would appeal the ruling. Alex Pilosov, a friend who was also present for the ruling, tweeted that Auernheimer would remain free on bail until sentencing, "which will be at least 90 days out." On November 29, 2012, Auernheimer authored an article in Wired Magazine entitled "Forget Disclosure - Hackers Should Keep Security Holes to Themselves," advocating the disclosure of any zero-day exploit only to individuals who will "use it in the interests of social justice." On April 11, 2014, the Third Circuit issued an opinion vacating Auernheimer's conviction, on the basis that venue in New Jersey was improper. The judges did not address the substantive question on the legality of the site access. He was released from prison late on April 11. Other accomplishments In May 2011, a DoS vulnerability affecting several Linux distributions was disclosed by Goatse Security, after the group discovered that a lengthy Advanced Packaging Tool URL would cause compiz to crash. In September 2012, Goatse Security was credited by Microsoft for helping to secure their online services. References External links Hacker groups Computer security organizations Organizations established in 2009
1195660
https://en.wikipedia.org/wiki/Evans%20%26%20Sutherland%20ES-1
Evans & Sutherland ES-1
The ES-1 was Evans & Sutherland's abortive attempt to enter the supercomputer market. It was aimed at technical and scientific users who would normally buy a machine like a Cray-1 but did not need that level of power or throughput for graphics-heavy workloads. About to be released just as the market was drying up in the post-cold war military wind-down, only a handful were built and only two sold. Background Jean-Yves Leclerc was a computer designer who was unable to find funding in Europe for a high-performance server design. In 1985 he visited Dave Evans, his former PhD. adviser, looking for advice. After some discussion he eventually convinced him that since most of their customers were running E&S graphics hardware on Cray Research machines and other supercomputers, it would make sense if E&S could offer their own low-cost platform instead. Eventually a new Evans & Sutherland Computer Division, or ESCD, was set up in 1986 to work on the design. Unlike the rest of E&S's operations which are headquartered in Salt Lake City, Utah, it was felt that the computer design would need to be in the "heart of things" in Silicon Valley, and the new division was set up in Mountain View, California. Basic design Instead of batch mode number crunching, the design would be tailored specifically to interactive use. This would include a built-in graphics engine and 2 GB of RAM, running BSD Unix 4.2. The machine would offer performance on par with contemporary Cray and ETA Systems. 8 × 8 crossbar The basic idea of Leclerc's system was to use an 8×8 crossbar switch to connect eight custom CMOS CPUs together at high speed. An extra channel on the crossbar allowed it to be connected to another crossbar, forming a single 16-processor unit. The units were 16-sized (instead of 8) in order to fully utilize a 16-bank high-speed memory that had been designed along with the rest of the system. Since memory was logically organized on the "far side" of the crossbars, the memory controller handled many of the tasks that would normally be left to the processors, including interrupt handling and virtual memory translation, avoiding a trip through the crossbar for these housekeeping tasks. The resulting 16-unit processor/memory blocks could then be connected using another 8×8 crossbar, creating a 128-processor machine. Although the delays between the 16-unit blocks would be high, if the task could be cleanly separated into units the delay would not have a huge effect on performance. When data did have to be shared across the banks the system balanced the requests; first the "leftmost" processor in the queue would get access, then the "rightmost". Processors added their requests onto the proper end of the queue based on their physical location in the machine. It was felt that the simplicity and speed of this algorithm would make up for the potential gains of a more complex load-balancing system. Instruction pipeline In order to allow the system to work even with the high inter-unit latencies, each processor used an 8-deep instruction pipeline. Branches used a variable delay slot, the end of which was signaled by a bit in the next instruction. The bit indicated that the results of the branch had to be re-merged at this point, stalling the processor until this took place. Each processor also included a floating point unit from Weitek. For marketing purposes, each processor was called a "computational unit", and a card-cage populated with 16 was referred to as a "processor". 
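The end-alternating memory arbitration described above can be illustrated with a short simulation. The sketch below is purely illustrative — the request pattern, cycle count, and CPU count are assumptions rather than details taken from the ES-1 design documents — but it shows how a queue serviced alternately from its leftmost and rightmost ends behaves when every processor keeps a request outstanding, the situation examined in the "Fatal flaw" section below.

```python
def simulate_arbiter(cycles=1000, cpus=16):
    """Toy model of an end-alternating (leftmost/rightmost) arbiter.

    Assumptions: every CPU re-issues a request as soon as its previous one
    is granted, requests are ordered by physical CPU position, and exactly
    one grant is made per cycle.
    """
    pending = set(range(cpus))             # all CPUs start with a request queued
    grants = {cpu: 0 for cpu in range(cpus)}
    for cycle in range(cycles):
        grant = min(pending) if cycle % 2 == 0 else max(pending)
        grants[grant] += 1
        pending.discard(grant)             # request serviced...
        pending.add(grant)                 # ...but the CPU immediately re-queues (full contention)
    return grants

if __name__ == "__main__":
    for cpu, count in simulate_arbiter().items():
        print(f"CPU {cpu:2d}: granted {count:4d} times")
```

Under this assumed full-contention load only the two end positions are ever serviced and the middle CPUs are never granted access, which is exactly the starvation behaviour the next section describes.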
This naming allowed favorable per-processor performance comparisons with other supercomputers of the era. The processors ran at 20 MHz in the integer units and 40 MHz for the FPUs, with the intention being to increase this to 50 MHz by the time it shipped. At about 12 Mflops peak per CU, the machine as a whole would deliver up to 1.5 Gflops, although due to the memory latencies this was typically closer to 250 Mflops. While this was fast for a CMOS processor of the time, it was hardly competitive for a supercomputer. Nevertheless, the machine was air cooled, and would have been the fastest such machine on the market. The machine ran an early version of the Mach kernel for multi-processor support. The compilers were designed to keep the processors as full as possible by reducing the number of branch delay slots, and did a particularly good job of it. Fatal flaw When it was introduced in 1989, configurations ran from $2 million to $8 million, with the largest claimed to run at 1.6 Gflops. In trying to position the machine, Ivan Sutherland noted that their flight simulation systems actually ran at higher speeds, and that the ES-1 was "a step down for us". When the machine was first announced it was notable for its price/performance ratio. It completely outperformed most competitors' machines, at least in theory. With peak performance of 1600 MIPS and a price of $2.2 million, it was $1375/MIPS, compared to a contemporary Alliant FX/40 minicomputer at $4650/MIPS. A 1989 Computerworld review of the market for mid-range high-performance machines showed only one machine in the same class, the Connection Machine CM-2. The new leftmost-rightmost algorithm had a fatal flaw. In high-contention cases the "middle" units would never be serviced, and could stall for thousands of cycles. By 1989, it was clear this was going to need a redesign, but by this point other machines with similar price/performance ratios were coming on the market and the pressure was on to ship immediately. The first two machines were shipped to Caltech in October 1989 and the University of Colorado at Boulder in November, but there were no other immediate sales. One sample ES-1 is in storage at the Computer History Museum. Evans resigned from the E&S board in 1989, and suddenly the votes turned against continuing the project. E&S looked for a buyer who was interested in continuing the effort, but finding none they instead closed the division in January 1990. References Citations Bibliography Supercomputers Massively parallel computers
40075611
https://en.wikipedia.org/wiki/Susan%20H.%20Rodger
Susan H. Rodger
Susan H. Rodger is an American computer scientist known for work in computer science education including developing the software JFLAP for over twenty years. JFLAP is educational software for visualizing and interacting with formal languages and automata. Rodger is also known for peer-led team learning in computer science and integrating computing into middle schools and high schools with Alice. She is also currently serving on the board of CRA-W and was chair of ACM SIGCSE from 2013 to 2016. Biography Rodger was born in Columbia, South Carolina. She received a B.S. in computer science and a B.S. in mathematics from North Carolina State University in 1983. She received an M.S. in computer science from Purdue University in 1985 and a Ph.D. in computer science from Purdue University in 1989. She immediately joined the Department of Computer Science at Rensselaer Polytechnic Institute as an assistant professor. In 1994 she moved to Duke University as an assistant professor of the practice of computer science. She was promoted to associate professor of the practice of computer science in 1997 and to professor of the practice in 2008. Awards 2006: Rodger was named an ACM Distinguished Member. 2007: Finalist in the NEEDS Premier Award for Excellence in Engineering Education Courseware (for the software JFLAP). 2014: ACM Karl V. Karlstrom Outstanding Educator Award. 2019: IEEE Computer Society Taylor L. Booth Education Award. 2019: David and Janet Vaughan Brooks Award References External links Duke University: Susan H. Rodger, Department of Computer Science Susan H. Rodger, Writing Wikipedia Pages for Notable Women in Computing American women computer scientists Duke University faculty Living people American computer scientists People from Columbia, South Carolina North Carolina State University alumni Purdue University alumni Year of birth missing (living people) Computer science educators American women academics 21st-century American women
2419719
https://en.wikipedia.org/wiki/Audiogenic
Audiogenic
Audiogenic Software was a British video game development company. It was established in 1985 following an earlier Audiogenic company that had been founded in the late 1970s. It published its last new title in 1997, after the core of the development team were taken over by Codemasters to create Brian Lara Cricket on the PlayStation. The company is, however, still in existence and continues to license its portfolio of titles to third parties for conversion onto new formats. Though almost unknown in the United States, the company was successful in the United Kingdom and in Australia with a line of cricket and rugby games, some versions of which were licensed to other publishers. Several games were also published under licence in Japan, including World Class Rugby for the Super NES, and a follow-up, World Class Rugby 2, both of which were published by Imagineer. First company The original company, Audiogenic Limited, was started as a recording studio called Sun in Reading, Berkshire in 1975 by Martin Maynard. It was one of the first 8 track studios to operate outside London. By comparison with modern studios the recording equipment was very basic; however, it still recorded for bands including The Vibrators, XTC, Stadium Dogs, Van Morrison, Alan Clayson and The New Seekers. It offered an audio cassette duplication service and the company also made arrangements for pressing vinyl. Terry Clark recently performed (February 2008 JonesFest) a song about the studio at a tribute concert for Garry Jones at the South Street centre in Reading. Around 1979 Audiogenic became interested in the Commodore PET computer and gained a contract to duplicate computer software on cassette. Subsequently Commodore International gave Audiogenic the software manufacturing and selling rights, but this arrangement came to an end with the advent of the Commodore VIC-20. Martin Maynard flew to California and signed agreements with United Microware Industries, Cosmi, Creative Software and Broderbund, some of the biggest suppliers of VIC software at that time. Audiogenic published software successfully in the UK, but a decision to diversify by importing peripherals, notably the Koala Pad and the Entrepo Quick Data Drive (a continuous loop storage device for the Commodore 64) contributed to a decline in profitability which led to the company ceasing to trade in 1985. Martin Maynard returned to the audio duplication business, and is still operating Sounds Good Ltd now located in Southport, Merseyside. Second company The second Audiogenic, Audiogenic Software Limited was formed to acquire the assets and goodwill of the original company. Although financed and controlled by Supersoft, run by Peter Calver and Pearl Wellard, a minority stake was held by Martin Maynard. At this time the company employed Darryl Still, who produced a number of successful releases for the BBC Micro, such as Psycastria and Thunderstruck, written by former members of the Icon Software team in North East England. Peter Scott and Gary Partis amongst them. Maynard left the board in 1987 and Still went on to manage the launch of the Atari ST, Lynx handheld and Jaguar consoles in Europe, before stints with Electronic Arts and Nvidia. In 1996 the Audiogenic came to an arrangement with Codemasters as a result of which the latter acquired the development team behind the Brian Lara series of cricket games, and the following year the company ceased developing new titles. 
Peter Calver still owns Supersoft and Audiogenic, but now runs LostCousins, a family history website. Games Audiogenic published and/or developed many popular games for a variety of computers and games consoles. The company's first release in 1985 was Graham Gooch's Test Cricket, which had been developed by Supersoft, and the company continued to release sports games. For many years it was the world's leading producer of cricket games: Brian Lara Cricket and Lara '96 were developed by Audiogenic for the Mega Drive and released by Codemasters - both reached No.1 in the UK charts. Other sports titles included Emlyn Hughes International Soccer, Graham Gooch World Class Cricket, Allan Border Cricket, European Champions, Lothar Matthäus, Super League Manager, Rugby League Coach, World Class Rugby, European Champions, Wembley International Soccer, Wembley Rugby League, Shane Warne Cricket, and Super Tennis Champs. With Emlyn Hughes International Soccer in 1988 Audiogenic pioneered the concept of a fast-moving sports simulation featuring on-screen commentary, named players and management elements; later with World Class Rugby and then European Champions Audiogenic introduced the concept of sports simulations with a choice of viewpoints. Other titles included Exterminator (a coin-op conversion), Helter Skelter, Impact, Krusty's Fun House, Bubble & Squeak, Exile, and Loopz. Loopz, designed by Ian Upton, is one of the few computer games to have been converted to a coin-operated arcade game, and whilst Capcom (the licensee) never brought the game to market, a video of the completed game exists. It was also licensed to Barcrest for release as a skill-with-prizes amusement machine, but this version also failed to make it to market. However versions were released for 18 different computer and video game formats including NES, Game Boy, IBM PC, Amiga and Atari ST. A follow up game, Super Loopz, was licensed to Imagineer for the Super NES and was published for the Amiga by Audiogenic. See also Supersoft Brian Lara Cricket series Codemasters References External links BBC Games Archive page for Audiogenic Audiogenic at ehis64.net British companies established in 1985 Video game companies of the United Kingdom Video game development companies Video game publishers
1692431
https://en.wikipedia.org/wiki/DVD%20region%20code
DVD region code
DVD region codes are a digital rights management technique introduced in 1997. It is designed to allow rights holders to control the international distribution of a DVD release, including its content, release date, and price, all according to the appropriate region. This is achieved by way of region-locked DVD players, which will play back only DVDs encoded to their region (plus those without any region code). The American DVD Copy Control Association also requires that DVD player manufacturers incorporate the regional-playback control (RPC) system. However, region-free DVD players, which ignore region coding, are also commercially available, and many DVD players can be modified to be region-free, allowing playback of all discs. DVDs may use one code, multiple codes (multi-region), or all codes (region free). Region codes and countries Any combination of regions can be applied to a single disc. For example, a DVD designated Region 2/4 is suitable for playback in Europe, Latin America, Oceania, and any other Region 2 or Region 4 area. So-called "Region 0" and "ALL" discs are meant to be playable worldwide. The term "Region 0" also describes the DVD players designed or modified to incorporate Regions 1–8, thereby providing compatibility with most discs, regardless of region. This apparent solution was popular in the early days of the DVD format, but studios quickly responded by adjusting discs to refuse to play in such machines by implementing a system known as "Regional Coding Enhancement" (RCE). DVDs sold in the Baltic states use both region 2 and 5 codes, having previously been in region 5 (due to their history as part of the USSR) but EU single market law concerning the free movement of goods caused a switch to region 2. European region 2 DVDs may be sub-coded "D1" to "D4". "D1" are the UK only releases; "D2" and "D3" are not sold in the UK and Ireland; "D4" are distributed throughout Europe. Overseas territories of the United Kingdom and France (both in region 2) often have other regions (4 or 5, depending on geographical situation) than their homelands. Most DVDs sold in Mexico and the rest of Latin America carry both region 1 and 4 codes. Some are region 1 only after 2006 to coincide with Blu-Ray region A. Egypt, Eswatini, Lesotho, and South Africa are in DVD region 2, while all other African countries are in region 5, but all African countries are in the same Blu-ray region code (region B). North Korea and South Korea have different DVD region codes (North Korea: region 5, South Korea: region 3), but use the same Blu-ray region code (region A). In China, two DVD region codes are used: Mainland China uses region 6, but Hong Kong and Macau use region 3. There are also two Blu-ray regions used: Mainland China uses region C, but Hong Kong and Macau use region A. Most DVDs in India combine the region 2, region 4, and region 5 codes, or are region 0. Region-code enhanced Region-code enhanced, also known as just "RCE" or "REA", was a retroactive attempt to prevent the playing of one region's discs in another region, even if the disc was played in a region-free player. The scheme was deployed on only a handful of discs. The disc contained the main program material region coded as region 1. But it also contained a short video loop of a map of the world showing the regions, which was coded as region 2, 3, 4, 5, and 6. The intention was that when the disc was played in a non-region 1 player, the player would default to playing the material for its native region. 
This played the aforementioned video loop of a map, which was impossible to escape from, as the user controls were disabled. The scheme was fundamentally flawed, as a region-free player tries to play a disc using the last region that worked with the previously inserted disc. If it cannot play the disc, then it tries another region until one is found that works. RCE could be defeated by briefly playing a "normal" region 1 disc, and then inserting the RCE protected region 1 disc, which would now play. RCE also caused a few problems with genuine region 1 players. Many "multi-region" DVD players defeated regional lockout and RCE by automatically identifying and matching a disc's region code or allowing the user to manually select a particular region. Some manufacturers of DVD players now freely supply information on how to disable regional lockout, and on some recent models, it appears to be disabled by default. Computer programs such as DVD Shrink, Digiarty WinX DVD Ripper Platinum can make copies of region-coded DVDs without RCE restriction. Purpose One purpose of region coding is controlling release dates. One practice of movie marketing which was threatened by the advent of digital home video was the tradition of releasing a movie to cinemas and then for general rental or sale later in some countries than in others. This practice was historically common because before the advent of digital cinema, releasing a movie at the same time worldwide used to be prohibitively expensive. Most importantly, manufacturing a release print of a film for public exhibition in a cinema has always been expensive, but a large number of release prints are needed only for a narrow window of time during the first few weeks after a film's release. Spreading out release dates allows for reuse of some release prints in other regions. Videotapes were inherently regional since formats had to match those of the encoding system used by television stations in that particular region, such as NTSC and PAL, although from the early 1990s PAL machines increasingly offered NTSC playback. DVDs are less restricted in that sense. Region coding allows movie studios to better control the global release dates of DVDs. Also, the copyright in a title may be held by different entities in different territories. Region coding enables copyright holders to (attempt to) prevent a DVD from a region from which they do not derive royalties from being played on a DVD player inside their region. Region coding attempts to dissuade importing of DVDs from one region into another. The cultural differences between regions may have also played a part. PAL/SECAM vs. NTSC DVDs are also formatted for use on two conflicting regional television systems: 480i/60 Hz and 576i/50 Hz, which in analog contexts are often referred to as 525/60 (NTSC) and 625/50 (PAL/SECAM) respectively. Strictly speaking, PAL and SECAM are analog color television signal formats which have no relevance in the digital domain (as evident in the conflation of PAL and SECAM, which are actually two distinct analog color systems). However, the DVD system was originally designed to encode the information necessary to reproduce signals in these formats, and the terms continue to be used (incorrectly) as a method of identifying refresh rates and vertical resolution. 
However, an "NTSC", "PAL" or "SECAM" DVD player that has one or more analog composite video output (baseband or modulated) will only produce NTSC, PAL or SECAM signals, respectively, from those outputs, and may only play DVDs identified with the corresponding format. NTSC is the analog TV format historically associated with the United States, Canada, Japan, South Korea, Mexico, Philippines, Taiwan, and other countries. PAL is the analog color TV format historically associated with most of Europe, most of Africa, China, India, Australia, New Zealand, Israel, North Korea, and other countries (Brazil adopted the variant PAL-M, which uses the refresh rate and resolution commonly associated with NTSC). SECAM, a format associated with French-speaking Europe, while using the same resolution and refresh rate as PAL, is a distinct format which uses a very different system of color encoding. Some DVD players can only play discs identified as NTSC, PAL or SECAM, while others can play multiple standards. In general, it is easier for consumers in PAL/SECAM countries to view NTSC DVDs than vice versa. Almost all DVD players sold in PAL/SECAM countries are capable of playing both kinds of discs, and most modern PAL TVs can handle the converted signal. NTSC discs may be output from a PAL DVD player in three different ways: using a non-chroma encoded format such as RGB SCART or YPBPR component video. using PAL 60 encoded composite video/S-Video—a "hybrid" system which uses NTSC's 525/60 line format along with PAL's chroma subcarrier using NTSC encoded composite video/S-Video. However, most NTSC players cannot play PAL discs, and most NTSC TVs do not accept 576i video signals as used on PAL/SECAM DVDs. Those in NTSC countries, such as the United States, generally require both a region-free, multi-standard player and a multi-standard television to view PAL discs, or a converter box, whereas those in PAL countries generally require only a region-free player to view NTSC discs (with the possible exception of Japanese discs in most European countries, since they are in the same region - this means European region 2 users could import Japanese discs and play them on their players without any obstacles.) There are also differences in pixel aspect ratio (720 × 480 vs. 720 × 576 with the same image aspect ratio) and display frame rate (29.97 vs. 25). Most computer-based DVD software and hardware can play both NTSC and PAL video and both audio standards. Blu-ray players, which use up to 1080p signals, are backwards compatible with both NTSC and PAL DVDs. Implementations of region codes Standalone DVD players Usually a configuration flag is set in each player's firmware at the factory. This flag holds the region number that the machine is allowed to play. Region-free players are DVD players shipped without the ability to enforce regional lockout (usually by means of a chip that ignores any region coding), or without this flag set. However, if the player is not region-free, it can often be unlocked with an unlock code entered via the remote control. This code simply allows the user to change the factory-set configuration flag to another region, or to the special region "0". Once unlocked this way, the DVD player allows the owner to watch DVDs from any region. Many websites exist on the Internet offering these codes, often known informally as hacks. Many websites provide instructions for different models of standalone DVD players, to hack, and their factory codes. 
Computer DVD drives Older DVD drives use RPC-1 (Regional Playback Control) firmware, which means the drive allows DVDs from any region to play. Newer drives use RPC-2 firmware, which enforces the DVD region coding at the hardware level. These drives can often be reflashed or hacked with RPC-1 firmware, effectively making the drive region-free. This may void the drive warranty. Some drives may come set as region-free, so the user is expected to assign their region when they buy it. In this case, some DVD programs may prompt the user to select a region, while others may actually assign the region automatically based on the locale set in the operating system. In most computer drives, users are allowed to change the region code up to five times. If the number of allowances reaches zero, the region last used will be permanent even if the drive is transferred to another computer. This limit is built into the drive's controller software, called firmware. Resetting the firmware count can be done with first- or third-party software tools, or by reflashing (see above) to RPC-1 firmware. Since some software does not work correctly with RPC-1 drives, there is also the option of reflashing the drive with a so-called auto-reset firmware. This firmware appears as RPC-2 firmware to software, but will reset the region changes counter whenever power is cycled, reverting to the state of a drive that has never had its region code changed. Software DVD players Most freeware and open source DVD players ignore region coding. VLC, for example, does not attempt to enforce region coding; however, it requires access to the DVD's raw data to overcome CSS encryption, and such access may not be available on some drives with RPC-2 firmware when playing a disc from a different region than the region to which the drive is locked. Most commercial players are locked to a region code, but can be easily changed with software. Other software, known as DVD region killers, transparently remove (or hide) the DVD region code from the software player. Some can also work around locked RPC-2 firmware. Circumvention The region coding of a DVD can be circumvented by making a copy that adds flags for all region codes, creating an all-region DVD. DVD backup software can do this, and some can also remove Macrovision, CSS, and disabled user operations (UOps). In common region-locked DVDs (but not in RCE-DVDs), the region code is stored in the file "VIDEO_TS.IFO" (table "VMGM_MAT"), byte offsets 34 and 35. The eight regions each correspond to a value which is a power of 2: Region 1 corresponds to 1 (2^0), Region 2 to 2 (2^1), Region 3 to 4 (2^2), and so on through Region 8, which corresponds to 128 (2^7). The values of each region that the disc is not encoded for are added together to give the value in the file. For example, a disc that is encoded for Region 1 but not Regions 2–8 will have the value 2+4+8+16+32+64+128=254. A disc encoded for Regions 1, 2 and 4 will have the value 4+16+32+64+128=244. A region-free or RCE-protected DVD will carry the value zero, since no regions are excluded. Video game consoles The Xbox, Xbox 360, PlayStation 2 and PlayStation 3 consoles are all region-locked for DVD playback. The PlayStation 2 can be modified to have its regional-locking disabled through the use of modchips. 
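Returning to the region byte described under Circumvention above, the arithmetic is simple enough to express in a few lines. The following sketch is illustrative only: it assumes the bit layout quoted above (one bit per region, set when that region is excluded) and leaves out all file handling and error checking.

```python
ALL_REGIONS = range(1, 9)

def region_mask(playable):
    """Bit is set for every region the disc is NOT encoded for (Region 1 -> bit 0)."""
    excluded = set(ALL_REGIONS) - set(playable)
    mask = 0
    for region in excluded:
        mask |= 1 << (region - 1)
    return mask

def playable_regions(mask):
    """Inverse: a region is playable when its bit is clear."""
    return {r for r in ALL_REGIONS if not mask & (1 << (r - 1))}

# The worked examples from the text above:
assert region_mask({1}) == 254              # Region 1 only
assert region_mask({1, 2, 4}) == 244        # Regions 1, 2 and 4
assert region_mask(set(ALL_REGIONS)) == 0   # region-free disc
assert playable_regions(244) == {1, 2, 4}
```

A hypothetical player-side check then reduces to a single bit test of the mask against the player's own region number.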
Although region locked on film DVDs and film Blu-ray Discs, the PlayStation 3, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X are region free for video games, though add-on content on the online store is region locked and must match the region of the disc. Blu-ray Disc region codes Blu-ray Discs use a much simpler region-code system than DVD with only three regions, labeled A, B and C. As with DVDs, many Blu-rays are encoded region 0 (region free), making them suitable for players worldwide. Unlike DVD regions, Blu-ray regions are verified only by the player software, not by the computer system or the drive. The region code is stored in a file or the registry, and there are hacks to reset the region counter of the player software. In stand-alone players, the region code is part of the firmware. Some Blu-Rays are region-free. For bypassing region codes, there are software and multi-regional players available. A new form of Blu-ray region coding tests not only the region of the player/player software, but also its country code. This means, for example, while both the US and Japan are Region A, some American discs will not play on devices/software installed in Japan or vice versa, since the two countries have different country codes (the United States has 21843 or Hex 5553 ("US" in ASCII, according to ISO 3166-1), and Japan has 19024, or Hex 4a50 ("JP"); Canada has 17217 or Hex 4341 ("CA"). Although there are only three Blu-ray regions, the country code allows much more precise control of the regional distribution of Blu-ray discs than the six (or eight) DVD regions. In Blu-ray discs, there are no "special regions" such as the regions 7 and 8 in DVDs. UMD region codes For the UMD, a disc type used for the PlayStation Portable, UMD movies have region codes similar to DVDs, although many PSP games are region-free. Criticism and legal concerns Region-code enforcement has been discussed as a possible violation of World Trade Organization free trade agreements or competition law. It is believed that the only entities benefiting from DVD Region Coding are the movie studios, the marketers of Code-Free DVD players and DVD decrypters. Oceania The Australian Competition & Consumer Commission (ACCC) have warned that DVD players that enforce region-coding may violate the Competition and Consumer Act 2010. A December 2000 report from the ACCC advised consumers to "exercise caution when purchasing a DVD video player because of the restrictions that limit their ability to play imported DVDs." The report stated, "These restrictions are artificially imposed by a group of multinational film entertainment companies and are not caused by the existing differences in television display formats such as PAL, NTSC and SECAM [...] The ACCC is currently investigating whether Australian consumers are paying higher prices for DVDs because of the ability of copyright owners, such as film companies, to prevent competition by restricting imports from countries where the same (authorised) video titles are sold more cheaply." In 2012, a report from The Sydney Morning Herald revealed that region-free DVD players were legal in Australia, as they were exempt from the Technological Protection Measures (TPMs) included in the US Free Trade Agreement. Under New Zealand copyright law, DVD region codes and the mechanisms in DVD players to enforce them have no legal protection. 
Europe The practice has also been criticized by the European Commission which as of 14 March 2001 were investigating whether the resulting price discrimination amounts to a violation of EU competition law. North America The Washington Post has highlighted how DVD region-coding has been a major inconvenience for travelers who wish to legally purchase DVDs abroad and return with them to their countries of origin, students of foreign languages, immigrants who want to watch films from their homeland and foreign film enthusiasts. Another criticism is that region-coding allows for local censorship. For example, the Region 1 DVD of the 1999 drama film Eyes Wide Shut contains the digital manipulations necessary for the film to secure an MPAA R-rating, whereas these manipulations are not evident in non–region 1 discs. See also Broadcast television systems DVD Copy Control Association Geo-blocking Regional lockout References External links Blu-ray and DVD Region Codes and Video Standards at Brenton Film DVD region information with regards to RCE from Home Theater Info Region Coding - Explanations & Help from The DVDCodes Source Amazon.co.uk DVD Regions guide Digital rights management Region code Self-censorship Hardware restrictions
46686994
https://en.wikipedia.org/wiki/Zen%20%28first%20generation%29
Zen (first generation)
Zen is the codename for the first iteration in a family of computer processor microarchitectures of the same name from AMD. It was first used with their Ryzen series of CPUs in February 2017. The first Zen-based preview system was demonstrated at E3 2016, and first substantially detailed at an event hosted a block away from the Intel Developer Forum 2016. The first Zen-based CPUs, codenamed "Summit Ridge", reached the market in early March 2017, Zen-derived Epyc server processors launched in June 2017 and Zen-based APUs arrived in November 2017. Zen is a clean sheet design that differs from AMD's previous long-standing Bulldozer architecture. Zen-based processors use a 14 nm FinFET process, are reportedly more energy efficient, and can execute significantly more instructions per cycle. SMT has been introduced, allowing each core to run two threads. The cache system has also been redesigned, making the L1 cache write-back. Zen processors use three different sockets: desktop and mobile Ryzen chips use the AM4 socket, bringing DDR4 support; the high-end desktop Zen-based Threadripper chips support quad-channel DDR4 RAM and offer 64 PCIe 3.0 lanes (vs 24 lanes), using the TR4 socket; and Epyc server processors offer 128 PCI 3.0 lanes and octa-channel DDR4 using the SP3 socket. Zen is based on a SoC design. The memory, PCIe, SATA, and USB controllers are incorporated into the same chip(s) as the processor cores. This has advantages in bandwidth and power, at the expense of chip complexity and die area. This SoC design allows the Zen microarchitecture to scale from laptops and small-form factor mini PCs to high-end desktops and servers. By 2020, 260 million Zen cores have already been shipped by AMD. Design According to AMD, the main focus of Zen is on increasing per-core performance. New or improved features include: The L1 cache has been changed from write-through to write-back, allowing for lower latency and higher bandwidth. SMT (simultaneous multithreading) architecture allows for two threads per core, a departure from the CMT (clustered multi-thread) design used in the previous Bulldozer architecture. This is a feature previously offered in some IBM, Intel and Oracle processors. A fundamental building block for all Zen-based CPUs is the Core Complex (CCX) consisting of four cores and their associated caches. Processors with more than four cores consist of multiple CCXs connected by Infinity Fabric. Processors with non-multiple-of-four core counts have some cores disabled. Four ALUs, two AGUs/load–store units, and two floating-point units per core. Newly introduced "large" micro-operation cache. Each SMT core can dispatch up to six micro-ops per cycle (a combination of 6 integer micro-ops and 4 floating point micro-ops per cycle). Close to 2× faster L1 and L2 bandwidth, with total L3 cache bandwidth up 5×. Clock gating. Larger retire, load, and store queues. Improved branch prediction using a hashed perceptron system with Indirect Target Array similar to the Bobcat microarchitecture, something that has been compared to a neural network by AMD engineer Mike Clark. The branch predictor is decoupled from the fetch stage. A dedicated stack engine for modifying the stack pointer, similar to that of Intel Haswell and Broadwell processors. Move elimination, a method that reduces physical data movement to reduce power consumption. 
Binary compatibility with Intel's Skylake (excluding VT-x and private MSRs): RDSEED support, a high-performance hardware random number generator instruction introduced in Broadwell. Support for the SMAP, SMEP, XSAVEC/XSAVES/XRSTORS, and CLFLUSHOPT instructions. ADX support. SHA support. CLZERO instruction for clearing a cache line, useful for handling ECC-related machine-check exceptions. PTE (page table entry) coalescing, which combines contiguous 4 kB pages into a 32 kB page size. "Pure Power" (more accurate power monitoring sensors). Support for Intel-style running average power limit (RAPL) measurement. Smart Prefetch. Precision Boost. eXtended Frequency Range (XFR), an automated overclocking feature which boosts clock speeds beyond the advertised turbo frequency. The Zen architecture is built on a 14 nanometer FinFET process subcontracted to GlobalFoundries, which in turn licenses its 14 nm process from Samsung Electronics. This gives greater efficiency than the 32 nm and 28 nm processes of previous AMD FX CPUs and AMD APUs, respectively. The "Summit Ridge" Zen family of CPUs uses the AM4 socket and features DDR4 support and a 95 W TDP (thermal design power). While newer roadmaps don't confirm the TDP for desktop products, they suggest a range of 5 to 15 W for low-power mobile products with up to two Zen cores and 15 to 35 W for performance-oriented mobile products with up to four Zen cores. Each Zen core can decode four instructions per clock cycle and includes a micro-op cache which feeds two schedulers, one each for the integer and floating point segments. Each core has two address generation units, four integer units, and four floating point units. Two of the floating point units are adders, and two are multiply-adders. However, using multiply-add operations may prevent a simultaneous add operation in one of the adder units. There are also improvements in the branch predictor. The L1 cache size is 64 KB for instructions per core and 32 KB for data per core. The L2 cache size is 512 KB per core, and the L3 is 1–2 MB per core. L3 caches offer 5× the bandwidth of previous AMD designs. History and development AMD began planning the Zen microarchitecture shortly after re-hiring Jim Keller in August 2012. AMD formally revealed Zen in 2015. The team in charge of Zen was led by Keller (who left in September 2015 after a 3-year tenure) and Zen Team Leader Suzanne Plummer. The Chief Architect of Zen was AMD Senior Fellow Michael Clark. Zen was originally planned for 2017 following the ARM64-based K12 sister core, but on AMD's 2015 Financial Analyst Day it was revealed that K12 was delayed in favor of the Zen design, to allow it to enter the market within the 2016 timeframe, with the release of the first Zen-based processors expected for October 2016. In November 2015, a source inside AMD reported that Zen microprocessors had been tested and "met all expectations" with "no significant bottlenecks found". In December 2015, it was rumored that Samsung may have been contracted as a fabricator for AMD's 14 nm FinFET processors, including both Zen and AMD's then-upcoming Polaris GPU architecture. This was clarified by AMD's July 2016 announcement that products had been successfully produced on Samsung's 14 nm FinFET process. AMD stated Samsung would be used "if needed", arguing this would reduce risk for AMD by decreasing dependence on any one foundry. In December 2019, AMD started shipping first-generation Ryzen products built using the second-generation Zen+ architecture. 
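The RDSEED support noted in the compatibility list above can be exercised from C through the _rdseed64_step compiler intrinsic declared in immintrin.h (available in GCC and Clang when building with -mrdseed). The fragment below is only an illustrative sketch, not AMD or Intel sample code; RDSEED is permitted to fail transiently when the hardware entropy conditioner is exhausted, so callers retry a bounded number of times.

/* rdseed_demo.c - sketch: pull one 64-bit value from the hardware seed source.
 * Build (assumption): gcc -mrdseed -o rdseed_demo rdseed_demo.c */
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    unsigned long long seed = 0;
    int ok = 0;
    int attempt;

    /* _rdseed64_step returns 1 on success, 0 if no entropy was available yet. */
    for (attempt = 0; attempt < 100 && !ok; attempt++)
        ok = _rdseed64_step(&seed);

    if (ok)
        printf("RDSEED value: 0x%016llx\n", seed);
    else
        printf("RDSEED did not return a value (entropy temporarily unavailable).\n");
    return 0;
}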
Advantages over predecessors Manufacturing process Processors based on Zen use 14 nm FinFET silicon. These processors are reportedly produced at GlobalFoundries. Prior to Zen, AMD's smallest process size was 28 nm, as utilized by its Steamroller and Excavator microarchitectures. The immediate competition, Intel's Skylake and Kaby Lake microarchitectures, is also fabricated on 14 nm FinFET, though Intel planned to begin the release of 10 nm parts later in 2017. Intel was unable to reach this goal, and as of 2021, only mobile chips had been produced on its 10 nm process. In comparison to Intel's 14 nm FinFET, AMD claimed in February 2017 the Zen cores would be 10% smaller. Intel later announced, in July 2018, that 10 nm mainstream processors should not be expected before the second half of 2019. For identical designs, these die shrinks would use less current (and power) at the same frequency (or voltage). As CPUs are usually power limited (typically up to ~125W, or ~45W for mobile), smaller transistors allow for either lower power at the same frequency, or higher frequency at the same power. Performance One of Zen's major goals in 2016 was to focus on performance per core, and it was targeting a 40% improvement in instructions per cycle (IPC) over its predecessor. Excavator, in comparison, offered a 4–15% improvement over previous architectures. AMD announced the final Zen microarchitecture actually achieved a 52% improvement in IPC over Excavator. The inclusion of SMT also allows each core to process up to two threads, increasing processing throughput by better use of available resources. The Zen processors also employ sensors across the chip to dynamically scale frequency and voltage. This allows for the maximum frequency to be dynamically and automatically defined by the processor itself based upon available cooling. AMD has demonstrated an 8-core/16-thread Zen processor outperforming an equally-clocked Intel Broadwell-E processor in Blender rendering and HandBrake benchmarks. Zen supports AVX2, but it requires two clock cycles to complete each AVX2 instruction, compared to Intel's one. This difference was corrected in Zen 2. Memory Zen supports DDR4 memory (up to eight channels) and ECC. Pre-release reports stated APUs using the Zen architecture would also support High Bandwidth Memory (HBM). However, the first demonstrated APU did not use HBM. Previous APUs from AMD relied on shared memory for both the GPU and the CPU. Power consumption and heat output Processors built at the 14 nm node on FinFET silicon should show reduced power consumption and therefore heat over their 28 nm and 32 nm non-FinFET predecessors (for equivalent designs), or be more computationally powerful at equivalent heat output/power consumption. Zen also uses clock gating, reducing the frequency of underutilized portions of the core to save power. This comes from AMD's SenseMI technology, using sensors across the chip to dynamically scale frequency and voltage. Enhanced security and virtualization support Zen added support for AMD's Secure Memory Encryption (SME) and AMD's Secure Encrypted Virtualization (SEV). Secure Memory Encryption is real-time memory encryption done per page table entry. Encryption occurs on a hardware AES engine and keys are managed by the onboard "Security" Processor (ARM Cortex-A5) at boot time to encrypt each page, allowing any DDR4 memory (including non-volatile varieties) to be encrypted. AMD SME also makes the contents of the memory more resistant to memory snooping and cold boot attacks. 
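Operating systems and hypervisors typically discover the SME and SEV capabilities described above through CPUID extended leaf 0x8000001F. In AMD's published documentation, bit 0 of EAX reports SME, bit 1 reports SEV, and the low bits of EBX give the position of the encryption (C) bit in the physical address; the sketch below assumes those bit assignments and GCC/Clang's cpuid.h helpers, and is meant as an illustration rather than production feature-detection code (verify the bit layout against the current AMD manuals before relying on it).

/* sme_sev_probe.c - sketch: query the AMD memory-encryption feature leaf.
 * Assumed bit meanings (per AMD documentation): EAX bit 0 = SME, bit 1 = SEV. */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* __get_cpuid_count checks the maximum supported extended leaf first. */
    if (!__get_cpuid_count(0x8000001F, 0, &eax, &ebx, &ecx, &edx)) {
        printf("Memory-encryption feature leaf not available on this CPU.\n");
        return 0;
    }
    printf("SME supported: %s\n", (eax & (1u << 0)) ? "yes" : "no");
    printf("SEV supported: %s\n", (eax & (1u << 1)) ? "yes" : "no");
    /* EBX[5:0] reports which physical address bit acts as the encryption (C) bit. */
    printf("C-bit position: %u\n", ebx & 0x3F);
    return 0;
}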
The Secure Encrypted Virtualization (SEV) feature allows the memory contents of a virtual machine (VM) to be transparently encrypted with a key unique to the guest VM. The memory controller contains a high-performance encryption engine which can be programmed with multiple keys for use by different VMs in the system. The programming and management of these keys is handled by the AMD Secure Processor firmware which exposes an API for these tasks. Connectivity Incorporating much of the southbridge into the SoC, the Zen CPU includes SATA, USB, and PCI Express NVMe links. This can be augmented by available Socket AM4 chipsets which add connectivity options including additional SATA and USB connections, and support for AMD's Crossfire and Nvidia's SLI. AMD, in announcing its Radeon Instinct line, argued that the upcoming Zen-based Naples server CPU would be particularly suited for building deep learning systems. The 128 PCIe lanes per Naples CPU allow for eight Instinct cards to connect at PCIe x16 to a single CPU. This compares favorably to the Intel Xeon line, with only 40 PCIe lanes. Features CPUs CPU features table APUs APU features table Products The Zen architecture is used in the current-generation desktop Ryzen CPUs. It is also in Epyc server processors (the successor to Opteron processors) and APUs. The first desktop processors without graphics processing units (codenamed "Summit Ridge") were initially expected to start selling at the end of 2016, according to an AMD roadmap, with the first mobile and desktop processors of the AMD Accelerated Processing Unit type (codenamed "Raven Ridge") following in late 2017. AMD officially delayed Zen until Q1 of 2017. In August 2016, an early demonstration of the architecture showed an 8-core/16-thread engineering sample CPU at 3.0 GHz. In December 2016, AMD officially announced the desktop CPU line under the Ryzen brand for release in Q1 2017. It also confirmed server processors would be released in Q2 2017, and mobile APUs in H2 2017. On March 2, 2017, AMD officially launched the first Zen architecture-based octacore Ryzen desktop CPUs. The final clock speeds and TDPs for the three CPUs released in Q1 of 2017 demonstrated significant performance-per-watt benefits over the previous K15h (Piledriver) architecture. The octacore Ryzen desktop CPUs demonstrated performance-per-watt comparable to Intel's Broadwell octacore CPUs. In March 2017, AMD also demonstrated an engineering sample of a server CPU based on the Zen architecture. The CPU (codenamed "Naples") was configured as a dual-socket server platform with each CPU having 32 cores/64 threads. Desktop processors Desktop APUs Ryzen APUs are identified by either the G or GE suffix in their name. Mobile APUs Embedded processors In February 2018, AMD announced the V1000 series of embedded Zen+Vega APUs with four SKUs. Server processors AMD announced in March 2017 that it would release a server platform based on Zen, codenamed Naples, in the second quarter of the year. The platform includes 1- and 2-socket systems. The CPUs in multi-processor configurations communicate via AMD's Infinity Fabric. Each chip supports eight channels of memory and 128 PCIe 3.0 lanes, of which 64 lanes are used for CPU-to-CPU communication through Infinity Fabric when installed in a dual-processor configuration. AMD officially revealed Naples under the brand name Epyc in May 2017. On June 20, 2017, AMD officially released the Epyc 7000 series CPUs at a launch event in Austin, Texas. 
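The Ryzen, Threadripper and Epyc parts discussed above all report themselves through the x86 CPUID instruction as AMD family 17h processors, which is how operating systems and diagnostic tools commonly recognize Zen-generation silicon. The following minimal C sketch, assuming GCC or Clang's cpuid.h, reads the vendor string and computes the display family; it illustrates the CPUID encoding only and is not an exhaustive identification routine.

/* zen_check.c - sketch: detect an AMD family-17h (Zen-era) CPU via CPUID. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0: the 12-byte vendor string is returned in EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1: base and extended family fields live in EAX. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    unsigned int base_family = (eax >> 8) & 0xF;
    unsigned int ext_family  = (eax >> 20) & 0xFF;
    unsigned int family = base_family + (base_family == 0xF ? ext_family : 0);

    printf("Vendor: %s, family: 0x%X\n", vendor, family);
    if (strcmp(vendor, "AuthenticAMD") == 0 && family == 0x17)
        printf("Looks like a Zen-family (17h) processor.\n");
    else
        printf("Not an AMD family-17h part.\n");
    return 0;
}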
Embedded Server processors In February 2018, AMD also announced the EPYC 3000 series of embedded Zen CPUs. See also AMD K9 AMD K10 Jim Keller (engineer) Ryzen Steamroller (microarchitecture) Zen+ Zen 2 References External links Ryzen Processors AMD Advanced Micro Devices microarchitectures Computer-related introductions in 2017 X86 microarchitectures
47591720
https://en.wikipedia.org/wiki/2015%E2%80%9316%20USC%20Trojans%20women%27s%20basketball%20team
2015–16 USC Trojans women's basketball team
The 2015–16 USC Trojans women's basketball team represented the University of Southern California during the 2015–16 NCAA Division I women's basketball season. The Trojans, led by third-year head coach Cynthia Cooper-Dyke, played their home games at the Galen Center and were members of the Pac-12 Conference. They finished the season 19–13, 6–12 in Pac-12 play, to finish in eighth place. They advanced to the quarterfinals of the Pac-12 Women's Basketball Tournament, where they lost to Oregon State. Roster Schedule (Exhibition; Non-conference regular season; Pac-12 regular season; Pac-12 Conference Women's Tournament) Rankings See also 2015–16 USC Trojans men's basketball team References USC Trojans women's basketball seasons USC Trojans
39404901
https://en.wikipedia.org/wiki/Cyber%20Crime%20Unit%20%28Hellenic%20Police%29
Cyber Crime Unit (Hellenic Police)
The Cyber Crime Unit (; which can be literally translated as Electronic Crime Prosecution or roughly Cyber Crime Prosecution) of the Hellenic Police, for which legislative responsibility remains with the Ministry of Citizen Protection, was officially founded in 2004 with Greek Presidential Decree 100/2004 (Government Gazette 69/Α/3-3-2004). In 2011, Presidential Decree 9/2011 (Government Gazette 24/Α/21-2-2011) established the Authority of Financial Police and Cyber Crime Subdivision (), at Police Directorate level, which commenced operation in August 2011 and comprised the Financial Police Subdivision and the Cyber Crime Subdivision. It was reformed in 2014 with Article 17 of Section 2 of Law 4249/2014 (Government Gazette 73/Α/24-3-2014), under which it was renamed the Cyber Crime Division () and the Cyber Crime Subdivision of Northern Greece was founded and structured in Thessaloniki. It nevertheless continues to be commonly known as the Cyber Crime Unit or Cyber Crime Center. The legislation for the Cyber Crime Division was amended with Article 31 of Presidential Decree 82/2020 (Government Gazette 183/A/23-9-2020). History Cyber crime law enforcement within the Hellenic Police began in 1995, carried out in the line of duty by police officers with information technology skills, among them the then police officer Manolis Sfakianakis, who was later appointed the first Head of the Cyber Crime Unit at the time of its founding on 3 March 2004 and held the post until 17 February 2016; George Papaprodromou was appointed the next Head, serving from 27 May 2016 to 2 November 2018. The Cyber Crime Unit initially acquired IT equipment funded by private individual sponsors and consisted of around four police officers; it has since been staffed by around eighty personnel covering different competencies, including civilian personnel qualified with university degrees and postgraduate studies. Sectors The Office of the Public Prosecutor of the Supreme Civil and Criminal Court of Greece (Court of Cassation; Areios Pagos) issued Encyclical 02/2019 on 22 May 2019, which sets out the Cyber Crime Division's terms of service and area of responsibility, specifying which cases must be submitted to the Cyber Crime Division. As of January 2020, the Cyber Crime Division () in Athens, alongside its branch, the Cyber Crime Subdivision of Northern Greece () in Thessaloniki, operates a 24/7 service and is further divided into five Departments: 1. Department of Administrative Support and Information Management 2. Department of Innovative Actions and Strategy 3. Department of Electronic and Telephone Communication Security and Protection of Software and Intellectual Property Rights 4. Department of Minors Internet Protection and Digital Investigation 5. Department of Special Cases and Internet Economic Crimes Prosecution Services It has resolved a large number of cases of fraud, extortion and child abuse, while several resolved cases were only partially committed in Greece. The Unit has intervened in a number of suicide attempts. In addition, the Cyber Crime Unit launched a mobile app called CyberKid, accompanied by a respective website and funded through private sponsorship by Wind Hellas, providing useful information to internet users, especially children, parents and legal guardians. 
The CyberKid app for portable devices (smartphones, tablets) is available for free download from Google Play, the App Store and the Microsoft Store, so that users can directly contact the Cyber Crime Division in the event of a cyber crime incident. The Cyber Crime Unit has organized a series of tele-conferences and a number of event days, delivering lectures and informative presentations open to the public in various locations across Greece to showcase crucial cyber matters and the dangers of the internet. It has also co-organized, together with the competent authorities, the annual Conference of Safe Cyber Navigation in Athens, funded by various sponsors and streamed live, with the first conference held on 8 February 2012 in honor of Safer Internet Day (SID). The Cyber Crime Division also operates a website called CyberAlert. Awards On 2 April 2015, the First (Gold) Award in the Best Social and Economic Development App category was bestowed on the Cyber Crime Division for the CyberKid mobile application, funded by Wind Hellas, at the Mobility Forum & Apps Awards 2015, with Konstantinos Ouzounis, CEO of Ethos Media S.A., presenting the award to Police Major General Manolis Sfakianakis on behalf of the Cyber Crime Division in a ceremony that took place at the Divani Caravel Hotel in Athens. The CyberKid mobile application is an initiative of the Hellenic Police Headquarters with the Ministry of Public Order and Citizen Protection, implemented by the Cyber Crime Division. On 20 November 2014, the UNICEF Greece 2014 Award was bestowed on both the Cyber Crime Division of the Hellenic Police and its head Manolis Sfakianakis, in recognition of their contribution to promoting and protecting the rights of children in Greece, at UNICEF's Greek National Committee Awards 2014 – the 25th celebration commemorating the Declaration of the Rights of the Child, World Children's Day. The ceremony took place in the Book Arcade located in the Arsakeion Mansion in Athens. On 13 February 2006, an Honorary Distinction () was bestowed on both the Cyber Crime Unit and its head in recognition of their valuable social work, by the Ministry of National Education and Religious Affairs, with the Minister Marietta Giannakou presenting the award to Manolis Sfakianakis in a ceremony that took place at the General Secretariat for Youth in Athens. References External links Hellenic Police Official Website Hellenic Police Government agencies established in 2004 2004 establishments in Greece Cybercrime
4813024
https://en.wikipedia.org/wiki/Sound%20Blaster%20AWE32
Sound Blaster AWE32
The Sound Blaster AWE32 is an ISA sound card from Creative Technology. It is an expansion board for PCs and is part of the Sound Blaster family of products. The Sound Blaster AWE32, introduced in March 1994, was a near full-length ISA sound card, measuring 14 inches (356 mm) in length, due to the number of features included. Sound Blaster AWE32 Backward compatibility The AWE32's digital audio section was basically an entire Sound Blaster 16, and as such, was compatible with Creative's earlier Sound Blaster 2.0 (minus the C/MS audio chips.) Its specifications included 16-bit 44.1 kHz AD/DA conversion with real-time on-board compression / decompression and the Yamaha OPL3 FM synthesizer chip. However, compatibility was not always perfect and there were situations where various bugs could arise in games. Many of the Sound Blaster AWE32 cards had codecs that supported bass, treble, and gain adjustments through Creative's included mixer software. There were many variants and revisions of the AWE32, however, with numerous variations in audio chipset, amplifier selection and design, and supported features. For example, the Sound Blaster AWE32 boards that utilize the VIBRA chip do not have bass and treble adjustments. MIDI capability The Sound Blaster AWE32 included two distinct audio sections; one being the Creative digital audio section with their audio codec and optional CSP/ASP chip socket, and the second being the E-mu MIDI synthesizer section. The synthesizer section consisted of the EMU8000 synthesizer and effects processor chip, 1 MB EMU8011 sample ROM, and a variable amount of RAM (none on the SB32, 512 KB on the AWE32; RAM was expandable to 28 MB on both cards). These chips comprised a powerful and flexible sample-based synthesis system, based on E-mu's high-end sampler systems such as the E-mu Emulator III and E-mu Proteus. The effects processor generated various effects (i.e. reverb and chorus) and environments on MIDI output, similar to the later EAX standard on Live! and newer cards. It can also add effects to the output from the Yamaha OPL3's FM synthesis. The AWE32 was the first sampler to support E-Mu's SoundFont standard, which allowed users to build custom sound sets using their own samples, the samples included in ROM, or both. The card was sold with software for building custom SoundFonts. All of Creative's subsequent cards, other than the Sound Blaster PCI64/128 series, support SoundFonts. On the initial release, Creative promoted the EMU8000 as a waveguide physical modelling synthesis engine, due to its ability to work with delay lines. The option was used mostly as an effect engine for chorus and flanging effects. Actual physical modeling instruments were not popular on the AWE, although some support exists in the SoundFont format. The AWE32 didn't use its MPU-401 port to access the EMU8000 — Creative decided to expose the EMU8000's registers directly, through three sets of non-standard ports, and interpret MIDI commands in software on the host CPU. As with the Gravis Ultrasound, software designers had to write special AWE32 support into their programs. To support older software, the AWE32 featured OPL-3 FM synthesis, and came with the AWEUTIL program which attempted to provide GM/MT-32/GS redirection to the native AWE hardware; however, AWEUTIL wasn't compatible with all programs or motherboards due to its use of the non-maskable interrupt (a feature that was omitted or disabled on many clone boards), and it used a lot of precious DOS conventional memory. 
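Because the card's digital audio section behaves like a Sound Blaster 16, DOS-era software usually detected it with the standard Sound Blaster DSP handshake: reset the DSP through base+0x6, wait for the ready byte 0xAA at base+0xA, then issue command 0xE1 to read the DSP version (4.xx on SB16-class cards such as the AWE32). The fragment below is a rough sketch of that probe, assuming a legacy DOS compiler (Borland/Turbo C or DJGPP) that supplies inportb, outportb and delay, and a card configured at the common base address 0x220; it is illustrative only and omits the robustness checks a real driver would need.

/* sbdsp.c - DOS-era sketch: reset the Sound Blaster DSP and read its version.
 * Assumes a legacy compiler providing inportb/outportb/delay (dos.h on Borland,
 * pc.h/dos.h on DJGPP) and a card at the conventional base address 0x220. */
#include <stdio.h>
#include <dos.h>

#define SB_BASE   0x220
#define SB_RESET  (SB_BASE + 0x6)
#define SB_READ   (SB_BASE + 0xA)
#define SB_WRITE  (SB_BASE + 0xC)
#define SB_STATUS (SB_BASE + 0xE)

static int dsp_read(void) {
    int i;
    for (i = 0; i < 10000; i++)          /* poll until the DSP has data for us */
        if (inportb(SB_STATUS) & 0x80)
            return inportb(SB_READ);
    return -1;
}

static void dsp_write(int value) {
    int i;
    for (i = 0; i < 10000; i++)          /* poll until the DSP can accept a byte */
        if (!(inportb(SB_WRITE) & 0x80)) { outportb(SB_WRITE, value); return; }
}

int main(void) {
    int major, minor;
    outportb(SB_RESET, 1);               /* classic DSP reset handshake */
    delay(1);
    outportb(SB_RESET, 0);
    if (dsp_read() != 0xAA) { printf("No DSP found at 0x%X\n", SB_BASE); return 1; }

    dsp_write(0xE1);                     /* command 0xE1: get DSP version */
    major = dsp_read();
    minor = dsp_read();
    printf("DSP version %d.%02d\n", major, minor);
    return 0;
}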
Also, if a game used DOS 32-bit protected mode through a non-DPMI compliant DOS extender, then the MPU-401 emulation would not function and the EMU8000 would not be used unless directly supported by the software. This did not affect the Creative Wave Blaster daughterboard header. AWE32's usage in Windows was simplified by the fact that Windows 3.1x had drivers which made the OPL3 and the EMU8000 appear like any other MIDI peripheral, on their own MIDI interfaces. CD-ROM interfaces Also on the AWE32 was a Panasonic/Sony/Mitsumi CD-ROM interface (to support proprietary non-ATAPI CD-ROM drives), the Wave Blaster header and two 30-pin SIMM slots to increase sample memory. Later Sound Blaster AWE32 revisions replaced the proprietary CD-ROM interfaces with the ATAPI interface. The Sound Blaster AWE32 supported up to 28 MB of additional SIMM memory. A maximum of 32 MB could be added to the Sound Blaster AWE32 but the synthesizer could not address all of it (4 MB of the EMU8000's address space was reserved for sample ROM). Model numbers The following model numbers were assigned to the Sound Blaster AWE32: CT27**: CT2760, CT2760 Rev3 (issues with wavetable db reported) CT36**: CT3601, CT3602, CT3603, CT3607, CT3630, CT3631, CT3632, CT3636, CT3660, CT3661, CT3662, CT3665, CT3666, CT3670, CT3680 CT37**: CT3780 CT39**: CT3900, CT3910, CT3919, CT3940, CT3960, CT3980, CT3990, CT3991, CT3999 CT43**: CT4330, CT4331, CT4332 Sound Blaster 32 The Sound Blaster 32 (SB32) was a value-oriented offering from Creative, announced on June 6, 1995, designed to fit below the AWE32 Value in the lineup. The SB32 lacked onboard RAM, the Wave Blaster header, and the CSP socket. The boards also used ViBRA integrated audio chips, which lacked adjustments for bass, treble, and gain (except ViBRA CT2502). The SB32 had the same MIDI capabilities as the AWE32, and had the same 30-pin SIMM RAM expansion capability. The board was also fully compatible with the AWE32 option in software and used the same Windows drivers. Once the SB32 was outfitted with 30-pin SIMMs, its sampler section performed identically to the AWE32's. OPL-3 support varied among the models: the CT3930 came with a Yamaha YMF262 OPL-3 FM synthesis chip, whereas most models featured CQM synthesis either integrated into the ViBRA chip or via an external CT1978 chip. The majority of Sound Blaster 32 cards used TDA1517 amplifiers. Some Sound Blaster 32 PnP cards with onboard 512 kB RAM were sold as AWE32 OEM in Dell computers. Model numbers The following model numbers were assigned to the Sound Blaster 32: CT36**: CT3600, CT3604, CT3605, CT3620, CT3640, CT3670, CT3671, CT3672, CT3681, CT3690 CT39**: CT3930 CT43**: CT4335, CT4336 Sound Blaster AWE32 Value The Sound Blaster AWE32 Value was another value-oriented offering. It lacked SIMM slots and the ASP processor, but featured 512kB onboard RAM and an (empty) ASP chip socket. References External links "Programming the Soundblaster AWE-32" "Soundblaster AWE-32 Driver Download" Creative Technology products Sound cards IBM PC compatibles Computer-related introductions in 1994
2313837
https://en.wikipedia.org/wiki/Graphical%20identification%20and%20authentication
Graphical identification and authentication
The graphical identification and authentication (GINA) is a component of Windows 2000, Windows XP and Windows Server 2003 that provides secure authentication and interactive logon services. GINA is a replaceable dynamically linked library that is loaded early in the boot process in the context of Winlogon when the machine is started. It is responsible for handling the secure attention sequence, typically Control-Alt-Delete, and interacting with the user when this sequence is received. GINA is also responsible for starting initial processes for a user (such as the Windows Shell) when they first log on. GINA is discontinued in Windows Vista. Overview A default GINA library, MSGINA.DLL, is provided by Microsoft as part of the operating system, and offers the following features: Authentication against Windows domain servers with a supplied user name/password combination. Displaying of a legal notice to the user prior to presenting the logon prompt. Automatic Logon, allowing for a user name and password to be stored and used in place of an interactive logon prompt. Automatic logon can also be configured to execute only a certain number of times before reverting to interactive logon. In older versions of Windows NT, the password could only be stored in plain text in the registry; support for using the Local Security Authority's private storage capabilities was introduced in Windows NT 4.0 Workstation Service Pack 3 and Windows NT Server 3.51. "Security Options" dialog when the user is logged on, which provides options to shut down, log off, change the password, start the Task Manager, and lock the workstation. Winlogon can be configured to use a different GINA, providing for non-standard authentication methods such as smart card readers or identification based on biometrics, or to provide an alternate visual interface to the default GINA. Developers who implement a replacement GINA are required to provide implementations for a set of API calls which cover functionality such as displaying a "workstation locked" dialog, processing the secure attention sequence in various user states, responding to queries as to whether or not locking the workstation is an allowed action, supporting the collection of user credentials on Terminal Services-based connections, and interacting with a screensaver. A custom GINA could be made entirely from scratch, or just be the original GINA with modifications. A custom GINA can be specified by placing a string named GinaDLL in the registry location HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon. The Winlogon component is solely responsible for calling these APIs in the GINA library. When the Winlogon process starts, it compares its version number to that which is supported by the loaded GINA library. If the GINA library is of a higher version than Winlogon, Windows will not boot. This is done because a GINA library written for a given version of Winlogon will expect a certain set of API calls to be provided by Winlogon. Support for replaceable GINA DLLs was introduced with Windows NT Server 3.51 and Windows NT Workstation 4.0 SP3. Successive versions of Windows have introduced additional functionality into Winlogon, resulting in additional functionality that can be implemented by a replacement GINA. Windows 2000, for example, introduced support for displaying status messages (including verbose messages that can be turned on through Group Policy) about the current state to the user (e.g. 
"Applying computer settings."), and starting applications in the user's context; this facilitates restarting Windows Explorer automatically if it crashes, as well as starting the Task Manager. Windows XP introduced support for Fast User Switching, Remote Desktop and a more interactive, simplified and user-friendly full-screen logon. End of life In Windows Vista, GINA has been replaced by credential providers, which allow for significantly increased flexibility in supporting multiple credential collection methods. To support the use of multiple GINA models, a complex chaining method used to be required and custom GINAs often did not work with fast user switching. GINA libraries do not work with Windows Vista and later Windows versions. One difference, however, is that GINA could completely replace the Windows logon user interface; Credential Providers cannot. See also List of Microsoft Windows components Winlogon Windows NT startup process References External links Winlogon and GINA, developer information on how the login components interact Customizing GINA Part 1, Developer tutorial for writing a custom GINA. Customizing GINA Part 2, Developer tutorial for writing a custom GINA. pGina, Open Source Windows Authentication Microsoft Windows security technology Windows components
16219268
https://en.wikipedia.org/wiki/6th%20Special%20Operations%20Squadron
6th Special Operations Squadron
The 6th Special Operations Squadron is part of the 492d Special Operations Wing at Duke Field, Florida. It is a combat aviation advisory unit, equipped with PZL C-145A Skytruck aircraft. Its mission is to assess, train, advise, and assist foreign aviation forces in airpower employment, sustainment and force integration. It has been active at the Eglin Air Force Base complex since 1994. The squadron was first activated in India during World War II as the 6th Fighter Squadron, Commando. The squadron served in combat in the China-Burma-India Theater until May 1945. It was activated again in 1962. In 1968, the squadron deployed to Vietnam, where it again flew combat missions, earning a Presidential Unit Citation and two Air Force Outstanding Unit Awards with Combat "V" Device before inactivating in 1969. From 1970 to 1974, as the 6th Special Operations Training Squadron, it trained aircrews for special operations, primarily in Southeast Asia. Mission The squadron is manned by combat aviation advisors, who are specially trained for the conduct of special operations activities by, with, and through foreign aviation forces. Its primary mission is to assess, train, advise and assist those forces. Squadron advisors help friendly and allied forces employ their airpower resources in joint (multi-service) and combined (multi-national) operations. The squadron also uses its C-145s for special operations taskings involving night vision infiltration, exfiltration, resupply and other combat taskings on unimproved runways. History World War II The squadron was first activated at Asansol Airfield, India in September 1944 as the 6th Fighter Squadron, Commando and equipped with Republic P-47 Thunderbolts. In its first months of operation, it flew from several stations in what are now India and Bangladesh, maintaining detachments at Cox's Bazar from 15 to 21 October 1944, 2 to 8 November 1944 and 11 to 18 January 1945, and from Fenny Airfield from 1 to 24 December 1944. The 6th flew combat missions in the China-Burma-India Theater of World War II starting on 17 October 1944. In 1945, the 6th converted to the North American P-51 Mustang, continuing to fly missions until 8 May 1945. The squadron left India in October 1945 and was inactivated upon arriving at the port of embarkation in November. In 1948, the Air Force disbanded the squadron along with its other fighter commando squadrons. Vietnam War In 1962, the squadron was reconstituted and activated at Eglin Air Force Base Auxiliary Airfield No. 9, Florida, where it was equipped with Douglas B-26 Invaders and North American T-28 Trojans. The 6th trained crews in counterinsurgency and unconventional warfare. It also flew demonstration flights for those tactics. Squadron personnel deployed to Vietnam, where they served as advisors to Vietnamese Air Force personnel at Bien Hoa Air Base. They also trained airmen from Latin America at Howard Air Force Base, Panama Canal Zone in counterinsurgency tactics. The squadron was reduced to an all T-28 unit in 1963. Many of the 6th's personnel formed cadres for new special operations units then being organized. By March 1964, the squadron manning had recovered to the point where it could deploy to Udorn Royal Thai Air Force Base, Thailand, to train air and ground crews in counterinsurgency operations. In 1966, the squadron was redesignated the 6th Air Commando Squadron, Fighter and moved to England Air Force Base along with its parent 1st Air Commando Wing. 
At England the squadron began to receive A-1 "Spad" aircraft to replace its Trojans. By December 1967, the last of the T-28s had been transferred. The unit deployed to Pleiku Air Base, Vietnam, in February 1968, where it was briefly assigned to the 14th Air Commando Wing until the Air Force formed the 633d Special Operations Wing at Pleiku in July, the same day the unit was renamed the 6th Special Operations Squadron. It began flying combat missions on 1 March 1968, including close air support for ground forces, air cover for transports flying Operation Ranch Hand missions, day and night interdiction missions, combat search and rescue support, armed reconnaissance, and forward air control missions. The unit was awarded a Presidential Unit Citation and two Air Force Outstanding Unit Awards with Combat "V" Device during its twenty-one month tour in Vietnam. It was inactivated under Operation Keystone Cardinal, the first reduction in United States Air Force combat forces as ceilings on forces in South Vietnam were reduced. It continued to fly combat missions until its inactivation, when its Douglas A-1 Skyraiders were transferred to the 56th Special Operations Wing, stationed in Thailand. The squadron returned to England Air Force Base on 8 January 1970 and was equipped with Cessna A-37 Dragonfly light attack aircraft. Its mission was replacement training of US Air Force and allied air force pilots on the Dragonfly. The squadron's training mission was reflected in a name change to the 6th Special Operations Training Squadron in August 1972. At England, the 6th was initially assigned to the 4410th Combat Crew Training Wing. As US activity in Southeast Asia drew down, so did the need to train pilots for the war. The 4410th was reduced to a group, and finally inactivated in July 1973, when the squadron returned to the control of the 1st Special Operations Wing, which had left England for Hurlburt Field in 1969. In January 1974, the squadron was assigned to the host wing at England, the 23d Tactical Fighter Wing, until it was inactivated in July. Combat Aviation Advisors The squadron was redesignated the 6th Special Operations Flight and activated at Hurlburt Field on 1 April 1994, when it absorbed the personnel of Detachment 7, Special Operations Combat Operations Staff, which had been organized in August 1993 to provide an aviation-related foreign internal defense capability. Detachment 7 had just made its first major foreign internal defense deployment, to Ecuador, the preceding month. By October 1994, the unit had grown and was renamed the 6th Special Operations Squadron once again. Two years later, on 11 October 1996, the squadron became a flying outfit when it received two Bell UH-1N Hueys. Since that time, the squadron has operated a number of US and foreign aircraft in its advisory role. Since 1994 the squadron has sent advisers to help US-allied forces employ and sustain their own airpower resources and, when necessary, integrate those resources into joint and multi-national operations. Until the activation of the 370th Air Expeditionary Advisory Squadron in Iraq in 2007, it was the "sole USAF unit whose primary mission encompassed the training-advising of host nation air forces." This mission often merged with counterinsurgency and foreign internal defense missions in host countries. 
The unit moved from Hurlburt Field to Duke Field in 2012, when the 711th Special Operations Squadron transitioned from the Lockheed MC-130E Combat Talon to the foreign internal defense role, the two units jointly assuming the new mission. "As the only two Air Force operational squadrons performing this mission, their deployment tempo is best described as continuous averaging around one deployment a month." In 2015, the 6th shares a building, flightline, PZL C-145 Skytruck aircraft and mission with Air Force Reserve Command's 711th Squadron at Duke Field. Starting early 2017, the 6th was also flying the Cessna 208 Caravan in an intelligence, surveillance and reconnaissance role. Lineage Constituted as the 6th Fighter Squadron, Commando on 22 September 1944 Activated on 30 September 1944 Inactivated on 3 November 1945 Disbanded on 8 October 1948 Reconstituted and activated on 18 April 1962 (not organized) Organized on 27 April 1962 Redesignated 6th Air Commando Squadron, Fighter on 15 June 1966 Redesignated 6th Special Operations Squadron on 15 July 1968 Inactivated on 15 November 1969 Activated on 8 January 1970 Redesignated 6th Special Operations Training Squadron on 31 August 1972 Inactivated on 15 September 1974 Redesignated 6th Special Operations Flight on 25 March 1994 Activated on 1 April 1994 Redesignated 6th Special Operations Squadron on 1 October 1994 Assignments 1st Air Commando Group, 30 September 1944 – 3 November 1945 (attached to 1st Provisional Fighter Group 7 February - 8 May 1945, 2d Air Commando Group, 23 May - 20 June 1945) Tactical Air Command, 18 April 1962 (not organized) 1st Air Commando Group (later 1st Air Commando Wing), 27 April 1962 14th Air Commando Wing, 29 February 1968 633d Special Operations Wing, 15 July 1968 – 15 November 1969 4410th Combat Crew Training Wing (later 4410th Special Operations Training Group), 8 January 1970 1st Special Operations Wing, 31 July 1973 23d Tactical Fighter Wing, 1 January 1974 – 15 September 1974 16th Operations Group, (later 1st Special Operations Group), 1 April 1994 Air Force Special Operations Training Center, 1 October 2012 Air Force Special Operations Air Warfare Center, 11 February 2013 492d Special Operations Wing, 10 May 2017 – present Stations Asansol Airfield, India, 30 September 1944 Hay, India, 7 February 1945 Asansol Airfield, India, 9 May 1945 Kalaikunda Airfield, India, 23 May 1945 Asansol Airfield, India, 22 June - 6 October 1945 Camp Kilmer, New Jersey, 1–3 November 1945 Eglin Air Force Base Auxiliary Airfield 9, Florida, 27 April 1962 England Air Force Base, Louisiana, 15 January 1966 – 17 February 1968 Pleiku Air Base, South Vietnam, 19 February 1968 – 15 November 1969 Detachment at Da Nang Air Base, South Vietnam, 1 April 1968 – 1 September 1969 England Air Force Base, Louisiana, 8 January 1970 – 15 September 1974 Hurlburt Field, Florida, 1 April 1994 Duke Field, Florida, 2012 – present Aircraft Republic P-47 Thunderbolt (1944–1945) North American P-51 Mustang (1945) Douglas B-26 Invader (1962–1963) Douglas RB-26 Invader (1962–1963) Helio L-28 (later Helio U-10 Courier) (1962–1963) North American T-28 Trojan (1962–1967) Douglas A-1 Skyraider (1963, 1966; 1967–1969) Cessna A-37 Dragonfly (1970–1974) Bell UH-1N Huey (1996–2012) CASA C-212 Aviocar (1998-unknown) Bell UH-1H Huey (1996–2012) Lockheed C-130 Hercules (1996–2012) Mil Mi-8 (2002–2012) Cessna 208 Caravan Beechcraft King Air 350 Eurocopter AS332 Super Puma Basler BT-67 (2002–2008) Mil Mi-17 (2002–2012) de Havilland Canada DHC-6 Twin Otter (2010–2012) 
Antonov An-26 (2003–2007) PZL C-145 Skytruck (2012 – present) References Notes Explanatory notes Citations Bibliography External links Military units and formations in Florida 006
2067339
https://en.wikipedia.org/wiki/Erik%20Brynjolfsson
Erik Brynjolfsson
Erik Brynjolfsson (born 1962) is an American academic, author and inventor. He is the Jerry Yang and Akiko Yamazaki Professor and a Senior Fellow at Stanford University where he directs the Digital Economy Lab at the Stanford Institute for Human-Centered AI, with appointments at SIEPR, the Stanford Department of Economics and the Stanford Graduate School of Business. He is also a Research Associate at the National Bureau of Economic Research and a best-selling author of several books. He is known for his contributions to the world of IT productivity research and work on the economics of information and the digital economy more generally. Biography Erik Brynjolfsson was born to Marguerite Reman Brynjolfsson and Ari Brynjolfsson, a nuclear physicist. He earned his A.B., magna cum laude, in 1984 and his S.M. in Applied Mathematics and Decision Sciences at Harvard University in 1984. He received a Ph.D. in Managerial Economics in 1991 from the MIT Sloan School of Management. Brynjolfsson served on the faculty of MIT from 1986 to 2020, where he was a Professor at the MIT Sloan School of Management and Director of the MIT Initiative on the Digital Economy, and Director of the MIT Center for Digital Business. Previously, he was at Harvard from 1985 to 1995 and Stanford from 1996 to 1998. In 2001 he was appointed the Schussel Family Professor of Management at the MIT Sloan School of Management. He lectures and consults worldwide, and serves on corporate boards. He taught the popular course 15.567, The Economics of Information: Strategy, Structure, and Pricing, at MIT and hosts a related blog, Economics of Information. He was also a contributing member of the winter 2004 Boston Ski and Sports Club (BSSC) championship flag football team. In February 2020, Stanford announced that Brynjolfsson would join its faculty in July. His research has been recognized with nine "best paper" awards by fellow academics, including the John DC Little Award for the best paper in Marketing Science. Brynjolfsson is the founder of two companies and has been awarded five U.S. patents. Along with Andrew McAfee, he was awarded the top prize in the Digital Thinkers category at the Thinkers 50 Gala on November 9, 2015. Brynjolfsson is of Icelandic descent. Work Brynjolfsson is one of the most widely cited scholars studying the economics of information systems. He was among the first researchers to measure the productivity contributions of IT and the complementary role of organizational capital and other intangibles. Brynjolfsson has done research on digital commerce, the Long Tail, bundling and pricing models, intangible assets and the effects of IT on business strategy, productivity and performance. More recently, in his books The Second Machine Age and Race Against the Machine, Brynjolfsson and his co-author Andrew McAfee have argued that technology is racing ahead, and called for greater efforts to update our skills, organizations and institutions more rapidly. Information technology and productivity Brynjolfsson wrote an influential review of the "IT Productivity Paradox" and in separate research, documented a correlation between IT investment and productivity. His work provides evidence that the use of Information Technology is most likely to increase productivity when it is combined with complementary business processes and human capital. Selected publications AI, machine learning and the economy Brynjolfsson Erik and Mitchell, Tom (December, 2017) What can machine learning do? Workforce implications Science. 
Brynjolfsson Erik, Syverson, Chad and Rock Daniel (2019) Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics National Bureau of Economic Research. Brynjolfsson, Erik, Hui, Xiang and Liu, Meng (2019). Does machine translation affect international trade? Evidence from a large digital platform Management Science. Brynjolfsson, Erik and Andrew McAfee. (2017) The Business of Artificial Intelligence: What It Can—and Cannot—Do for Your Organization. Harvard Business Review. Brynjolfsson, Erik and Andrew McAfee. (2017) "What's Driving the Machine Learning Explosion?" Harvard Business Review. Measuring the digital economy Brynjolfsson, Erik, Avinash Collis, and Felix Eggers. (March, 2019) Using Massive Online Choice Experiments to Measure Changes in Well-Being Proceedings of the National Academy of Sciences. Brynjolfsson, Erik and Avinash Collis. (2019) How Should We Measure the Digital Economy? Harvard Business Review, Computers, productivity and organizational capital McAfee, Andrew and Brynjolfsson, Erik (June, 2017) Machine, Platform, Crowd: Harnessing the Digital Revolution, W.W. Norton & Company, Brynjolfsson, Erik and McAfee, Andrew (January, 2014) The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies, W.W. Norton & Company, Brynjolfsson, Erik and McAfee, Andrew (October 2011) Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Digital Frontier Press. Brynjolfsson, Erik and Saunders, Adam (October 2009) Wired for Innovation: How Information Technology is Reshaping the Economy. The MIT Press. Wu, Lynn and Brynjolfsson, Erik (August 2013) "The Future of Prediction: How Google Searches Foreshadow Housing Prices and Sales". NBER Conference Technological Progress & Productivity Measurement, 2009; WISE, 2009; ICIS, 2009. Brynjolfsson, Erik and Hitt, Lorin (June 2003) "Computing Productivity: Firm-level Evidence,Review of Economics and Statistics. Brynjolfsson, Erik and Hitt, Lorin (Fall 2000) "Beyond Computation: Information Technology, Organizational Transformation and Business Performance," Journal of Economic Perspectives, Vol. 14, No. 4, pp. 23–48. Bresnahan, Timothy, Brynjolfsson, Erik and Hitt, Lorin (February, 2002) "Information Technology, Workplace Organization and the Demand for Skilled Labor: Firm Level Evidence" Quarterly Journal of Economics, Vol. 117, pp. 339–376. Brynjolfsson, Erik, Hitt, Lorin and Yang, Shinkyu (2002)"Intangible Assets: Computers and Organizational Capital", Brookings Papers on Economic Activity: Macroeconomics, 137–199. Bundling and pricing of information goods Bakos, Yannis and Brynjolfsson, Erik (December, 1999) "Bundling Information Goods: Pricing, Profits and Efficiency", Management Science, Vol. 45, No. 12, pp. 1613–1630. Bakos, Yannis and Brynjolfsson, Erik (January, 2000) "Bundling and Competition on the Internet", Marketing Science, Vol. 19, No. 1, pp. 63–82. Internet commerce and the long tail Brynjolfsson, Erik, Smith, Michael and Hu, Yu (November, 2003) "Consumer Surplus in the Digital Economy: Estimating the Value of Increased Product Variety at Online Booksellers," Management Science, Vol 49, No. 11. Brynjolfsson, Erik, Hu, Yu and Rahman, Mohammad (November, 2009) "Battle of the Retail Channels: How Product Selection and Geography Drive Cross-channel Competition", Management Science, Vol. 55, No. 11. 
Brynjolfsson, Erik, Hu, Yu and Simester, Dunan (2006) "Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales" References External links Brynjolfsson's Web Site with links to research papers. The Stanford Digital Economy Lab. Economics of Information Blog Profile in Business Week, September 29, 2003. ("If e-business had an oracle, Erik Brynjolfsson would be the anointed.") Profile in Optimize, October, 2005. (Brynjolfsson ranked second in research study of "most influential academics of business technology") Profile in Supply Chain Management, January, 2006. CIO Insight Interview, "Expert Voice: Erik Brynjolfsson on Organizational Capital" October, 2001. Profile in Informationweek, April 17, 2000. ("When it comes to explaining the relationship between IT and worker productivity—bandwagon jumpers like Federal Reserve chairman Alan Greenspan notwithstanding—the generally acknowledged expert in the field is Erik Brynjolfsson ...") TED Talk on the impact of technical change (TED2013) 1962 births Living people Harvard University alumni MIT Sloan School of Management alumni MIT Sloan School of Management faculty American people of Icelandic descent Harvard University faculty Stanford University faculty Information systems researchers
58222
https://en.wikipedia.org/wiki/Fujitsu
Fujitsu
is a Japanese multinational information and communications technology equipment and services corporation, established in 1935 and headquartered in Tokyo. As of 2021, Fujitsu is the world's sixth-largest IT services provider by annual revenue and the largest in Japan. The hardware offerings from Fujitsu consist mainly of personal and enterprise computing products, including x86, SPARC and mainframe compatible server products, although the corporation and its subsidiaries also offer a diversity of products and services in the areas of data storage, telecommunications, advanced microelectronics, and air conditioning. It has approximately 126,400 employees and its products and services are available in approximately 180 countries. Fortune named Fujitsu as one of the world's most admired companies and a Global 500 company. Fujitsu is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange; its Tokyo listing is a constituent of the Nikkei 225 and TOPIX 100 indices. History 1935 to 2000 Fujitsu was established on June 20, 1935, which makes it one of the oldest operating IT companies after IBM and before Hewlett Packard, under the name , as a spin-off of the Fuji Electric Company, itself a joint venture between the Furukawa Electric Company and the German conglomerate Siemens, which had been founded in 1923. Despite its connections to the Furukawa zaibatsu, Fujitsu escaped the Allied occupation of Japan after the Second World War mostly unscathed. In 1954, Fujitsu manufactured Japan's first computer, the FACOM 100 mainframe, and in 1961 launched its second-generation (transistorized) computer, the FACOM 222 mainframe. The 1968 FACOM230 "5" Series marked the beginning of its third generation of computers. Fujitsu offered mainframe computers from 1955 until at least 2002. Fujitsu's computer products have also included minicomputers, small business computers, servers and personal computers. In 1955, Fujitsu founded Kawasaki Frontale as a company football club; Kawasaki Frontale has been a J. League football club since 1999. In 1967, the company's name was officially changed to the contraction . Since 1985, the company has also fielded a company American football team, the Fujitsu Frontiers, who play in the corporate X-League, have appeared in 7 Japan X Bowls, winning two, and have won two Rice Bowls. In 1971, Fujitsu signed an OEM agreement with the Canadian company Consolidated Computers Limited (CCL) to distribute CCL's data entry product, Key-Edit. Fujitsu joined both International Computers Limited (ICL), which had earlier begun marketing Key-Edit in the British Commonwealth countries as well as in both western and eastern Europe, and CCL's direct marketing staff in Canada, the USA, London (UK) and Frankfurt. Mers Kutt, inventor of Key-Edit and founder of CCL, was the common thread that led to Fujitsu's later association with ICL and Gene Amdahl. In 1986, Fujitsu and The Queen's University of Belfast business incubation unit (QUBIS Ltd) established a joint venture called Kainos, a privately held software company based in Belfast, Northern Ireland. In 1990, Fujitsu acquired 80% of the UK-based computer company ICL for $1.29 billion. In September 1990, Fujitsu announced the launch of a new series of mainframe computers which were at that time the fastest in the world. In July 1991, Fujitsu acquired more than half of the Russian company KME-CS (Kazan Manufacturing Enterprise of Computer Systems). In 1992, Fujitsu introduced the world's first 21-inch full-color plasma display. 
It was a hybrid, based upon the plasma display created at the University of Illinois at Urbana-Champaign and NHK STRL, achieving superior brightness. In 1993, Fujitsu formed a flash memory manufacturing joint venture with AMD, Spansion. As part of the transaction, AMD contributed its flash memory group, Fab 25 in Texas, its R&D facilities and assembly plants in Thailand, Malaysia and China; Fujitsu provided its flash memory business division and the Malaysian Fujitsu Microelectronics final assembly and test operations. From February 1989 until mid-1997, Fujitsu built the FM Towns PC variant. It started as a proprietary PC variant intended for multimedia applications and computer games, but later became more compatible with regular PCs. In 1993, the FM Towns Marty was released, a gaming console compatible with the FM Towns games. Fujitsu agreed to acquire the 58 percent of Amdahl Corporation (including the Canada-based DMR consulting group) that it did not already own for around $850 million in July 1997. In April 1997, the company acquired a 30 percent stake in GLOVIA International, Inc., an El Segundo, Calif., manufacturing ERP software provider whose software it had begun integrating into its electronics plants starting in 1994. In June 1999 Fujitsu's historical connection with Siemens was revived, when the two companies agreed to merge their European computer operations into a new 50:50 joint venture called Fujitsu Siemens Computers, which became the world's fifth-largest computer manufacturing company. 2000 to 2020 In April 2000, Fujitsu acquired the remaining 70% of GLOVIA International. In April 2002 ICL re-branded itself as Fujitsu. On March 2, 2004, Fujitsu Computer Products of America lost a class action lawsuit over hard disk drives with defective chips and firmware. In October 2004, Fujitsu acquired the Australian subsidiary of Atos Origin, a systems implementation company with around 140 employees which specialized in SAP. In August 2007, Fujitsu signed a £500 million, 10-year deal with Reuters Group under which Reuters outsourced the majority of its internal IT department to Fujitsu. As part of the agreement, around 300 Reuters staff and 200 contractors transferred to Fujitsu. In October 2007, Fujitsu announced that it would be establishing an offshore development centre in Noida, India with a capacity to house 1,200 employees, in an investment of US$10 million. In October 2007, Fujitsu's Australia and New Zealand subsidiary acquired Infinity Solutions Ltd, a New Zealand-based IT hardware, services and consultancy company, for an undisclosed amount. In January 2009, Fujitsu reached an agreement to sell its HDD business to Toshiba. Transfer of the business was completed on October 1, 2009. In March 2009, Fujitsu announced that it had decided to convert FDK Corporation, at that time an equity-method affiliate, to a consolidated subsidiary from May 1, 2009 (tentative schedule) by subscribing to a private placement to increase FDK's capital. On April 1, 2009, Fujitsu agreed to acquire Siemens' stake in Fujitsu Siemens Computers for approximately EUR450m. Fujitsu Siemens Computers was subsequently renamed Fujitsu Technology Solutions. In April 2009, Fujitsu acquired Australian software company Supply Chain Consulting in a $48 million deal, just weeks after purchasing the Telstra subsidiary Kaz for $200 million. 
Facing a forecast net loss of 95 billion yen in the year ending March 2013, in February 2013 Fujitsu announced it would cut 5,000 jobs, of which 3,000 would be in Japan and the rest overseas, from its 170,000 employees. Fujitsu also merged its large-scale integration (LSI) chip design business with that of Panasonic Corporation, resulting in the establishment of Socionext. In 2014, after severe losses, Fujitsu spun off its LSI chip manufacturing division as well, as Mie Fujitsu Semiconductor, which was later bought in 2018 by United Semiconductor Japan Co., Ltd., wholly owned by United Microelectronics Corporation. In 2015, Fujitsu celebrated 80 years since its establishment, at a time when its IT business embarked upon the Fujitsu 2015 World Tour, which included 15 major cities globally and was visited by over 10,000 IT professionals, with Fujitsu presenting its take on the future of Hyper Connectivity and Human Centric Computing. In April 2015 GLOVIA International was renamed FUJITSU GLOVIA, Inc. In November 2015, Fujitsu Limited and VMware announced new areas of collaboration to empower customers with flexible and secure cloud technologies. It also acquired USharesoft, which provides enterprise-class application delivery software for automating the build, migration and governance of applications in multi-cloud environments. In January 2016, Fujitsu Network Communications Inc. announced a new suite of layered products to advance software-defined networking (SDN) for carriers, service providers and cloud builders. Virtuora NC, based on open standards, is described by Fujitsu as "a suite of standards-based, multi-layered, multi-vendor network automation and virtualization products" that "has been hands-on hardened by some of the largest global service providers." In 2019, Fujitsu started to deliver 5G telecommunications equipment to NTT Docomo, along with NEC. In March 2020, Fujitsu announced the creation of a subsidiary, later named Fujitsu Japan, that would enable the company to expand its business in the Japanese IT services market. In June 2020, Fugaku, co-developed with the RIKEN research institute, was declared the most powerful supercomputer in the world. The performance capability of Fugaku is 415.53 PFLOPS with a theoretical peak of 513.86 PFLOPS. It is three times faster than the previous champion. Fugaku also ranked first in categories that measure computational methods performance for industrial use, artificial intelligence applications, and big data analytics. The supercomputer is located in a facility in Kobe. In June 2020, Fujitsu developed an artificial intelligence monitor that can recognize complex hand movements, built on its crime surveillance technology. The AI is designed to check whether the subject completes the proper hand-washing procedure, based on the guidelines issued by the WHO. In September 2020, Fujitsu introduced software-defined storage technology that incorporates Qumulo hybrid cloud file storage software to enable enterprises to unify petabytes of unstructured data from disparate locations, across multiple data centers and the cloud. Operations Fujitsu Laboratories Fujitsu Laboratories, Fujitsu's Research and Development division, has approximately 900 employees and a capital of JP¥5 billion. The current CEO is Hirotaka Hara. In 2012, Fujitsu announced that it had developed new technology for non-3D camera phones. The technology would allow camera phones to take 3D photos. 
Fujitsu Electronics Europe GmbH Fujitsu Electronics Europe GmbH entered the market as a global distributor on January 1, 2016. Fujitsu Consulting Fujitsu Consulting is the consulting and services arm of the Fujitsu group, providing information technology consulting, implementation and management services. Fujitsu Consulting was founded in 1973 in Montreal, Quebec, Canada, under its original name "DMR" (an acronym of the three founders' names: Pierre Ducros, Serge Meilleur and Alain Roy). During the next decade, the company established a presence throughout Quebec and Canada, before extending its reach to international markets. Over nearly thirty years, DMR Consulting grew to become an international consulting firm, changing its name to Fujitsu Consulting in 2002 after being acquired by Fujitsu Ltd. Fujitsu operates a division of the company in India, resulting from the acquisition of the North America-based company Rapidigm. It has offshore divisions at Noida, Pune, Hyderabad, Chennai and Bangalore, with Pune being the head office. Fujitsu Consulting India launched its second $10 million development center at Noida in October 2007, a year after starting operations in the country. Following the expansion plan, Fujitsu Consulting India launched its fourth development center in Bengaluru in November 2011. Fujitsu General Fujitsu Ltd. has a 42% shareholding in Fujitsu General, which manufactures and markets various air conditioning units and humidity control solutions under the General & Fujitsu brands. In India, the company has ended its long-standing joint venture agreement with the Dubai-based ETA group and henceforth operates under a wholly owned subsidiary, Fujitsu General (India) Pvt Ltd, which was earlier known as ETA General. PFU Limited PFU Limited, headquartered in Ishikawa, Japan, is a wholly owned subsidiary of Fujitsu Limited. PFU Limited was established in 1960, has approximately 4,600 employees globally, and in 2013 turned over 126.4 billion yen (US$1.2 billion). PFU manufactures interactive kiosks, keyboards, network security hardware, embedded computers and imaging products (document scanners), all under the PFU or Fujitsu brand. In addition to hardware, PFU also produces desktop and enterprise document capture software and document management software products. PFU has overseas sales and marketing offices in Germany (PFU Imaging Solutions Europe Limited), Italy (PFU Imaging Solutions Europe Limited), the United Kingdom (PFU Imaging Solutions Europe Limited) and the United States of America (Fujitsu Computer Products of America Inc). PFU Limited is responsible for the design, development, manufacture, sales and support of document scanners, which are sold under the Fujitsu brand. Fujitsu is a market leader in professional document scanners with its best-selling fi-series, ScanSnap and ScanPartner product families, as well as its PaperStream IP, PaperStream Capture, ScanSnap Manager, ScanSnap Home, CardMinder, Magic Desktop and Rack2Filer software products. Fujitsu Glovia, Inc. Fujitsu Glovia, a wholly owned subsidiary of Fujitsu Ltd., is a discrete manufacturing enterprise resource planning software vendor based in El Segundo, California, with international operations in the Netherlands, Japan and the United Kingdom. The company offers on-premise and cloud-based ERP manufacturing software under the Glovia G2 brand, and software as a service (SaaS) under the brand Glovia OM. 
The company was established in 1970 as Xerox Computer Services, where it developed inventory, manufacturing and financial applications. Fujitsu acquired 30 percent of the renamed Glovia International in 1997 and the remaining 70 percent stake in 2000. Fujitsu Client Computing Limited Fujitsu Client Computing Limited (FCCL), headquartered in Kawasaki, Kanagawa, the city where the company was founded, is the division of Fujitsu responsible for research, development, design, manufacturing and sales of consumer PC products. Formerly a wholly owned subsidiary, FCCL was spun off in November 2017 into a joint venture with Lenovo and the Development Bank of Japan (DBJ). The new company retains the same name, and Fujitsu is still responsible for sales and support of the products; however, Lenovo owns a majority stake at 51%, while Fujitsu retains 44%. The remaining 5% stake is held by DBJ. Fujitsu Network Communications, Inc. Fujitsu Network Communications, Inc., headquartered in Richardson, Texas, United States, is a wholly owned subsidiary of Fujitsu Limited. Established in 1996, Fujitsu Network Communications specializes in building, operating, and supporting optical and wireless broadband and telecommunications networks. The company's customers include telecommunications service providers, internet service providers, cable companies, utilities, and municipalities. Fujitsu Network Communications provides multivendor solutions that integrate equipment from more than one manufacturer, as well as manufacturing its own network equipment in its Richardson, Texas manufacturing facility. The Fujitsu Network Communications optical networking portfolio includes the 1FINITY and FLASHWAVE hardware platforms, Virtuora cloud software solutions, and NETSMART network management and design tools. The company also builds networks that comply with various next-generation technologies and initiatives, including the Open ROADM MSA, the O-RAN Alliance, and the Telecom Infra Project. Products and services Computing products Fujitsu's computing product lines include: Relational database: Fujitsu Enterprise Postgres. Fujitsu has more than 35 years' experience in database development and is a "major contributor" to open-source Postgres. Fujitsu engineers have also developed an enterprise Postgres version called Fujitsu Enterprise Postgres. Fujitsu Enterprise Postgres benefits include enterprise support; warranted code; high-availability enhancements; security enhancements (end-to-end transparent data encryption, data masking, auditing); performance enhancements (an In-Memory Columnar Index provides support for HTAP (hybrid transactional/analytical processing) workloads); high-speed backup and recovery; high-speed data load; Global Metacache (improved memory management); and Oracle compatibility extensions (to assist migration from Oracle to Postgres). Fujitsu Enterprise Postgres can be deployed on x86 (Linux, Windows) and IBM Z/IBM LinuxONE; it is also packaged as a Red Hat OpenShift (OCP) container. PRIMERGY and ETERNUS: Fujitsu PRIMERGY servers and ETERNUS storage systems are distributed by TriTech Distribution Limited in Hong Kong. In May 2011, Fujitsu decided to enter the mobile phone space again, with Microsoft announcing plans that Fujitsu would release Windows Phone devices. LIFEBOOK, AMILO: Fujitsu's range of notebook computers and tablet PCs. Cloud computing Fujitsu offers a public cloud service delivered from data centers in Japan, Australia, Singapore, the United States, the United Kingdom and Germany based on its Global Cloud Platform strategy announced in 2010. 
The platform delivers Infrastructure-as-a-Service (IaaS) – virtual information and communication technology (ICT) infrastructure, such as servers and storage functionality – from Fujitsu's data centers. In Japan, the service was offered as the On-Demand Virtual System Service (OViSS) and was then launched globally as the Fujitsu Global Cloud Platform/S5 (FGCP/S5). Since July 2013 the service has been called IaaS Trusted Public S5. Globally, the service is operated from Fujitsu data centers located in Australia, Singapore, the United States, the United Kingdom, Germany and Japan. Fujitsu has also launched a Windows Azure-powered Global Cloud Platform in a partnership with Microsoft. This offering, delivering Platform-as-a-Service (PaaS), was known as FGCP/A5 in Japan but has since been renamed FUJITSU Cloud PaaS A5 for Windows Azure. It is operated from a Fujitsu data center in Japan. It offers a set of application development frameworks, such as Microsoft .NET, Java and PHP, and data storage capabilities consistent with the Windows Azure platform provided by Microsoft. The basic service consists of compute, storage, Microsoft SQL Azure, and Windows Azure AppFabric technologies such as Service Bus and Access Control Service, with options for interoperating services covering implementation and migration of applications, system building, systems operation, and support. In April 2013, Fujitsu acquired RunMyProcess, a cloud-based integration platform-as-a-service (PaaS) specializing in workflow automation and business application development. Fujitsu offers local cloud platforms, such as in Australia, that provide the ability to rely on its domestic data centers, which keep sensitive financial data under local jurisdiction and compliance standards. Microprocessors Fujitsu produces the SPARC-compliant CPU (SPARClite); the "Venus" 128-GFLOPS SPARC64 VIIIfx model is included in the K computer, the world's fastest supercomputer in June 2011 with a rating of over 8 petaflops. In October 2011, K became the first computer to top 10 petaflops. This speed was achieved in testing on October 7–8, and the results were then presented at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) in November 2011. The Fujitsu FR, FR-V and ARM architecture microprocessors are widely used, additionally in ASICs and application-specific standard products (ASSPs) such as the Milbeaut, with customer variants named Nikon Expeed. These product lines were acquired by Spansion in 2013. Advertising The old slogan "The possibilities are infinite" can be found below the company's logo on major advertisements and ties in with the small logo above the letters J and I of the word Fujitsu. This smaller logo represents the symbol for infinity. As of April 2010, Fujitsu was in the process of rolling out a new slogan focused on entering into partnerships with its customers and retiring the "possibilities are infinite" tagline. The new slogan is "shaping tomorrow with you". Criticism Fujitsu operated the Horizon IT system central to the litigation between the Post Office and its sub-postmasters. The case, settled in December 2019, found that the IT system was unreliable and that faults in the system caused discrepancies in branch accounts which were not due to the postmasters themselves. Mr Justice Fraser, the judge hearing the case, noted that Fujitsu had given "wholly unsatisfactory evidence" and there had been a "lack of accuracy on the part of Fujitsu witnesses in their evidence". 
Following his concerns, Fraser sent a file to the Director of Public Prosecutions. Environmental record Fujitsu reports that all its notebook and tablet PCs released globally comply with the latest Energy Star standard. Greenpeace's Cool IT Leaderboard of April 2013, which "examines how IT companies use their considerable influence to change government policies that will drive clean energy deployment", ranked Fujitsu 4th out of 21 leading manufacturers, noting that the company had "developed case study data of its solutions with fairly transparent methodology, and is the leading company in terms of establishing ambitious and detailed goals for future carbon savings from its IT solutions." Awards In 2021 Fujitsu Network Communications won the Optica Diversity & Inclusion Advocacy Recognition for "their investment in programs and initiatives celebrating and advancing Black, LGBTQ+ and women employees in pursuit of greater inclusion and equality within their company and the wider community." See also List of computer system manufacturers List of semiconductor fabrication plants See the World by Train, a daily Japanese TV mini-programme sponsored by Fujitsu since 1987 References External links Wiki collection of bibliographic works on Fujitsu 1935 establishments in Japan Cloud computing providers Companies listed on the Tokyo Stock Exchange Consumer electronics brands Defense companies of Japan Display technology companies Electronics companies established in 1935 Electronics companies of Japan Furukawa Group Heating, ventilation, and air conditioning companies Japanese brands Manufacturing companies based in Tokyo Mobile phone manufacturers Multinational companies headquartered in Japan Point of sale companies Software companies based in Tokyo Technology companies of Japan Telecommunications companies based in Tokyo Computer enclosure companies Japanese companies established in 1935
https://en.wikipedia.org/wiki/Giraffe%20radar
Giraffe radar
The Saab (formerly Ericsson Microwave Systems AB) Giraffe radar is a family of land and naval two- or three-dimensional G/H-band (4 to 8 GHz) passive electronically scanned array radar-based surveillance and air defense command and control systems tailored for operations with medium- and short-range air defense (SHORAD) missile or gun systems, or for use as gap-fillers in a larger air defense system. The radar gets its name from the distinctive folding mast which, when deployed, allows the radar to see over nearby terrain features such as trees, extending its effective range against low-level air targets. The first systems were produced in 1977. By 2007, some 450 units of all types were reported as having been delivered. The Military Technical Institute Belgrade purchased a licence for the Giraffe 75 and produces a new model with several modifications; the domestic Serbian designation is M85 "Žirafa", on a FAP 2026 chassis. Saab Electronic Defence Systems (EDS) in May 2014 unveiled two new classes of active electronically scanned array (AESA) radar—three land-based systems (Giraffe 1X, Giraffe 4A and Giraffe 8A) and two naval variants (Sea Giraffe 1X and Sea Giraffe 4A) in X- and S-band frequencies—to complement its existing surface radar portfolio. Description Giraffe is a family of G/H-band (formerly C-band) frequency-agile, low-to-medium-altitude pulse-Doppler air search radars and combat control centers which can be used in mobile or static short-to-medium-range air defense applications. Giraffe is designed to detect low-altitude, low-cross-section aircraft targets in conditions of severe clutter and electronic countermeasures. When equipped as an air-defense command center, Giraffe provides an air picture to each firing battery using man-portable radio communication. Giraffe uses Agile Multi-Beam (AMB), which includes an integrated command, control and communication (C3) system. This enables Giraffe to act as the command and control center in an air defense system; it can also be integrated into a sensor net for greater coverage. It is normally housed in a single 6 m-long shelter mounted on an all-terrain vehicle for high mobility. Additionally, the shelter can be augmented with nuclear, biological and chemical protection and light layers of armor to protect against small arms and fragmentation threats. Variants Passive electronically-scanned array Giraffe 40 This is a short-range ( instrumented) air defense radar with command and control capability. It employs a folding antenna mast that extends to a height of when deployed and can be integrated with an Identification Friend or Foe (IFF) capability. Coverage is stated to be from ground level to in altitude. In Swedish service the radar is designated PS-70 and PS-701 and provides target data to RBS-70 SHORAD missiles and 40 mm Bofors guns. A more powerful version with a 60 kW transmitter is known commercially as Super Giraffe and in Swedish service as PS-707. These radars are no longer marketed. Giraffe 50AT This is the model used in the Norwegian NALLADS air defense system, which combines the radar and RBS-70 missiles with 20 mm anti-aircraft guns to provide low-level air defense for the combat brigades of the Norwegian army. Mounted on a BV-206 all-terrain tracked vehicle, this version has an instrumented range of . The antenna extends to a height of and the system can control up to 20 firing units of guns or missiles or a combination of both. 
The command and control system features fully automatic track initiation, target tracking, target identification (IFF), target classification and designation, hovering-helicopter detection, threat evaluation and handling of "pop-up" targets. It can also exchange data with Giraffe 75 or AMB systems as part of a larger network. Giraffe 75 This features a antenna mast and is normally carried on a 6x6 5-ton cross-country truck which carries the radar and the command and control shelter. Instrumented range is and altitude coverage extends from ground level to . An optional add-on unit extends the radar's coastal defense capabilities. In Swedish service the radar is designated PS-90. In the Greek Air Force, the Giraffe 75 is used in combination with Contraves (now Rheinmetall Defence) Skyguard/Sparrow fire-control systems; one Giraffe typically controls two Skyguard systems, each with two twin 35 mm GDF-005 guns and two Sparrow surface-to-air missile launchers. Giraffe S Optimized as a mobile radar for unmanned, remote-controlled applications as a "gap-filler" in air defense early-warning systems, concentrating on small, low-flying targets over a long distance. It can also be employed as a coastal surveillance radar where the targets are small surface vessels and sea-skimming missiles or aircraft. A new antenna extends range coverage to , with altitude coverage from ground level to . The antenna mast extends to . Giraffe AMB Giraffe Agile Multi Beam is a digital antenna array radar providing multi-beam three-dimensional air coverage at 5.4 to 5.9 GHz, with instrumented ranges of , and ; altitude coverage extends from ground level to , with 70-degree elevation coverage. The data rate is one scan per second. Its maintained pulse density suppresses heavy clutter in adverse weather conditions. Ultra-low antenna side-lobes combined with pulse-to-pulse and burst-to-burst frequency agility provide some resistance to jamming. As in previous Giraffe radars, automatic hovering-helicopter detection is provided, as is a rocket, artillery and mortar locating function, allowing the radar to detect incoming rounds and give 20 seconds or more of warning before impact. Giraffe AMB is the principal sensor of the Swedish RBS 23 BAMSE air defense missile system but is available for many other applications. The Giraffe AMB can be delivered with ground surveillance options fitted. A skilled crew can deploy the radar in around 10 minutes and recover it in around 6 minutes. ARTE 740 This is a coastal defense radar based on the Giraffe 75 antenna and Giraffe AMB processing system, optimized for surface and low-altitude coverage for the Swedish Amphibious Forces (formerly the Coastal Artillery). It is mounted on a MOWAG Piranha 10x10 armored vehicle. Six systems are in service. Sea Giraffe AMB Saab's Sea Giraffe AMB is the naval variant of the Giraffe radar with 3D AMB technology. It can detect air and surface targets from the horizon up to a height of at elevations up to 70°, and can simultaneously handle multiple threats approaching from different directions and altitudes, including diving anti-ship missiles. It is also specialized for rapidly detecting small, fast-moving targets at all altitudes and small surface targets in severe clutter. Sea Giraffe AMB is installed on the Republic of Singapore Navy's upgraded Victory-class corvettes and the US Navy's Independence class of littoral combat ships, and has the designation AN/SPS-77 V(1) for LCS 2 and 4, and AN/SPS-77 V(2) for LCS 6 and higher. 
It has also been chosen for the Royal Canadian Navy's new Protecteur-class joint support ships. The radar has an instrumented range of 180 kilometres. Its roles include:
Air surveillance and tracking
Surface surveillance and tracking
Target identification for weapon systems
High-resolution splash spotting
Active electronically-scanned array Saab Electronic Defence Systems (EDS) in May 2014 unveiled two new classes of active electronically scanned array (AESA) radar—three land-based systems (Giraffe 1X, Giraffe 4A and Giraffe 8A) and two naval variants (Sea Giraffe 1X and Sea Giraffe 4A). Giraffe 8A At the top end of the range is the Giraffe 8A, a long-range IEEE S-band (NATO E/F) 3D sensor that can be produced in fixed, transportable and fully mobile configurations. Intended primarily for remote operation as part of an integrated air defence network, Giraffe 8A can also be operated locally. It has an instrumented range of 470 km and an altitude capability of more than 40,000 m, bringing true long-range air defence capability to the Saab radar family for the first time. Giraffe 8A produces 15 stacked beams to provide elevation coverage from ground level to more than 65°. It can operate in a continuous 360° scan mode, rotating mechanically at 24 rpm, or can be steered electronically across an operator-specified sector of 40° to 100°. More than 1,000 air defence tracks can be maintained, and the system also has anti-ballistic-missile capability, in which case more than 100 tracks can be followed. Saab has paid special attention to Giraffe 8A's electronic counter-countermeasures properties. The radar generates very low sidelobes and incorporates sophisticated frequency agility in pulse-to-pulse, burst-to-burst and scan-to-scan regimes. It also switches and staggers the pulse repetition frequency and transmits random jitter to further confuse countermeasures. It automatically selects the least-jammed frequencies and can transmit intermittently or randomly. The radar offers a passive detection and tracking capability against jammers. Giraffe 4A While the Giraffe 8A occupies the high end of the family, Saab has introduced new radars in the medium-range category in the form of Giraffe 4A and, for naval use, Sea Giraffe 4A. Employing similar S-band technology to the larger radar, Giraffe 4A offers true 3D multirole capability, combining the air defence and weapon-locating tasks in a single unit. Able to be airlifted in a single C-130 load, Giraffe 4A can be deployed by two people in less than 10 minutes. It can also operate as a standalone system. The Swedish armed forces designation for the G4A radar is PM24. Giraffe 1X To complete its new line-up, Saab has introduced two short-range radars, Giraffe 1X and Sea Giraffe 1X. Working in the IEEE X-band (NATO I-band), Giraffe 1X is intended primarily as a highly mobile radar that can work with very-short-range air defence systems in the battlefield or at sea. Weighing less than 300 kg, Giraffe 1X can be mounted on a small vehicle or vessel or in fixed installations such as on a building or a mast. The radar has a sense-and-warn function and can be optionally configured for weapon location. Users
: Sea Giraffe AMB G-band 3-D surveillance radar will equip MEKO A-200 frigates for the Algerian National Navy.
: Sea Giraffe installed on Canberra-class landing helicopter dock ships and ordered as a ground-based system.
: In use by the Marine Corps since 1989, in the 50AT version, with a BV-206D tractor. To be replaced by the Saber M60.
: Sea Giraffe is used on s.
: 5 systems in service, but also holds a production licence and technology transfer for the system, acquired during and after the Croatian Homeland War.
: Giraffe AMB - 5 mobile truck-mounted units used by the Estonian Air Defence Battalion.
: Jantronic J-1000 target acquisition systems with Ericsson Giraffe Mk IV radars on an XA-182 Pasi APC. Sea Giraffe installed on four Rauma-class missile boats.
: Giraffe AMB in use by the French Air Force.
: Indonesian Army.
: Irish Army, Giraffe Mk IV on BV 206.
: Sea Giraffe installed on Lekiu-class frigates. Giraffe 40 used by the Malaysian Army.
: Norwegian Army, Giraffe Mk IV on BV 206.
: Sea Giraffe AMB radars to be installed on the Gregorio del Pilar-class frigates.
: Sea Giraffe is installed on Orkan-class fast attack craft.
: Produces a domestically upgraded and modernized M-85 Žirafa variant, based on a licence purchased by the Military Technical Institute Belgrade.
: Giraffe S and AMB in service with the Republic of Singapore Air Force's air-defence radar network; Sea Giraffe AMB aboard the Republic of Singapore Navy's Victory-class corvettes.
: Used by the South African National Defence Force.
: Used historically in large numbers by both the Army and the Navy, with most versions starting with the PS-70 and today the Giraffe AMB, both on land and on the Visby-class corvettes. The new 4A radar is planned to be acquired for the Army's anti-aircraft battalions when they switch from HAWK to Patriot missile systems.
: Giraffe S and AMB used by the Royal Thai Navy.
: UAE Navy Baynunah-class corvettes also use Sea Giraffe.
: The British Army and Royal Air Force jointly operate the G-AMB radar in 49 (Inkerman) Battery Royal Artillery.
: Sea Giraffe AMB installed on the Independence-class littoral combat ships as AN/SPS-77(V)1 and AN/SPS-77(V)2.
: Giraffe 75, under control of the Aerospace Defense Command (FANB).
See also ARTHUR References Citations Bibliography External links Older official website for Giraffe radar Official website for Giraffe AMB Official website for Sea Giraffe AMB maritime-index.com radartutorial.eu naval-technology.com Ground radars Military radars of Sweden Military equipment introduced in the 1970s
https://en.wikipedia.org/wiki/Ghost%20%28operating%20system%29
Ghost (operating system)
Ghost OS is an open-source hobbyist operating system and kernel. It has been under development since 2014 and is currently compatible with the x86 platform. The system is based on a microkernel and features symmetric multi-processing and multitasking. Most of the kernel and system programs are written in C++. Design The architectural concept is a microkernel design. Many of the functionalities that are usually integrated in the kernel in a monolithic or hybrid system are implemented as user-level applications. Drivers and some vital components (like the executable loader) run as such processes. This approach tries to improve stability and avoid crashes due to faulty accesses, hardware misuse or memory corruption. There is a userspace spawner process used to load executables. The current implementation supports static 32-bit ELF binaries. Dynamic linking is not supported yet. The kernel provides an application programming interface that is used for all inter-process communication and system commands. Driver processes access this interface to manage memory or request direct resource access. The interface functions are C-compatible. Library support A custom implementation of the libc is provided. This implementation incorporates the libm from the musl C library. libstdc++ is available by default when setting up the Ghost-specific compiler toolchain. POSIX compatibility The system is partially POSIX.1 compatible. This was introduced to allow porting of third-party software, especially from the GNU environment, which heavily depends on standard C and POSIX functions. See also ToaruOS – hobby operating system by K. Lange References Free software operating systems Free software programmed in C++ Hobbyist operating systems Microkernel-based operating systems Self-hosting software X86 operating systems Operating system distributions bootable from read-only media
https://en.wikipedia.org/wiki/Spectravideo
Spectravideo
Spectravideo International (SVI) was an American computer manufacturer and software house. It was originally called SpectraVision, a company founded by Harry Fox in 1981. The company produced video games and other software for the VIC-20 home computer, the Atari 2600 home video game console, and its CompuMate peripheral. Some of their own computers were compatible with the Microsoft MSX standard or the IBM PC. The company ceased operations in 1988. History SpectraVision was founded in 1981 by Harry Fox and Alex Weiss as a distributor of computer games, contracting external developers to write the software. Their main products were gaming cartridges for the Atari 2600 VCS, ColecoVision and Commodore VIC-20. They also made the world's first ergonomic joystick, the QuickShot. In late 1982 the company was renamed Spectravideo due to a naming conflict with On Command Corporation's hotel TV system, SpectraVision. In the early 1980s, the company developed 11 games for the Atari 2600, including several titles of some rarity: Chase the Chuckwagon, Mangia and Bumper Bash. A few of their titles were only available through the Columbia House music club. The company's first attempt at a computer was an add-on for the Atari 2600 called the Spectravideo CompuMate, with a membrane keyboard and very simple programmability. Spectravideo's first real computers were the SV-318 and SV-328, released in 1983. Both were powered by a Z80A at 3.6 MHz, but differed in the amount of RAM (the SV-318 had 32 KB and the SV-328 had 80 KB total, of which 16 KB was reserved for video) and keyboard style. The main operating system, residing in ROM, was a version of Microsoft Extended BASIC, but if the computer was equipped with a floppy drive, the user had the option to boot with CP/M instead. These two computers predated MSX and were not fully compatible with the standard, though the changes made to their design to create MSX were minor. The system had a wide range of optional hardware, for example an adapter making it possible to run ColecoVision games on the SVI. Spectravideo also created the QuickShot SVI-2000 Robot Arm, which could be connected to a Commodore 64 user port or be controlled stand-alone with two joysticks. In May 1983, Spectravideo went public with the sale of 1 million shares of stock at $6.25 per share in an initial public offering underwritten by brokerage D. H. Blair & Co. However, Spectravideo quickly ran into trouble. By December 1983 its stock had fallen to 75 cents per share. In March 1984, the company agreed to sell a 60% stake of itself to Hong Kong-based Bondwell Holding in a deal that would have also required the resignation of president Harry Fox and vice-president Alex Weiss. That deal was set aside when Spectravideo was unable to restructure about $2.6 million worth of debt, and another deal, under which Fanon Courier U.S.A. Inc. would have purchased 80% of the company, was struck in July. The Fanon Courier deal similarly fell through, and Fox resigned as president in September, with Bondwell Holding purchasing over half of the company's stock and installing Bondwell vice-president Christopher Chan as the new president. A later computer, the Spectravideo SVI-728, was made MSX compatible. The SVI-738, also MSX compatible, came with a built-in 360 KB 3.5" floppy drive. The last computer produced by Spectravideo was the SVI-838 (also known as the Spectravideo X'Press 16). It was a PC and MSX2 in the same device. 
Legacy The Spectravideo name was used by a UK-based company called SpectraVideo Plc, formerly known as Ash & Newman. That company was founded in 1977 and bought the Spectravideo brand name from Bondwell (SVI's owner) in 1988. It sold its own range of Logic3-branded products and had no connection to the old Spectravideo products. The company changed its name to Logic3 in 2006, and entered administration in 2013 after a licensing deal with Ferrari proved to be a failure. The company was dissolved in 2016. References External links Samdal.com: Roger's history of SpectraVision—Spectravideo webpage Spectravideo.se: Glenn's Spectravideo webpage 1981 establishments in the United States 1988 disestablishments in the United States American companies established in 1981 American companies disestablished in 1988 Computer companies established in 1981 Computer companies disestablished in 1988 Defunct computer companies of the United States Defunct video game companies of the United States Home computer hardware companies MSX Video game companies established in 1981 Video game companies disestablished in 1988
https://en.wikipedia.org/wiki/Search%20engine%20technology
Search engine technology
A search engine is an information retrieval software program that discovers, crawls, transforms and stores information for retrieval and presentation in response to user queries. A search engine normally consists of four components: a search interface, a crawler (also known as a spider or bot), an indexer, and a database. The crawler traverses a document collection, deconstructs document text, and assigns surrogates for storage in the search engine index. Online search engines store images, link data and metadata for the document as well. History of Search Technology The Memex The concept of hypertext and a memory extension originates from an article that was published in The Atlantic Monthly in July 1945, written by Vannevar Bush and titled As We May Think. Within this article, Bush urged scientists to work together to help build a body of knowledge for all mankind. He then proposed the idea of a virtually limitless, fast, reliable, extensible, associative memory storage and retrieval system. He named this device a memex. Bush regarded the notion of "associative indexing" as his key conceptual contribution. As he explained, this was "a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing." All of the documents used in the memex would be in the form of microfilm copy acquired as such or, in the case of personal records, transformed to microfilm by the machine itself. The memex would also employ new retrieval techniques based on a new kind of associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another, creating personal "trails" through linked documents. Bush anticipated that the new procedures facilitating information storage and retrieval would lead to the development of wholly new forms of the encyclopedia. The most important mechanism, conceived by Bush, is the associative trail. It would be a way to create a new linear sequence of microfilm frames across any arbitrary sequence of microfilm frames by creating a chained sequence of links in the way just described, along with personal comments and side trails. In 1965, Bush took part in MIT's project INTREX, which developed technology for mechanizing the processing of information for library use. In his 1967 essay titled "Memex Revisited", he pointed out that the development of the digital computer, the transistor, the video, and other similar devices had heightened the feasibility of such mechanization, but costs would delay its achievements. SMART Gerard Salton, who died on August 28, 1995, was the father of modern search technology. His teams at Harvard and Cornell developed the SMART information retrieval system. Salton's Magic Automatic Retriever of Text included important concepts like the vector space model, inverse document frequency (IDF), term frequency (TF), term discrimination values, and relevancy feedback mechanisms. He authored a 56-page book called A Theory of Indexing, which explained many of his tests, upon which search is still largely based. String Search Engines In 1987, an article was published detailing the development of a character string search engine (SSE) for rapid text retrieval on a double-metal 1.6-μm n-well CMOS solid-state circuit with 217,600 transistors laid out on an 8.62x12.76-mm die area. 
The SSE accommodated a novel string-search architecture which combines a 512-stage finite-state automaton (FSA) logic with a content-addressable memory (CAM) to achieve an approximate string comparison of 80 million strings per second. The CAM cell consisted of four conventional static RAM (SRAM) cells and a read/write circuit. Concurrent comparison of 64 stored strings with variable length was achieved in 50 ns for an input text stream of 10 million characters/s, permitting performance despite the presence of single-character errors in the form of character codes. Furthermore, the chip allowed non-anchored string search and variable-length "don't care" (VLDC) string search. Web Search Engines Archie The first web search engine was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal. The author originally wanted to call the program "archives," but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on. The primary method of storing and retrieving files was via the File Transfer Protocol (FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol. Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them. Even with archive sites, many important files were still scattered on small FTP servers. Unfortunately, these files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file. Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found; its regular expression matcher provided users with access to its database. Veronica In 1993, the University of Nevada System Computing Services group developed Veronica. It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was pretty much identical to Veronica's, although it appears to be a little rougher around the edges. The Lone Wanderer The World Wide Web Wanderer, developed by Matthew Gray in 1993, was the first robot on the Web and was designed to track the Web's growth. 
Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database. Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained. In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways. ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage of this method is that users get to describe their own site, and a robot doesn't run about eating up Net bandwidth. Unfortunately, the disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they don't submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos. Excite Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet. Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers. Excite was the first serious commercial search engine, launching in 1995. It was developed at Stanford and was purchased for $6.5 billion by @Home. In 2001, Excite and @Home went bankrupt and InfoSpace bought Excite for $10 million. Some of the first analysis of web searching was conducted on search logs from Excite. Yahoo! In April 1994, two Stanford University Ph.D. candidates, David Filo and Jerry Yang, created some pages that became rather popular. They called the collection of pages Yahoo! Their official explanation for the name choice was that they considered themselves to be a pair of yahoos. As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory. 
The Wanderer captured only URLs, which made it difficult to find things that weren't explicitly described by their URL. Because URLs are rather cryptic to begin with, this didn't help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites. Lycos At Carnegie Mellon University in July 1994, Michael Mauldin, on leave from CMU, developed the Lycos search engine. Types of Web Search Engines Search engines on the web are sites enriched with the facility to search the content stored on other sites. There are differences in the way various search engines work, but they all perform three basic tasks: finding and selecting full or partial content based on the keywords provided; maintaining an index of the content and referencing the locations they find; and allowing users to look for words or combinations of words found in that index. The process begins when a user enters a query statement into the system through the interface provided. There are basically three types of search engines: those that are powered by robots (called crawlers, ants or spiders), those that are powered by human submissions, and those that are a hybrid of the two. Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information back to a central depository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine. Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index. In both cases, when you query a search engine to locate information, you are actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected and stored and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links. Since the search results are based on the index, if the index hasn't been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated. So why will the same search on different search engines produce different results? Part of the answer to that question is that not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for. One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming more sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing. 
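As a rough illustration of this kind of frequency-based relevance, the following Python sketch scores a toy set of pages against a query using term frequency weighted by inverse document frequency (the TF and IDF ideas introduced earlier in connection with Salton's SMART system). The corpus, function names and weighting scheme are invented for the example and are not taken from any particular commercial engine.

import math
from collections import Counter

def build_index(docs):
    # Map each term to the set of document ids that contain it.
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def score(query, docs, index):
    # Rank documents by a simple TF-IDF sum over the query terms:
    # frequent terms in a page raise its score, while terms that appear
    # in many pages are discounted.
    n_docs = len(docs)
    scores = Counter()
    for term in query.lower().split():
        containing = index.get(term, set())
        if not containing:
            continue
        idf = math.log(n_docs / len(containing))
        for doc_id in containing:
            tf = docs[doc_id].lower().split().count(term)
            scores[doc_id] += tf * idf
    return scores.most_common()

docs = {
    "page1": "football is played with a ball on a grass field",
    "page2": "a ball gown is worn to a formal ball",
    "page3": "search engines index pages and rank them by relevance",
}
print(score("ball", docs, build_index(docs)))

Running the sketch ranks page2 above page1 for the query "ball", because the term occurs twice there; a real engine would combine this with many more signals, such as keyword location and the link analysis described next.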
Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking. Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return fairly well-aimed results. Due to this high volume of queries and text processing, the software is required to run in a highly dispersed environment with a high degree of redundancy. Another category of search engines is scientific search engines, which search scientific literature. The best-known example is Google Scholar. Researchers are working on improving search engine technology by making the engines understand the content of articles, such as extracting theoretical constructs or key research findings. Search engine categories Web search engines Search engines that are expressly designed for searching web pages, documents, and images were developed to facilitate searching through a large, nebulous blob of unstructured resources. They are engineered to follow a multi-stage process: crawling the huge stockpile of pages and documents to extract the salient terms from their contents, indexing those terms in a semi-structured form (typically a database), and, at last, resolving user queries to return mostly relevant results and links to those indexed documents or pages from the inventory. Crawl In the case of a wholly textual search, the first step in classifying web pages is to find an 'index item' that might relate expressly to the 'search term.' In the past, search engines began with a small list of URLs as a so-called seed list, fetched the content, and parsed the links on those pages for relevant information, which subsequently provided new links. The process was highly cyclical and continued until enough pages were found for the searcher's use. These days, a continuous crawl method is employed as opposed to an incidental discovery based on a seed list. The crawl method is an extension of the aforementioned discovery method, except that there is no seed list, because the system never stops crawling. Most search engines use sophisticated scheduling algorithms to "decide" when to revisit a particular page, in keeping with its relevance. These algorithms range from a constant visit interval with higher priority for more frequently changing pages to an adaptive visit interval based on several criteria such as frequency of change, popularity, and overall quality of the site. The speed of the web server running the page, as well as resource constraints like the amount of hardware or bandwidth, also figure in. 
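To make the adaptive visit-interval idea concrete, here is a minimal Python sketch of a revisit scheduler that shortens the interval for pages whose content was observed to change and backs off for pages that stayed the same. The interval bounds and adjustment factors are invented for the illustration; production crawlers weigh many more criteria, such as popularity and site quality, as noted above.

from dataclasses import dataclass

@dataclass
class PageRecord:
    url: str
    interval_hours: float = 24.0   # current revisit interval
    last_fingerprint: str = ""     # hash of the content seen on the last visit

MIN_INTERVAL = 1.0                 # never revisit more often than hourly
MAX_INTERVAL = 24.0 * 30           # never wait longer than about a month

def update_schedule(page, new_fingerprint):
    # Adapt the revisit interval after a crawl: pages that changed are
    # revisited sooner, unchanged pages are visited less and less often.
    if new_fingerprint != page.last_fingerprint:
        page.interval_hours = max(MIN_INTERVAL, page.interval_hours / 2)
    else:
        page.interval_hours = min(MAX_INTERVAL, page.interval_hours * 1.5)
    page.last_fingerprint = new_fingerprint
    return page

page = PageRecord("https://example.com/news")
for digest in ["a1", "a1", "b2", "c3"]:   # simulated content hashes from successive visits
    page = update_schedule(page, digest)
    print(f"revisit {page.url} in {page.interval_hours:.1f} hours")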
Link map The pages that are discovered by web crawls are often distributed and fed into another computer that creates a veritable map of the resources uncovered. The resulting cluster looks a little like a graph, in which the different pages are represented as small nodes connected by links between the pages. The data is stored in multiple data structures that permit quick access by algorithms which compute a popularity score for pages on the web based on how many links point to a given page; this is how, for example, a search for 'Egypt' can weigh pages containing information on Mohamed Morsi against pages about the best attractions to visit in Cairo. One such algorithm, PageRank, proposed by Google founders Larry Page and Sergey Brin, is well known and has attracted a lot of attention. The idea of doing link analysis to compute a popularity rank is older than PageRank, and other variants of the same idea are currently in use. These ideas can be grouped into main categories such as the rank of individual pages and the nature of web site content. Search engines often differentiate between internal links and external links, because webmasters are not strangers to shameless self-promotion. Link map data structures typically store the anchor text embedded in the links as well, because anchor text can often provide a "very good quality" summary of a web page's content. Database Search Engines Searching for text-based content in databases presents a few special challenges from which a number of specialized search engines flourish. Databases can be slow when solving complex queries (with multiple logical or string-matching arguments). Databases allow pseudo-logical queries, which full-text searches do not use. There is no crawling necessary for a database since the data is already structured. However, it is often necessary to index the data in a more economized form to allow a more expeditious search. Mixed Search Engines Sometimes, the data searched contains both database content and web pages or documents. Search engine technology has developed to respond to both sets of requirements. Most mixed search engines are large Web search engines, like Google. They search both through structured and unstructured data sources. Take, for example, the word 'ball.' In its simplest terms, it returns more than 40 variations on Wikipedia alone. Did you mean a ball, as in the social gathering/dance? A soccer ball? The ball of the foot? Pages and documents are crawled and indexed in a separate index. Databases are also indexed from various sources. Search results are then generated for users by querying these multiple indices in parallel and compounding the results according to "rules." See also Database search engine Enterprise search Search engine Search engine indexing Web crawler Word-sense disambiguation (dealing with ambiguity) References Internet search engines